7th June 2007, 12:23 | #61 | Link |
Registered User
Join Date: Dec 2005
Location: Denmark
Posts: 52
|
Gunnar, I have a request for the "Use previous and future frames to fill in borders" option. Could you add an option to start pass 2 output immediately, without waiting for future frames to be read, and to continue until all frames of the input file have been output?
It would make life a lot easier if the output file's frames matched the input frames exactly, with audio in sync. I know how to resync the audio via VDub's audio interleaving dialog, but having the frames shifted 30 frames later in the AVI isn't too handy. Could you process the first n (30) input frames with the number of future/previous frames starting at 0 and incrementing to n (30) by input frame 30? By that I mean: process input frame 1 (the second frame of the input) as if the user had specified only 1 previous/future frame. At the third frame you would have 2 previous frames in the buffer, and would use only those to fill borders. Similarly, decrement the number of future frames at the end, so that all of the input frames get processed, although with fewer previous/future frames available for the last few.
I usually have at least a second of leader and tail on my videos, so I wouldn't be bothered by degraded border fill in the first few and last few frames. But having the Deshake'd video a frame-for-frame match with the original would definitely make some editing steps easier, with respect to original timecode for example. Does that make sense to you?
Edit: Perhaps the easiest programming solution would be to initialize the entire previous-frames buffer with copies of the first input frame, and at the end to append copies of the last input frame to the future-frames buffer until the last input frame is finally output.
Last edited by Fjord; 7th June 2007 at 13:00. Reason: Added idea for filling previous/future buffer. |
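Fjord's edge-padding idea can be sketched in Python. This is only a hypothetical model of the frame window (the function name and structure are made up, not Deshaker's actual code): pad the head of the sequence with copies of the first frame and the tail with copies of the last, so every input frame gets a full previous/future window and the output frame count exactly matches the input.

```python
def padded_windows(frames, n):
    """Yield (previous, current, future) for each input frame, padding the
    edges with copies of the first/last frame so the number of output
    windows exactly matches the number of input frames."""
    padded = [frames[0]] * n + list(frames) + [frames[-1]] * n
    for i in range(len(frames)):
        c = i + n  # index of the current frame inside the padded list
        yield padded[c - n:c], padded[c], padded[c + 1:c + n + 1]

frames = list(range(5))  # stand-ins for decoded frames
windows = list(padded_windows(frames, 2))
```

With this scheme the first frame's "previous" buffer is all copies of frame 0, and the last frame's "future" buffer is all copies of the final frame, which is exactly the degraded-but-acceptable border fill described above.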
7th June 2007, 14:15 | #62 | Link |
Registered User
Join Date: Dec 2005
Location: Denmark
Posts: 52
|
I have a Canon S3 IS digicam, which records 640x480 30.000 fps progressive MJPEG video. When I Deshake these video clips, with 30 previous/future fill frames, the text message in the first output frames says "Deshaker: output is delayed 30 frames to collect future frames; Audio should be delayed 999 ms to maintain audio/video sync". I understand that message, but it should say 1000 ms rather than 999, since the input video is 30.000 fps. 999 would be right if the video was 29.97 fps. If I use a delay of 999 ms, my audio starts 45 samples before the video frame; 45/44100 ≈ 0.001 s = 1 ms. If I use a delay of 1000 ms I get perfect sync.
Does Deshaker internally use an NTSC framerate of 29.97 even if the input clip is 30.000? (VDub's file information dialog correctly shows "640x480, 30.000 fps (33333 us)" for the input file.) |
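The arithmetic is easy to check, and one plausible source of the off-by-one (purely a guess, not confirmed by Guth's later reply, which attributes it to a rounding error) is a frame duration stored as an integer microsecond count, as in VDub's "33333 us" display, truncated to whole milliseconds:

```python
def delay_ms_exact(n_frames, fps):
    """30 future frames at 30.000 fps -> exactly 1000 ms."""
    return n_frames * 1000.0 / fps

def delay_ms_truncated(n_frames, frame_us):
    """The same delay computed from an integer microsecond frame duration:
    30 * 33333 us = 999990 us, which truncates to 999 whole ms."""
    return n_frames * frame_us // 1000

print(delay_ms_exact(30, 30.0))          # 1000.0
print(delay_ms_truncated(30, 33333))     # 999
print(delay_ms_exact(30, 30000 / 1001))  # ~1001 for true NTSC 29.97 fps
```

Note the last line: 29.97 fps (30000/1001) gives roughly 1001 ms, not 999, matching Guth's correction later in the thread.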
7th June 2007, 16:02 | #64 | Link | |
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,496
|
The only solution would be to store all frames during pass 1 for use during pass 2.
David |
|
7th June 2007, 16:53 | #65 | Link |
Registered User
Join Date: Apr 2003
Location: Uppsala, Sweden
Posts: 157
|
Fjord, about your first question: David is absolutely correct. The video simply has to be delayed; otherwise Deshaker won't be able to access the future frames.
As for your second question, you are correct. It should say 1000, but due to a rounding error it says 999. I'll fix it in the next release. I doubt many people would be able to detect audio that's 1 ms out of sync, though. (Btw, 29.97 will actually give you 1001 ms, not 999.) |
7th June 2007, 21:47 | #66 | Link |
Registered User
Join Date: Dec 2005
Location: Denmark
Posts: 52
|
Thanks David and Gunnar, now I understand the problem with future frames. I guess I'll have to figure out a script to pad the input video with 30 frames, and then delete the first 30 "delay" frames from the output file. Has anybody worked out a script for that?
As for the 999 vs 1000... you're absolutely right, nobody could hear a 1 ms lag. But I frequently synchronize clips by matching waveforms on the timeline, so I notice it by eye. By the way, Deshaker is fantastic. You're fantastic, Guth. |
11th June 2007, 19:17 | #68 | Link |
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,496
|
Gunnar, have you thought any more about adding true 3D correction to Deshaker? I have a sequence of shots that would look great, except I used a wide angle lens.
I've done similar stuff in the past, if you think you might need any assistance.
David
PS Have you any idea why Deshaker would have such a problem with these two images? Even if I force it to accept all blocks, or increase the search range, Deshaker draws tails in the wrong direction. This pair of images doesn't show much more movement than any other pair in the sequence.
David
Last edited by wonkey_monkey; 11th June 2007 at 20:05. |
25th June 2007, 21:27 | #69 | Link |
Registered User
Join Date: Apr 2003
Location: Uppsala, Sweden
Posts: 157
|
Yeah, I thought about it for two minutes, until my head exploded.
I'm afraid it would mean a lot of hard work. More than I'm up for atm. But we'll see... As for your two images, I can't see them. |
25th June 2007, 22:51 | #70 | Link | ||
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,496
|
[attached images] The originals are 4x larger in both dimensions, but the scale doesn't seem to matter; Deshaker still fails badly. I can manually rotate the first image closer to the second so that Deshaker succeeds, but I've seen it match much worse pairs before without any problem.
David |
||
26th June 2007, 16:31 | #71 | Link |
Registered User
Join Date: Apr 2003
Location: Uppsala, Sweden
Posts: 157
|
I've never seen this behaviour before. For some reason Deshaker finds a completely incorrect initial match and can't "get out of it". But if you limit the initial search range to 5% or so, it works.
|
26th June 2007, 17:48 | #72 | Link | |
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,496
|
Guth, would you mind sharing the "secret" of how Deshaker smooths the pan/rotate/zoom variables it records in the log? From what I can see, they are all relative, i.e., they measure the difference between one frame and the previous frame. Do you translate these to "absolute" rotations/translations and average them over time, or am I missing a nicer way of doing it? I'm having a tough time visualising the process in my head (or on a bit of paper).
David |
|
27th June 2007, 07:04 | #73 | Link |
C64
Join Date: Apr 2002
Location: Austria
Posts: 830
|
Guth described it here in this post: http://forum.doom9.org/showthread.ph...933#post448933
|
27th June 2007, 17:51 | #74 | Link |
Registered User
Join Date: Apr 2003
Location: Uppsala, Sweden
Posts: 157
|
Indeed, but I'm not sure that post answered everything you asked.
You are correct in that I translate the values to "absolute" values, simply by adding them together (for zoom I add the logarithm of the zoom factors). And then I smooth that curve as described in my other post. Then I "move" each frame according to the difference between the smooth curve and the original curve. Simple as that... |
3rd July 2007, 14:07 | #76 | Link |
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,496
|
Here's a gif showing the kind of distortion that can be fixed by using 3D correction (Deshaker on the left, my AVS plugin on the right - the difference is subtle):
The plugin's still in development. It relies on Deshaker's log file and won't be anywhere near as comprehensive, but it solves my problem.
Guth, a while ago I ran Deshaker on almost 40 minutes of video. As I was still experimenting, I had set very high motion smoothness parameters, and the calculations at the start of pass 2 took an extremely long time, so I assume the speed depends on the amount of smoothing as well as on the number of frames in the clip. How do you smooth the variables?
David |
3rd July 2007, 14:32 | #77 | Link |
Registered User
Join Date: Apr 2003
Location: Uppsala, Sweden
Posts: 157
|
Nice. It still shakes a lot, but the perspective distortion seems to be a lot better. To make it perfect I guess you'd have to do something about pass 1 too: it can only take panning, rotation and zoom into account today, so it will get confused by perspective changes.
The smoothing process is described in the post that warp linked to above. The equation systems take longer to solve when the smoothness is high, because each frame then has a bigger impact on frames far away in the past or future. And since the equation systems are solved numerically, it takes a while for these low-frequency changes to propagate through the equations, so to speak. Also, if you use max. correction limits, they can be very time consuming. |
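As an illustration only (these are not Deshaker's actual equations), "minimize the squared corrections subject to a smoothness preference" is commonly posed as a regularized least-squares problem and solved iteratively. With a large smoothness weight, each sweep only propagates information one neighbour at a time, which matches the slow convergence described:

```python
def smooth_curve(a, lam, iters=2000):
    """Minimise sum((s_i - a_i)^2) + lam * sum((s_{i+1} - s_i)^2)
    by Gauss-Seidel iteration. Each update sets s_i to the weighted
    average of its data point and its current neighbours; a larger lam
    couples distant frames and needs more sweeps to converge."""
    s = list(a)
    n = len(s)
    for _ in range(iters):
        for i in range(n):
            acc, w = a[i], 1.0
            if i > 0:
                acc += lam * s[i - 1]
                w += lam
            if i < n - 1:
                acc += lam * s[i + 1]
                w += lam
            s[i] = acc / w
    return s

spiky = [0.0, 0.0, 10.0, 0.0, 0.0]
smoothed = smooth_curve(spiky, 5.0)
```

With lam = 0 the data term wins and the curve is returned unchanged; as lam grows, the spike is spread across its neighbours, at the cost of more iterations to reach the solution.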
3rd July 2007, 16:12 | #78 | Link | ||
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,496
|
Some residual shakiness is to be expected anyway, as the frames weren't taken from the same position (I was walking along the top of a dam with my finger held on the camera's shutter release).
My own smoothing process is:
* Add each rotation from the logfile sequentially so that it becomes an "absolute" figure for that frame.
* Smooth these absolute numbers out - essentially a blur (a sequence of 0,1,1,0 might become 0.333,0.666,0.666,0.333, for example).
* Use the difference between the smoothed and absolute figures to translate the images.
But Deshaker does something a lot more complex, by the sound of things...? I get similar results, though. At the moment my "blurring" process is just a simple average over a specified number of frames, but applied multiple times (even just 3) this can approximate a Gaussian, err, thing (I want to say blur). This can be performed in a time that depends only on the number of frames, completely independent of the amount of smoothness. But I haven't considered how the maximum correction limits would mess things up, since I've never used them.
David |
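David's blur step can be sketched as a zero-padded moving average maintained with a running sum, so the cost per frame is independent of the radius (which is his constant-time claim), and repeated passes approximate a Gaussian. Function names are hypothetical:

```python
def box_blur(values, radius):
    """One pass of a zero-padded moving average, maintained as a running
    sum so the cost per frame is independent of the radius."""
    n = len(values)
    width = 2 * radius + 1
    out = []
    s = sum(values[:radius + 1])  # window centred on index 0, zero-padded on the left
    for i in range(n):
        out.append(s / width)
        entering = i + radius + 1  # sample sliding into the window
        leaving = i - radius       # sample sliding out of it
        if entering < n:
            s += values[entering]
        if leaving >= 0:
            s -= values[leaving]
    return out

def smooth(values, radius, passes=3):
    """Repeated box blurs approximate a Gaussian blur."""
    for _ in range(passes):
        values = box_blur(values, radius)
    return values

# David's example: one pass over 0,1,1,0 with radius 1
blurred = box_blur([0.0, 1.0, 1.0, 0.0], 1)
```

One pass with radius 1 reproduces his 0.333, 0.666, 0.666, 0.333 example exactly (each output is the average of a 3-sample window, with zeros assumed outside the sequence).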
||
3rd July 2007, 17:21 | #79 | Link |
Registered User
Join Date: Apr 2003
Location: Uppsala, Sweden
Posts: 157
|
My algorithm might sound a little complex, but it doesn't require a lot of coding. Blurring probably works pretty well too. One difference is that I try to minimize the *squared* correction amounts in order to minimize the worst cases, whereas your approach seems more linear. And you don't really have much control over the process, for example to enforce max. correction limits. But that might be fixable, I don't know. Most importantly, though, my approach just feels more mathematically correct to me.
Or maybe I'm just jealous I didn't come up with such an easy solution. |
|
|