Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
10th June 2010, 00:40 | #1 | Link |
Registered User
Join Date: Jun 2009
Location: UK
Posts: 263
|
Inverse of Decimate() ...?
I have a video that has been wrongly decimated from 30fps to 24fps. My aim is to restore the missing frames using MVtools. The tricky bit is how to identify (automatically) the frames either side of the missing one, and get interpolation to only happen at that point.
My first thought was to just double the frame rate using MVTools2 (from 24 to 48fps), and then apply Decimate() (from the Decomb package) three times to remove the surplus frames (going from a cycle of 8 down to a cycle of 5)... But the problem is that sometimes Decimate will pick the *original* frames to delete, and keep the interpolated ones, which is not ideal.

When going the other way (from 30 to 24fps), Decimate uses an algorithm to detect and delete the frame in each cycle which is most similar to its predecessor. Does anyone have a filter that can detect and [do something - ideally, create a new frame via MVTools] with the frame in each cycle that is *least* like its predecessor? If not, I guess I can try cannibalising the code from Decomb/Decimate to create my own...

Last edited by pbristow; 10th June 2010 at 01:02. Reason: Fixing typos |
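For concreteness, the double-then-decimate plan described above might be scripted roughly like this. This is only a sketch, untested: it assumes the jerky 24fps clip is `last`, with MVTools2 and Decomb loaded, and the MAnalyse parameters are illustrative rather than tuned.

```avisynth
# Double 24fps -> 48fps by motion interpolation (MVTools2)
super = MSuper(pel=2)
bv = MAnalyse(super, isb=true)   # backward vectors
fv = MAnalyse(super, isb=false)  # forward vectors
MFlowFps(super, bv, fv, num=48, den=1)

# Remove 3 surplus frames per cycle of 8 (cycle 8 -> 7 -> 6 -> 5),
# taking 48fps down to 30fps
Decimate(cycle=8)
Decimate(cycle=7)
Decimate(cycle=6)
```

As the post notes, nothing here stops Decimate from occasionally choosing an original frame to delete instead of an interpolated one.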
10th June 2010, 07:01 | #5 | Link |
Registered User
Join Date: Jun 2009
Location: UK
Posts: 263
|
Hmmm... Looks like you're testing me for a possible rule 6 violation?
It's band footage, owned by the performer, shot by an inexperienced friend of theirs some years ago on a domestic, analog NTSC camcorder, and now passed to me to see if I can "make it look nicer". They plan to use clips and/or stills from it in their promotional material / website.

At some point before it got to me, someone decided to convert it to PAL by dropping frames and speeding it up. I dunno what software they used, and I don't have access to the original 30fps source. What I do know is that the pictures are reasonably clean (considering), but the motion jerks after every fourth frame, which is how I figured out what had been done to it. So, having slowed the thing down to 24fps, I just need to fill in those missing frames to recover the original motion. After that, I'll try doing an MVTools-based conversion to PAL in case they ever want it for a DVD later, but for the web we'll probably stick with the 30fps version.

(The reason we're talking in frames rather than fields per second is that for some reason, every 2nd field of the footage is much poorer quality than the first one, so I'm deinterlacing and throwing away the muddy fields first).

Feel free to Google the name Talis Kimberley. |
10th June 2010, 07:37 | #6 | Link |
Registered User
Join Date: Jun 2009
Location: UK
Posts: 263
|
Ah! Just had a brainwave.
If I stick with the original "Decimate three times" idea, but change the motion interpolation to create the moment 55% past each frame, rather than 50%, then each interpolated frame will be more like its successor than its predecessor. That should bias Decimate towards dropping the interpolated frames, rather than the originals. I'll give that a try later today. Last edited by pbristow; 10th June 2010 at 14:37. Reason: Fixing typos (I get a lot of those!) |
11th June 2010, 19:27 | #7 | Link |
Registered User
Join Date: Jun 2009
Location: UK
Posts: 263
|
Hmmm... I had the logic backwards, of course. I settled in the end on a mocomp to the point 42% ahead of each source frame (using MFlowInter), and then multiple Decimate calls.
It sort of works, but still isn't great. (E.g. static sequences get artificially shortened in favour of keeping all the frames with significant movement.) Looks like I have another filter-hacking project to play with, then... (Maybe one day I'll get something working well enough to publish here!) |
11th June 2010, 21:51 | #8 | Link |
ангел смерти
Join Date: Nov 2004
Location: Lost
Posts: 9,556
|
What about MFlowFPS(.... fps*2 ....).selectevery(10,0,2,4,6,8,9).Assumefps(fps*5/4)? You already have a constant rhythm, so you can drop interpolated frames consistently.
Last edited by foxyshadis; 11th June 2010 at 22:25. |
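Filled out with the MSuper/MAnalyse boilerplate that MFlowFPS needs, the suggestion above might look something like this sketch. It is untested; it assumes a 24fps progressive clip in `last`, and uses an 8-frame cycle matching the jump-after-every-fourth-frame cadence described earlier in the thread, so the SelectEvery pattern keeps the four originals plus the one interpolated frame at the jump. The offsets may need shifting to line up with the actual phase of the jumps.

```avisynth
super = MSuper(pel=2)
bv = MAnalyse(super, isb=true)
fv = MAnalyse(super, isb=false)
MFlowFps(super, bv, fv, num=48, den=1, blend=false)  # 24fps -> 48fps

# Even positions are originals, odd positions are interpolations.
# Keep originals 0,2,4,6 plus the interpolation (7) at the cycle's jump:
SelectEvery(8, 0, 2, 4, 6, 7)   # 5 of every 8 frames kept, 48fps -> 30fps
AssumeFPS(30)                   # force the nominal rate, if needed
```

Because the cadence is constant, the interpolated frames always land at the same offsets, which is the point of the suggestion: no per-frame detection is needed at all.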
12th June 2010, 20:36 | #10 | Link | |
Registered User
Join Date: Jun 2009
Location: UK
Posts: 263
|
Quote:
The idea was to exploit the cycle mechanism of Decimate to find the correct place to add an extra frame each time (there would still be the *occasional* extra frame or missing frame, but not often or regularly enough to be so annoying). At the moment I'm running a test using TDecimate, which can directly prune N from M, instead of using Decimate three times. I'll compare that with the file I got from using Decimate and see if it's good enough; if not, then it's time to fire up my C compiler. :)

What I have in mind is a filter based on either Decimate or TDecimate (to do the detection of the "least similar pair of frames"), which accepts a second clip as a "fill-in source". Wherever a big jump is found between frame n and frame n+1 of the input clip, frame n of the fill-in source is inserted between them. That leaves the choice of how to create the fill-in frames entirely up to the user, which should make it a nice general-purpose filter (you could use mvtools, or depan, or a simple blend of n with n+1, or even some completely unrelated source clip to create a freaky strobe/intercut effect!).

Whether I'll actually knuckle down and get it coded is another matter. I'm always big on ideas, slow on implementation... =:o{ |
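If such a filter existed, its use might look like the sketch below. Everything here is hypothetical and untested: `InsertAtJumps` is the proposed filter itself, not anything that exists, and its name and parameters are invented purely for illustration. The fill-in clip is built with MVTools2, relying on MFlowInter's behaviour that (with delta=1, time=50) frame n of the result is the halfway interpolation between input frames n and n+1, which is exactly the fill-in frame wanted.

```avisynth
o = last                                      # the wrongly decimated 24fps clip
super = MSuper(o, pel=2)
bv = MAnalyse(super, isb=true)
fv = MAnalyse(super, isb=false)
fill = MFlowInter(o, super, bv, fv, time=50)  # fill[n] ~ midpoint of o[n]..o[n+1]

# Hypothetical filter, as proposed above: wherever the o[n] -> o[n+1]
# difference is the largest in its cycle, insert fill[n] between them.
InsertAtJumps(o, fill, cycle=4)
```

Swapping `fill` for a blend, a DePan interpolation, or an unrelated clip would give the other behaviours described in the post, without touching the detection logic.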
|
12th June 2010, 20:46 | #11 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
It would definitely be nice to have such a filter. I've faced this problem a few times, too. And while it is more-or-less possible to solve it with the currently available tools, it is quite a hassle to script up.
One possible method was posted here, for an even nastier case (decimated interlaced frames). Though I think the non-ScriptClip-based method worked more stably - but that one has never been posted.
__________________
- We're at the beginning of the end of mankind's childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
13th June 2010, 08:11 | #12 | Link |
Registered User
Join Date: Apr 2008
Location: St. Petersburg, Russia
Posts: 334
|
I think it is important here to protect the original frames from decimation, using the fact that after frame-rate doubling with mvtools they all fall on either the even or the odd frame positions. Probably this could be done within the text-processing stage of the MultiDecimate plugin. During pass 1 it creates mfile.txt with frame-difference values; then, according to the selected type of decimation, cfile.txt is created, where [in my understanding] the frames are given weighted scores for decimation; and finally dfile.txt, the list of remaining frame numbers, is created. I'm not sure, but probably some post-processing of cfile.txt could protect the even frames (maybe by forcing their values above the threshold and re-assigning the decimation flag to another candidate frame). It looks like this would need a new 'protect Even/Odd' option in the plugin. Could neuron2 comment? Thanks.
|
16th June 2010, 23:04 | #13 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,685
|
I have had to deal with all manner of badly done frame-rate conversions, usually NTSC -> PAL -> NTSC. I end up with video that has big jumps where too many frames (or fields) were deleted. Sometimes, strangely, the video also has duplicates. So you end up with video that stutters and hesitates, something I assume you are seeing in your video.
There is no universal solution, because the solution depends on the actual cadence of your footage. However, here is a code snippet I found a long time ago here in the forum that may come in handy: Code:
function filldrops (clip c)
{
  vf = c.mvanalyse(truemotion=true,pel=2,isb=false,delta=1,idx=1)
  vb = c.mvanalyse(truemotion=true,pel=2,isb=true,delta=1,idx=1)
  global filldrops_d = c.mvflowinter(vb,vf,time=50,idx=1)
  global filldrops_c = c
  c.scriptclip("""ydifferencefromprevious()==0 ? filldrops_d : filldrops_c""")
}
usage:

FWIW, here is the interlaced version I created, which uses MVTools2 instead of MVTools. The above code was created by someone else, but only works on progressive material. Also, I slightly increased the comparison parameter so the algorithm will synthesize a motion-estimated field even if the two adjacent fields are not perfectly identical.

This leads to an important concept for you to understand: it is generally OK if your script occasionally and unnecessarily creates a synthesized frame (or field, for interlaced video). As you already know, you'd rather use the original frames wherever possible, but if once in a while your script goofs and substitutes an interpolated frame, as long as this doesn't happen often, it won't matter. After all, the video is going to have lots of interpolated frames, so a few extra are usually not a big deal.

Code:
function filldropsI (clip c)
{
  even = c.SeparateFields().SelectEven()
  super_even = MSuper(even,pel=2)
  vfe = manalyse(super_even,truemotion=true,isb=false,delta=1)
  vbe = manalyse(super_even,truemotion=true,isb=true,delta=1)
  filldrops_e = mflowinter(even,super_even,vbe,vfe,time=50)

  odd = c.SeparateFields().SelectOdd()
  super_odd = MSuper(odd,pel=2)
  vfo = manalyse(super_odd,truemotion=true,isb=false,delta=1)
  vbo = manalyse(super_odd,truemotion=true,isb=true,delta=1)
  filldrops_o = mflowinter(odd,super_odd,vbo,vfo,time=50)

  evenfixed = ConditionalFilter(even, filldrops_e, even, "YDifferenceFromPrevious()", "lessthan", "0.1")
  oddfixed  = ConditionalFilter(odd,  filldrops_o, odd,  "YDifferenceFromPrevious()", "lessthan", "0.1")

  Interleave(evenfixed,oddfixed)
  Weave()
  # Following line removed after suggestion by Gavino
  # AssumeFieldBased()
  AssumeBFF()
}
Code:
function filldropsnext (clip c)
{
  previous = Loop(c,2,0,0)
  super = MSuper(previous,pel=2)
  vfe = manalyse(super,truemotion=true,isb=false,delta=1)
  vbe = manalyse(super,truemotion=true,isb=true,delta=1)
  filldrops = mflowinter(previous,super,vbe,vfe,time=50)
  fixed = ConditionalFilter(c, filldrops, c, "YDifferenceToNext()", "lessthan", "0.1")
  return fixed
}

Last edited by johnmeyer; 17th June 2010 at 17:33. Reason: Added cfile information; Later edit to remove AssumeFieldBased() |
16th June 2010, 23:56 | #14 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
Relying on YDifferenceToPrev/Next will get you pretty much nowhere. (At least not for an "automated" solution ... and spending hours or days on manual analysis of spreadsheet data is not what you usually want to do.) The values for "typical" differences are both scene-dependent and motion-dependent, i.e. the value will change all the time. There is no way to catch that with one single arbitrary threshold. You really need to examine the differences throughout a temporal window, and search for the one difference that is bigger than the rest.
Once you've figured out the positions of the missing frames, the rest is rather trivial. Via script: double the framerate by frame doubling, put interpolations in the figured positions, decimate 47.952 -> 29.97, finished.
Last edited by Didée; 17th June 2010 at 00:01. |
17th June 2010, 00:36 | #15 | Link |
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431
|
@johnmeyer
Perhaps I have misunderstood, but shouldn't all your functions be using motion vectors with delta=2 rather than delta=1 if you are trying to replace a duplicate frame by an interpolation between its neighbours? Also, AssumeFieldBased() should not normally be used after Weave(), as this describes a field-separated clip. Last edited by Gavino; 17th June 2010 at 00:41. |
17th June 2010, 03:03 | #17 | Link | |||
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,685
|
Quote:
So, if the function isn't 100% perfect and a few extra "good" frames get replaced with interpolated frames, it shouldn't make much difference, especially compared to the horrible stutter that you generally get with what the OP described. Quote:
But like I said, perhaps I don't understand correctly. I can tell you, for absolute certain, that the function I posted works fantastically well at replacing the second duplicate with a near-perfect motion-estimated synthesized frame. I have used it many, many times on dozens of hours of video (long story how I came into possession of so much lousy video ...) Quote:
This is a mistake. What is the proper etiquette on this board? Should I edit my original post to correct the mistake? Or, is this discussion in this post sufficient? |
|||
17th June 2010, 16:09 | #18 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
A 4-year-old script, with some minor adjustments made today.
Code:
# mt_masktools-25.dll
# mvtools2.dll
# tivtc.dll

Source( "VIDEO_that_has_been_decimated_to_FILM" )
o = assumefps(1.0)
ox = o.width()
oy = o.height()

showdot = true # false  # shows a "dot" in the interpolated frames of the result

super = showdot ? o.subtitle(".").MSuper(pel=2) : o.MSuper(pel=2)
bvec = MAnalyse(super, overlap=4, isb = true, search=4, dct=5)
fvec = MAnalyse(super, overlap=4, isb = false, search=4, dct=5)
double = o.MFlowFps(super, bvec, fvec, num=2, den=1, blend=false)

diff2next = mt_makediff(o,o.selectevery(1,1)).mt_lut("x 128 - abs 32 / 1 2.0 / ^ 128 *",U=-128,V=-128)
diff2next = mt_lutf(diff2next,diff2next,yexpr="x",mode="average").pointresize(32,32)
diff2next = interleave(diff2next.selectevery(4,0).tsg(2),diff2next.selectevery(4,1).tsg(2), \
            diff2next.selectevery(4,2).tsg(2),diff2next.selectevery(4,3).tsg(2))

max = diff2next.mt_logic(diff2next.selectevery(1,-3),"max") \
      .mt_logic(diff2next.selectevery(1,-2),"max") \
      .mt_logic(diff2next.selectevery(1,-1),"max") \
      .mt_logic(diff2next.selectevery(1, 1),"max") \
      .mt_logic(diff2next.selectevery(1, 2),"max") \
      .mt_logic(diff2next.selectevery(1, 3),"max")

ismax = mt_lutxy(diff2next,max,"x y < 0 255 ?",U=-128,V=-128).pointresize(ox,oy)

themask = interleave(o.mt_lut("0"),ismax)

interleave(o,o).mt_merge(double,themask,luma=true,U=3,V=3)
tdecimate(mode=1,cycleR=3,cycle=8)
assumefps(30000,1001)

return( last )

#===========================

function tsg(clip c, int t)
{
  c
  t<5?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
  t<4?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
  t<3?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
  t<2?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
  t<1?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
}
17th June 2010, 18:43 | #19 | Link | |||||
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431
|
Quote:
Quote:
More precisely, what this means is that frame n of MFlowInter is an interpolation between input frames n and n+delta (using forward vector n->n+delta and backward vector n+delta->n). So in general, if you want to replace a 'bad' frame n by interpolating between its immediate neighbours n-1 and n+1, you need to use delta=2 (and take the result from frame n-1 of MFlowInter). See the example in the MVTools doc, "recreate bad frames by interpolation with MFlowInter". Quote:
What happens is that if a duplicate is found at frame n, it interpolates between n and n+1, since delta=1. At first sight this is wrong, but since frame n is a duplicate of frame n-1, it is equivalent to the desired interpolation between n-1 and n+1, so the final result comes out OK. (Whether by accident or by design, I'm not sure.)
Quote:
Last edited by Gavino; 17th June 2010 at 18:48. |
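For reference, the delta=2 recipe described above can be written out for a progressive clip as follows. This is only a sketch along the lines of the MVTools documentation example, untested, with illustrative parameter choices.

```avisynth
# Replace bad frame n by interpolating between its neighbours n-1 and n+1.
# With delta=2 the vectors span two frames, so the bad frame is skipped.
super = MSuper(pel=2)
bv = MAnalyse(super, isb=true,  delta=2)
fv = MAnalyse(super, isb=false, delta=2)
inter = MFlowInter(last, super, bv, fv, time=50)
# Frame n-1 of 'inter' is the halfway interpolation between frames
# n-1 and n+1; splice that frame in over bad frame n.
```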
|||||
17th June 2010, 23:29 | #20 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,685
|
Quote:
Unfortunately, this created a new source of confusion, and now makes me think that both the filldrops function and even the example given in the MVTools2 documentation are incorrect. See if you agree.

Here is my thinking: MAnalyse is used to create motion vectors between the current frame and adjacent frames. However, if the current frame is totally screwed up (blank, duplicate, full of noise, etc.), then won't MAnalyse create vectors that are of absolutely no value, regardless of what number I assign to "delta"? In other words, if I have one frame that is completely, totally different from the frame before and the frame after, then motion vectors which use the pixels in this current, bad, reference frame will have no useful relationship to anything.

Thus, it would seem to me that even the example in the documentation is wrong, and that to make this work, I must actually use, as the input to MSuper and MFlowInter, the good frame immediately before the bad frame, and generate all vectors with reference to that frame. I should then use delta=1 for the backward vector, and delta=2 for the forward vector (to skip over the bad frame). Finally, the call to MFlowInter should specify time=66.67.

Here is an attempt to explain the problem with a diagram:

F1 F2 F3 F4

F3 is the bad frame. What follows is pseudo code of what I think is actually the correct way to interpolate to replace a truly bad (garbage) frame:

Code:
super = MSuper(F2,pel=2)
vfe = manalyse(super,truemotion=true,isb=false,delta=2)
vbe = manalyse(super,truemotion=true,isb=true,delta=1)
F3 = mflowinter(F2,super,vbe,vfe,time=66.67)

Now that I understand this, I can see how the filldrops code I quoted above (I think "MugFunky" invented it) does generate satisfactory results even though it actually isn't correct. It sort of works because the "bad frame" is a duplicate of a perfectly good frame, and therefore the backward vector created between it and a duplicate of itself will still be valid, and the vector created between it and the next frame will be valid. In other words, because the previous frame is a duplicate, the backward vectors will not contribute anything useful to the interpolation, but they won't cause the thing to blow up. Therefore, I get an interpolated frame that is useful, but which I now suspect is not as good as it might be.

So, I now believe that the original documentation and also the original filldrops code are both incorrect, and that the proper approach is the one I discovered when I had to replace the first of the two frames rather than the second. If you go back to my initial post in this thread, that is the code in the third code block, which uses the Loop function to feed the previous frame to the MSuper and MFlowInter functions.

Last edited by johnmeyer; 17th June 2010 at 23:34. Reason: Corrected minor, but misleading, typo immediately after posting |
|