Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 10th June 2010, 00:40   #1  |  Link
pbristow
Registered User
 
 
Join Date: Jun 2009
Location: UK
Posts: 263
Inverse of Decimate() ...?

I have a video that has been wrongly decimated from 30fps to 24fps. My aim is to restore the missing frames using MVtools. The tricky bit is how to identify (automatically) the frames either side of the missing one, and get interpolation to only happen at that point.

My first thought was to just double the frame rate using MVTools2 (from 24 to 48fps), and then apply Decimate() (from the Decomb package) three times to remove the surplus frames (going from a cycle of 8 down to a cycle of 5)... But the problem is sometimes Decimate will pick the *original* frames to delete, and keep the interpolated ones, which is not ideal.
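In script form, that first idea might look roughly like this (a sketch only: MVTools2 and Decomb assumed loaded, parameter values illustrative rather than tested):

Code:
# For a 24fps clip: double to 48fps with MVTools2, then Decimate three times.
# 48 * 7/8 * 6/7 * 5/6 = 30fps, i.e. a cycle of 8 down to a cycle of 5.
sup = last.MSuper(pel=2)
bv  = MAnalyse(sup, isb=true,  delta=1)
fv  = MAnalyse(sup, isb=false, delta=1)
MFlowFps(last, sup, bv, fv, num=48, den=1, blend=false)
Decimate(cycle=8)
Decimate(cycle=7)
Decimate(cycle=6)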

When going the other way (from 30 to 24fps), Decimate uses an algorithm to detect and delete the frame in each cycle which is most similar to its predecessor. Does anyone have a filter that can detect and [do something - ideally, create a new frame via MVTools] with the frame in each cycle that is *least* like its predecessor?

If not, I guess I can try cannibalising the code from Decomb/Decimate to create my own...

Last edited by pbristow; 10th June 2010 at 01:02. Reason: Fixing typos
Old 10th June 2010, 00:52   #2  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
Can't you just buy the DVD?
Old 10th June 2010, 01:01   #3  |  Link
pbristow
Registered User
 
 
Join Date: Jun 2009
Location: UK
Posts: 263
Quote:
Originally Posted by neuron2
Can't you just buy the DVD?
Heh. No, because there isn't one. It's not commercially released material.
Old 10th June 2010, 01:18   #4  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,901
What is it then and where did you get it?
Old 10th June 2010, 07:01   #5  |  Link
pbristow
Registered User
 
 
Join Date: Jun 2009
Location: UK
Posts: 263
Hmmm... Looks like you're testing me for a possible rule 6 violation?

It's band footage, owned by the performer, shot by an inexperienced friend of theirs some years ago on a domestic, analog NTSC camcorder, and now passed to me to see if I can "make it look nicer". They plan to use clips and/or stills from it in their promotional material / website.

At some point before it got to me, someone decided to convert it to PAL by dropping frames and speeding it up. I dunno what software they used, and I don't have access to the original 30fps source. What I do know is that the pictures are reasonably clean (considering), but the motion jerks after every fourth frame, which is how I figured out what had been done to it. So I've slowed the thing down to 24fps, and I just need to fill in those missing frames to recover the original motion. After that, I'll try doing an MVTools-based conversion to PAL in case they ever want it for a DVD later, but for the web we'll probably stick with the 30fps version. (The reason we're talking in frames rather than fields per second is that, for some reason, every 2nd field of the footage is much poorer quality than the first, so I'm deinterlacing and throwing away the muddy fields first.)

Feel free to Google the name Talis Kimberley.
Old 10th June 2010, 07:37   #6  |  Link
pbristow
Registered User
 
 
Join Date: Jun 2009
Location: UK
Posts: 263
Ah! Just had a brainwave.

If I stick with the original "Decimate three times" idea, but change the motion interpolation to create the moment 55% past each frame, rather than 50%, then each interpolated frame will be more like its successor than its predecessor. That should bias Decimate towards dropping the interpolated frames, rather than the originals.
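A sketch of that biased interpolation (MVTools2 names; the value in time=55.0 is the interpolation position as a percentage, so this is only illustrative):

Code:
# Interpolate at 55% between each frame pair, so each new frame
# resembles its successor more than its predecessor.
sup = last.MSuper(pel=2)
bv  = MAnalyse(sup, isb=true,  delta=1)
fv  = MAnalyse(sup, isb=false, delta=1)
mid = last.MFlowInter(sup, bv, fv, time=55.0)
Interleave(last, mid)   # 48fps: original, interpolated, original, ...
Decimate(cycle=8).Decimate(cycle=7).Decimate(cycle=6)   # back down to 30fps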

I'll give that a try later today.

Last edited by pbristow; 10th June 2010 at 14:37. Reason: Fixing typos (I get a lot of those!)
Old 11th June 2010, 19:27   #7  |  Link
pbristow
Registered User
 
pbristow's Avatar
 
Join Date: Jun 2009
Location: UK
Posts: 263
Hmmm... I had the logic backwards, of course. I settled in the end on a mocomp to the point 42% ahead of each source frame (using MFlowInter), and then multiple Decimate calls.

It sort of works, but still isn't great. (E.g. static sequences get artificially shortened in favour of keeping all the frames with significant movement.)

Looks like I have another filter-hacking project to play with, then... (Maybe one day I'll get something working well enough to publish here!)
Old 11th June 2010, 21:51   #8  |  Link
foxyshadis
ангел смерти
 
 
Join Date: Nov 2004
Location: Lost
Posts: 9,556
What about MFlowFPS(.... fps*2 ....).selectevery(10,0,2,4,6,8,9).Assumefps(fps*5/4)? You already have a constant rhythm, so you can drop interpolated frames consistently.

Last edited by foxyshadis; 11th June 2010 at 22:25.
Old 11th June 2010, 23:24   #9  |  Link
IanB
Avisynth Developer
 
Join Date: Jan 2003
Location: Melbourne, Australia
Posts: 3,167
Hint:- For Assumefps(fps*5/4) use AssumeScaledFPS(5, 4) to avoid rounding errors
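For example, on a 23.976 (24000/1001) clip (an illustrative sketch, not from the thread):

Code:
# AssumeScaledFPS rescales the exact rational frame rate:
# (24000/1001) * 5/4 = 30000/1001, i.e. true 29.97
AssumeScaledFPS(multiplier=5, divisor=4)
# whereas AssumeFPS(FrameRate()*5.0/4.0) passes through a float
# and can end up with a slightly rounded numerator/denominator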
Old 12th June 2010, 20:36   #10  |  Link
pbristow
Registered User
 
 
Join Date: Jun 2009
Location: UK
Posts: 263
Quote:
Originally Posted by foxyshadis
What about MFlowFPS(.... fps*2 ....).selectevery(10,0,2,4,6,8,9).Assumefps(fps*5/4)? You already have a constant rhythm, so you can drop interpolated frames consistently.
That would work if the pattern of missing frames was entirely consistent, but it isn't (probably because the source was decimated from 30*1000/1001 to exactly 24). As soon as it slips by one frame from the pattern you're expecting, you're adding an extra frame to each cycle where it's not needed and still have a missing one elsewhere.

The idea was to exploit the cycle mechanism of Decimate to find the correct place to add an extra frame each time (there would still be the *occasional* extra or missing frame, but not often or regularly enough to be so annoying).

At the moment I'm running a test using TDecimate, which can directly prune N from M, instead of using Decimate three times. I'll compare that with the file I got from using Decimate and see if it's good enough; if not, then it's time to fire up my C compiler. :)

What I have in mind is a filter based on either Decimate or TDecimate (to do the detection of the "least similar pair of frames"), which accepts a second clip as a "fill-in source". Wherever a big jump is found between frame n and frame n+1 of the input clip, frame n of the fill-in source is inserted between them. That leaves the choice of how to create the fill-in frames entirely up to the user, which should make it a nice general-purpose filter (you could use mvtools, or depan, or a simple blend of n with n+1, or even some completely unrelated source clip to create a freaky strobe/intercut effect!).

Whether I'll actually knuckle down and get it coded is another matter. I'm always big on ideas, slow on implementation... =:o{
Old 12th June 2010, 20:46   #11  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
It would definitely be nice to have such a filter. I've faced this problem a few times, too. And while it is more or less possible to solve it with the currently available tools, it is quite a hassle to script up.

One possible method was posted here, for an even nastier case (decimated interlaced frames). Though I think the non-ScriptClip-based method worked more stably - but that one has never been posted.
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Old 13th June 2010, 08:11   #12  |  Link
Alex_ander
Registered User
 
 
Join Date: Apr 2008
Location: St. Petersburg, Russia
Posts: 334
I think it is important here to protect the original frames from decimation, using the fact that they all fall on either even or odd frame numbers after frame-rate doubling with mvtools. Probably this could be done within the text-processing engine of the MultiDecimate plugin. During pass 1 it creates mfile.txt with frame-difference values; then, according to the selected type of decimation, cfile.txt is created, where (in my understanding) the frames are given weighted scores for decimation; and finally dfile.txt, the list of remaining frame numbers, is created. I'm not sure, but probably some post-processing of cfile.txt could protect the even frames (maybe by forcing those values above the threshold and re-addressing the decimation flag to another candidate frame). Looks like this would need a new 'protect Even/Odd' option in the plugin. Could neuron2 comment? Thanks.
Old 16th June 2010, 23:04   #13  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,685
I have had to deal with all manner of badly done frame-rate conversions, usually NTSC -> PAL -> NTSC. I end up with video that has big jumps where too many frames (or fields) were deleted. Sometimes, strangely, the video also has duplicates. So you end up with video that stutters and hesitates, something I assume you are seeing in your video.

There is no universal solution, because the solution depends on the actual cadence of your footage. However, here is a code snippet I found a long time ago here in the forum that may come in handy:
Code:
function filldrops(clip c)
{
  vf = c.mvanalyse(truemotion=true, pel=2, isb=false, delta=1, idx=1)
  vb = c.mvanalyse(truemotion=true, pel=2, isb=true,  delta=1, idx=1)
  global filldrops_d = c.mvflowinter(vb, vf, time=50, idx=1)
  global filldrops_c = c
  c.scriptclip("""ydifferencefromprevious()==0 ? filldrops_d : filldrops_c""")
}
This code finds all duplicated frames and replaces the second instance of two duplicate frames with a motion-estimated frame. As you can see from the code, it only does this if the "ydifferencefromprevious" value is exactly zero. I think you might be able to adapt this to do exactly the opposite, namely to look for unusually large differences between frames and synthesize a new frame there. By itself, this code isn't going to solve your problem, but it may be a useful building block.
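A minimal sketch of that opposite adaptation, using ConditionalFilter; the threshold value "20" is an arbitrary placeholder, and a single fixed threshold is the fragile part:

Code:
# Hypothetical inversion of filldrops: replace frames that differ *most*
# from their predecessor (MVTools2; "20" is an arbitrary example value).
function fillgaps(clip c)
{
  sup    = c.MSuper(pel=2)
  vf     = MAnalyse(sup, truemotion=true, isb=false, delta=1)
  vb     = MAnalyse(sup, truemotion=true, isb=true,  delta=1)
  interp = c.MFlowInter(sup, vb, vf, time=50)
  # Note: this *replaces* the jumpy frame rather than inserting a new one,
  # so by itself it only smooths the jump; a building block, not a fix.
  return ConditionalFilter(c, interp, c, "YDifferenceFromPrevious()", "greaterthan", "20")
}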

FWIW, here is the interlaced version I created which uses MVTools2 instead of MVTools. The above code was created by someone else, but only works on progressive. Also, I slightly increased the comparison parameter so the algorithm will synthesize a motion-estimated field even if the two adjacent fields are not perfectly identical.

This leads to an important concept for you to understand:
It is generally OK if your script occasionally creates an unnecessary synthesized frame (or field, for interlaced video).
As you already know, you'd rather use the original frames wherever possible, but if once in a while your script goofs and substitutes an interpolated frame, as long as this doesn't happen often, it won't matter. After all, the video is going to have lots of interpolated frames, so a few extra is usually not a big deal.

Code:
function filldropsI (clip c)
{
  even = c.SeparateFields().SelectEven()
  super_even=MSuper(even,pel=2)
  vfe=manalyse(super_even,truemotion=true,isb=false,delta=1)
  vbe=manalyse(super_even,truemotion=true,isb=true,delta=1)
  filldrops_e = mflowinter(even,super_even,vbe,vfe,time=50)

  odd  = c.SeparateFields().SelectOdd()
  super_odd=MSuper(odd,pel=2)
  vfo=manalyse(super_odd,truemotion=true,isb=false,delta=1)
  vbo=manalyse(super_odd,truemotion=true,isb=true,delta=1)
  filldrops_o = mflowinter(odd,super_odd,vbo,vfo,time=50)

  evenfixed = ConditionalFilter(even, filldrops_e, even, "YDifferenceFromPrevious()", "lessthan", "0.1")
  oddfixed  = ConditionalFilter(odd,  filldrops_o, odd,  "YDifferenceFromPrevious()", "lessthan", "0.1")

  Interleave(evenfixed,oddfixed)
  Weave()
# Following line removed after suggestion by Gavino
#  AssumeFieldBased()
  AssumeBFF()
}
Oh, if you want to replace the first frame rather than the second frame, use this code (this is for progressive frames using MVTools2):
Code:
function filldropsnext (clip c)
{
  previous = Loop(c,2,0,0)
  super=MSuper(previous,pel=2)
  vfe=manalyse(super,truemotion=true,isb=false,delta=1)
  vbe=manalyse(super,truemotion=true,isb=true,delta=1)
  filldrops = mflowinter(previous,super,vbe,vfe,time=50)
  fixed = ConditionalFilter(c, filldrops, c, "YDifferenceToNext()", "lessthan", "0.1")
  return fixed
}
P.S. Just after I posted I read the post suggesting using cfile, etc. I spent several years developing a technique to do exactly what he was suggesting. Once you do this as a two-pass operation, you can spend time with the parameter file and use a spreadsheet to figure out exactly which frames you want to decimate. So, you could use motion interpolation to increase the frame rate; run the software to get the cfile.txt file; put this in Excel and decide which pattern of frames or fields you want to decimate; create a new file that has the decimation frames; and then run the second pass. You can get exactly what you want with this approach, but it isn't exactly automatic.

Last edited by johnmeyer; 17th June 2010 at 17:33. Reason: Added cfile information; Later edit to remove AssumeFieldBased()
Old 16th June 2010, 23:56   #14  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
Relying on YDifferenceToPrev/Next will get you pretty much nowhere. (At least not for an "automated" solution ... and spending hours or days on manual analysis of spreadsheet data is not what you usually want to do.) The values for "typical" differences are both scene-dependent and motion-dependent, i.e. that value will change all the time. There's no way to catch that with one single arbitrary threshold. You really need to examine the differences throughout a temporal window, and search for the one difference that is bigger than the rest.
Once you've figured out the positions of the missing frames, the rest is rather trivial. Via script: double the framerate by frame doubling, put interpolations in the figured positions, decimate 47.952 -> 29.97, finished.

Last edited by Didée; 17th June 2010 at 00:01.
Old 17th June 2010, 00:36   #15  |  Link
Gavino
Avisynth language lover
 
Join Date: Dec 2007
Location: Spain
Posts: 3,431
@johnmeyer
Perhaps I have misunderstood, but shouldn't all your functions be using motion vectors with delta=2 rather than delta=1 if you are trying to replace a duplicate frame by an interpolation between its neighbours?

Also, AssumeFieldBased() should not normally be used after Weave(), as this describes a field-separated clip.

Last edited by Gavino; 17th June 2010 at 00:41.
Old 17th June 2010, 02:57   #16  |  Link
foxyshadis
ангел смерти
 
 
Join Date: Nov 2004
Location: Lost
Posts: 9,556
delta=1 means the two nearest neighbors. delta=2 is the next closest, i.e. 2 back & 2 forward.
Old 17th June 2010, 03:03   #17  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,685
Quote:
Relying on YDifferenceToPrev/Next will get you pretty much nowhere. (At least not for an "automated" solution ... and spending hours or days on manual analysis of spreadsheet data is not what you usually want to do.) The values for "typical" differences are both scene-dependent and motion-dependent, i.e. that value will change all the time.
You are correct that the values change all the time. However, in a local setting -- that is within a few frames on either side of the current frame -- they tend to not vary much unless there is a discontinuity of the type we are talking about. Thus, to make my idea work, you have to compile a moving average and then compare the current frame vectors with the changes between the last few frames and the changes in the next few frames. This is what I did with the spreadsheets I developed to analyze the cfile.txt vectors. This will, in my experience, generally give you a correct result. And, as I stated in my last post, if your script creates a few interpolated frames that aren't needed, all you will do is replace an original frame with an interpolated one. If you do everything correctly, this frame should be spatially and temporally very similar to the original frame. After all, the point of this is to create frames to replace those that are missing, and if those don't look pretty close to the missing original, then the whole "exercise" is pointless.

So, if the function isn't 100% perfect and a few extra "good" frames get replaced with interpolated frames, it shouldn't make much difference, especially compared to the horrible stutter that you generally get with what the OP described.

Quote:
Perhaps I have misunderstood, but shouldn't all your functions be using motion vectors with delta=2 rather than delta=1 if you are trying to replace a duplicate frame by an interpolation between its neighbours?
Well, I always defer here to people who know more than I do. However, this filldrop function was originally created (not by me) for the situation where a capture card drops a frame and, instead of eliminating that frame and screwing up the audio sync, the card duplicates the previous frame. Thus, if you look at the second of the two duplicate frames, the frame before (which is a duplicate) is the correct frame for that time slot. The frame after is also correct. Therefore, you simply want to interpolate a new frame at the current location using the information from the two adjacent frames. Perhaps I don't understand correctly what the "delta" parameter does, but I thought it specified which frame to look at as the reference for MAnalyze, with delta=1 and isb=true looking at the last frame and delta=1 and isb=false looking at the next frame. If I use delta=2, I thought that meant the algorithm would look two frames ahead and two frames behind and therefore would not create the current frame as accurately. I thought the point of being able to look several frames away is to accumulate a more accurate estimate by averaging the vectors from the one frame away estimate with those from two frames away, etc, using a lesser weight for the estimates derived from frames which are further away.

But like I said, perhaps I don't understand correctly. I can tell you, for absolute certain, that the function I posted works fantastically well at replacing the second duplicate with a near-perfect motion-estimated synthesized frame. I have used it many, many times on dozens of hours of video (long story how I came into possession of so much lousy video ...)

Quote:
Also, AssumeFieldBased() should not normally be used after Weave(), as this describes a field-separated clip.
Darn, I think you may have pointed this out in a post I made six months ago. I corrected that code, but somehow this snuck back in.

This is a mistake.

What is the proper etiquette on this board? Should I edit my original post to correct the mistake? Or, is this discussion in this post sufficient?
Old 17th June 2010, 16:09   #18  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
A 4-year-old script, with some minor adjustments from today.

Code:
# mt_masktools-25.dll
# mvtools2.dll
# tivtc.dll

Source( "VIDEO_that_has_been_decimated_to_FILM" )

o  = assumefps(1.0)
ox = o.width()
oy = o.height()


showdot = true # false #   shows a "dot" in the interpolated frames of the result


super = showdot ? o.subtitle(".").MSuper(pel=2) : o.MSuper(pel=2) 
bvec  = MAnalyse(super, overlap=4, isb = true, search=4, dct=5) 
fvec  = MAnalyse(super, overlap=4, isb = false, search=4, dct=5) 
double = o.MFlowFps(super, bvec, fvec, num=2, den=1, blend=false)


diff2next = mt_makediff(o,o.selectevery(1,1)).mt_lut("x 128 - abs 32 / 1 2.0 / ^ 128 *",U=-128,V=-128)
diff2next = mt_lutf(diff2next,diff2next,yexpr="x",mode="average").pointresize(32,32)
diff2next = interleave(diff2next.selectevery(4,0).tsg(2),diff2next.selectevery(4,1).tsg(2),
 \                     diff2next.selectevery(4,2).tsg(2),diff2next.selectevery(4,3).tsg(2))

max = diff2next.mt_logic(diff2next.selectevery(1,-3),"max")
 \             .mt_logic(diff2next.selectevery(1,-2),"max")
 \             .mt_logic(diff2next.selectevery(1,-1),"max")
 \             .mt_logic(diff2next.selectevery(1, 1),"max")
 \             .mt_logic(diff2next.selectevery(1, 2),"max")
 \             .mt_logic(diff2next.selectevery(1, 3),"max")
ismax = mt_lutxy(diff2next,max,"x y < 0 255 ?",U=-128,V=-128).pointresize(ox,oy)
themask = interleave(o.mt_lut("0"),ismax)
interleave(o,o).mt_merge(double,themask,luma=true,U=3,V=3)

tdecimate(mode=1,cycleR=3,cycle=8)
assumefps(30000,1001)

return( last )

#===========================

function tsg(clip c, int t) { c
t<5?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
t<4?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
t<3?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
t<2?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25)
t<1?last:last.temporalsoften(1,255,0,255,2).merge(last,0.25) }
Worx.
Old 17th June 2010, 18:43   #19  |  Link
Gavino
Avisynth language lover
 
Join Date: Dec 2007
Location: Spain
Posts: 3,431
Quote:
Originally Posted by foxyshadis
delta=1 means the two nearest neighbors. delta=2 is the next closest, ie, 2 back & 2 forward.
That's true as far as the vectors are concerned, but read on...
Quote:
Originally Posted by johnmeyer
... you simply want to interpolate a new frame at the current location using the information from the two adjacent frames. Perhaps I don't understand correctly what the "delta" parameter does, but I thought it specified which frame to look at as the reference for MAnalyze, with delta=1 and isb=true looking at the last frame and delta=1 and isb=false looking at the next frame. If I use delta=2, I thought that meant the algorithm would look two frames ahead and two frames behind and therefore would not create the current frame as accurately.
Per MVTools doc, MFlowInter interpolates "between current and next (by delta) frame".
More precisely, what this means is that frame n of MFlowInter is an interpolation between input frames n and n+delta (using forward vector n->n+delta and backward vector n+delta->n).
So in general, if you want to replace a 'bad' frame n by interpolating between its immediate neighbours n-1 and n+1, you need to use delta=2 (and take the result from frame n-1 of MFlowInter).
See the example in the MVTools doc, "recreate bad frames by interpolation with MFlowInter".
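In script form, that amounts to something like this sketch (frame n is the bad frame; variable names are illustrative):

Code:
# Replace bad frame n by interpolating between its neighbours (delta=2):
sup   = c.MSuper(pel=2)
vf    = MAnalyse(sup, isb=false, delta=2)   # forward vectors n -> n+2
vb    = MAnalyse(sup, isb=true,  delta=2)   # backward vectors n+2 -> n
inter = c.MFlowInter(sup, vb, vf, time=50)
# frame n-1 of "inter" is the 50% point between frames n-1 and n+1,
# i.e. the replacement for bad frame n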
Quote:
I can tell you, for absolute certain, that the function I posted works fantastically well at replacing the second duplicate with a near-perfect motion-estimated synthesized frame.
You're right, the function certainly does work, and now I see why.
What happens is that if a duplicate is found at frame n, it interpolates between n and n+1, since delta=1.
At first sight, this is wrong, but since frame n is a duplicate of frame n-1, it is equivalent to the desired interpolation between n-1 and n+1, so the final result comes out OK. (Whether by accident or by design, I'm not sure. )
Quote:
Darn, I think you may have pointed this out in a post I made six months ago. I corrected that code, but somehow this snuck back in.
I'd forgotten about that myself. At least I'm consistent in my advice.
Quote:
What is the proper etiquette on this board? Should I edit my original post to correct the mistake? Or, is this discussion in this post sufficient?
It varies. If it's a function that someone else might find later on and want to use, I think it's better to correct the original, with a note saying it's been changed (so that the subsequent discussion still makes sense).

Last edited by Gavino; 17th June 2010 at 18:48.
Old 17th June 2010, 23:29   #20  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,685
Quote:
What happens is that if a duplicate is found at frame n, it interpolates between n and n+1, since delta=1.
At first sight, this is wrong, but since frame n is a duplicate of frame n-1, it is equivalent to the desired interpolation between n-1 and n+1, so the final result comes out OK. (Whether by accident or by design, I'm not sure. )
Thank you VERY much for taking the time to challenge my thinking. This has been very useful for me because it forced me to go back and look at several of my scripts that use this "filldrops" script code to replace bad frames. I also researched every post on this forum that uses the "mflowinter" call.

Unfortunately, this created a new source of confusion, and now makes me think that both the filldrops function and even the example given in the MVTools2 documentation are incorrect.

See if you agree.

Here is my thinking: MAnalyze is used to create motion vectors between the current frame and adjacent frames. However, if the current frame is totally screwed up (blank, duplicate, full of noise, etc.) then won't MAnalyze create vectors that are of absolutely no value, regardless of what number I assign to "delta?" In other words, if I have one frame that is completely, totally different than the frame before and the frame after, then motion vectors which use the pixels in this current, bad, reference frame will have no useful relationship to anything.

Thus, it would seem to me that even the example in the documentation is wrong, and that to make this work, I must actually use, as the input to MSuper and MFlowInter, the good frame immediately before the bad frame and generate all vectors with reference to that frame. I should then use delta=1 for the backward vector, and delta=2 for the forward vector (to skip over the bad frame). Finally, the call to MFlowinter should specify time=66.67.

Here is an attempt to explain the problem with a diagram:

F1 F2 F3 F4

F3 is the bad frame. What follows is pseudo code of what I think is actually the correct way to interpolate to replace a truly bad (garbage) frame:

Code:
super=MSuper(F2,pel=2)
vfe=manalyse(super,truemotion=true,isb=false,delta=2)
vbe=manalyse(super,truemotion=true,isb=true,delta=1)
F3 = mflowinter(F2,super,vbe,vfe,time=66.67)
So, what I am trying to understand is how the bad frame "F3" can be used to generate useful vectors and, if it cannot, how the demonstration code given in the documentation can be correct.

Now that I understand this, I can see how the filldrops code I quoted above (I think "MugFunky" invented it) does generate satisfactory results even though it actually isn't correct. It sort of works because the "bad frame" is a duplicate of a perfectly good frame, and therefore the backward vector created between it and a duplicate of itself will still be valid, and the vector created between it and the next frame will be valid. In other words, because the previous frame is a duplicate, the backward vectors will not contribute anything useful to the interpolation, but they won't cause the thing to blow up. Therefore, I get an interpolated frame that is useful, but which I now suspect is not as good as it might be.

So, I now believe that the original documentation and also the original filldrops code are both incorrect, and that the proper approach is the one I discovered when I had to replace the first of the two frames rather than the second. If you go back to my initial post in this thread, that is the code in the third code block that uses the Loop function to return the previous frame to the MSuper and MFlowInter function.

Last edited by johnmeyer; 17th June 2010 at 23:34. Reason: Corrected minor, but misleading, typo immediately after posting