17th October 2005, 15:16   #1
foxyshadis
Double-blend IVTC Removal

MOmonster's FixBlendIVTC 0.9, part of R_pack, has definitely surpassed my function in speed, reliability, quality, and tweakability, so I advise you all to grab a copy of it and let my old version rot. And it now includes a readme!

The same caveat applies: Any form of pre-processing (other than a simple blur or resize) will disrupt the blend detection.

Requires masktools 2.0a22+, Average, RemoveGrain, and Decomb (for decimating).


-------------------

Old News


v0.4, now with automatic pattern-matching! MOmonster and mg262 contributed greatly to helping me achieve that.

Uses difference masks and edge masks to improve unghosting. The less filtering you use beforehand, the better detection and reversal work. Unblend/removeblend is faster and about as accurate with single blends, especially those without a pattern.

Requires masktools and Average. Note that MOmonster is actively developing a higher-quality and faster version; I'll probably post it here when he finishes.

Code:
function fx_unblendTC(clip input, bool "nopp") {
nopp = default(nopp,false)
## Non-volatile variables ##
global source = input
global greymain = source.levels( 16, 1, 200, 128, 128, coring = false)
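# 'test' below is a weighted residual of four consecutive frames, offset at grey:
# roughly 128 + 0.25*f(n) - 0.5*f(n+1) + 0.5*f(n+2) - 0.25*f(n+3), downscaled and
# cubed so it deviates strongly from flat grey wherever the blend pattern lines up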
global test = Average(greymain, 1, source,0.25, source.trim(1,0),-0.5, source.trim(2,0),0.5, \
  source.trim(3,0),-0.25).reduceby2().YV12Lut("x 128 - 3 ^ 4 / 128 +")
global grey = greymain.reduceby2
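# shifted copies of the residual so the runtime code below can sample it at
# offsets -5..+5 around the current frame (BlankClip padding keeps frames aligned)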
global framez5 = blankclip(test,length=5) + test
global framez4 = blankclip(test,length=4) + test
global framez3 = blankclip(test,length=3) + test
global framez2 = blankclip(test,length=2) + test
global framez1 = blankclip(test,length=1) + test
global frame0 = test
global frame1 = test.trim(1,0) + blankclip(test,length=1)
global frame2 = test.trim(2,0) + blankclip(test,length=2)
global frame3 = test.trim(3,0) + blankclip(test,length=3)
global frame4 = test.trim(4,0) + blankclip(test,length=4)
global frame5 = test.trim(5,0) + blankclip(test,length=5)

# frames to use for output
sframe3 = input.duplicateframe(0)
sframe4 = input
sframe5 = input.trim(1,0)
sframe6 = input.trim(2,0)

# unblend output and various post-processing measures
unblend1 = Average(sframe3, -1.0, sframe4, 2.0)
unblend2 = Average(sframe5, 2.0, sframe6, -1.0)
edgemerge = fx_unblend_edges(unblend1,unblend2)
unblend1 = edgemerge.selectevery(2,0)
unblend2 = edgemerge.selectevery(2,1)
diffframe1 = overlay(sframe4, sframe3, mode="difference") \
  .levels(120,1,192,0,255,coring=false).binarize(threshold=30,upper=true).blur(.5)
diffframe2 = overlay(sframe5, sframe6, mode="difference") \
  .levels(120,1,192,0,255,coring=false).binarize(threshold=30,upper=true).blur(.5)
unblend1 = MaskedMerge(unblend1,sframe6,diffframe2)
unblend2 = MaskedMerge(unblend2,sframe3,diffframe1)

# one-step deblend, use for speed
global output = Average(sframe3, -0.5, sframe4, 1.0, sframe5, 1.0, sframe6, -0.5)

global unblended = nopp ? output : Average(unblend1, .5, unblend2, .5)

## Volatile variables ##
global isblend = false
global isblend1 = false
global isblend2 = false
global isblend3 = false

## Conditional Chain ##

d99=scriptclip(source, """(isblend1 == true ? unblended : \
		(isblend2 == true ? unblended.duplicateframe(0) : source))""")

	# new pattern must start minimum of 4 from last.
d5=FrameEvaluate(d99, "isblend = (!isblend1 && !isblend2 && !isblend3) && \ 
		(residue / div < .2)")
	# closer the pattern match (max 8) the higher the allowed threshold
d4=FrameEvaluate(d5, "global div = frametesta + frametestz >= 7.5 ? 500 : \
		(frametesta + frametestz >= 7 ? 5 : (frametesta + frametestz >= 6 ? 3 : \
		(frametesta + frametestz >= 4 ? 1 : .01)))")
	# assign weights to pattern positions
d2=FrameEvaluate(d4, "  global frametestz = (( min(diffz1,diffz2) > residue ?2:-4 ) + \
		( diffz3 > residue ?.5:-.5 ) + ( diffz4 > residue ?.5:0 ) + \
		( diffz5 < min(diffz1,diffz2,diffz3,diffz4) ?1:0 ))
			global frametesta = (( min(diff1,diff2)   > residue ?2:-4 ) + \
		( diff3  > residue ?.5:-.5 ) + ( diff4  > residue ?.5:0 ) + \
		( diff5 < min(diff1,diff2,diff3,diff4) ?1:0 ))")
	# set up differences
d1=FrameEvaluate(d2, "global residue = LumaDifference(frame0,grey)
			global diffz1 = LumaDifference(framez1,grey)
			global diffz2 = LumaDifference(framez2,grey)
			global diffz3 = LumaDifference(framez3,grey)
			global diffz4 = LumaDifference(framez4,grey)
			global diffz5 = LumaDifference(framez5,grey)
			global diff1  = LumaDifference(frame1,grey)
			global diff2  = LumaDifference(frame2,grey)
			global diff3  = LumaDifference(frame3,grey)
			global diff4  = LumaDifference(frame4,grey)
			global diff5  = LumaDifference(frame5,grey)
			isblend3 = isblend2
			isblend2 = isblend1
			isblend1 = isblend")
return(d1) 
}

function fx_unblend_edges(clip sup1, clip sup2) {
  edgemask1 = sup1.edgemask(thY1=2,thy2=2,thC1=4,thC2=4,type="special").levels(96,1,160,0,255,coring=false)
  edgemask2 = sup2.edgemask(thY1=2,thy2=2,thC1=4,thC2=4,type="special").levels(96,1,160,0,255,coring=false)
  diffedgemask1 = overlay(edgemask1, edgemask2, mode="subtract").inflate().blur(.7)
  diffedgemask2 = overlay(edgemask2, edgemask1, mode="subtract").inflate().blur(.7)
  edgemerge1 = MaskedMerge(sup1,sup2,diffedgemask1)
  edgemerge2 = MaskedMerge(sup2,sup1,diffedgemask2)
  return interleave(edgemerge1,edgemerge2)
}
Because the deblending is refined with a difference mask, the stronger the luma difference between the blended frames, the fewer artifacts will show in the output.

A few clips for your perusal below (XviD q4), each in a blended and an unblended version: samples 2, 3, 4, 5, and 6.

Usage is just fx_unblendTC(). If there are long stretches with no blends, leaving it on shouldn't hurt, but it will slow things down. It typically won't catch the very first and last blends of a clip, so a few still show in the samples even though they were caught fine when I ran the full video.

After reversal, use TDecimate or FDecimate as required. Random access is unreliable, so make sure every frame is touched in order (TDecimate mode 5 won't do that). For now it is recommended to write the output to a lossless temp file and work from there. (Inline FDecimate worked fine for samples 4-6 but messed up the patterns for 2-3.)
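A minimal sketch of that two-pass flow (plugin filenames, the source name, and the decimation parameters are placeholders, not a tested setup):

Code:
# pass 1: deblend and write to a lossless intermediate
LoadPlugin("masktools.dll")
LoadPlugin("Average.dll")
Import("fx_unblendTC.avs")     # the function from this post
Mpeg2Source("source.d2v")
fx_unblendTC()
# encode this losslessly (e.g. HuffYUV), then decimate in a second script so
# every frame is touched in order:
#   AviSource("deblended.avi").TDecimate(cycle=5, cycleR=1)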

v0.2 - I realized I was using subtract and it was throwing off some calculations. Also added static-area detection; much better on static parts (both forward and backward). Removed unnecessary conditional filters.
v0.3 - yv12subtract for full reversals, a few additional options.
v0.4 - rewritten as conditional, automated pattern detection.

18th October 2005, 08:18   #2
Didée
With all respect for the work you put into this ... when I tried it on one source with "blend-IVTC" that I have lying around, your function acted as a perfect NOP. Not one single pixel was changed. Input == output, bit-identical.

Are you sure everything's okay with that function? Perhaps some hardcoded values should be parameterized?
18th October 2005, 09:24   #3
foxyshadis
I'm not actually sure. The first frame number does have to be the start of a 5-cycle of 3 clean frames and 2 blends - even if that falls across a scene change, it won't hurt. Can I see a piece of your clip? Having more than the three or four samples I have (which are mostly very similar in their blending patterns) would be really helpful, although the effect is definitely there on the ones I tried.

I'll probably go back in with a much wider test body for the next iteration; I've had a few ideas percolating.

And don't worry about my feelings, it's just a personal quest for perfection that I hope might be of use to others someday as well. =p
18th October 2005, 11:07   #4
MOmonster
I couldn't test the function until now. I think it is only for NTSC footage (right?). But even if Didée tested a PAL clip, I think something should happen. It looks a bit complex to me, but in return it avoids the evaluate environment. How well does it cope with edits?

EDIT:
I'm a bit confused. I have seen PAL-to-NTSC blend conversions (cycle=6) and of course NTSC-to-PAL blend conversions (cycle=25), but you work on a cycle of 5?
Can anybody explain to me what is meant by blend-IVTC (or better, blend-TC)?
18th October 2005, 13:05   #5
foxyshadis
Telecined footage, but instead of being IVTC'd like any sane method would, it was frame-blended to stay 30p without being wildly interlaced every few frames. That's what my sources are. If PAL->NTSC clips are much different, I'd have to see a few. (Just changing the cycle to 6 would be simple, though.)

I'm afraid it wouldn't be any good with blends throughout a whole film; it's meant to use the clean frames around the blends to wipe away blend artifacts. I'm still working on different algorithms in the hunt for something better. Maybe I should crib from some of you guys' really complex scripts.

Edit: Oh geez, I'm such a huge idiot; I need to redo a few things. Give me a bit to sort things out.

Also, I hate it when inspiration strikes at 4 in the morning.

Edit2: Okay, I modified it to use difference rather than subtract. Also got my static-area inspiration working.

I suppose my examples aren't great. What I do is:
Code:
x = mpeg2source("file")
x = x.fx_unblend(316,343)
return x

This returns the clip at 30fps with the unblended frame duplicated. Once I have the ranges I want, I run TDecimate on it. But I included a 24fps option in case someone wants it. Once I get it more thorough and more efficient, I'm going to add parameters for noise thresholds too; right now it's tuned for really noisy sources, but it shouldn't hurt clean ones much. Cycle shouldn't be hard, I just need to think about how to apply it with SelectEvery/Interleave.

Edit3: Changed the thread title to be clearer as to its purpose.

19th October 2005, 11:48   #6
MOmonster
Yes, this is what I also thought yesterday evening. Here I can only play the file with VLC (in slow motion), because it's not my PC, but of course that doesn't give me much information.
If I understand it right, this is the pattern:
example1: A AB B BC C D DE E EF G
And we know the blend weights are all 50:50, because it is the result of deinterlacing. The blends don't seem hard to catch (good weight and clear structures, because it's an anime), so Cdeblend or Restore24 should also be able to kill most of the blends.
But if I'm right about the blend weight, why not just use simple differencing? If we have the example pattern (example1), then the difference between B and BC (B->BC) should be really similar to the difference BC->C, and:
D->DE ~ DE->E
E->EF ~ EF->F
...
And in general the difference C->D will be significantly bigger than the differences around it. So if:
C->D * thresh > BC->C (and then of course also B->BC) &&
C->D * thresh > D->DE (and then of course also DE->E)
with a thresh around 0.7 or 0.75, we know the pattern. It's the only difference that can be significantly bigger than the differences around it, because all the other differences are really similar to each other.
If C->D isn't the biggest difference, for example because of a scene change, then we just keep the last pattern until the next clear decision.

That's the way I would solve the problem. Of course this can fail and may be worse than your way, but I like to keep it a bit easier and faster.
If I was wrong about the blend weight or the pattern, please tell me. I have never seen such sources before.
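A minimal runtime sketch of that test (my own names, AviSynth's YDifference functions, and a hardcoded thresh of 0.75 -- this is not MOmonster's actual function):

Code:
# at a true pattern break C->D, the difference ending at the current frame
# dominates both the difference one step earlier and the difference one step later
function blendbreak_sketch(clip c) {
  return c.ScriptClip("""
    dCD  = YDifferenceToPrevious(last)                    # C->D, ending here
    dBCC = YDifferenceToPrevious(last.DuplicateFrame(0))  # BC->C, one step earlier
    dDDE = YDifferenceToNext(last)                        # D->DE, one step later
    isbreak = (dCD * 0.75 > dBCC) && (dCD * 0.75 > dDDE)
    Subtitle(isbreak ? "pattern break" : "inside blend run")
  """)
}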
19th October 2005, 13:45   #7
foxyshadis
If it was that easy, I'd never have started this. Sorry, I'm not very good at explaining; I guess I'd hoped this was more common, but it doesn't seem that it is.

The pattern is A B C CD DE | E F G GH HI | I... where my filter returns A B C D D | E F... So for every group of five frames to be reduced to four, the last one has to be extracted from the frames around it somehow. I've seen Premiere barf up something like this before, and it came up again while working on a set of videos someone would like me to restore from low-quality MPEGs, so I made this up.
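In the 50:50 case that extraction is plain arithmetic: with CD = 0.5C + 0.5D and DE = 0.5D + 0.5E, summing the blends and subtracting half of each clean neighbour gives D = CD + DE - 0.5C - 0.5E, the same weights as the one-step deblend inside fx_unblendTC. A sketch with the Average plug-in (source line and trim offsets are placeholders):

Code:
src = Mpeg2Source("source.d2v").trim(2, 0)  # placeholder: trimmed so frame 0 is C
C  = src                                    # clean frame before the blend pair
CD = src.trim(1, 0)                         # first 50:50 blend
DE = src.trim(2, 0)                         # second 50:50 blend
E  = src.trim(3, 0)                         # clean frame after the blend pair
return Average(C, -0.5, CD, 1.0, DE, 1.0, E, -0.5)  # recovered D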

I know one other person posted recently with a problem like this, so I put it out there in case it was a more widespread problem, to see if there was any demand for developing it beyond my immediate need. It is rather a special case, I know.

I did spend some time before I started experimenting with crestore, Restore24, and RestoreFPS, and I got the best results out of the last, but not great ones. I didn't want to clutter up other threads with my bizarre blending patterns.

Edit: Wow, I feel tremendously stupid having done all that just to figure out it could be boiled down to several add/subtract statements. ;_; It'd be even simpler if Overlay would return negative differences as well as positive ones.

Code:
# experimental
subframe1 = overlay(frame4, sframe3, mode="subtract")
subframe1swap = overlay(sframe3, frame4, mode="subtract")
subframe2 = overlay(frame5, sframe6, mode="subtract")
subframe2swap = overlay(sframe6, frame5, mode="subtract")
frame1add = overlay(frame4, subframe1, mode="add").overlay(subframe1swap, mode="subtract")
frame2add = overlay(frame5, subframe2, mode="add").overlay(subframe2swap, mode="subtract")

# finally, merge. naive will be 50% orig, 25% each blend, mine hopefully rather better.
deblended = merge(frame1add, frame2add)

The output could still be improved quite a bit, maybe with some of the other stuff I wrote, but... sigh. What a way to waste a weekend. I'll chalk it up to a learning experience and rusty math.

Edit2: (I'm editing all over this thread.) Actually, a straight subtract brings out some horrific MPEG artifacting, since it doesn't play nicely with MPEG's psychovisual model. Heavy pre-filtering helps, but not really enough. Wow. I'll have to ponder this new wrench.

20th October 2005, 09:50   #8
MOmonster
OK, I finally get it. Now your function is much clearer to me. Maybe in a week I can have a closer look at the source and your script, but for now I have no suggestions for you.
Keep up the good work.

Edit: Is it right that the blend weight is 50:50 for all blends?

20th October 2005, 10:04   #9
mg262
Quote:
It'd be even simpler if overlay would return negative differences as well as positive.
I haven't tested it in this incarnation, but the Average plug-in I recently released should be able to cope with negative weights, if it is any use...

http://forum.doom9.org/showthread.ph...hlight=Average

(In fact the code was just pulled straight out of the experimental field-blending reversal plugin, which computes plenty of weighted averages to reverse blending. All the maths in it is there to figure out the correct add/subtract weights for arbitrary patterns... and to detect and cope with pattern breaks. RestoreFPS was hardwired to reverse a specific blending pattern, which is why it couldn't deal with your case that well...)

20th October 2005, 10:20   #10
MOmonster
@mg262
So your Average filter also takes negative weights. That would of course make it much more effective.
Is this possible? (I can't test it here)
Code:
Average(alpha, -1.0, beta, 1.0, gamma, 1.0)

20th October 2005, 10:40   #11
mg262
Yes. If you use weights which are (IIRC) < -2 or >= 2, it will use C instead of assembly, with a corresponding slowdown -- but I don't think there are any other limitations. It does exactly what it says, namely "average" = sum with given weights, so if you use it to subtract a clip from itself you will get 0 in both luma and chroma (not 128)... obviously you can get around this by adding an appropriate BlankClip as an extra argument. (In the case of fieldblend reversal, I don't think this should ever be an issue.)
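For example, a sketch of that BlankClip workaround (clip names and the color_yuv mid-grey are my assumptions):

Code:
alpha = Mpeg2Source("source.d2v")            # placeholder clips
beta  = alpha.trim(1, 0)
grey  = BlankClip(alpha, color_yuv=$808080)  # flat 128 luma/chroma
# signed difference alpha - beta, re-centred at mid-grey instead of clipping at 0
return Average(alpha, 1.0, beta, -1.0, grey, 1.0)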

I've just run a brief test and everything seemed to be fine, but if you have any problems, poke me...

foxyshadis, you can have your thread back now
21st October 2005, 16:13   #12
MOmonster
OK, since I want to learn too, I also created a small function for this problem:
Code:
find last version on page 3
The function is fast and easy, but there are three problems: two because of Average (stability and a green right edge), and the third is scene changes. The function doesn't work on the first five frames and can have problems at scene changes.
If it is not stable enough on your PC, I can also use Lutxy instead of Average (but that wouldn't be as amazingly fast).

Edit: mg262 has fixed the bug. Use the newest Average version and everything runs fine.

22nd October 2005, 03:14   #13
foxyshadis
I actually started on a rewrite using yv12subtract; it was much simpler than the overlay mess, but I think I hit a bug in AviSynth's caching. (It's weird: if I returned, side by side, the differences behind and in front of the blend frames, it worked like it should, but if I showed the output in the left pane, the right pane changed! It became a comparison with two frames forward instead of the next one, and it seems to cause the output to mess up. And it happens so rarely but consistently that I can't make a testcase without saying "here's my function and here's the frames it screws up on"; even minor function tweaks change where it shows up.)

I'm going to give yours a shot, it looks promising. I really need to get more comfortable in conditional environments; I mostly get a lot of "I don't know what 'blah' means" even when I work from the docs' examples.
22nd October 2005, 07:25   #14
MOmonster
I changed the code a little bit for you.
Because your sample is an anime source, maybe there are 12fps scenes. For those I changed the pattern conditions a bit and added an alternative code path, because 12fps parts contain only one blend and of course we can't use the same method there. I'm not sure if "Cowboy Bebop" has 12fps sequences, because your sample is not typical of that, but many other sources still have a lot of 12fps sequences.
22nd October 2005, 08:15   #15
foxyshadis
It has 12 and 24 fps sequences, so thanks for considering that. (Although with a single blend the recovered frame is obviously the same as the frame where the other blend would have been; you just have to figure out which is which. In testing mine I noticed that if you chose the wrong offset it would be very obvious, even though in a single-blend pattern it should have been roughly equivalent whichever way you went.)
22nd October 2005, 12:05   #16
MOmonster
So the pattern of your function is really constant? My function tries to find the pattern with LumaDifference; if it is not clear, the function decimates the same frame as in the last pattern, but it doesn't try to restore a clean frame, because it could use the wrong frames and that would look worse than the source.
22nd October 2005, 23:40   #17
foxyshadis
I should add some more samples; that way you'll have more to test against if you want. The pattern changes at every scene change and occasionally within a scene (I manually shifted it in those cases).

2, 3, 4, 5, 6

(They should be uploaded within 10 minutes, all 3-5 MB and about a hundred frames each.)

I'm thinking that with less clean sources a squared difference would be better than plain LumaDifference, but I'm not sure how to do that in pure AVS script. (mg, since you're so awesome at whipping up quick filters, would it be possible to make an SSD/SATD/whatever filter with your RestoreFPS code? Unless one already exists that I don't know about.)
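One possible pure-script approximation, squaring the per-pixel difference with masktools' YV12LutXY and reading the mean back with AverageLuma (the /16 scaling to stay inside 8 bits, and all names, are my assumptions):

Code:
function sqdiff_sketch(clip cur) {
  nxt = cur.trim(1, 0)
  # (x-y)^2 / 16, clipped to 0..255 -- crude, but detection only needs the ranking
  sq = YV12LutXY(cur, nxt, "x y - x y - * 16 /")
  # show the per-frame mean; threshold on this instead of plain LumaDifference
  return ScriptClip(sq, "Subtitle(String(AverageLuma(last)))")
}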

23rd October 2005, 08:55   #18
mg262
More than happy to! (But I'm out all day today, so you will have to wait a little -- sorry about that.) I exposed my blend detection method for use yesterday (check the RestoreFPS thread)... that might be what you need? Otherwise I can easily expose the SSD as well.
23rd October 2005, 11:13   #19
foxyshadis
Wow, that's hot! I hadn't seen your edits in the other thread. I'll test it out right now.

(I think Y/U/VDifferenceToNextSquared functions might be useful to other authors, but I don't know how much. I have no idea how useful SATD would be, but that's MVTools territory.)
23rd October 2005, 20:56   #20
mg262
So, for anyone else reading this thread, foxyshadis and I sent a couple of PMs back and forth... the upshot being that BlendAngle is not meant for consecutive blends, although it seems not to be as bad as I thought. I also promised to give the error measure I would use for this particular problem (if you don't want to wade through the maths, skip to the Conclusion)...

We start with this model for how our blended frames A,B,C,D are produced:
A=x
B=0.5x+0.5y
C=0.5y+0.5z
D=z

And we are trying to recover the original unblended pictures x, y, z. There are two methods you can use: a simple one where you just use x=A and z=D, and a more complicated one where you try to use the information in B to give a better guess for x. I'm going to stick with the simple method.

The least squares method I use in RestoreFPS gives the following estimate y' for y:
y'=-0.5A + B + C -0.5D
also as noted
x'=A
z'=D
(And MOmonster has independently come up with the same thing.)

Now, we can plug these guesses for the original frames back into our model to see what they would have actually given when blended.

B' = 0.5x'+0.5y' = 0.5A + 0.5(-0.5A + B + C -0.5D) = 0.25A + 0.5B + 0.5C - 0.25D
C' = 0.5y'+0.5z' = 0.5(-0.5A + B + C -0.5D) + 0.5D = -0.25A + 0.5B + 0.5C + 0.25D

Now, if the original model was sensible, B' and C' should be very similar to B and C respectively. So the sum squared differences between B and B', and between C and C', should be small. I.e., the following two quantities should be small:

(B'-B)^2 = (0.25A - 0.5B + 0.5C - 0.25D)^2
and
(C'-C)^2 = (-0.25A + 0.5B - 0.5C + 0.25D)^2

These are the same quantity... so to determine whether the model is a good one (i.e. the blends lie in this particular pattern), we compute (0.25A - 0.5B + 0.5C - 0.25D)^2 and see if it is small -- i.e. we compute (0.25A - 0.5B + 0.5C - 0.25D) for each pixel, square the results, and add them up.

Note that we can multiply this quantity, (0.25A - 0.5B + 0.5C - 0.25D), by any constant we choose (it just affects the blend-detection threshold). So we could use A-2B+2C-D... but the original (or a version with even smaller coefficients) may be preferable to prevent overflow.

Probably the easiest way to implement this is to use Average or some other plug-in to compute (0.25A - 0.5B + 0.5C - 0.25D) (offset at 128), and I will throw together a plug-in that subtracts 128 from each pixel, squares, and adds together the results to provide a quantity you can threshold on.

I try to stay away from the actual equations where possible... I hope that wasn't too unreadable. In any case,

Conclusion:
To check whether you have this pattern:
Code:
Frame n  =A=x
Frame n+1=B=0.5x+0.5y
Frame n+2=C=0.5y+0.5z
Frame n+3=D=z
Compute (0.25A - 0.5B + 0.5C - 0.25D), and see if the average squared value of this is small. [You can try using other averages, like LumaDifference.]
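A rough sketch of that residual with the Average plug-in (clip names, the source line, and the /16 scaling are assumptions; the squaring/summing plug-in mentioned above didn't exist yet at this point):

Code:
Mpeg2Source("source.d2v")                  # placeholder source
A = last
B = last.trim(1, 0)
C = last.trim(2, 0)
D = last.trim(3, 0)
grey  = BlankClip(A, color_yuv=$808080)    # offset so the residual stays viewable
resid = Average(A, 0.25, B, -0.5, C, 0.5, D, -0.25, grey, 1.0)
# square the deviation from 128 in-script (masktools RPN); AverageLuma of the
# result at runtime is then a crude per-pixel SSD to threshold on
sq = resid.YV12Lut("x 128 - x 128 - * 16 /")
return sq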

Edit: the "complicated" estimate of x is 0.9A+0.2B-0.2C+0.1D (and z similarly). But calculating something like this may magnify the errors due to MPEG encoding... so be careful if you try it.
