Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
17th February 2008, 14:28 | #1 | Link | |
Registered User
Join Date: Jun 2004
Location: Chicago
Posts: 19
Another frame duplication function
Edit: Other members have since fixed some problems with this script. Check out their contributions here: http://forum.doom9.org/showthread.ph...95#post1589995
--------------

I posted about this at animemusicvideos.org, but the only person I really know who frequents that place from Doom9 is Zarxrax... I'm not sure how useful it really is to AMV editors, but perhaps someone here may find a better use for it.

This function acts much like Donald Graft's dup() in that it attempts to find frames that are exactly the same as the last frame that changed (a "new frame", so to speak) and then overwrite them with that new frame. In other words, it works like this (assume capital letters represent frames with actual changes):

AbcdeFGhiJkLmnop

This filter will produce:

AAAAAFGGGJJLLLLL

When the adjusted video is played back it should look exactly the same as the original. This works because most anime does not have new movement in every single frame. What this ultimately means is that when the filtered footage is sent to the encoder, it is much easier to compress because a lot of the frames are mathematically identical. The noise that occurs between each frame is now limited to the frames that actually change.

Why not just use dup()? Well, firstly I wanted to give myself a challenge, and I only learned about his plugin after I had started work. However, I found that it did not accurately identify very small changes in the video. If I adjusted the threshold to identify very slow pans, zooms, or other motion, it would start picking up on the noise in my source. (Honestly, I came to this conclusion after only a minimal amount of testing, so there may be a better way to use it than I did. In other words, I'm not trying to say that the filter is no good or that it sucks... <3 DG >_>) Therefore my goal with this was to create a function that requires little effort to configure, assuming a relatively clean source, and that can detect small changes accurately.

So here are the details that probably matter to anyone reading this...

How can I use this practically?
The biggest benefit will probably be realized if you are the kind of person that uses AviSynth to create lossless filtered AVI files and then edits with those, not the scripts themselves. The Lagarith codec has a nifty option called null frames. Quote:
The first encode was 10.9GB. The second one was 6.3GB. For a 98 minute lossless encode...

It may also prove useful for fansubbers that use softsubs, I guess...

To use this function: right click, Save As, and place in your AviSynth plugins directory: dupped.avsi

Currently there are only four arguments.

thresh=20 will determine that a new frame exists when YMinMax(a,b) is >= 20. The value of YMinMax(a,b) can be found by setting stats=true. I've currently set the default for this parameter to 16. I've only used this on one clean source, so it probably needs a little tweaking.

panthresh=1.65 is used in calculating the dynamic threshold of low motion scenes. The first two YMinMax values shown in the stats are averaged and then multiplied by panthresh. If the third YMinMax is equal to or greater than this calculation, the current frame is considered a new frame. The dynamic threshold is now displayed in the stats to the right of YMinMax(last,next).

stats=true will display the stats used to determine if there was a change in the frame.

showme=true will show the video that the stats are created from.

I suggest using this function at the end of your script and only after you have cleaned up your source. Also, the video needs to be in a YUV colorspace. In other words, if you get an error saying "Image must be planar", then you need to put converttoyv12() right before the dupped() function. Here is an example of such usage if needed.

Code:
AVISource("Videofile.avi")
converttoyv12()
dupped()

Code:

AVISource("Videofile.avi")
MT("ttempsmooth()",4)
fft3dgpu()
dupped()

Here are some screenshots of the stats and showme parameters in use.

Code:

dupped(stats=true)

Code:

dupped(stats=true,showme=true)

Code:
# Dupped() attempts to make duplicate frames mathematically identical.
#
# This filter requires a YUV source. Use converttoyv12 if needed
# before calling dupped()
#
## Parameters
# thresh = Used to determine when to declare a frame as new
#          Use stats=true to help determine the best value for your source
#          Lower number = more sensitive. (Default=16)
#
# panthresh = Used to determine when to consider a frame
#          part of a low motion scene. (Default=1.65)
#
# stats = Enable/disable display of stats. (Default=false | Not shown)
#         (Last = last frame, this = this frame, next = next frame)
#
# showme = Enable/disable display of video that most stats are
#          derived from. (Default=false | Not shown)
#
## Notes
# Do not use with SetMTMode(). This function requires the frames to be
# accessed sequentially. Instead, use MT() to multi-thread on a per-filter basis.
#
# Example:
# dupped(thresh=20,panthresh=1.65,stats=true,showme=true)

Function Dupped(clip c, int "thresh", float "panThresh", bool "stats", bool "showme")
{
    c
    Global dupThresh = default(thresh, 16)
    Global panThresh = Float(default(panThresh, 1.65))
    Global dupStats = default(stats, false)
    Showme = default(showme, false)
    Global lastNewFrame = 0

    scriptclip("""
        c = Last
        n = current_frame
        FC = FrameCount
        prev_frame = (n > 0) ? n-1 : 0
        next_frame = (n < FC-1) ? n+1 : FC-1

        # Get Y related stats as needed
        prevCurrentYMinMax = Subtract(Trim(prev_frame,-1),c).YPlaneMinMaxDifference
        currentNextYMinMax = prevCurrentYMinMax < dupThresh ? Subtract(Trim(next_frame,-1),c).YPlaneMinMaxDifference : 256
        prevNextYMinMax = currentNextYMinMax < dupThresh ? Subtract(Trim(prev_frame,-1),Trim(next_frame,-1)).YPlaneMinMaxDifference : 0

        # Calculate dynamic pan threshold. Set to 256 if looking for a pan is pointless
        neighborAvgMinMax = (prevCurrentYMinMax + currentNextYMinMax) / 2.0
        dynPanThresh = prevNextYMinMax != 0 ? Abs(neighborAvgMinMax * panthresh) : 256.0

        # Determine if the current frame is new
        IsNewFrame = (prevCurrentYMinMax >= dupThresh || prevNextYMinMax >= dynPanThresh)
        Global lastNewFrame = IsNewFrame ? n : lastNewFrame
        c = Trim(lastNewFrame,-1)

        c = dupStats ? c.subtitle("prevframe: "+string(prev_frame)+" currentframe: "+string(n)+" nextframe: "+string(next_frame)) : c
        c = dupStats ? c.subtitle("Last new frame: "+string(lastNewFrame)+" IsNewframe: "+string(IsNewFrame),y=15) : c
        c = dupStats ? c.subtitle("prevCurrentYMinMax: "+string(prevCurrentYMinMax)+" (Threshold:"+string(dupThresh)+")",y=45) : c
        c = dupStats && currentNextYMinMax != 256 ? c.subtitle("currentNextYMinMax: "+string(currentNextYMinMax),y=60) : c
        c = dupStats && prevNextYMinMax >= dynPanThresh ? c.subtitle("prevNextYMinMax: "+string(prevNextYMinMax),y=90) : c
        c = dupStats && prevNextYMinMax >= dynPanThresh ? c.subtitle("Pan Threshold: "+string(dynPanThresh)+ \
            " ("+string(neighborAvgMinMax)+" * "+string(panThresh)+")",y=105) : c
        c = dupStats && prevNextYMinMax >= dynPanThresh ? c.subtitle("Slow pan detected",y=120) : c
        return c
    """)

    Return showme ? stackvertical(Last,subtract(c.duplicateframe(0),c)) : Last
}

Last edited by Corran; 21st April 2015 at 06:48. Reason: Fixed broken image urls and linked to other contributions
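[Editor's sketch] To make the decision logic in the script above easier to follow, here is a minimal Python sketch of the same two checks plus the frame-overwriting step. It is not AviSynth: the three YMinMax arguments are plain numbers standing in for YPlaneMinMaxDifference results, and the on-demand short-circuiting the real script does is omitted.

```python
def is_new_frame(prev_cur, cur_next, prev_next, thresh=16, panthresh=1.65):
    # A frame is "new" if it differs enough from the previous frame...
    if prev_cur >= thresh:
        return True
    # ...or if a slow pan is detected: average the two neighbour
    # differences, scale by panthresh, and compare against the
    # previous-vs-next difference. prev_next == 0 means the pan
    # check was skipped (threshold effectively 256, i.e. impossible).
    neighbor_avg = (prev_cur + cur_next) / 2.0
    dyn_pan_thresh = abs(neighbor_avg * panthresh) if prev_next != 0 else 256.0
    return prev_next >= dyn_pan_thresh

def dedupe(flags):
    # flags[i] is True when frame i is "new"; returns, for each frame,
    # the index of the frame that overwrites it (the last new frame).
    last_new, out = 0, []
    for i, new in enumerate(flags):
        if new:
            last_new = i
        out.append(last_new)
    return out
```

Feeding dedupe() the flags for "AbcdeFGhiJkLmnop" reproduces the AAAAAFGGGJJLLLLL mapping from the first post.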
18th February 2008, 12:16 | #2 | Link |
AviSynth Enthusiast
Join Date: Jul 2002
Location: California, U.S.
Posts: 1,267
I didn't look at your script too carefully, but some comments:
Nested functions are misleading. AviSynth scoping is pretty basic, and all functions are in the global namespace. Beware of using Trim when its arguments are variable. Expressions like: Code:
booleanExpression ? true : false

are redundant; the boolean expression alone already evaluates to true or false.
18th February 2008, 22:25 | #3 | Link | |
Registered User
Join Date: Jun 2004
Location: Chicago
Posts: 19
Quote:
19th February 2008, 07:59 | #6 | Link | ||
AviSynth Enthusiast
Join Date: Jul 2002
Location: California, U.S.
Posts: 1,267
Quote:
Maybe you can make that guarantee. I haven't looked at your script carefully enough to tell. Personally I'd rather use my safer versions of Trim() and not have to bother thinking about that stuff. But if you can make that guarantee, you should enforce it with Assert(). A hard failure at the beginning is much better than allowing people to waste hours encoding videos that they later realize don't contain the correct frames. Quote:
If you want to save a boolean expression into a variable, that's perfectly fine. The "? true : false" part is completely useless. Code:
foo = booleanExpression
foo = booleanExpression ? true : false
foo = (booleanExpression ? true : false) ? true : false
foo = ((booleanExpression ? true : false) ? true : false) ? true : false

Last edited by stickboy; 19th February 2008 at 08:05.
19th February 2008, 10:45 | #7 | Link | |||
Registered User
Join Date: Jun 2004
Location: Chicago
Posts: 19
If I knew how, I would have definitely gone that route... Unfortunately, my only real programming experience is with PHP. I'm self-taught and still rely heavily on the php.net references, so I don't think jumping straight into C plugin development would be very fruitful...
Last edited by Corran; 19th February 2008 at 15:57. Reason: clarification/addition
19th February 2008, 16:36 | #8 | Link |
Registered User
Join Date: Apr 2006
Posts: 57
Here's my script:
Code:
DGDecode_mpeg2source("D:\dario\VTS_01rc2.d2v",info=3)
ColorMatrix(hints=true)
source = last
backward_vec3 = source.MVAnalyse(isb = true, delta = 3, pel = 2, overlap=4, sharp=1, idx = 1)
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec3 = source.MVAnalyse(isb = false, delta = 3, pel = 2, overlap=4, sharp=1, idx = 1)
source.MVDegrain3(backward_vec1,forward_vec1,backward_vec2,forward_vec2,backward_vec3,forward_vec3,thSAD=400,idx=1)
dfttest(sigma= 0.8)
crop(4,4,-4,-0)
LimitedSharpenFaster(ss_x=1.1, ss_y=1.1, smode=4, strength=200,undershoot= 10,dest_x=720,dest_y=480)
fastlinedarkenmod()
ColorYUV(levels="tv->pc")
20th February 2008, 10:59 | #9 | Link |
Registered User
Join Date: Jun 2004
Location: Chicago
Posts: 19
Well... everything seems to work fine for me. Your script is extremely slow, but I'm not getting any errors about newframe not being defined when I tack dupped() onto the end of it.
At this point, I'd try moving out any plugins and avsi scripts you aren't using and see if there is something auto-loading that is causing your problem.
26th February 2008, 21:42 | #10 | Link |
Registered User
Join Date: Feb 2005
Location: São Paulo, Brazil
Posts: 392
I get the following error:
Code:
I don't know what "newframe" means ([ScriptClip], line 16)

And what are the main differences between your motion-detection algorithm and dup's? Does it use more than two frames to make decisions? I also don't see the small blocks seen in dup's subdivision of the frame (I use blksize=8 for clean sources). If you don't subdivide like this, how do you detect small movements?
26th February 2008, 23:54 | #11 | Link | ||
Registered User
Join Date: Jun 2004
Location: Chicago
Posts: 19
Quote:
Code:
AVISource("Videofile.avi")
MT("ttempsmooth()",4)
fft3dgpu()
dupped()

Quote:
First, I'm taking a subtract() of the current and previous frame. If the frames were mathematically identical, such a subtract would result in a pure gray frame with a luma of about 128. When a change occurs, the luma of this subtract changes as well. Some areas will darken while others will brighten. With YPlaneMinMaxDifference() I can calculate the difference between the lowest and highest values of the luma. If this value passes a certain threshold, the frame is considered new. If not, then similar checks are performed on the current and next frame, as well as the previous and next frame, as needed.

This use of subtract() essentially makes small changes stand out like night and day. Tiny lip flap on a TV in the background is enough to make a huge difference in the MinMax values of the subtract's luma, whereas mosquito noise is still too subtle to be a major influence.
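[Editor's sketch] The metric Corran describes can be illustrated with a few lines of Python over plain pixel lists. This is a rough stand-in for AviSynth's Subtract() followed by YPlaneMinMaxDifference(), ignoring the exact rounding AviSynth applies; note how a change in a single pixel drives the spread up, while a flat offset between frames would not.

```python
def yplane_minmax_difference(plane_a, plane_b):
    # Subtract() yields roughly 128 + (a - b) per pixel, clipped to 0..255;
    # two identical planes therefore produce a flat gray result.
    diff = [max(0, min(255, 128 + a - b)) for a, b in zip(plane_a, plane_b)]
    # The stat is the spread between the brightest and darkest result pixel.
    return max(diff) - min(diff)
```

With identical planes the result is 0; a tiny local change (one pixel darker, one brighter) immediately produces a large spread, which is why small motion "stands out like night and day" under this metric.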
27th February 2008, 00:03 | #12 | Link |
Registered User
Join Date: Feb 2007
Location: ::1
Posts: 1,236
Hey,
Would you please consider adding a parameter that, when enabled, would make the filter create images (PNG, BMP, GIF, JPG, etc.) of any frames that were classified as duplicates but were within a user-specified amount of the threshold, along with the frame they were classified as duplicates of, using filenames that make the duplicate relationship recognizable? And then possibly another parameter that would load a *.txt and exclude any frames listed in there from being treated as duplicates? Big request, but if you get bored... Thanks very much! Gonna have some fun with this one!
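[Editor's sketch] The exclusion-list half of this request is simple to sketch. This is hypothetical, not part of dupped(): a parser for a plain text file with one frame number per line, whose resulting set a filter could consult before marking a frame as a duplicate.

```python
def load_exclusions(text):
    # One frame number per line; blank lines and '#' comments are ignored.
    # A frame whose number is in the returned set would always be kept
    # as a "new" frame, never overwritten as a duplicate.
    frames = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            frames.add(int(line))
    return frames
```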
27th February 2008, 03:21 | #15 | Link |
Registered User
Join Date: Feb 2005
Location: São Paulo, Brazil
Posts: 392
I moved everything from the AviSynth plugins folder to C:\, leaving only DirectShowSource.dll and dupped.avsi there. Then I used the following script:
Code:
directshowsource("testfile.wmv", fps=23.976, convertfps=true)
Dupped()

And your change detection algorithm is really clever. I was afraid of a frame-wide averaging because there was no subdivision in the picture. I think that dup makes a simple (or not so simple) average over each block of the picture, and if any of them is higher than the threshold it considers the frame different. At least, that is what I deduced from dup's show=true.
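[Editor's sketch] ManDark's deduction about dup's per-block decision can be illustrated in Python. This is a hypothetical simplification (dup's actual metric may differ): take the mean absolute difference per block and keep the worst block, so a tiny moving region is not diluted the way it would be by a frame-wide average.

```python
def blockwise_change(plane_a, plane_b, width, blksize=8):
    # Split each plane into blksize x blksize blocks and return the largest
    # mean absolute difference over any block.
    height = len(plane_a) // width
    worst = 0.0
    for by in range(0, height, blksize):
        for bx in range(0, width, blksize):
            total, count = 0, 0
            for y in range(by, min(by + blksize, height)):
                for x in range(bx, min(bx + blksize, width)):
                    i = y * width + x
                    total += abs(plane_a[i] - plane_b[i])
                    count += 1
            worst = max(worst, total / count)
    return worst
```

For a 16x16 frame where only one 8x8 block changes by 10, the per-block maximum is 10.0 while the frame-wide average is only 2.5, which is why block subdivision catches small movements that a global average misses.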
27th February 2008, 08:32 | #16 | Link |
ангел смерти
Join Date: Nov 2004
Location: Lost
Posts: 9,556
You should be able to fix that by adding a
Code:
global newframe = a
27th February 2008, 15:31 | #17 | Link |
Registered User
Join Date: Feb 2005
Location: São Paulo, Brazil
Posts: 392
Yeah, that fixed it here. Thanks.
Now for feature requests:
- Add the option to show the "showme screen" to the right of the picture. Useful especially for editing non-widescreen content on widescreen monitors, but it also fits better in AvsP in most cases.
- Add the blend=true option from dup() to help with denoising.
- Your function seems to not process chroma. Wouldn't adding it make the detection more precise? I think it would be especially useful for detecting movement in CG. Toki wo Kakeru Shoujo doesn't have explicit CG, but many other anime do. Or does it not matter and only slow things down?

I'm now comparing dup and dupped with the following method:

Code:
dup1 = last.Dup(threshold=0.85, blksize=8, copy=false, show=true) # visual analysis mode
dup2 = last.dupped(thresh=28,panthresh=1.7,stats=true,showme=false)
merge(dup1,dup2)

Later I will do a test encode with Lagarith. On movement with lines, both are good enough, with your script reporting the larger differences when in motion. The problem is with slow cloud movement, pans, zooms, and fading...
27th February 2008, 18:08 | #18 | Link |
Registered User
Join Date: Dec 2001
Posts: 1,219
neuron2: regarding signal to noise, since anime these days is produced digitally, there doesn't seem to be much problem with noise in many sources. On a nice clean source, your dup filter failed pretty badly on simple things like slow pans, detecting many duplicates that were not really duplicates (I should note that this was in my usage, which may or may not have been performed correctly; I don't mean to say bad things about your filter). However, I'm sure your filter would perform much better on sources with more noise in them, as it seems that this was one of your primary goals in designing your filter.
However, it seems that this function of Corran's, although simple in its approach, may be more useful than dup in some situations. Perhaps, though, you could add functionality like this into your dup filter as an extra check to avoid marking frames incorrectly as duplicates? Personally, I am very wary of using either of these filters on my encodes, as I don't particularly trust them to make the best decision when it comes to something as major as completely dropping frames from the source. But I think a combination of these two filters would provide a very good safeguard against false positives, and would be something that I feel I could rely on. What do you think?

Also, I had an idea of using a strong denoiser as a preprocessor for the duplicate detection. Couldn't something like this improve the accuracy of detecting duplicates?
27th February 2008, 22:18 | #19 | Link |
Registered User
Join Date: Feb 2007
Location: ::1
Posts: 1,236
I'm tending to agree with Zarxrax here. My intended use for a function like this would be to increase compressibility with x264, so I would put it near the end of a script, certainly after denoising.
Perhaps this would help, too:

[All filtering, including normal denoising]
beforeSDenoise = last
foo = somedenoiserhere(parametersstrongerthanusualusage)
return beforeSDenoise.dupped(foo, ......)

After all, I think it would be extremely rare for a denoiser to actually remove enough small detail to create erroneous duplicate frames when the result is dupped().

Last edited by Ranguvar; 27th February 2008 at 22:20.
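[Editor's sketch] Ranguvar's idea, deciding duplicates on a strongly denoised analysis clip while emitting frames from the untouched original, can be sketched in Python (hypothetical helper, not the dupped() API; the diff values stand in for the denoised clip's frame-to-frame stats):

```python
def dedupe_via_dclip(orig_frames, dclip_diffs, thresh=16):
    # Decide "new frame" on the denoised clip's frame-to-frame differences,
    # but emit frames from the untouched original clip, so the denoiser
    # only steadies the detection and never touches the output.
    out, last_new = [], 0
    for i, d in enumerate(dclip_diffs):
        if i == 0 or d >= thresh:
            last_new = i
        out.append(orig_frames[last_new])
    return out
```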
29th May 2008, 05:29 | #20 | Link |
Sleepy overworked fellow
Join Date: Feb 2008
Location: Maple syrup's homeland
Posts: 933
Is this thread dead? IMO a "motion-adaptive" dup function would be very nice... especially with a denoised dclip. A hard threshold like dup()'s is simply destructive for low motion scenes, so is there any further development planned?
__________________
AnimeIVTC() - v2.00 - http://boinc.berkeley.edu/ - Let all geeks use their incredibly powerful comps for the greater good (no, no, it won't slow your filtering/encoding :p)