Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
11th November 2012, 17:29 | #1 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
Bye bye blurred frames
I tried to automate the task of replacing bad frames in a clip by other frames. The result is the attached filter function FSubstitute().
Edit: you can find an abstract some posts below. The filter uses MVTools2's MCompensate and MFlowInter functions to reconstruct frames, as well as Freezeframe. Target are - blurred frames because of rapid camera movement - super8 digitized clips with some bad frames from broken machinery or cutting - frames with compression artefacts or blur from compression bitrate shortage Usage is documented in the script header. For the list of plugins and their versions see bottom of script. The filter is slow when effective. It is possible to bypass good frames with appropriate parameter settings and thus get acceptable overall speed. I get about 3FPS with thoroughly processed 640x480 footage on a Core i5/750. When you give it a try, watch the memory usage. I process clips in slices of max. 10000 frames. Sorry I failed writing in a tidy style. It is commented extensively. It it is surely not bug free. Comments are appreciated! But please have patience. Change log see end of script. Edit 13/31/2013: see change log at script bottom Last edited by martin53; 31st January 2013 at 21:43. Reason: new version |
11th November 2012, 17:47 | #2 | Link |
Registered User
Join Date: Apr 2010
Posts: 175
|
Let me just jump in to recall how the old Nikon 950 selected the best frame out of 8 (IIRC, the BSS setting): probably by retaining the heaviest one, the one which holds the most detail. So the good images weigh more, and conversely the blurred images are simply lighter than average. Is there a way to retrieve the size of an image and compare it to, huh, the average of a window.. sequence, etc.?
Thanks, L |
11th November 2012, 21:32 | #3 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
The script does not use compressibility, if that's what you call 'weight'.
The amount of detail is basically determined with a function that only honours edges, ignoring average luma. You find it under the name cHighpass() in the script. The 'weight' of a frame is then simply the AverageLuma of that function's output. Tailoring this function fundamentally guides the frame estimation/election process. The actual implementation is a balance between quality and complexity.
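The idea can be sketched like this. This is a hedged approximation, not the script's actual code: cHighpass() is the script's own function, and MaskTools2's mt_edge merely stands in for it here; note that AverageLuma is a conditional function and must run in a runtime context such as ScriptClip.

```avisynth
# Sketch of an edge-based sharpness 'weight' (illustrative only).
# mt_edge (MaskTools2) stands in for the script's cHighpass().
function SharpnessWeight(clip c) {
    edges = c.mt_edge(mode="sobel", thY1=0, thY2=255)
    # Bright pixels = edges, flat areas go dark. The average luma of
    # this edge map is then a rough per-frame sharpness figure.
    return AverageLuma(edges)
}
```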
12th November 2012, 05:31 | #5 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
@Moderator, can attachment be approved please.
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? |
12th November 2012, 18:55 | #6 | Link | |
Registered User
Join Date: Mar 2007
Posts: 407
|
Quote:
The average luma of the original picture says nothing about the amount of detail in it. But after applying a function that makes pixels at edges bright and pixels in flat areas dark (basically any edge detection), the picture will be brighter if there are more edges. Now the runtime function AverageLuma() returns a measure of the amount of edges in the original picture. Got it?
12th November 2012, 19:44 | #7 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Quote:
a) more edges with less local contrast, and b) fewer edges with more local contrast. Or, put as simply as possible: "20 edges with value 100" is the same as "200 edges with value 10" when just using AverageLuma. For that distinction, you also need to take the area percentage covered by edges into account.
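One way to separate the two could be sketched like this (assumed MaskTools2 calls, not from the script itself; `c` is the clip under test, evaluated in a runtime context):

```avisynth
# Separate edge *strength* from edge *area* (illustrative only).
edges    = c.mt_edge(mode="sobel", thY1=0, thY2=255)
strength = AverageLuma(edges)                # mixes edge count and contrast
covered  = edges.mt_binarize(threshold=20)   # edge pixels -> 255, rest -> 0
area     = AverageLuma(covered) / 255.0      # approx. fraction of edge pixels
# Comparing 'strength' against 'area' distinguishes "few strong edges"
# from "many weak edges", which AverageLuma alone cannot do.
```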
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
12th November 2012, 22:04 | #8 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
If a task needs the distinction between few strong and many weak edges, RT_YInRange() may help nowadays.
For the script I made, I felt no need for that distinction. In what situation do you think that, of two adjacent frames with no scene change in between, one could have a few concise edges while the other is blurred with only subtle edges, and vice versa?
12th November 2012, 22:19 | #9 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
I've no idea. Frankly, I have no clue what you are doing in that script at all.
I was just going by your comment about "measuring the amount of edges" via AverageLuma. That reminded me of a similar problem of mine in the past, and I thought it wouldn't hurt to mention that specific complication.
13th November 2012, 09:53 | #10 | Link | |
Registered User
Join Date: Apr 2010
Posts: 175
|
Quote:
1) Nikon Corp probably implemented it in its Coolpix 950 because this early camera didn't feature an edge-detection algorithm.
2) Exactly as you say, from 2 adjacent frames ...
- if one makes a jump in size (kB), then there is an image quality change,
- toward blurriness if the image is lighter.
Last edited by lisztfr9; 13th November 2012 at 11:13.
17th November 2012, 23:34 | #11 | Link |
Registered User
Join Date: Nov 2006
Posts: 19
|
Intriguing function but I'm having difficulties with it. You posted some impressive results in another thread:
I'm no expert so bear with me. I gathered all the plugins you mentioned and tried a simple call:

Code:
avisource("test.avi")
FSubstitute()

Code:
Avisynth open failure: rangeF must be 0...5, is -1
(_FSubstitute.avsi, line 964)
(_FSubstitute.avsi, line 1407)

Code:
avisource("test.avi")
FSubstitute(rangeF=5)

Code:
Avisynth open failure: Script error: MAnalyse does not have a named argument "level"
([GScript], line 4)
([GScript], line 15)
(_FSubstitute.avsi, line 203)
(_FSubstitute.avsi, line 978)
(_FSubstitute.avsi, line 1407)

Code:
Line 191  cVectors = cSuper.MAnalyse(isb=(iDelta>0?true:false), delta=abs(iDelta), dct=dct, levels=2, [etc.])
Line 197  cVectors = cSuper.MAnalyse(isb=(iDelta>0?true:false), delta=abs(iDelta), dct=dct, levels=2, [etc.])

Flipping that image vertically, the text reads:

Code:
MVTools: vector clip is too small (corrupted?)
([GScript], line 208)
([GScript], line 211)
([GScript], line 330)
([ScriptClip], line 332)

Some questions for you:
Last edited by kurish; 18th November 2012 at 00:02. Reason: typos |
18th November 2012, 17:47 | #12 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
First of all, thanks for giving it a try.
I really appreciate your feedback: I regret any inconvenience with the script, and feedback helps me reduce it!

Since I packed so many features into the script, I can no longer manage to test them all after improving one part, so other parts suffered from bugs afterwards, and that's what you experienced.

It seems that with your version, I had introduced 'forced freeze' and controlled it with a negative rangeF setting. It is a separate forceF now. The documentation is at the start of the script. The use of some parameters unfortunately evolved in a non-backward-compatible manner, but I hope the recent version is in good shape!

My MAnalyse does have the level parameter and always worked. I checked: it is V2.3.1 - it's outdated. I'll update to 2.5.11.3, but then I'll need to publish another updated version of my script (I haven't got the time at this moment).

The flipped display was a necessary consequence of a tweak I used to enhance motion estimation. I found a better way in the meantime, so errors will display properly now (for more details see the 'orientation' parameter).

And the last problem you mention - the wrong vector clip size - should no longer be present. I used a reduced-dimensions clip for some preprocessing, for speedup, but that too is done differently now; only a little MVTools usage is left, and there is no more need for a small helper clip.

I'll give some hints about what the script is supposed to do and how to start with it later today or tomorrow. The image you found basically shows it: it's about clips with short blurred sequences caused by rapid camera motion etc. You could start by setting info=4 to see thumbs of the three substitution candidates and some helpful(?) figures.

Please give me the dimensions of your clip. I worked mostly with 640x480 clips so far, and it's quite probable that some bugs only show at certain dimensions.
18th November 2012, 22:50 | #13 | Link | |
Registered User
Join Date: Mar 2007
Posts: 407
|
Quote:
FSubstitute is an environment for the MVTools2 functions 'MCompensate' and 'MFlowInter'. It is tedious to use these functions on their own, because a very specialized script is needed, with individual commands for each frame to process, after a manual decision on each frame, and without knowing in advance how best to configure the functions.

How It Works

What It Can Do, And What Not

FSubstitute is good for clips that have short quality fall-offs (up to 10 frames in a sequence, by now). It is not useful for enhancing footage with a constant quality over time. It is not 'yet another sharpener': it does no sharpening at all, just frame cloning.

One example of such a fall-off is the motion blur from moving a camera, because of the shutter time. This is usually not obvious at first, but annoying after stabilizing. Also, I have some WMV clips here (copyrighted, so I cannot publish examples) which develop sharpness only slowly, with a visible delay after each pan.

Howto And Examples

For a start, you might set info=4 to see the figures that your footage creates, especially for dThres, to avoid processing of good frames. Find a frame range where you identify a quality fall-off and compare the outputs of Compensate, Freeze and Interpolate. Set rangeC, rangeF and rangeI according to the length of the bad sequences. Use info=3 or info=2 together with DbgView to find out more about what is happening. Together with the explanations in the script header and throughout the script, I hope that clarifies things.

I'll publish the example sequence shortly.
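The Howto above could translate into a starting script roughly like this (a sketch only; the parameter values are placeholders, not recommendations from the post):

```avisynth
# Step 1: inspect the per-frame figures and the candidate thumbs.
AviSource("super8_capture.avi")
FSubstitute(info=4)

# Step 2, after identifying a blurred range: set the ranges to the
# length of the bad sequences. The values below are placeholders.
# FSubstitute(rangeC=2, rangeF=5, rangeI=2, info=2)
```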
19th November 2012, 13:47 | #14 | Link |
Registered User
Join Date: Jan 2012
Location: Toulon France
Posts: 249
|
@martin53
I have tried your script on one old Super 8 film. The capture is 720 by 576 pixels.

With the 11-12-2012 script version and info=4, the result for two frames, the first good and the second blurred, is shown in pictures frame 002685.jpg and frame 002686.jpg (below).

With the 11-18-2012 script version and info=4, the result for the same two frames, the first good and the second blurred, is shown in pictures frame 002685.jpg and frame 002686.jpg (below).

With the last version, a green mask appears - why?

Last edited by Bernardd; 19th November 2012 at 15:15. Reason: amend
19th November 2012, 18:50 | #15 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
@Bernardd,
on top left, you see the sharpness figures for the frame group.
I checked all versions of my plugins with the available ones now and wrote a version list in the change history at the end of the script. Last edited by martin53; 19th November 2012 at 21:16. |
19th November 2012, 22:11 | #16 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
So, here is a real - extreme - example.
Original clip
Stabilized clip (with these deshaker scripts, enlarged by 1.05)
Processed clip (rangeC=0, rangeF=5, rangeI=0, maxErrF=2)

It is all done with FreezeFrame only, since I disabled MCompensate and MFlowInter. Because only still life was filmed, I notice the small 'movements' that MCompensate and MFlowInter leave after compensation. The clip looks a bit stuttery, but I feel it is harder to concentrate on single objects or details in the original, because they become blurred. While the flat objects just look better, the extremely 3-dimensional sequences do not move smoothly.
19th November 2012, 22:12 | #17 | Link |
Registered User
Join Date: Jan 2012
Location: Toulon France
Posts: 249
|
@martin53
With the November 18 version of your script, the green mask appears on the corrected frames. I have made your corrections, without any visible picture change; only the debug text is written a little differently. I use MaskTools v1.5.8, but I tried MaskTools v1.4.16 and it is the same, with the green mask. With the November 12 version of your script, there is no problem with a green mask.

I have processed my 3 min long clip with the November 12 version. After one hour, my clip had no more blurred frames. But some frames were changed without it being necessary, as far as a human can see. It is not easy to check the clip frame by frame; a list of changed frames would be welcome. In the November 18 version of the script, I see the new params "framelist" and "framefile". Thus I wait and hope.

This script is an impressive job. With the November 12 version, I see smart substitutions where I could not do better with manual work.

Last edited by Bernardd; 19th November 2012 at 22:19.
20th November 2012, 18:18 | #19 | Link | |||
Registered User
Join Date: Mar 2007
Posts: 407
|
- Please try the script with parameter bright=false. I had the same symptom immediately after I updated to MaskTools v1.5.8 (but never with v1.4.16, all the while). bright=false cured the green symptom at my place.

My suspicion is that something in the YV12LUT function changed between v1.4.16 and v1.5.8. The other places in the script where YV12LUT is used did not make a difference when I changed to the alternative processing without YV12LUT (usually these commands are in the script, but commented out, for quick comparisons). So, in the 11/19/2012 version, I commented out the line

Code:
#crgb = crgb.YV12LUT...

and re-activated the line before it,

Code:
crgb = crgb.Tweak(coring=false, bright=Averageluma(c)-Averageluma(crgb))

Then the green was gone, even with bright=true. Please comment on that.

What 'bright' is for: the script tries to adjust brightness and contrast of the substitute to those of the original. But because just RT_Averageluma and RT_YPlaneStdev are used to derive brightness and contrast, the algorithm might fail for some footage. I think it is useful for correcting fades. With bright=false, this feature simply remains unused.

You can only enter additional frames to be processed. But because you can easily set dThres or sThres to high values, you can practically disable the automatic frame evaluation. It is better to use the 11/19/2012 version; see the change history at the script end.
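The brightness-matching idea described above could look roughly like this as a standalone sketch (assumed names, not the script's actual code; in the real script the call runs in a runtime/ScriptClip context, since AverageLuma is a conditional function):

```avisynth
# Sketch only: shift the substitute's luma so its average matches the
# original frame. 'orig' and 'subst' are assumed clip names.
function MatchBrightness(clip orig, clip subst) {
    return subst.Tweak(coring=false, bright=AverageLuma(orig) - AverageLuma(subst))
}
```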
20th November 2012, 18:39 | #20 | Link | |
Registered User
Join Date: Mar 2007
Posts: 407
|
With higher sThres, the script only tries substitutions if frames with an even higher sharpness factor than the current frame are in the substitution range.

With lower rangeC (or rangeF, rangeI), a smaller neighbourhood is evaluated. This is useful if the blurred sequences are short, e.g. only single frames.

With lower qThres, more frames are over the 'substitution need' limit and are left unprocessed. This is the main speed control parameter, too. There should be no default for qThres - but then you would have encountered one more barrier to trying the script. Look at the figures in the original with info=4, and set qThres to the value shown in brackets for an appropriate frame.

If your footage contains many small objects with individual motion - like the skiers, in contrast to the close-up situation in my example clip - then you should try much lower values for lsad and lambda, if you're not satisfied with the default output. High values help MVTools2 avoid decomposition of distant local sharp parts of the same object; low values give MVTools2 the chance to adapt to different independent local motions. More about that in the MVTools2 doc.

Last edited by martin53; 15th December 2012 at 16:40. Reason: typos
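As a hedged illustration of the tuning advice above (all values are placeholders to be read off with info=4, not recommendations, and it is assumed here that FSubstitute forwards lsad and lambda to MVTools2's MAnalyse):

```avisynth
AviSource("skiers.avi")
# Short blurred sequences -> small ranges; many independently moving
# small objects -> lower lsad/lambda (placeholder values):
FSubstitute(rangeC=1, rangeF=1, rangeI=1, qThres=20, lsad=400, lambda=400)
```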