Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 11th November 2012, 17:29   #1  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
Bye bye blurred frames

I tried to automate the task of replacing bad frames in a clip with other frames. The result is the attached filter function FSubstitute().

Edit: you can find an abstract some posts below.

The filter uses MVTools2's MCompensate and MFlowInter functions to reconstruct frames, as well as Freezeframe.

Targets are:
- blurred frames caused by rapid camera movement
- digitized Super-8 clips with some bad frames from broken machinery or cuts
- frames with compression artefacts, or blur from compression bitrate shortage

Usage is documented in the script header.

For the list of plugins and their versions see bottom of script.

The filter is slow on the frames it actually processes. It is possible to bypass good frames with appropriate parameter settings and thus get acceptable overall speed. I get about 3 fps with thoroughly processed 640x480 footage on a Core i5-750.

When you give it a try, watch the memory usage; I process clips in slices of at most 10,000 frames.
Sorry, I did not manage to write it in a tidy style, but it is commented extensively.
It is surely not bug-free. Comments are appreciated, but please have patience.
See the end of the script for the change log.

Edit 31/01/2013: see change log at script bottom
Attached Files
File Type: zip _FSubstitute_121223.zip (28.1 KB, 135 views)
File Type: zip _FSubstitute_130131.zip (29.3 KB, 202 views)

Last edited by martin53; 31st January 2013 at 21:43. Reason: new version
Old 11th November 2012, 17:47   #2  |  Link
lisztfr9
Registered User
 
Join Date: Apr 2010
Posts: 175
Let me just jump in to recall how the old Nikon Coolpix 950 selected the best frame out of 8, IIRC via the BSS setting: probably by retaining the heaviest one, the one holding the most detail. So the good images weigh more and, conversely, the blurred images are simply lighter than average. Is there a way to retrieve the size of an image and compare it to, say, the average of a window... a sequence, etc.?

Thanks, L
Old 11th November 2012, 21:32   #3  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
The script does not use compressibility, if that's what you mean by weight.

The amount of detail is determined basically with a function that only honours edges, ignoring average luma; you find it under the name cHighpass() in the script. Of its output, the 'weight' of a frame is simply its AverageLuma.

Tailoring this function fundamentally guides the frame estimation/election process. The actual implementation is a balance between quality and complexity.
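To make the cHighpass()+AverageLuma idea concrete, here is a hypothetical Python sketch (not the script's actual code; a plain 4-neighbour Laplacian stands in for whatever edge kernel cHighpass() really uses). An edge-rich frame scores high, a blurred one of the same average brightness scores low:

```python
def highpass_sharpness(img):
    """Mean absolute response of a simple 4-neighbour Laplacian.

    Mimics the described approach: a highpass makes pixels at edges
    bright and flat areas dark, so the mean of the filtered image is
    a measure of the amount of edges in the original frame.
    """
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            total += abs(lap)
    return total / ((h - 2) * (w - 2))

# A hard black/white edge vs. a smooth ramp (no sharp transitions):
sharp = [[0] * 4 + [255] * 4 for _ in range(8)]
blurred = [[x * 36 for x in range(8)] for _ in range(8)]
print(highpass_sharpness(sharp) > highpass_sharpness(blurred))  # True
```

Note that the average luma of the *original* frames plays no role here; only the strength of transitions does.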
Old 11th November 2012, 21:51   #4  |  Link
lisztfr9
Registered User
 
Join Date: Apr 2010
Posts: 175
@martin53

By weight I meant the size in KB.

Also, in your second paragraph I fear there is a contradiction regarding luma, which is ignored but finally output...?
Old 12th November 2012, 18:55   #5  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
Quote:
Originally Posted by lisztfr9 View Post
@martin53

With weight i was meaning the size in kb.

In your second § also i fear there is a contradiction with subject luma, which is ignored, but finally outputted... ?
File size indicates compressibility. That is not what I use.

The average luma of the original picture says nothing about the amount of detail in it. After applying a function that makes pixels at edges bright and pixels in flat areas dark (basically any edge detection), the picture will be brighter the more edges there are.
The runtime function AverageLuma() then returns a measure of the amount of edges in the original picture.
Got it?
Old 12th November 2012, 19:44   #6  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,407
Quote:
Originally Posted by martin53 View Post
Now, the runtime function AverageLuma() will return a measure for the amount of edges in the original picture.
Hehe, I've been there too, back in the dark ages when Restore24() was made. It kinda works, but you cannot reliably distinguish between the cases

a) more edges with less local contrast,
and
b) less edges with more local contrast.

Or, put as simply as possible, "20 edges with value 100" is the same as "200 edges with value 10" when just using AverageLuma.

For that distinction, you need to also take the "area %age covered by edges" into account.
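The ambiguity is easy to show numerically. Here is a hypothetical Python sketch (invented names and data, not anyone's actual code): two edge maps with identical mean value but very different edge coverage:

```python
def edge_stats(edge_map, threshold=1):
    """Mean edge strength vs. percentage of pixels that are edges at all."""
    flat = [v for row in edge_map for v in row]
    mean = sum(flat) / len(flat)
    coverage = 100 * sum(1 for v in flat if v >= threshold) / len(flat)
    return mean, coverage

# "20 edges with value 100" vs. "200 edges with value 10",
# both in a 10x100 (1000-pixel) edge map:
few_strong = [[100] * 20 + [0] * 80] + [[0] * 100] * 9
many_weak = [[10] * 100] * 2 + [[0] * 100] * 8

print(edge_stats(few_strong))  # (2.0, 2.0)
print(edge_stats(many_weak))   # (2.0, 20.0)
```

The mean alone (what AverageLuma gives) is 2.0 in both cases; only the coverage percentage separates them.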
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Old 12th November 2012, 22:04   #7  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
If a task needs to distinguish between few strong and many weak edges, RT_YInRange() may help nowadays.

For the script I made, I felt no need for that distinction. In what situation do you think that, of two adjacent frames with no scene change in between, one could have a few strong edges while the other is blurred with only subtle edges, and vice versa?
Old 12th November 2012, 22:19   #8  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,407
Quote:
Originally Posted by martin53 View Post
Where do you think that ...
I've no idea. Frankly, I have no clue what you are doing in that script at all.

I was just going by your comment about "measuring the amount of edges" via AverageLuma. That reminded me of a similar problem of mine in the past, and I thought it wouldn't hurt to mention that specific complication.
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Old 13th November 2012, 09:53   #9  |  Link
lisztfr9
Registered User
 
Join Date: Apr 2010
Posts: 175
Quote:
Originally Posted by martin53 View Post

Where do you think that from two adjacent frames with no scene change in between...
I got your point, but mine is valid too:

1) Nikon Corp. probably implemented it this way in its Coolpix 950, because this early camera didn't feature an edge-detection algorithm.

2) Exactly as you say, with two adjacent frames: if one makes a jump in size (KB), then there is an image-quality change, towards blurriness if the image is lighter.

Last edited by lisztfr9; 13th November 2012 at 11:13.
Old 12th November 2012, 05:31   #10  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 11,410
@Moderator, can attachment be approved please.
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???
Old 19th November 2012, 13:47   #11  |  Link
Bernardd
Registered User
 
Join Date: Jan 2012
Location: Toulon France
Posts: 252
@martin53

I have tried your script on an old Super-8 film. The capture is 720 by 576 pixels.

With the 11-12-2012 script version and info = 4, the result for two frames, the first good and the second blurred, is shown in the pictures frame 002685.jpg and frame 002686.jpg (below).

With the 11-18-2012 script version and info = 4, the result for the same two frames is shown in the pictures frame 002685 last version.jpg and frame 002686 last version.jpg (below).

With the last version, a green mask appears. Why?
Attached Images
File Type: jpg frame 002685.jpg (50.8 KB, 1248 views)
File Type: jpg frame 002686.jpg (44.0 KB, 1249 views)
File Type: jpg frame 002685 last version.jpg (75.1 KB, 1227 views)
File Type: jpg frame 002686 last version.jpg (93.4 KB, 1233 views)

Last edited by Bernardd; 19th November 2012 at 15:15. Reason: amend
Old 19th November 2012, 18:50   #12  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
@Bernardd,
On the top left, you see the sharpness figures for the frame group.
  • The figure for the blurred frame is below its neighbours' - good.
  • On the bottom left, you see the 'MCompensate' candidate. The 11-12-2012 reading (centre image) says that the script tries to compensate from the frame with delta=-1 and achieves an 'error' value of 0.617 (the white areas), which is below the default limit of maxErrC (1.0). Because the sharpness of 10.58 is also above the current 8.70, this frame will be the substitution source and is marked with a '*'.
Edit: Another compatibility issue, now between my MaskTools v1.4.16 and the current v1.5.8.
I checked all versions of my plugins against the available ones and wrote a version list in the change history at the end of the script.

Last edited by martin53; 19th November 2012 at 21:16.
Old 19th November 2012, 22:11   #13  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
So, here is a real, extreme example.
Original clip
Stabilized clip (with these Deshaker scripts, enlarged by 1.05)
Processed clip (rangeC=0, rangeF=5, rangeI=0, maxErrF=2)

It is all done with FreezeFrame only, since I disabled MCompensate and MFlowInter: because only still life was filmed, the small 'movements' that MCompensate and MFlowInter leave after compensation are noticeable. The clip looks a bit stuttery, but I feel it is harder to concentrate on single objects or details in the original, because they become blurred.

While the flat objects simply look better, the extremely three-dimensional sequences do not move smoothly.
Old 19th November 2012, 22:12   #14  |  Link
Bernardd
Registered User
 
Join Date: Jan 2012
Location: Toulon France
Posts: 252
@martin53

With the November 18 version of your script, the green mask appears on the corrected frames. I applied your corrections without any visible picture change; only the debug text was rewritten a little.

I use MaskTools v1.5.8, but I tried MaskTools v1.4.16 and got the same green mask.

With the November 12 version of your script, there is no problem with the green mask.

I have processed my 3-minute-long clip with the November 12 version. After one hour, my clip has no more blurred frames. But some frames were changed where it did not look necessary to a human eye.

It is not easy to check the clip frame by frame. A list of changed frames would be welcome.

In the November 18 version of the script, I see the new params "framelist" and "framefile". Thus I wait and hope.

This script is an impressive job. With the November 12 version, I see smart substitutions that I could not better with manual work.

Last edited by Bernardd; 19th November 2012 at 22:19.
Old 20th November 2012, 18:18   #15  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
Quote:
Originally Posted by Bernardd View Post
@martin53

With the November 18 version of your script, the green mask appears on the corrected frames. I applied your corrections without any visible picture change; only the debug text was rewritten a little.

I use MaskTools v1.5.8, but I tried MaskTools v1.4.16 and got the same green mask.

With the November 12 version of your script, there is no problem with the green mask.
In function cTransformFrame(), there is a block if (bright) {...}

- Please try the script with the parameter bright=false. I had the same symptom immediately after I updated to MaskTools v1.5.8 (but never with v1.4.16). bright=false cured the green symptom on my machine. My suspicion is that something in the YV12LUT function changed between v1.4.16 and v1.5.8. The other places in the script where YV12LUT is used made no difference when I changed to the alternative processing without YV12LUT (usually these commands are in the script, but commented out for quick comparisons). So, in the 11/19/2012 version, I commented out the line #crgb = crgb.YV12LUT... and re-activated the line before it, crgb = crgb.Tweak(coring=false, bright=Averageluma(c)-Averageluma(crgb)).
Then the green was gone even with bright=true. Please comment on that.

What 'bright' is for: the script tries to adjust the brightness and contrast of the substitute to those of the original. But because just RT_Averageluma and RT_YPlaneStdev are used to derive brightness and contrast, the algorithm might fail for some footage. I think it is useful for correcting fades. With bright=false, this feature simply remains unused.

Quote:
Originally Posted by Bernardd View Post
I have processed my 3-minute-long clip with the November 12 version. After one hour, my clip has no more blurred frames. But some frames were changed where it did not look necessary to a human eye.

It is not easy to check the clip frame by frame. A list of changed frames would be welcome.
In dbgView, with info=1 and info=2, the action on each frame is written, among other data. Please try filtering in dbgView.

Quote:
Originally Posted by Bernardd View Post
In the November 18 version of the script, I see the new params "framelist" and "framefile". Thus I wait and hope.
These parameters are already working. You can either enter a space-separated list of frame numbers to be processed directly in 'framelist', or the name of a text file with this information in 'framefile'. If you enter both, 'framefile' wins. Line breaks are internally converted to spaces.
You can only add frames to be processed, but because you can easily set dThres or sThres to high values, you can practically disable the automatic frame evaluation.
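The described precedence and separator handling can be sketched as follows; this is a hypothetical Python illustration (the function name is invented, and it is not the script's actual AviSynth code, which may e.g. keep duplicates):

```python
def parse_frame_selection(framelist="", framefile=None):
    """Sketch of the described 'framelist'/'framefile' semantics:
    if both are given, 'framefile' wins; line breaks count as spaces."""
    if framefile:
        with open(framefile) as f:
            framelist = f.read()
    # line breaks are internally converted to spaces
    tokens = framelist.replace("\r", " ").replace("\n", " ").split()
    # deduplicate and sort for convenience (an assumption of this sketch)
    return sorted({int(t) for t in tokens})

print(parse_frame_selection("10 25\n25 7"))  # [7, 10, 25]
```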

It is better to use the 11/19/2012 version, see change history at script end.
Old 20th November 2012, 18:39   #16  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
Quote:
Originally Posted by Bernardd View Post
I have processed my 3-minute-long clip with the November 12 version. After one hour, my clip has no more blurred frames. But some frames were changed where it did not look necessary to a human eye.
The default for sThres, 1.05, is quite low. In my experiments, substitutions usually achieved a sharpness result a factor of about 1.05 below the sharpness of the substitution source, so I set the default to that value.
With a higher sThres, the script only tries substitutions if frames with an accordingly higher sharpness factor than the current frame are in the substitution range.
With a lower rangeC (or rangeF, rangeI), a smaller neighbourhood is evaluated. This is useful if the blurred sequences are short, e.g. only single frames.
With a lower qThres, more frames are above the 'substitution need' limit and left unprocessed. This is also the main speed-control parameter. There should really be no default for qThres, but then you would have faced one more barrier to trying the script. Look at the figures in the original with info=4 and set qThres to the value shown in brackets for an appropriate frame.

If your footage contains many small objects with individual motion (like the skiers, in contrast to the close-up situation in my example clip), then you should try much lower values for lsad and lambda if you're not satisfied with the default output. High values help MVTools2 avoid decomposing distant sharp parts of the same object; low values give MVTools2 the chance to adapt to different independent local motions. More about that in the MVTools2 documentation.

Last edited by martin53; 15th December 2012 at 16:40. Reason: typos
Old 19th November 2012, 23:16   #17  |  Link
lisztfr9
Registered User
 
Join Date: Apr 2010
Posts: 175
I'm a bit frightened by a 1400-line script, because I wrote a 2684-line program and I know what it means to debug code.

Last edited by lisztfr9; 19th November 2012 at 23:21.
Old 21st November 2012, 11:02   #18  |  Link
Bernardd
Registered User
 
Join Date: Jan 2012
Location: Toulon France
Posts: 252
@martin53

You are right: with the 18 Nov version of your script on my computer, the green mask appears with param bright=true, and with this param set to false there is no green mask, with either MaskTools version.

But well done: with the 19 Nov version of your script, there is no green mask with the default value of bright (true), for both MaskTools versions.

Thanks.

I am now trying your last version. I will examine your parameter modification proposals and report back later.
Old 21st November 2012, 16:08   #19  |  Link
Bernardd
Registered User
 
Join Date: Jan 2012
Location: Toulon France
Posts: 252
@martin53

Nice script.

I understand the symbol * for the substitution picture, but can you explain why the text colours change on these pictures?
Two with substitution:
frame 002680 november 19 version_1.jpg
frame 002683 november 19 version_1.jpg
Two without substitution:
frame 002671 november 19 version_1.jpg
frame 002672 november 19 version_1.jpg
Old 21st November 2012, 20:09   #20  |  Link
martin53
Registered User
 
Join Date: Mar 2007
Posts: 408
Quote:
Originally Posted by Bernardd View Post
but can you explain why the text colours change on these pictures?
The subtitle of a candidate is green if the candidate meets all requirements to replace the original, red if it does not. Only the winning candidate (always green, of course) has an asterisk.

The original clip's top-line subtitle is white if the frame does not need substitution by the thresholds, pink if it needs substitution but no candidate meets all requirements, and green if a substitution will be made.

maxErrC etc. are not exact limits: the script loosens them when the current frame is very blurred. Maybe you wondered about the fQ figure.
fQ is the quality factor between the current frame and the best frame in the (-5...5) range. That means: if the current frame has only half the 'quality' of the best frame in this range, all maxErr limits are multiplied by 2. This helps to keep maxErr small, yet allows substitution of frames that drop significantly within their neighbourhood.
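The fQ rule as described can be sketched like this; a hypothetical Python illustration (invented function name, not the script's actual code):

```python
def effective_max_err(max_err, quality_current, quality_best):
    """Loosen an error limit for frames that drop below their neighbourhood,
    per the described fQ rule: half the quality -> double the limit."""
    fq = quality_current / quality_best  # <= 1.0 for a degraded frame
    return max_err / fq

# A frame with half the quality of the best frame in range
# gets its maxErr limit doubled:
print(effective_max_err(1.0, 4.0, 8.0))  # 2.0
```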

And then forceF is another tool for very bad frames. If a frame's quality is below the average of its two direct neighbours and still no substitution candidate meets the limits, another try is made with FreezeFrame of the better of the two direct neighbours. This candidate profits from a much higher maxErr. The idea behind it is that the eye will not notice so much that the frame is just copied, but it will notice single very bad frames.
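A minimal sketch of that forceF decision, assuming per-frame quality figures are available (hypothetical Python, not the script's actual logic):

```python
def force_freeze_candidate(q_prev, q_cur, q_next):
    """If the current frame's quality is below the average of its two
    direct neighbours, fall back to freezing (copying) the better
    neighbour. Returns -1 (copy previous), +1 (copy next), or 0 (keep)."""
    if q_cur < (q_prev + q_next) / 2:
        return -1 if q_prev >= q_next else 1
    return 0

# A frame much blurrier than both neighbours is replaced by
# a copy of the sharper one:
print(force_freeze_candidate(7.0, 5.0, 8.0))  # 1
print(force_freeze_candidate(5.0, 9.0, 5.0))  # 0
```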