Old 8th April 2011, 21:23   #1  |  Link
pbristow
One-dimensional spatial denoisers...?

Has anyone seen or produced a filter that works like SpatialSoften (i.e. thresholded spatial averaging), but in only one dimension? Or where the radius can be set separately for vertical and horizontal dimensions?

I'm trying to clean up some footage from a very bad CCD camera; the footage shows vertical lines of reduced brightness in some areas, chiefly on the darkest parts of the picture. After a LOT of trial and error I find that the best technique for getting rid of them is to use SpatialSoften with a fairly large radius and a low threshold: this smooths over the lines in the areas where they're most visible (large areas of consistent colour/brightness), while leaving more detailed areas mostly untouched (the lines are less obvious there anyway). The downside is that it's slow, and that's partly because it's looking at many more pixels than it needs to.

The ideal solution would be a version of SpatialSoften with independent vertical and horizontal "radius" values, so that I can tell it to look a long way (e.g. 4 pixels) left and right, but only (say) one pixel up and down.

For the moment, what I'm doing is pre-stretching the video in the vertical direction by a factor of 2, applying SpatialSoften(4,3,4) and reducing the height afterward, so that the "footprint" of SpatialSoften is effectively squashed... But of course this is even slower than applying SpatialSoften directly!
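For reference, the workaround currently looks something like this (I won't swear to the exact resizers; they're only for illustration):

Code:
src = last                                  # the DV footage (YUY2, which SpatialSoften needs, IIRC)
src.PointResize(width(src), height(src)*2)  # stretch vertically, so the soften footprint gets squashed
SpatialSoften(4, 3, 4)                      # radius 4, luma threshold 3, chroma threshold 4
BicubicResize(width(src), height(src))      # back down to the original height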

If necessary I'll hack my own version of SpatialSoften for this (I have 30 hours of video to clean up... I need more FPS! ), but I thought I'd check if anyone already had something like this lying around.

It occurs to me that such a filter might also be useful in some situations when dealing with interlaced video (i.e. where you might want to rely less on picture information from above and below than on that from the same video line).

Old 8th April 2011, 21:37   #2  |  Link
Didée
No such filter comes to my mind. Maybe there isn't one.

Haven't used SpatialSoften for ages, but I seem to remember that it was rather slow, generally. For the same filtering framework, you might try Deen instead.


I could imagine that a rather simple chain of bicubicresize (simulating a Gaussian blur) with mt_lutxy limiting could do a reasonable job, and it would be pretty fast. Could you show a picture or short sample of the problem? It's mostly about how "thick" the bands are, and how "sharp" their transition gradient is.
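In rough script terms, something like this (the threshold and the resize factors are only placeholders):

Code:
src  = last
# horizontal "quasi-gauss": downscale the width, then bicubic back up
blur = src.BicubicResize(((width(src)/4)/4)*4, height(src)).BicubicResize(width(src), height(src), 0, 0.75)
# take the blurred value only where it stays within +/-3 of the source, else keep the source
mt_lutxy(src, blur, expr="x y - abs 3 > x y ?", U=2, V=2)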
Old 8th April 2011, 21:46   #3  |  Link
*.mp4 guy
This is probably what you are looking for. You might even want to try the thread-specific version. I don't remember how fast it is, but I can't imagine it being too bad. Quality should be pretty good too, if I understand your description correctly.

Quote:
Originally Posted by Didée
I could imagine that a rather simple chain of bicubicresize (simulating a Gaussian blur) with mt_lutxy limiting could do a reasonable job, and it would be pretty fast. Could you show a picture or short sample of the problem? It's mostly about how "thick" the bands are, and how "sharp" their transition gradient is.
Didée makes a good point here; the specifics are important in cases like this.

Old 8th April 2011, 21:53   #4  |  Link
Didée
That's the thread I had in mind, too. Problem is, if the stripes are "thick" enough that a rather big radius is needed, then mt_convolution is ... well, not exactly fast.
Old 8th April 2011, 22:01   #5  |  Link
*.mp4 guy
Yeah, I've been assuming that the stripes are 1px or so wide, as that's the type of CCD artifact I've seen before, but I have to agree that mt_convolution could be slow if the stripes are big. However, using an offset, and considering it's one-dimensional, it shouldn't get crazy. I seem to remember a thread where it was stacked 5 or so times, so it can't be as slow as other things I've written...
Old 8th April 2011, 22:28   #6  |  Link
pbristow
Here's an example screenshot from my current cleanup process.
- Top left is original video;
- Bottom left is cleaned up using the stretch-SpatialSoften-squash technique (plus a light sharpen);
- Bottom right is the difference between the two (highly exaggerated).

http://www.storageserver.co.uk/files...aCam1.JPG.html

You can see the lines on the singer's jumper and skirt (especially low down, in the shade), and also on the back of the music folder sitting on the stand. The stripes seem to be 1 or 2 pixels wide, and repeat *mostly* every 8 pixels, although sometimes they seem to be every 4 pixels.

As this stuff is all concert footage, I'm mainly concerned with preserving the performers' facial details, and avoiding any artifacts that distract from the performance.
Old 8th April 2011, 22:51   #7  |  Link
pbristow
Thanks for the suggestions, guys! I have pasted what you offered in that thread into a function called "MP4GuyCleanup".
I will try playing with it once my current run has finished.
Old 8th April 2011, 23:00   #8  |  Link
Didée
Hmm. It's just simple DCT crap that I'm seeing there. (That hasn't changed in the last ten years.) The artifacts are not strictly vertical, and they are located primarily in those 8x8 blocks that can be spotted as such. Another word for that is "blocking".

Have you considered using deblock()? Or, if that stuff is not temporally persistent, then a temporal smoother could work just as well. Maybe even better.
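Something along these lines, for instance (the numbers are nothing more than placeholders):

Code:
Deblock(quant=25)                                # deblocker; quant is just a guess
# or, if the artifacts wander from frame to frame:
TemporalSoften(2, 4, 4, scenechange=15, mode=2)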
Old 8th April 2011, 23:12   #9  |  Link
*.mp4 guy
There is a lot of DCT crap, but there are also vertical lines: often munged by the compression, but still a separate artifact. There is one towards the left that runs uninterrupted from the top of the screen all the way to the bottom; that's not strictly a DCT artifact.

Anyway, this will remove almost all of the vertical lines, but does soften the picture a fair amount.
Code:
function DeStripe(Clip C, int "rad", int "offset", int "thr")
{

	rad = Default(rad, 2)
	offset = Default(offset, 0)
	thr_ = Default(thr, 256)


	Blurred = Rad == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 ", vertical = " 1 ", u=1, v=1) : C
	Blurred = Rad == 2 ? offset == 0 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 1 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
	Blurred = Rad == 3 ? offset == 0 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : offset == 1 ?  C.Mt_Convolution(Horizontal=" 1 1 0 1 0 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 0 1 0 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
	Blurred = Rad == 4 ? offset == 0 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 0 1 0 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 2 ? C.Mt_Convolution(Horizontal=" 1 1 0 0 1 0 0 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 0 0 1 0 0 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
	Blurred = Rad == 5 ? offset == 0 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 1 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 0 1 0 1 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 2 ?  C.Mt_Convolution(Horizontal=" 1 1 1 0 0 1 0 0 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 3 ?  C.Mt_Convolution(Horizontal=" 1 1 0 0 0 1 0 0 0 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 0 0 0 1 0 0 0 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
		Diff = Mt_Makediff(C, Blurred)

	THR=string(thr_)
	MedianDiff =  Rad == 1 ? MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : Diff
	MedianDiff =  Rad == 2 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
	MedianDiff =  Rad == 3 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 1 ? MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 3 0 -3 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 3 0 -3 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
	MedianDiff =  Rad == 4 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 1 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 2 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 3 0 -3 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
	MedianDiff =  Rad == 5 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 1 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 2 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 3 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
		ReconstructedMedian = mt_makediff(Diff, MedianDiff)
			Mt_AddDiff(Blurred, ReconstructedMedian)

Return(Mergechroma(Last, C, 1))
}

ImageReader("K:\De-striping_the_VeraCam1.JPG").crop(0, 3, 640, 480).converttoyv12()

destripe(rad=4, offset=3, thr=9)
destripe(rad=2, offset=1, thr=9)
destripe(rad=1, offset=0, thr=4)
I'll see if I can mitigate some of the blurring later. Although, if the stripes are not temporally stable, you should just hit them with some temporal filtering, as Didée suggested.
Old 8th April 2011, 23:19   #10  |  Link
pbristow
Quote:
Originally Posted by Didée
Hmm. It's just simple DCT crap that I'm seeing there. (That hasn't changed in the last ten years.) The artifacts are not strictly vertical, and they are located primarily in those 8x8 blocks that can be spotted as such. Another word for that is "blocking".
I considered that early on... The source is mini-DV, by the way, and I have to say I think the DV standard is *not* well suited to encoding noisy CCD-sourced pictures!

But the artifacts are consistently "stripey" over quite wide areas of what should be consistent colour & (lack of) brightness. I'm used to seeing blocking artifacts appearing where there's a gradient or level of detail that the block doesn't have enough bits to follow, but what would make a DCT *add* a cyclical detail, conjured out of nothing?

Notice the "ruler" of black & white lines I added above and below the picture, for diagnostic purposes. The dark lines on the video mostly coincide with the bright lines on my "ruler", and one of my early attempts at getting rid of them was just to "fill them in" with a static image of white lines on black, using merge(). It succesfully evened things out in the dark areas, but at the cost of visibly *brighter* stripes on the lighter areas! I could have started messing around with luma-masking, but then I had the idea about SpatialSoften.

Old 12th April 2011, 12:58   #11  |  Link
pbristow
OK, I gave DeStripe a good try, with various settings, but it doesn't discriminate enough between the wanted edges and the unwanted ones.

It probably didn't help you guys that I forgot to point out that the original video is a lot darker than what I posted here: my script adds a gamma curve after all the other processing, to lift the performers' faces out of the gloom while avoiding washing out the background. (The original scene is unhelpfully back-lit.) So in the raw source, the stripes I'm trying to get rid of are very low amplitude (deviations of no more than 3 from the correct luma level) - which is why SpatialSoften does such a good job of smoothing them while leaving, say, the transition from mike stand to (bright) background untouched - but then the stripes get boosted in brightness and prominence by the gamma curve.

I've decided to settle for using SpatialSoften with a smaller radius (2), and no vertical stretching tricks. It doesn't completely kill the stripes, but it greatly reduces them and is a lot quicker than the earlier method.
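So the relevant bit of the chain is now roughly this (the thresholds and the gamma figure are only indicative; I actually apply the gamma with SoftLevels):

Code:
SpatialSoften(2, 3, 3)         # kill the stripes while they are still only ~3 luma steps deep
Levels(0, 1.3, 255, 0, 255)    # gamma lift applied afterwards, so the smoothing sees the low-amplitude stripes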

Thanks for your help, anyway. I really should get to grips with the power of MTlut(whatever) sometime...
Old 12th April 2011, 13:31   #12  |  Link
Didée
But you do want to use a vertical stretching trick. Just a slightly different one.

Code:
src = last                       # whatever you have
b1 = src.deen("a2d",3,8,0,min=1) # whatever you need to catch the stripes. 
D1 = mt_makediff(src,b1)
D2 = D1.bicubicresize(width,((height/8)/4)*4) .bicubicresize(width,height,0,0.5)
# ensure that a pixel's diff will only get smaller by blurring, never larger:
D2 = D2.mt_lutxy(D1,"x 128 - y 128 - * 0 < 128 x 128 - abs y 128 - abs < x y ? ?") 
src.mt_makediff(D2,U=2,V=2).mt_logic(src,"max",U=2,V=2)

i.e. if the stripes are exactly vertical, then they should be pretty much immune to vertical blurring.
(Maybe a median would be better suited than the (quasi-)Gaussian shown here. It's just to show the principle.)
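If you want to try the median flavour, D2 could be built with mt_luts instead of the resize pair; something like this (radius is just an example, and check the mt_luts docs before trusting my memory of it):

Code:
# vertical median of the diff over +/-2 lines, as a drop-in for the quasi-gauss D2 above
D2 = D1.mt_luts(D1, mode="med", pixels=" 0 -2  0 -1  0 0  0 1  0 2 ", expr="y", U=2, V=2)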
Old 14th April 2011, 02:14   #13  |  Link
pbristow
Well, I have now firmly established that the key to separating these unwanted stripes from wanted details really is the smallness of the luma changes involved, rather than their verticality.

So, throwing caution to the wind and my trousers in the washing machine, I have spent today learning how to use the MT_lut... family of filters, and creating my own crude approximation to a "horizontal only SpatialSoften". Here it is, complete with explanatory comments (written mostly for my own benefit, as otherwise I keep forgetting what things do!).

N.B. I will probably change the name to something more sensible at some point.

Code:
function RollMeOwn(Clip C, int "rad", int "thr", int "option")
{
	# By Paul Bristow (pbristow).
	# Performs selective horizontal averaging of luma, where variation is already low.
	# (Similar in concept to SpatialSoften, but cruder and only working in one dimension.)
	#
	# Works in two stages:
	# 1. First look at a horizontal region within +/- rad of the current pixel;
	#  - If the pixel's luma value is *very* close (difference less than half of thr) to the average in that region, then replace it with that average.
	# 2. Otherwise, look at a small region, +/- 0.5*rad of the pixel;
	#  - If the pixel's luma value is *fairly* close (difference less than thr) to the average in the smaller zone, then replace it with that average.
	#
	# This approach avoids excessively wide blurring near the edges of dark regions, while giving lots of blur within the dark regions. 

	rad = Default(rad, 4)
	thr_ = Default(thr, 3)
	THR=string(thr_)
	HALF_THR=string(thr_/2)
	option = Default(option, 0)

	# Create required blurring pattern, given the radius:
	Blurred = Rad == 1 ? 	C.Mt_Convolution(Horizontal=" 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 2 ? 	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1)   : \
		  Rad == 3 ? 	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1)  : \
		  Rad == 4 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 5 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 6 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 7 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 8 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  C

	Blurred2 = Rad == 1 ? 	C.Mt_Convolution(Horizontal=" 1 2 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 2 ? 	C.Mt_Convolution(Horizontal=" 1 1 1 ", vertical = " 1 ", u=1, v=1)   : \
		  Rad == 3 ? 	C.Mt_Convolution(Horizontal=" 1 2 2 2 1 ", vertical = " 1 ", u=1, v=1)  : \
		  Rad == 4 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 5 ?	C.Mt_Convolution(Horizontal=" 1 2 2 2 2 2 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 6 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 7 ?	C.Mt_Convolution(Horizontal=" 1 2 2 2 2 2 2 2 1 ", vertical = " 1 ", u=1, v=1) : \
		  Rad == 8 ?	C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : \
		  C

# Option 1: Decide based on similarity to pre-computed average (i.e. the blurred clip):
	OptA =	MT_Lutxy(C, Blurred,  		expr = " X Y - abs  "+THR+" <=  Y X ? " )

# Option 1b: Two-stage version:
	OptB =	MT_Lutxyz(C, Blurred, Blurred2, expr = " X Y - abs  "+HALF_THR+" <=  Y     X Z - abs  "+THR+" <=  Z X ?   ? " )

# Option1c: ...with added brightness threshold:
	OptC =	MT_Lutxyz(C, Blurred, Blurred2, expr = " Y 48 >=   X    X Y - abs  "+HALF_THR+" <=  Y     X Z - abs  "+THR+" <=  Z X ?   ?   ? " )

# Option1d: ...with added brightness threshold only applied to the wider blur situation:
	OptD =	MT_Lutxyz(C, Blurred, Blurred2, expr = " Y 48 <   X Y - abs  "+HALF_THR+" <=   &    Y     X Z - abs  "+THR+" <=  Z X ?   ? " )
# ... i.e.:	"If local brightness is less than 48 AND consistent over a wide area, then use the wider average; 
# 		 else, if brightness is consistent over the narrower area, use the narrower average; 
#		 otherwise leave the original alone.


# Option1e: Better selection method, based on max difference within the region of interest:.
# This prevents averaging too widely over regions with gradual gradients, or which are bordering a bright area 
# at one end and a dark area at the other (so the average is close to the centre value, but the extremes aren't);

	MaxDiffs  = mt_luts( C, C, mode = "max", pixels = mt_rectangle( rad, 1 ), expr = "x y - abs" )
	MaxDiffs2 = mt_luts( C, C, mode = "max", pixels = mt_rectangle( rad/2, 1 ), expr = "x y - abs" )
	# First lay in the narrow blurring where appropriate:
	C2 = MT_Lutxyz(C, Blurred2, MaxDiffs2, expr = "              Z      "+THR+"    <=           Y     X   ? " )
	# Now overlay the wider blurring where appropriate. (Wide trumps narrow, wherever both could apply.)
	OptE = MT_Lutxyz(C2, Blurred,  MaxDiffs,  expr = " Y 48 <       Z   "+HALF_THR+"  <=    &      Y     X   ? " )

# Option f: ...And for speed, an option with just the wider blurring:
	OptF = MT_Lutxyz(C,  Blurred,  MaxDiffs,  expr = " Y 48 <       Z   "+HALF_THR+"  <=    &      Y     X   ? " )

# Option g: ...And one with just the narrower blurring:
	OptG = MT_Lutxyz(C,  Blurred2, MaxDiffs2, expr = "              Z      "+THR+"    <=           Y     X   ? " )


# Choose and return final result:
	return  option == 1 ? OptA :\
		option == 2 ? OptB :\
		option == 3 ? OptC :\
		option == 4 ? OptD :\
		option == 5 ? OptE :\
		option == 6 ? OptF :\
		option == 7 ? OptG :\
		C
}
Gosh, I am proud!

Using option=7 (with rad=5, thr=4) in the context of my overall script (which includes things like yadif, TemporalSoften(1,5,6) and SoftLevels to apply the gamma curve), I get 5.0fps, which is the speed goal I was aiming for. The results are at least as good as I got with SpatialSoften(3,3,3) running at only 3.3 fps. For tougher cases I can use option 5, which uses 2 passes and completely nukes the stripes with no ill effects at all... although it only runs at 3fps.
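For context, the surrounding script looks roughly like this (the path and the gamma value are placeholders, and I use SoftLevels rather than plain Levels for the gamma):

Code:
AviSource("concert_tape.avi")        # placeholder for the real DV capture
Yadif()                              # deinterlace
RollMeOwn(rad=5, thr=4, option=7)    # the new de-striper
TemporalSoften(1, 5, 6)
Levels(0, 1.3, 255, 0, 255)          # stand-in for the SoftLevels gamma lift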

Since I own the camera that produces these weird effects (and it is definitely the CCD that's doing it: I noticed that on a shot where I used the camera's image stabiliser, when the camera moves the lines start to shift a moment later, and drift back into their original position once the camera settles), I think I'll be needing this function quite a lot! (That's what you get for buying 2nd hand, I guess... )

If anyone can offer tuning tips to make it run any faster, without sacrificing effectiveness, please do. In the meantime, I can get back to running those overnight clean-up jobs, knowing that each one will at least be finished by next lunchtime!

Old 14th April 2011, 02:35   #14  |  Link
pbristow
The actual output (10MB, 600 frames at 50fps): http://www.storageserver.co.uk/files...2-235.avi.html

Comparison with source (14.4MB, 600 frames at 50fps):
http://www.storageserver.co.uk/files...MPARE.avi.html

Notice in particular that the performer's face is almost untouched!
Old 14th April 2011, 10:42   #15  |  Link
Gavino
Quote:
Originally Posted by pbristow
If anyone can offer tuning tips to make it run any faster, without sacrificing effectiveness, please do.
mt_lutxyz takes a long time to compile and uses a lot of memory. You can speed up script loading (although not run-time) by making the calls to mt_lutxyz conditional on the corresponding option value, i.e. only evaluate the ones you are going to use.
Code:
OptB = option == 2 ? MT_Lutxyz(...) : NOP()
...
If you like, you can also remove the ugly Rad conditionals (and the implicit upper limit on Rad) by using a string repetition function:
Code:
function RepeatStr(string s, int n) {
  return (n<=0 ? "" : s + RepeatStr(s, n-1))
}
...
Blurred = C.Mt_Convolution(Horizontal=RepeatStr("1 ", 2*Rad+1), vertical = " 1 ", u=1, v=1)
Blurred2 = Rad%2 == 0 ? \
  C.Mt_Convolution(Horizontal=RepeatStr("1 ", Rad+1), vertical = " 1 ", u=1, v=1) : \
  C.Mt_Convolution(Horizontal="1 "+RepeatStr("2 ", Rad)+"1 ", vertical = " 1 ", u=1, v=1)
Old 14th April 2011, 20:02   #16  |  Link
pbristow
Useful tips, thanks Gavino.