|
28th March 2005, 10:32 | #1 | Link |
Registered User
Join Date: Apr 2004
Location: Romania
Posts: 101
|
Contrast enhancement in dark areas
This is an adaptation of
http://www.gimpguru.org/Tutorials/ContrastMask/

The original method (for photos):
1. duplicate layer
2. duplicate layer -> B&W
3. B&W -> negative
4. negative -> Mode = Overlay
5. Overlay -> Gaussian Blur 10-30

An AviSynth adaptation:

a = AviSource()
b = a.GrayScale().Levels(0,1,255,255,0).Blur(0.2,0.2)
Layer(a,b,"add",10)

Happy testing |
28th March 2005, 10:56 | #2 | Link |
Registered User
Join Date: Jan 2005
Posts: 48
|
Great. Simple, yet clever. I love this type of script, as much as I like those mega-watt iPP or HybridFuPP. Actually I'd like to love iPP or HybridFuPP more, but my CPU (even though at 3.4 GHz) tells me to love them less
|
28th March 2005, 14:37 | #3 | Link |
Registered User
Join Date: Feb 2002
Posts: 1,195
|
It gives nice results, but you should try to use YV12Layer to increase the speed of your script and avoid converting the input source to YUY2.
++
__________________
AutoDub v1.8 : Divx3/4/5 & Xvid Video codec and .OGG/.MP3/.AC3/.WMA audio codec. AutoRV10 v1.0 : Use RealVideo 10 Codec and support 2 Audio Streams and Subtitles. |
28th March 2005, 14:54 | #4 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
hey, cool. with a slight modification this will also make a fast and nice substitute for the virtualdub filter "windowed histogram equalization".
__________________
sucking the life out of your videos since 2004 |
28th March 2005, 22:33 | #5 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
Sorry to spoil the fun, but this does not enhance contrast. On the contrary, it reduces contrast, shifting all outer levels towards 128. The pleasing effect is only the brightening of the dark parts. Also, the histogram shows a loss of tonalities, indicated by "spikes" in the histogram. Please check yourself with
interleave(a,last).histogram(mode="levels")

Then, blur(0.2,0.2) is not a simulation of a radius-10 gaussian blur. To simulate that, you'd need to replace that blur() with resize(33%).resize(100%)
__________________
- We're at the beginning of the end of mankind's childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
28th March 2005, 23:53 | #6 | Link | |
ReMember
Join Date: Nov 2003
Posts: 416
|
Quote:
Last edited by Backwoods; 29th March 2005 at 00:33. |
|
29th March 2005, 03:39 | #7 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
Backwoods: Yes, but you've got hawk's eyes, and are probably trying it on cartoon or anime. Try bicubic(25%, 0.33,0.33).bicubic(100%, 1.0,0.0) instead of lanczos
I'm too tired right now to work it all through. But please, just try the following:
- raise strength in layer() from "10" to, say, "32" or "64"
- *do check* the result via interleave() & histogram(mode="levels") as I said.

Then you will see what is really going on. Nothing against the method: it is a simple method, working very fast. And so is the result: fast and, well, simple. Put shortly, the method is nothing more than a combination of {reduction of dynamic range} plus {unsharp masking}. In particular, one is losing blacks and whites.

BTW, using MaskTools, one can do exactly the same as a one-liner, without any blurring (ultrafast): Code:
strength = 10
yv12lut(source, "x 128 - 128 x - "+string(strength)+" 128 / * + 128 +", U=2,V=2)
# interleave( source, last ) .histogram(mode="levels")

Code:
strength = 10
blurred = source.blur(0.2,0.2)
#blurred = source.bicubicresize(source.width/16*4,source.height/16*4) \
#          .bicubicresize(source.width,source.height, 1.0, 0.0)
yv12lutxy(source, blurred, "x 128 - 128 y - "+string(strength)+" 128 / * + 128 +", U=2,V=2)
# interleave( source, last ) .histogram(mode="levels")

- chroma is not touched (the original script washes out colors, too)
- it is faster
- it works in YV12
- it is more precise: a one-step operation, hence no rounding errors.

Also, try both blurring versions of the 2nd scriptlet. You'll notice that the first one (weak blurring @ 0.2) destroys the homogeneity of a frame's histogram, and the output also loses sharpness. With the gaussian blurring, the histogram stays smooth and the output doesn't lose sharpness.

Good night then ... /*snoozes away*/
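Read as infix, the two RPN strings above are out = x + (128 - x)*strength/128 (the one-liner) and out = x + (128 - y)*strength/128 (the blurred version, with y the blurred pixel). A small Python sketch of mine (not AviSynth code) shows both the levels squeeze at the extremes and the {range reduction} + {unsharp masking} decomposition:

```python
def yv12lut_expr(x, strength=10):
    """One-clip version: (x - 128) + (128 - x)*strength/128 + 128."""
    return (x - 128) + (128 - x) * strength / 128.0 + 128

def yv12lutxy_expr(x, y, strength=10):
    """Two-clip version: y is the blurred pixel value."""
    return (x - 128) + (128 - y) * strength / 128.0 + 128

def decomposed(x, y, strength=10):
    """Identical value, split into {range squeeze} + {unsharp mask} terms."""
    squeeze = (128 - x) * strength / 128.0   # pulls levels toward the middle
    unsharp = (x - y) * strength / 128.0     # boosts where x differs from blur
    return x + squeeze + unsharp

print(yv12lut_expr(0), yv12lut_expr(255))             # 10.0 245.078125
print(yv12lutxy_expr(200, 180), decomposed(200, 180)) # equal: 195.9375
```

With strength=10 the endpoints land at 10 and roughly 245, i.e. exactly a levels(0,1.0,255,10,245) squeeze, plus the unsharp term when a blurred clip is used.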
29th March 2005, 07:49 | #9 | Link | ||
Registered User
Join Date: Apr 2004
Location: Romania
Posts: 101
|
Quote:
Quote:
The quality always depends on the shooting, but sometimes you can use some tricks, knowing that they are tricks. |
||
31st March 2005, 08:55 | #10 | Link | |
Really? Really really!
Join Date: Jan 2004
Location: Finland
Posts: 163
|
Quote:
Regards, M7S
__________________
How many Microsoft employees does it take to screw in a light bulb? None, they declare darkness a new standard. |
|
31st March 2005, 12:43 | #11 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
|
M7S: it will indeed make a large-radius blur, but it will be slower than the downsizing/upsizing combo. Another way (in YV12 only) would be to use YV12Convolution with the horizontal and vertical vectors set to something like "1 2 1" or "1 4 6 4 1" or "1 6 15 20 15 6 1" and so on. It should be faster than the blur combo at very high radii, but it'll be slower (I think) than the down/upsize combo.
|
31st March 2005, 18:41 | #12 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
Indeed. Chaining several blur()'s is theoretically the right thing. But in practice there will be an accumulation of rounding errors, which leads to a) inefficiency for big radii and b) sometimes nasty artefacts, especially in flat areas.
For doing a pretty precise gaussian blur, yv12convolution is just the thing. However, for most tasks the resizing combo is precise enough, and it's really so much faster. That's probably not so awfully important for a single operation, but if you need several of them ...

I'm fiddling with a script that internally uses a good bunch of different gaussian-blur operations. Doing all of them through BicubicResize, speed is slow but acceptable. Exchanging all of 'em with yv12convolution, VDub's processing window is too small to completely hold the numbers for "estimated time" ...

This one is fast, precise enough, and also allows different radii for the x-axis and y-axis: Code:
function FastGaussBlur(clip clp, float rx, float "ry")
{
  ry = default(ry, rx)
  rx = rx + 1.0
  ry = ry + 1.0
  ox = clp.width
  oy = clp.height
  oxs1 = int(round(ox/4.0/sqrt(rx))*4)
  oys1 = int(round(oy/4.0/sqrt(ry))*4)
  oxs2 = int(round(sqrt(ox*oxs1)/4.0)*4)
  oys2 = int(round(sqrt(oy*oys1)/4.0)*4)
  oxs1 = oxs1<16 ? 16 : oxs1
  oys1 = oys1<16 ? 16 : oys1
  clp.bicubicresize(oxs1,oys1)
  rx>9.0||ry>9.0 ? bicubicresize(oxs2,oys2,0.5,0.25) : last
  return bicubicresize(ox,oy,1,0)
}
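To see the staging, the size arithmetic of FastGaussBlur can be transcribed to Python (my transcription, for inspection only; one caveat: Python's round() uses banker's rounding while AviSynth rounds half away from zero, so exact-.5 cases could differ):

```python
import math

def fastgauss_sizes(ox, oy, rx, ry=None):
    """Mirror the intermediate-size arithmetic of the FastGaussBlur script.

    Returns ((oxs1, oys1), stage2): the first downscale size, and the
    second-stage size (or None) used only when the radius exceeds 9."""
    ry = rx if ry is None else ry
    rx, ry = rx + 1.0, ry + 1.0
    oxs1 = int(round(ox / 4.0 / math.sqrt(rx)) * 4)
    oys1 = int(round(oy / 4.0 / math.sqrt(ry)) * 4)
    # second stage is near the geometric mean of original and stage-1 sizes
    oxs2 = int(round(math.sqrt(ox * oxs1) / 4.0) * 4)
    oys2 = int(round(math.sqrt(oy * oys1) / 4.0) * 4)
    oxs1 = max(oxs1, 16)   # clamp happens after oxs2/oys2, as in the script
    oys1 = max(oys1, 16)
    stage2 = (oxs2, oys2) if (rx > 9.0 or ry > 9.0) else None
    return (oxs1, oys1), stage2

print(fastgauss_sizes(720, 576, 16))   # ((176, 140), (356, 284))
```

For a PAL frame at radius 16 this downsizes to 176x140, upsizes through the intermediate 356x284 (the soft b=0.5/c=0.25 bicubic step), then back to full size.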
31st March 2005, 19:16 | #13 | Link |
Registered User
Join Date: Aug 2004
Location: Denmark
Posts: 807
|
What about VariableBlur? I haven't seen any rounding artifacts with it yet.
|
31st March 2005, 21:05 | #14 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
tsp:
Holy sh**, somehow I had totally forgotten about your filter! Sorry, and thanks for the reminder. Okay, just a quick test of precision, artefacts and speed:

As can be seen clearly, chaining blur()'s is practically unusable. Not only does the color degrade, but note the banding artefacts on the wall, and the overall degradation for radii 24 and 64. YV12convolution and VariableBlur both seem very precise. FastGaussBlur is less precise, but IMHO sufficient for most "normal" blurring requirements.

And the speed: for each filter method, I had it compute 10 chained instances of gaussian blurring with radius=16. In the case of yv12convolution, I cheated a little: since I was too lazy to create Pascal's triangle up to the 34th line, I made radius=16 by applying radius=4 four times ("1 8 28 56 70 56 28 8 1"). Dunno whether or not this puts any penalty on yv12convolution, but still ... look below.

Results: Code:
FastGaussBlur:    11.2 fps
VariableBlur:      6.9 fps
chained blur():    1.8 fps
yv12convolution:   1.0 fps
Last edited by Didée; 25th January 2007 at 13:06. |
31st March 2005, 23:35 | #15 | Link | |
x264 developer
Join Date: Sep 2004
Posts: 2,392
|
Quote:
Likewise, you have to chain 256 blur(1)'s to get a radius=16 gaussian. Last edited by akupenguin; 31st March 2005 at 23:58. |
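The 256-pass figure can be cross-checked numerically (my sketch, not from the thread, and it assumes one blur(1.0) pass acts like a [1 2 1]/4 kernel with variance 0.5): variances add under repeated convolution, and with the 1/e radius convention (radius = sigma*sqrt(2)) 256 passes land exactly on radius 16:

```python
import math

PASS_VAR = 0.5   # assumed variance (SD^2) of one [1 2 1]/4 blur(1.0) pass
passes = 256

sigma = math.sqrt(passes * PASS_VAR)   # variances add when blurs are chained
radius = sigma * math.sqrt(2)          # 1/e point of exp(-x^2/(2*sigma^2))

print(radius)   # 16.0
```

So the effective sigma after 256 passes is sqrt(128) (about 11.3), and the 1/e point of the resulting gaussian sits at offset 16.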
|
1st April 2005, 00:44 | #16 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
Then you caught me on notation & terminology. So this means the diameter of the used convolution kernel represents the radius of the gaussian? Doesn't seem obvious to me, but if it is like that, well, let it be ...
That means the sqrt's should be taken out of the above function to make it "correct" - and tsp should adjust his plugin too, since it seems to interpret the gaussian radius wrongly as well. Back to topic: Quote:
levels( 0,1.0,255, 10,245 )

which isn't that revolutionary as a trick. It has to be a big gauss, else you're only doing this simple levels squishing, but in a more complicated way.
Last edited by Didée; 1st April 2005 at 00:51. |
|
1st April 2005, 01:41 | #17 | Link | |
x264 developer
Join Date: Sep 2004
Posts: 2,392
|
Quote:
".01 .01 .01 .01 .02 .02 .03 .04 .05 .06 .07 .09 .11 .13 .15 .18 .21 .24 .28 .32 .37 .42 .47 .52 .57 .62 .68 .73 .78 .83 .87 .91 .94 .97 .98 1.00 1.00 1.00 .98 .97 .94 .91 .87 .83 .78 .73 .68 .62 .57 .52 .47 .42 .37 .32 .28 .24 .21 .18 .15 .13 .11 .09 .07 .06 .05 .04 .03 .02 .02 .01 .01 .01 .01"

I call that radius=16.

<edit> To be exact, radius = the distance from the center to the coefficient with value 0.37 = 1/e (assuming the center coeff is 1)</edit>

Last edited by akupenguin; 1st April 2005 at 05:36. |
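That convention is easy to reproduce (a sketch of mine, for an idealized single-peak gaussian rather than the exact listing above): build an unnormalized gaussian whose 1/e point sits at offset 16 and read off the coefficients:

```python
import math

radius = 16
sigma = radius / math.sqrt(2)   # chosen so exp(-radius^2/(2*sigma^2)) = 1/e

def tap(x):
    """Unnormalized gaussian coefficient, center value 1.0."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

# center tap 1.0, taps 16 away are 0.37 = 1/e, far taps fade toward .01
print(round(tap(0), 2), round(tap(16), 2), round(tap(35), 2))
```

The resulting taps track the listed kernel: 1.00 at the center, 0.37 at offset 16, and a tail of .01 values at the edges.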
|
1st April 2005, 04:26 | #18 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,389
|
Quote:
edit: Wait. Does that make sense? Because, if one scaled the values of the above kernel to, say, a [0,1024] range, then those several ".01" coefficients would get different, unique numbers. Thus the "radius" would change, despite the fact that a kernel with the same shape is still sampling over the same area ... no, I don't get it yet. /edit

Well, however ... feeding yv12convolution the above kernel, a single instance runs @ 2.1 fps. VariableBlur w/ radius=256 runs @ 6.2 fps. The resizer thingie runs at ... ~65 fps. One has to think twice whether one's actual need for gaussian blurring really requires "maximum precision". Perhaps a pretty close approximation is sufficient, especially if it runs 10x - 30x faster.
Last edited by Didée; 1st April 2005 at 04:34. |
|
2nd April 2005, 17:12 | #19 | Link |
Registered User
Join Date: Aug 2004
Location: Denmark
Posts: 807
|
remember that a gaussian blur is based on the gaussian function:
1/(2*pi*SD^2) * exp( -(x^2+y^2) / (2*SD^2) )

radius 1 is then equal to 2 SD (95% of the blurred pixel value is based on the pixels within the radius). An ordinary 1 2 1 kernel is about the same as 0.5 SD^2
1 4 6 4 1 = 1 SD^2
1 8 28 56 70 56 28 8 1 = 2 SD^2
that is, each step down Pascal's triangle is equal to a 1 SD^2 increase. radius=1 in VariableBlur is the same as 1/2 SD^2, radius 2 = 1 SD^2, etc. The bigger the radius is, the better the binomial blur approaches the true gaussian blur. Also note that VariableBlur uses a 5x5 kernel for the blur and repeats it to produce higher radii. Due to the greater kernel size (compared to the 3x1 kernel in Blur()), rounding error isn't a problem. Last edited by tsp; 2nd April 2005 at 17:17. |
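The SD^2 figures above check out numerically. A Python sketch of mine (not from the thread) computes the variance of normalized binomial kernels: row n of Pascal's triangle has variance n/4, so rows 2, 4 and 8 give 0.5, 1 and 2 SD^2:

```python
import math

def binomial_kernel(n):
    """Normalized row n of Pascal's triangle, taps at offsets -n/2 .. n/2."""
    coeffs = [math.comb(n, k) for k in range(n + 1)]
    total = sum(coeffs)
    return [(k - n / 2.0, c / total) for k, c in enumerate(coeffs)]

def variance(kernel):
    """SD^2 of a zero-centered, normalized kernel."""
    return sum(w * x * x for x, w in kernel)

# 1 2 1 -> 0.5, 1 4 6 4 1 -> 1.0, 1 8 28 56 70 56 28 8 1 -> 2.0
for n in (2, 4, 8):
    print(n, variance(binomial_kernel(n)))
```

This is just the binomial variance n*p*(1-p) with p = 1/2, which is also why variances add when such kernels are chained.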
Thread Tools | Search this Thread |
Display Modes | |
|
|