Doom9's Forum > Capturing and Editing Video > Avisynth Development
Old 12th November 2010, 06:36   #61  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
Gavino --

I looked at convert.h just enough to find the yuy2 to rgb conversion (and back) and didn't see any option for matrix. Now that you've clued me in, I see the matrix choices in convert_yuy2.cpp.

IanB --

Doing everything consistently in YUV space has the advantage that a given video source would give very similar results through the filter whether it had been converted to RGB or to YUV first, whereas using max(r,g,b) versus luma(r,g,b) would give different results. In RGB space, a pure blue pixel has already pegged the range (a 0x00 component and a 0xFF component) and no levels adjustment is possible, while in YUV space such a pixel has a luma value of only ~11%.
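To put numbers on that (an illustrative Python sketch, not code from the filter): pure blue is already pegged for a max(r,g,b) measure, but contributes only ~11% luma under the Rec.601 weights.

```python
# Illustrative sketch (not the filter's code): why a pure blue pixel is
# already "pegged" for a max(r,g,b) measure but not for a luma measure.

def rec601_luma(r, g, b):
    """Rec.601 luma, on the same 0..255 scale as the inputs."""
    return 0.299 * r + 0.587 * g + 0.114 * b

r, g, b = 0, 0, 255            # pure blue spans 0x00..0xFF in RGB already
print(max(r, g, b))            # 255: no headroom left for a levels stretch
print(rec601_luma(r, g, b))    # ~29, about 11% of full scale: lots of headroom
```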

I've looked a bit at some imagemagick filters for ideas and an array of options are offered, including adjusting each rgb channel independently, or by luma, or intensity, or brightness, or ...

http://www.fmwconcepts.com/imagemagi...evel/index.php

From the thumbnail example given it is hard to discern the difference between some of them. At any rate, I guess there isn't one right way to do it; different approaches make sense in different situations. Because I'm not doing much in the way of optimization (unrolling or assembly), each flavor of conversion routine probably costs only a few hundred bytes incrementally, so it wouldn't be unthinkable to offer a mode option to specify joint rgb vs. separate rgb vs. 709 luma vs. 601 luma vs. average intensity. I just had a look at levels.cpp and it isn't trying any harder than I am to be fast, so I guess that is acceptable.

It would also be a relatively small extension to offer an auto gamma option based on the same statistics being gathered for autolevels and the same machinery for adjusting pixel values. From the same source as before:

http://www.fmwconcepts.com/imagemagi...amma/index.php

A more fully featured autolevels might want to support more of the features of levels. Ignoring all issues of feature creep and just dreaming out loud, I could imagine these features fitting together into one filter:

Version 0.3 options:

[filterRadius] -- number of frames before/after to average over

[sceneChgThresh] -- sensitivity to detecting scene changes

[frameOverrides] -- specify frames or ranges to ignore/enforce scene changes

new options:

[debug] -- info/debug option (in version 0.4 already)

[clip_low] -- fraction of dark pixels to ignore (default 1/256)
[clip_high] -- fraction of bright pixels to ignore (default 1/256)
[clip] -- specifies same value for clip_low, clip_high

[input_low] -- as in levels()
[input_high] -- as in levels()
[output_low] -- as in levels()
[output_high] -- as in levels()
[coring] -- as in levels()
[gamma] -- as in levels()

[autogamma] -- mutually exclusive with [gamma]; 0 by default, meaning don't auto-adjust gamma, while 1.0 means estimate and apply a gamma. Weights in between apply the correction proportionally, and values above 1.0 goose it higher than what the algorithm calculates.

[mode] -- used only for rgb formats, "rgb" means adjust each r, g, b plane separately, "joint" means use the min/max of rgb component of each pixel and adjust all planes the same, while rec709/rec601/pc.709/pc.601/average are like the "matrix" option for converting rgb/yuv.

One thing is that I would want the filter to produce the same results as the 0.3 version given the same parameters. This means that by default outliers in the distribution could map to luma < 16 or luma > 235. There should be some mechanism to specify that these values should be clamped to [16 .. 235], perhaps just another flag, but maybe there is some less verbose way to infer this without another parameter. For instance, mode could default to pc.601, and "pc.*" would signify whether luma is limited to [16 .. 235] when operating in YUV space.
frustum is offline   Reply With Quote
Old 12th November 2010, 11:22   #62  |  Link
boondoggle
Registered User
 
Join Date: Jan 2008
Posts: 11
I'm not sure if you still want the error testing with the 0.3 version, but I did it, and the stretching error was there in YV12 just as in the new version. I too agree that the percentage option would be nice.
Old 27th December 2010, 07:06   #63  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
autolevels() version 0.6 beta

I had some time to work on autolevels and I've made some fixes and improvements to the filter. Perhaps I went overboard. Nevertheless, I'm offering it as a beta 0.6 of the filter; all feedback is welcomed.

Rather than doing all the pixel adjustment with new code and handling RGB adjustment by doing on-the-fly RGB to YUV to RGB remapping, I took a different approach. The filter measures the luma histogram and takes stats, but it recycles the code from the built-in levels() function to do the pixel value remapping. In essence, autolevels() is now kind of a front end to levels().

There are three benefits to this approach. First, the old autolevels() code only remapped the Y plane and didn't touch the U and V planes, which was an error. U and V are not orthogonal to Y; when the brightness of a pixel is increased, not only is Y made larger, but U and V must move proportionally further away from their midpoint value, otherwise color saturation is lost. Second, the old autolevels allowed luma to be remapped outside the range of [16 .. 235]. Third, the levels() code handles RGB pixels in RGB space without all the YUV remapping. For an example of how processing U and V improves color fidelity, please see my webpage about it.
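The U/V handling described above amounts to something like this per-pixel sketch (illustrative Python, not the code recycled from levels(), which works on lookup tables and may differ in detail): luma is stretched by some gain, and chroma is scaled away from its 128 midpoint by the same factor so saturation tracks brightness.

```python
# Illustrative sketch: stretch luma by a gain and scale chroma away from
# its neutral midpoint (128) by the same gain so saturation is preserved.

def clamp(x, lo, hi):
    return max(lo, min(hi, int(round(x))))

def adjust_pixel(y, u, v, gain):
    y2 = 16 + (y - 16) * gain      # stretch luma around the black level
    u2 = 128 + (u - 128) * gain    # push chroma away from neutral...
    v2 = 128 + (v - 128) * gain    # ...by the same factor
    return clamp(y2, 16, 235), clamp(u2, 16, 240), clamp(v2, 16, 240)

# A neutral gray stays neutral; a colored pixel keeps its saturation:
print(adjust_pixel(100, 128, 128, 1.5))   # (142, 128, 128)
print(adjust_pixel(200, 100, 160, 1.5))   # (235, 86, 176)
```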

Because I'm not doing RGB->YUV->RGB remapping, I've removed the "matrix" parameter which I had introduced in the previous version (0.4).

The other big new feature is that I've added an autogamma option, along with a midpoint parameter. If autogamma=true, then a gamma is chosen such that the mean of the distribution is remapped to the midpoint of the range (0.5 by default, though I usually find a midpoint of 0.35 or 0.4 more pleasing). As with the levels, the gamma value is based on the mean luma value for a window of frames around the current frame. I've also added a filter called autogamma(), which is identical in functionality to autolevels() except that it defaults to autolevels=false, autogamma=true. The simplest invocation would simply be out = in.autogamma().
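One plausible formulation of that gamma estimate (an illustrative Python sketch; the filter's exact formula may differ): since a levels-style curve applies x ** (1/gamma), the gamma that sends the normalized mean onto the midpoint is log(mean)/log(midpoint).

```python
import math

def estimate_gamma(mean_luma, midpoint=0.5, lo=16, hi=235):
    # Normalize the window's mean luma into [0, 1] over the video range,
    # then solve mean_norm ** (1 / gamma) == midpoint for gamma.
    mean_norm = (mean_luma - lo) / (hi - lo)
    return math.log(mean_norm) / math.log(midpoint)

# A frame averaging a quarter of the way up the range needs gamma 2.0
# to land its mean on the default midpoint of 0.5:
print(estimate_gamma(16 + 0.25 * 219))   # 2.0
```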

While I was at it, I incorporated most of the features from levels(): gamma, input_high, input_low, output_high, output_low. "gamma" (default=1.0) can be used to manually specify a gamma remapping function that can be folded into autolevels at zero per-pixel cost; it is ignored if autogamma=true. input_high and input_low are normally computed by the autolevels() function as the brightest and darkest pixels of the frame, but one or both can be overridden manually. output_high and output_low can be used to specify the output range of values (normally 16..235 for Y, 16..240 for U,V, and 0..255 for RGB).
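For reference, the transfer function behind those parameters, as levels() is commonly documented (illustrative Python, ignoring coring and the integer lookup tables the real code uses):

```python
def levels_map(x, input_low, gamma, input_high, output_low, output_high):
    # Normalize into [0, 1], apply the gamma curve, rescale to the output
    # range -- the classic levels transfer function.
    t = (x - input_low) / (input_high - input_low)
    t = max(0.0, min(1.0, t))          # clip outliers before the power
    return t ** (1.0 / gamma) * (output_high - output_low) + output_low

print(levels_map(16, 16, 1.0, 235, 0, 255))    # 0.0
print(levels_map(235, 16, 1.0, 235, 0, 255))   # 255.0
```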

There was a version 0.5 of the filter which I distributed to one person (videoFred) that added a few features. One of the complaints about autolevels() is that it ignores the darkest and brightest 1/256th of all pixels when determining the range of the luma distribution, which can be too much. The parameter "ignore" can specify the fraction of outliers to ignore, and ignore_high and ignore_low can be used to specify them separately if you want asymmetric tail clipping.
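The tail clipping can be sketched roughly like this (illustrative Python; the real filter's histogram walk may differ in details such as rounding):

```python
def luma_range(hist, ignore_low=1/256, ignore_high=1/256):
    # hist is a 256-bin luma histogram; walk in from each end, discarding
    # up to the requested fraction of pixels as outliers.
    total = sum(hist)
    lo, acc = 0, 0
    while lo < 255 and acc + hist[lo] <= total * ignore_low:
        acc += hist[lo]
        lo += 1
    hi, acc = 255, 0
    while hi > lo and acc + hist[hi] <= total * ignore_high:
        acc += hist[hi]
        hi -= 1
    return lo, hi

# 512 pixels: two black specks and two white specks are discarded at the
# default 1/256, so the detected range covers the bulk of the distribution.
hist = [0] * 256
hist[0], hist[50], hist[200], hist[255] = 2, 254, 254, 2
print(luma_range(hist))          # (50, 200)
print(luma_range(hist, 0, 0))    # (0, 255): ignore nothing
```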

There is one last feature, also added by the 0.5 version. The autolevels and autogamma adjustments are based on the distribution of luma values in the frame. Sometimes the edges of the video are not representative of the image and we might want to ignore the edges for the purposes of gathering statistics (although the full frame is always processed). "border=20" would indicate to ignore 20 pixels along all four edges; border_t, border_b, border_l, border_r can be used to individually specify borders for top, bottom, left, right respectively.

Yes, there are a lot of parameters now, about 20 of them, but most of the time you can leave most of them at their default value.

There is one feature of levels() that I didn't implement: coring. The image luma statistics are taken before coring remapping, and if coring was enabled, the input/output high/low values are taken while Y has been mapped from [16..235]. Thus if the before-coring minimum Y value is 30, to map 30 to zero with coring enabled would require requesting an input_low of 14 (== 30-16), and an input_high of 191 (== (180-16)*255/219). This is just weird, so internally I keep coring off. I could be talked into adding it just to have a strict superset of levels() functionality.

The full webpage for the filter is here.
Old 27th December 2010, 09:32   #64  |  Link
AlanHK
Registered User
 
Join Date: May 2006
Posts: 236
Quote:
Originally Posted by frustum View Post
The full webpage for the filter is here.
Thanks.
The zip doesn't include any documentation, not even a readme. I can scrape the webpage, of course.
Old 27th December 2010, 11:28   #65  |  Link
julius666
Registered User
 
julius666's Avatar
 
Join Date: May 2009
Location: Hungary
Posts: 75
Hi, and thank you for your hard work, frustum.

Could you explain how sceneChgThresh works? How could I turn it off? I have a source that is one big scene (no real scene changes), and sometimes autolevels produces a small, barely visible flicker. I'm pretty sure that autolevels reads those 'flickering points' as a new scene.
Old 27th December 2010, 16:34   #66  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
AlanHK -- I've updated the zip file; it was an oversight to not include the "Autolevels_ReadMe.txt" file. The contents of the readme match what is found on the webpage.

julius666 -- sceneChgThresh is unchanged from its operation since before I touched the filter. How it works is this. The filter computes the luma distribution for all the pixels in the frame, ignoring the smallest and largest N outliers in the distribution (normally 1/256), yielding low and high values. If the low value differs by more than "sceneChgThresh" from the previous frame's low value, it is decided that the current frame must be a new scene. Likewise, if the high value differs by more than "sceneChgThresh" from the previous frame's high value, it is decided that the current frame must be a new scene.

As the documentation explains, you can disable this feature by setting sceneChgThresh to a high value which is impossible to trip, like 255.
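In code terms, the per-frame test just described is (an illustrative Python sketch of the logic, not the plugin's C++):

```python
def is_scene_change(prev_bounds, cur_bounds, thresh=20):
    # prev/cur bounds are the (low, high) luma values measured per frame
    # after discarding the outlier tails; a jump of more than `thresh`
    # at either end is taken as a scene change.
    prev_lo, prev_hi = prev_bounds
    cur_lo, cur_hi = cur_bounds
    return abs(cur_lo - prev_lo) > thresh or abs(cur_hi - prev_hi) > thresh

print(is_scene_change((20, 200), (25, 205)))             # False: smooth drift
print(is_scene_change((20, 200), (60, 205)))             # True: low end jumped
print(is_scene_change((0, 235), (120, 50), thresh=255))  # False: effectively off
```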
Old 10th January 2011, 08:45   #67  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
autolevels() version 0.6 final

I have revised my 0.6 beta release and now have a 0.6 "final", although I don't know if anybody other than me has used it at all.

It was probably unnecessary, but since I had gone 90% of the way towards covering all levels() functionality, I've gone and added the "coring" flag to autolevels as well. One oddity is that coring is true by default for levels() but false by default for autolevels(), in order to be backwards-compatible with older versions of autolevels().

So now levels(a,b,c,d,e,coring=f) can be performed as autolevels(input_low=a,gamma=b,input_high=c,output_low=d,output_high=e,coring=f), not that you'd ever want to do this exactly. Other than that, all the new features of 0.6 beta remain the same.

Updated document and zip file containing the new dll, source, and docs here: http://www.thebattles.net/video/autolevels.html

All feedback is welcomed, even opinions like: hey, that is nice you are making all those changes, but you've gone off the rails.

Last edited by frustum; 10th January 2011 at 08:46. Reason: added title
Old 10th January 2011, 09:18   #68  |  Link
smok3
brontosaurusrex
 
smok3's Avatar
 
Join Date: Oct 2001
Posts: 2,375
A quick test, using a script like this:

avisource("pathtoavi.avi")
info()
autolevels(sceneChgThresh=5, filterRadius=50)
converttoyuy2()
yadif()

Interestingly, every clip looks wrong one way or another (either too bright or too dark, or it fluctuates as if somebody forgot to turn off the automatic settings on a camcorder). I think it finds scene changes correctly, but I really can't be sure.
Old 10th January 2011, 16:11   #69  |  Link
videoFred
Registered User
 
videoFred's Avatar
 
Join Date: Dec 2004
Location: Gent, Flanders, Belgium, Europe, Earth, Milky Way,Universe
Posts: 591
Autolevels 0.6 works very well here. I'm testing it right now. Some of my 8mm transfers are captured in 14-140 (because I was too lazy to set my camera). Those have to be stretched a lot by autolevels, and here it goes wrong sometimes: blown-out whites. Perhaps I do not understand a few parameter settings for files like these?

PS: to be more precise, those pixels are blown out before they reach 255. I'm using Trevlac's histogram for VirtualDub to check this. It can be seen as a small one-line peak at the end of the histogram.

Fred.

Last edited by videoFred; 10th January 2011 at 16:15.
Old 10th January 2011, 17:14   #70  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
smok3 -- "sceneChgThresh=5" means the scene change detection has a hair trigger. The whole point of autolevels is to smoothly track the changing scene lighting, but it attempts to find scene changes so as to allow sudden changes when warranted. With the threshold set so low (by default it is 20), you will keep getting false positives and thus no averaging. Set "debug=true" and it will indicate which frames it thinks begin a new scene. Try increasing the threshold until the scene change detections are plausible. Remember too that it isn't necessary to detect all scene changes. If scene N and scene N+1 have lighting similar enough that the scene change detection doesn't trigger, it probably means it is OK to average the luma statistics across the scene boundary.

Fred -- I'm not sure what to do about those pixels that are blown out either. Do you have any ideas for algorithms to help in such situations? The autolevels algorithm simply maps the left end of the histogram to luma 16 and the right end of the distribution to 235. It should only blow out pixels which were to the right of the cutoff point. You can use the ignore parameter to set the fraction of pixels to sacrifice on the tails of the distribution. Setting it to zero will prevent any of them from getting saturated, but then even a single white pixel in a frame can prevent autolevels() from adjusting anything.

When operating in RGB mode, a single mapping table is set up to convert old_intensity to new_intensity. This table is applied independently to each of the RGB channels. If the gain is high, a value like (r,g,b)=(240,220,210) might map to (255,255,255): whatever hue it had gets crushed into white. This is no different than the way levels() does it, and in fact, I stole the levels() code to do the pixel adjustment. A smarter, and slower, approach would be to find the max of the three components after gain and, if it exceeds 255, scale the lesser components by the same factor that brings the largest one down to 255.
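That smarter approach might look like this (an illustrative Python sketch of the idea, not anything in the current filter):

```python
def apply_gain_preserving_hue(r, g, b, gain):
    # If the gained maximum component would clip, shrink all three by the
    # same factor so their ratios (and hence the hue) survive.
    r2, g2, b2 = r * gain, g * gain, b * gain
    peak = max(r2, g2, b2)
    if peak > 255:
        scale = 255 / peak
        r2, g2, b2 = r2 * scale, g2 * scale, b2 * scale
    return round(r2), round(g2), round(b2)

# Per-channel clipping would crush (240, 220, 210) * 1.3 to pure white;
# proportional scaling keeps the warm cast:
print(apply_gain_preserving_hue(240, 220, 210, 1.3))   # (255, 234, 223)
```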
Old 11th January 2011, 08:42   #71  |  Link
videoFred
Registered User
 
videoFred's Avatar
 
Join Date: Dec 2004
Location: Gent, Flanders, Belgium, Europe, Earth, Milky Way,Universe
Posts: 591
Quote:
Originally Posted by frustum View Post
Fred -- I'm not sure what to do about those pixels that are blown out either. Do you have any ideas for algorithms to help in such situations?
Creating algorithms is way beyond my knowledge, sorry.

But I have very good results with the border parameter. border=120 helps a lot in a situation like this.

Tweaking output_low and output_high also helps. Anyhow, of course these narrow 14-140 captured files are extreme examples.

Fred.

Last edited by videoFred; 11th January 2011 at 09:02.
Old 12th January 2011, 05:13   #72  |  Link
Mini-Me
Registered User
 
Join Date: Jan 2011
Posts: 121
frustum, this is a very promising filter for people like me with a LOT of scenes to correct. I'm really glad you decided to adopt it.

I do see a potential area for improvement though: because Autolevels relies on the built-in levels filter, it inherits that filter's weaknesses. Specifically, the contrast stretch and gamma shift are done at low precision, so they result in banding in the image and histogram. If you'd be willing to change the back end again, or include the option for an alternate back end, you'd be able to bypass these issues and obtain higher-quality output. I'm not sure how many options are out there for high-precision levels adjustment with dithering, but I just noticed one called SmoothAdjust at the top of the forum. It seems to be YV12 only, so it's not ideal, but there might be other alternatives as well. Anyway, what are your thoughts on the basic idea? If the implementation would be too difficult or clumsy, what about debug text output listing the levels adjustments for each frame or section? (The desperate could then parse and format that output into trim and filter calls.)

Last edited by Mini-Me; 12th January 2011 at 06:14.
Old 12th January 2011, 07:01   #73  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
Mini-Me --

I haven't studied SmoothAdjust, but it may make more sense to put the autolevels logic into SmoothAdjust than the other way around. SmoothAdjust doesn't come with source code, so it would have to be called from within the filter. I know there is a way to do it, although I haven't looked into it. I'll add it to my list of things to think about when I next get time.
Old 16th February 2011, 14:11   #74  |  Link
boondoggle
Registered User
 
Join Date: Jan 2008
Posts: 11
It seems we still have to do ConvertToRGB before filtering?
I tried to fiddle around with the new settings, and when I tried Autolevels(filterRadius=10, sceneChgThresh=255, autolevel=false, autogamma=false, border=5), which I thought would do nothing, it still changed the image. And, as Fred says, the whites still get blown out. I know you say "detect scenes that should be kept dark (may not be easy to do)", but if you do implement it (I almost convinced my programming friend to look at it), I would also, if possible, like to add to that feature a kind of logarithmic stretching(?), i.e. the more it needs to stretch luma, the less it does. I've dreamt of a script that degrains harder the more the luma is stretched... keep versions coming though!
Old 6th April 2011, 03:09   #75  |  Link
Jeremy Duncan
Didée Fan
 
Jeremy Duncan's Avatar
 
Join Date: Feb 2006
Location: Canada
Posts: 1,079
Is autolevels() the same as ColorYUV(autogain=true) using frustum's autolevels_0.6_20110109.dll?
__________________
When I get tired during work with dvd stuff i think of River Tamm (Summer Glau's character). And the beauty that is Serenity.
Old 6th April 2011, 04:02   #76  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
Jeremy, ColorYUV(autogain=true) looks only at the histogram of the current frame to adjust levels for that frame. This can lead to sudden jumps in the levels.

autolevels() (the original one or my recent versions) improves on the situation by using the histogram stats from N frames before and after the current frame to adjust the levels of the current frame. By using a rolling average it smooths the adjustment so there are no sudden jumps.

So, in short, no, autolevels() is not the same as ColorYUV(autogain=true).
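The rolling average can be sketched as follows (illustrative Python; my understanding from this thread is that the real filter also limits the window at detected scene changes, which this sketch ignores):

```python
def smoothed_bounds(per_frame_bounds, n, radius):
    # Average each frame's measured (low, high) luma bounds over a window
    # of +/- radius frames around frame n, so the correction drifts
    # smoothly instead of jumping from frame to frame.
    start = max(0, n - radius)
    stop = min(len(per_frame_bounds), n + radius + 1)
    window = per_frame_bounds[start:stop]
    avg_lo = sum(lo for lo, hi in window) / len(window)
    avg_hi = sum(hi for lo, hi in window) / len(window)
    return avg_lo, avg_hi

bounds = [(10, 200), (20, 210), (30, 220)]
print(smoothed_bounds(bounds, 1, 1))   # (20.0, 210.0)
```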
Old 6th April 2011, 09:07   #77  |  Link
Jeremy Duncan
Didée Fan
 
Jeremy Duncan's Avatar
 
Join Date: Feb 2006
Location: Canada
Posts: 1,079
Hi Frustum,

I use a frame doubler, and when I tried autolevels and frame-stepped through the output, the video would alternate between a dark tone on one frame and a light tone on the next.

I traced the problem to input_low, input_high, output_low, output_high being different values.
So I fixed this dark and light tone error by setting them like this:
input_low=16, input_high=235, output_low=16, output_high=235, but this ruined the effect I was looking for, which was improved frame doubling efficiency.

This was the code I used:

Code:
setmtmode(5,0)
SetMemoryMax(512)
video=ffdshow_source().changefps(29.976, linear=true)
A = video
setmtmode(2)
SetMemoryMax(512)
super = A.MSuper(pel=2, hpad=12, vpad=12, rfilter=1, levels=1, isse=true).autolevels()
backward_vec = MAnalyse(super, isb=true, blksize=16, levels=1, search=3, searchparam=2,  isse=true, sadx264=7, pnew=180, lambda=2100, lsad=1300,global=true)
forward_vec = MAnalyse(super, isb=false, blksize=16, levels=1, search=3, searchparam=2,  isse=true, sadx264=7, pnew=180, lambda=2100, lsad=1300,global=true)
backward_vec_1 = MRecalculate(super, backward_vec, blksize=8, search=3, searchparam=1)
forward_vec_1 = MRecalculate(super, forward_vec, blksize=8, search=3, searchparam=1)
A.MBlockFps(super, backward_vec_1, forward_vec_1, num=FramerateNumerator(A)*2, den=FramerateDenominator(A)*1,thscd1=400,  mode=3, thres=100)
GetMTMode(false) > 0 ? distributor() : last
Then I remembered what my buddy subjunk did in a different thread and tried that, and it fixed my tone problem; see the code below:

Code:
setmtmode(5,0)
SetMemoryMax(512)
video=ffdshow_source().changefps(29.976, linear=true)
A = video
setmtmode(2)
SetMemoryMax(512)
super_1 = A.MSuper(pel=2, hpad=12, vpad=12, rfilter=1, levels=1, isse=true).autolevels()
super_2 = A.MSuper(pel=2, hpad=12, vpad=12, rfilter=1, levels=1, isse=true)
backward_vec = MAnalyse(super_1, isb=true, blksize=16, levels=1, search=3, searchparam=2,  isse=true, sadx264=7, pnew=180, lambda=2100, lsad=1300,global=true)
forward_vec = MAnalyse(super_1, isb=false, blksize=16, levels=1, search=3, searchparam=2,  isse=true, sadx264=7, pnew=180, lambda=2100, lsad=1300,global=true)
backward_vec_1 = MRecalculate(super_1, backward_vec, blksize=8, search=3, searchparam=1)
forward_vec_1 = MRecalculate(super_1, forward_vec, blksize=8, search=3, searchparam=1)
A.MBlockFps(super_2, backward_vec_1, forward_vec_1, num=FramerateNumerator(A)*2, den=FramerateDenominator(A)*1,thscd1=400,  mode=3, thres=100)
GetMTMode(false) > 0 ? distributor() : last
Link to the video clip I used: it's dancing, and helicopters, and a man riding his motorcycle.

Thanks man.
__________________
When I get tired during work with dvd stuff i think of River Tamm (Summer Glau's character). And the beauty that is Serenity.

Last edited by Jeremy Duncan; 6th April 2011 at 09:12. Reason: added link
Old 6th April 2011, 13:31   #78  |  Link
frustum
Registered User
 
Join Date: Sep 2010
Location: Austin, TX
Posts: 40
Jeremy,

You are doing

A = video
...
super = A.MSuper(blah, blah, blah).autolevels()

I don't know the details of how mvtools works, but my vague understanding is that the superclip processes the original video and then stores this analyzed data (the motion vectors and who knows what else) as video within the superclip. Performing autolevels on the superclip will screw with this generated data.

Shouldn't you be doing

A = video.autolevels()
...
super = A.MSuper(pel=2, hpad=12, vpad=12, rfilter=1, levels=1, isse=true)
Old 6th April 2011, 14:33   #79  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,390
Quote:
Originally Posted by frustum View Post
Shouldn't you be doing

A = video.autolevels()
...
super = A.MSuper(pel=2, hpad=12, vpad=12, rfilter=1, levels=1, isse=true)
Yes, definitely. MVTools2 does not store the vectors in the super clip, so that's not an issue. But doing the autolevels first is a) faster, because there is less image area to process (the super clip is *much* bigger), and b) avoids the dark/bright flickering issue. That issue was directly caused by the unlucky autolevels usage: in the case of (exact) framerate doubling, the even frames are just the untouched original input frames, hence they aren't affected by the autolevels.

* * * *

BTW, why are all those "realtime" framerate doubling scripts using search=3/searchparam=2? This IMHO is plain silly! search=3 is very slow, and then the speed problem is fixed by using a very small search range and few superclip-'levels'.

Jeremy, that script has a search range of only 1 or 2 pixels! (levels=1, searchparam=1/2). "Motion search" is something else.

Really, there is no point in using the best-but-slowest search method, and then cripple it so badly that it becomes worse than other, more sane search methods would be to start with.
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)

Last edited by Didée; 6th April 2011 at 14:35.
Old 6th April 2011, 22:53   #80  |  Link
Jeremy Duncan
Didée Fan
 
Jeremy Duncan's Avatar
 
Join Date: Feb 2006
Location: Canada
Posts: 1,079
Here is the code I updated:

Code:
setmtmode(5,0)
SetMemoryMax(512)
video=ffdshow_source().changefps(29.976, linear=true)
A = video.autolevels()
B = video
setmtmode(2)
SetMemoryMax(512)
super_1 = A.MSuper(pel=2, hpad=12, vpad=12, rfilter=2, isse=true)
super_2 = B.MSuper(pel=2, hpad=12, vpad=12, rfilter=1, isse=true)
backward_vec = MAnalyse(super_1, isb=true, blksize=16, levels=2, search=2, searchparam=1,  isse=true, sadx264=7, lambda=200, dct=10)
forward_vec = MAnalyse(super_1, isb=false, blksize=16, levels=2, search=1, searchparam=1,  isse=true, sadx264=7, lambda=200, dct=10)
backward_vec_1 = MRecalculate(super_1, backward_vec, blksize=8, search=2, searchparam=1, dct=10)
forward_vec_1 = MRecalculate(super_1, forward_vec, blksize=8, search=1, searchparam=1, dct=10)
B.MBlockFps(super_2, backward_vec_1, forward_vec_1, num=FramerateNumerator(A)*2, den=FramerateDenominator(A)*1,thscd1=350,  thscd2=100, mode=3, thres=30)
GetMTMode(false) > 0 ? distributor() : last
I don't want to discuss my code in this thread though, since it's about levels. Please go to the mvtools thread in the Avisynth Usage section, where I posted my code in response to Gavino, if you want to talk about it. Thank you both for pointing that out to me. The source is in that other thread, so please try the code you suggest so you know for sure that the strobing light/dark tones are fixed by your suggestions; as it is, it works.
__________________
When I get tired during work with dvd stuff i think of River Tamm (Summer Glau's character). And the beauty that is Serenity.