Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 23rd March 2023, 11:03   #2201  |  Link
LeXXuz
21 years and counting...
 
 
Join Date: Oct 2002
Location: Germany
Posts: 716
@Dogway: I get an error message within ex_autolevels() when using it on an old HD animation film.
I noticed strange artefacts in scene fades and camera pans, and after several hours of searching I finally found the reason:


It's an overlay error message in the prefiltering clip from ex_autolevels.

Both packs are up to date and this message does NOT appear on my SD sources I encoded so far. Thank god.

First I thought it was the HD resolution, so I resized it down to 720x540 from 1440x1080. But it stays the same.
I also added frameprops before calling SMDegrain, just to be sure, but that had no effect either.

The custom prefilter used is the one we already spoke about:

Code:
PREl1 = ex_autolevels(      true ,true, true,Deflicker=true,tv_out=false)
PREl2 = ex_autolevels(PREl1,false,true,false,Deflicker=true,tv_out=false)
PREmv = ex_Median(PREl2,mode="IQMST", UV=3,thres=min(255,(Stage*50)))
PREv  = ex_blend(PREl2,PREmv,"blend",opacity=(Stage/10.)).ex_sbr(3,UV=3)
Don't really know what else to do here, or why it causes a syntax error in ArrayOps.
Could you maybe take a look into this?

Last edited by LeXXuz; 23rd March 2023 at 19:08.
Old 23rd March 2023, 14:33   #2202  |  Link
madey83
Registered User
 
Join Date: Sep 2021
Posts: 136
@Dogway,

Below is the code I use as a prefilter, as you proposed. It is better (the picture just looks better) and it processes quicker than ex_BM3D:
pre=ex_Median(mode="IQMST",thres=155)
ex_blend(pre,"blend",opacity=0.3).ex_sbr(1,UV=3)

thank you
Old 23rd March 2023, 18:59   #2203  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,351
@LeXXuz: Yes, I also had some issues (an error with 'Inf') on a clip I was working on, but I used another script and forgot to keep the source for investigation. One of the things I wanted to do is check with the previous AVS+ test versions, like 7 or 6, to know whether or not it's a bug related to the recent refactor on frameprops. Your issue seems a little different, so most likely there's some issue in the function that I need to address; if you can share a small clip which shows the issue I can have a look. But first, since it's an animation, try with anime=true in case it's not an intraframe-only clip. About the memory leaks: do you still get them? And for the clip you shared, can you post the script that was used?

By the way, this week I've been spending time with ChatGPT to resolve long-standing research questions from different projects and topics. It's the free version, so you need some time and question massaging to get it somewhere, and you need to revise the answers, but it's helping me. I will probably pay for a month or two of subscription later on when they update ChatGPT 4/5 to a 2023 dataset. The bot doesn't understand AviSynth, by the way.

In any case it helped me to understand a few things, and I ported myself a checkerboard dedither kernel from RA's slang shaders:
Code:
function ex_dedither(clip a, float "blend", int "UV") {

    rgb  = isRGB(a)
    isy  = isy(a)
    bi   = BitsPerComponent(a)
    fs   = propNumElements (a,"_ColorRange")  > 0 ? \
           propGetInt      (a,"_ColorRange") == 0 : rgb

    bl   = Default(blend,       1.0 )
    UV   = Default(UV, rgb ? 3 : 128)
    un   = Undefined()
    si   = fs ? "allf" : "all"

    str = Format("f32 x[-1,-1] UL^  x[-1,0] U^  x[1,-1]  UR^
                      x[-1,0]   L^  x[0,0]  C^  x[0,1]    R^
                      x[-1,1]  DL^  x[1,0]  D^  x[1,1]   DR^

                      L R max LRM@ C max L R min LRN@ C min -
                      1 {bl} - * DIFF@

                      1 + 0.5 * C *  1 DIFF - 0.125 *  L R U D + + + *   +

                      LRM C min U D max C min max
                      LRN C max U D min C max min clip")

    isy     ? Expr(a, str                                 , scale_inputs=si) : \
    UV == 1 ? Expr(a, str, ""                             , scale_inputs=si) : \
              Expr(a, str, ex_UVexpr(str, UV, bi, rgb, un), scale_inputs=si) }
I hope to keep investigating and implement a decrawl algo which was one of my original quests in 2021.
__________________
[i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread

Last edited by Dogway; 23rd March 2023 at 19:04.
Old 23rd March 2023, 19:08   #2204  |  Link
LeXXuz
21 years and counting...
 
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Quote:
Originally Posted by Dogway View Post
@LeXXuz: Yes, I also had some issues (an error with 'Inf') on a clip I was working on, but I used another script and forgot to keep the source for investigation. One of the things I wanted to do is check with the previous AVS+ test versions, like 7 or 6, to know whether or not it's a bug related to the recent refactor on frameprops. Your issue seems a little different, so most likely there's some issue in the function that I need to address; if you can share a small clip which shows the issue I can have a look. But first, since it's an animation, try with anime=true in case it's not an intraframe-only clip. About the memory leaks: do you still get them? And for the clip you shared, can you post the script that was used?
Our postings collided. So I better put my last edit in here as a general reply.


EDIT: In addition, there also seems to be an error occurring along fades, with consecutive faults causing artefacts and disturbances, ending in an error overlay message until the next scene change.
Video:
https://send.cm/d/MOUt

Screenshots of prefilter clip:


Of course I can send you some original clips of the HD anime source, and of that scene from the old film we started the wobble discussion with some weeks ago.

Script for that HD anime was:
Code:
LoadPlugin("C:\Video Editing\MeGUI (x64)\tools\ffms\ffms2.dll")
FFVideoSource("G:\WORK\test.mkv", fpsnum=24, fpsden=1, threads=1)
crop(220, 0, -220, 0)
        #
ConvertBits(16)
SceneStats("Range+Stats")
#
Stage=08
PREl1 = ex_autolevels(      true ,true, true,Deflicker=true,tv_out=false)
PREl2 = ex_autolevels(PREl1,false,true,false,Deflicker=true,tv_out=false)
PREmv = ex_Median(PREl2,mode="IQMST", UV=3,thres=min(255,(Stage*50)))
PREv  = ex_blend(PREl2,PREmv,"blend",opacity=(Stage/10.))#.ex_sbr(3,UV=3)
#
SMDegrain(show=true,mode="MDegrain", tr=12, thSAD=(Stage*80), thSADc=(Stage*60), thSCD1=max(400,(Stage*90)), Str=2.0, contrasharp=true,  LFR=false, DCTFlicker=false, refinemotion=true, truemotion=false, blksize=24, search=5, pel=2, subpixel=3, prefilter=PREmv, chroma=true, plane=4, gpuid=-1, interlaced=false)
ex_unsharp(Stage/50.)
F3KDB_3(range=20, Y=32, Cb=24, Cr=24, grainY=24, grainC=16, random_algo_ref=2, random_algo_grain=2, output_depth=16, sample_mode=2)
        #
Prefetch(32)
Return(Last)
And script for the SD film:
Code:
SetMemoryMax(48*1024)
SetCacheMode(0)
#
LoadPlugin("C:\Video Editing\MeGUI (x64)\tools\ffms\ffms2.dll")
FFVideoSource("H:\WORK\test.mkv", fpsnum=25, fpsden=1, threads=1)
#
AssumeFPS(25)
CTelecine()
CPostProcessing()
propSet("_FieldBased",0)
crop(0, 4, -8, -4)
ConvertBits(16)
Levels(2*256,0.9,255*256,0,225*256).Tweak(sat=0.95)
SceneStats("Range+Stats")
#
Stage=07
#
PREl1 = ex_autolevels(      true ,true, true,Deflicker=true,tv_out=false)
PREl2 = ex_autolevels(PREl1,false,true,false,Deflicker=true,tv_out=false)
PREmo = ex_Median(      mode="IQMST", UV=3,thres=min(255,(Stage*25)))
PREmv = ex_Median(PREl2,mode="IQMST", UV=3,thres=min(255,(Stage*50)))
PREm  = ex_blend(PREmo,"blend",opacity=(Stage/20.)).ex_sbr(2,UV=3)
PREv  = ex_blend(PREl2,PREmv,"blend",opacity=(Stage/10.)).ex_sbr(3,UV=3)
#
SMDegrain(show=true,mode="MDegrain", tr=12, thSAD=(Stage*50), thSADc=(Stage*40), thSCD1=max(400,(Stage*60)), Str=2.0, contrasharp=true, LFR=true, DCTFlicker=true, refinemotion=true, truemotion=true, blksize=16, search=5, pel=4, subpixel=3, mfilter=PREm, prefilter=PREv, chroma=true, plane=4, gpuid=-1, interlaced=false)
ex_unsharp(Stage/40.).ex_unsharp((Stage/20.),Fc=width()/1.5)
F3KDB_3(range=10, Y=32, Cb=24, Cr=24, grainY=24, grainC=16, random_algo_ref=2, random_algo_grain=2, output_depth=16, sample_mode=2)
        #
Prefetch(32)
Return(Last)
I have not run into memory problems yet, but they occurred mostly with BM3D involved. And, like madey83 already mentioned,
I also replaced BM3D completely with IQMST or STTWM, as both produce superior results with less CPU load than BM3D.

Even when run in CUDA mode, the final CPU-based aggregation needed more CPU time than the new filters. So I see no reason to keep using BM3D at the moment.

EDIT2:
Source scene of SD film: https://send.cm/d/MOix
Source scene of HD anime: https://send.cm/d/MOkL

I'm using pinterf's AVS+ 3.7.3 test9 at the moment.

Last edited by LeXXuz; 23rd March 2023 at 19:42.
Old 23rd March 2023, 21:19   #2205  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,351
You need to send me the SD film source again because the DDL is capped.

Talking about the animation film, I didn't get any error (although I use different prefetch values), nor did I see any luma flicker, so you don't need the autolevels, especially with Deflicker. You can use autolevels if you want to normalize the clip, but that's going to change the SAD value on a per-scene basis. So they go out.
By the way, as I mentioned, SceneStats should be SceneStats("Range+Stats", anime=true) in this case, but we don't need it anymore due to the lack of flicker.

Another thing is IQMST; it looks a bit overkill for this source, which is more like medium grain, and for 2D animation NLMeans is probably best suited. NLMeans has a temporal component, unlike DGDenoise, so it will prefilter a bit more, I think.
Code:
SMDegrain(mode="MDegrain", tr=3, thSAD=(Stage*80), Str=2.0, contrasharp=true, refinemotion=true, prefilter=6, chroma=true, plane=4)
If you want to stabilize the background you can run a stronger SMDegrain and then merge them with the MotionMask workaround I posted in previous pages. I mean, IQMST is not perfect, so it's only suited for very damaged live-action sources. I tried to add limiting, as in the next example:
Code:
pre=ex_Median(mode="IQMST", UV=3, thres=15)
pre=ex_blend(pre,"blend",opacity=0.5).ex_sbr(2,UV=3)

SMDegrain(mode="TemporalGauss", tr=6, thSAD=640, Str=2.0, contrasharp=true, refinemotion=true, prefilter=pre, chroma=true, plane=4)
and alternative
Code:
pre=ex_Median(mode="IQMST", UV=3,thres=15)
pre=ex_blend(pre,"blend",opacity=0.5).ex_median("IQMV").ex_sbr(1,UV=3)

SMDegrain(mode="TemporalSoften", tr=5, thSAD=640, Str=2.0, contrasharp=true, refinemotion=true, prefilter=pre, chroma=true, plane=4)
But even so, when framestepping you will notice some line warping before a head movement or so, and that's even using thres=15 and halving the IQMST effect. Yes, you can use repair to fix the line warping by passing it through a mix of motion and edge masks, but you see how complicated things get.

I passed DigitalFilmV5 over it to see how far I could take it.


Last edited by Dogway; 23rd March 2023 at 22:07.
Old 23rd March 2023, 22:19   #2206  |  Link
LeXXuz
21 years and counting...
 
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Quote:
Originally Posted by Dogway View Post
You need to send me again the SD film source because the DDL is capped.
Odd. I just tested the DL and it's still working. If it still doesn't work for you I will send you a new version via PM.

Quote:
Originally Posted by Dogway View Post
Talking about the animation film, I didn't get any error (although I use different prefetch values), nor did I see any luma flicker, so you don't need the autolevels, especially with Deflicker. You can use autolevels if you want to normalize the clip, but that's going to change the SAD value on a per-scene basis. So they go out.
By the way, as I mentioned, SceneStats should be SceneStats("Range+Stats", anime=true) in this case, but we don't need it anymore due to the lack of flicker.
There are other scenes showing a little flicker and a more sticky kind of noise; that's why I put autolevels in, just to play it safe. I haven't run another test with anime=true yet, but I can give that a try to see what happens.

I need that high prefetch mostly to fill up my CPUs. Running two encodes with a lower prefetch adds too much overhead and the overall framerate is lower.

Quote:
Originally Posted by Dogway View Post
Another thing is IQMST; it looks a bit overkill for this source, which is more like medium grain, and for 2D animation NLMeans is probably best suited. NLMeans has a temporal component, unlike DGDenoise, so it will prefilter a bit more, I think.
Code:
SMDegrain(mode="MDegrain", tr=3, thSAD=(Stage*80), Str=2.0, contrasharp=true, refinemotion=true, prefilter=6, chroma=true, plane=4)
Well, what I usually do is set up my SMDegrain template with Stage=1 and then add greyscale() or a color tint to mfilter. Then I take a preview of the denoised output and raise the Stage until there are only small grey/tinted areas around moving objects while the stills/background show no colour changes. Then I consider that source cleaned, as there is only a little content left where MVTools didn't find a suitable match.
It may be just "quick and dirty" or even wrong to do so, but I must say it has served me quite well over many, many tests. At least far better than setting up SMDegrain by looking for faint residual noise on my office monitor, which I can hardly see with my old eyes, and getting a big surprise later in my living room when watching on my OLED TV, which is very picky regarding noise.

So I usually add one "Stage" step for safety, and so I ended up with Stage=7 (~7*80 = 560 thSAD) for that anime with that prefilter setup. Like I said before, there are other scenes that look much noisier, with other grain, especially night shots.

Quote:
Originally Posted by Dogway View Post
If you want to stabilize the background you can run a stronger SMDegrain and then merge them with the MotionMask workaround I posted in previous pages. I mean, IQMST is not perfect, so it's only suited for very damaged live-action sources. I tried to add limiting, as in the next example:
Code:
pre=ex_Median(mode="IQMST", UV=3, thres=15)
pre=ex_blend(pre,"blend",opacity=0.5).ex_sbr(2,UV=3)
Maybe I still get this limiting threshold wrong? I thought reducing it from the default 255 meant that fewer pixels or pixel differences are considered for processing? Or is that false? Then how would you best describe what this threshold actually does?

Quote:
But even so, when framestepping you will notice some line warping before a head movement or so, and that's even using thres=15 and halving the IQMST effect. Yes, you can use repair to fix the line warping by passing it through a mix of motion and edge masks, but you see how complicated things get.
I think you're right, that's getting too complex and may cause problems in other scenes, especially with MotionMask. And I don't want to end up with clip chopping and dozens of different settings; that'd be way too much effort.

It's a compromise and I was quite happy with the results, besides those overlay error messages of course, which surely must have a reason for appearing?

I may not need autolevels on that anime, but I really need it on that SD film or the wobble is killing me. But I have no idea what that overlay message I showed in the screenshots above is trying to tell me...

Last edited by LeXXuz; 23rd March 2023 at 22:24.
Old 24th March 2023, 02:39   #2207  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,351
Now I could download it, after the server wait time.

Deflicker is not something you can apply lightly or blindly. There's a lot going on, from SceneStats, which is fine but causes a lot of non-linear frame access, to ex_autolevels, which requires the flicker to be of higher frequency than the content. I recently updated the ScenesPack to reflect this aspect after finding an anime source that had a smooth flicker of maybe 1 or 2 seconds; it didn't work.
A fade-in or fade-out is the same thing, a smooth flicker-like behavior. If you need to deflicker a high-frequency luma flicker over some part, it's better to apply trims or simply ClipClop.
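To see why a fade trips up a deflicker, here is a toy numeric sketch in Python (illustrative only, NOT the ex_autolevels/Deflicker code): a mean-based deflicker that pulls each frame's average luma toward a short moving average flattens high-frequency flicker nicely, but a fade is itself a smooth low-frequency ramp, so the same normalization fights the intended dimming and distorts the fade:

```python
# Toy mean-based deflicker sketch (assumption for illustration, not
# the actual ExTools implementation): normalize each frame's mean
# luma to the average of a small temporal window around it.
def deflicker(means, radius=2):
    out = []
    for i in range(len(means)):
        window = means[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))  # snap to the window mean
    return out

flicker = [100, 120, 100, 120, 100, 120]  # high-frequency luma flicker
fade    = [100,  80,  60,  40,  20,   0]  # intentional fade-out ramp

print(deflicker(flicker))  # nearly flat: the flicker is removed
print(deflicker(fade))     # ramp distorted: the fade no longer reaches 0
```

On the flicker sequence the output varies by only a few code values, but on the fade the last frame is lifted well above black, which is exactly the kind of artefact that shows up around fades when Deflicker is applied blindly.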

Of the examples above for Lucky Luke, the first is for retaining a film vibe, and the second, with the stronger TemporalSoften, for a more original cel look. The NN model runs slow on my machine, like 0.05fps or so, but my card is old.

Quote:
Well, what I usually do is set up my SMDegrain template with Stage=1 and then add greyscale() or a color tint to mfilter
Yes that's a great trick, I have used it myself before, but nowadays I'm too lazy. I simply raise thSAD until I like it, then add mfilter if deemed necessary.

For the 'thres' in ex_median(), play with it visually. You can see that below 30 it starts to recover some edges, and the lower you go the more details are recovered, until you start to recover noise as well. Basically it reads as "only allow changes below a difference of 'thres'", or "block all changes above a diff of 'thres'".
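As a numeric sketch of that reading (illustrative Python, not the actual ex_median code), the rule per pixel would look like this:

```python
# Hypothetical per-pixel sketch of the 'thres' rule described above:
# accept the filtered value only where it changes the original by at
# most 'thres'; block larger changes by keeping the original pixel.
def limit_change(orig, filt, thres):
    return [f if abs(f - o) <= thres else o
            for o, f in zip(orig, filt)]

orig = [100, 100, 100]
filt = [110, 130,  90]               # diffs: +10, +30, -10
print(limit_change(orig, filt, 15))  # [110, 100, 90]: the +30 change is blocked
```

With a low thres like 15, only small corrections (noise-sized differences) pass through, while big changes (which tend to be edges or detail the median would warp) are rejected, matching the "recover edges as you lower it" behavior described above.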

You can get fancy with the masks; it's entertaining once you get the grasp of it. I simply mean that you can go as entropic as you want; I didn't want to confuse you further.

For example:
Code:
em  = ex_edge().extracty()
mm  = motionmask()
msk = ex_logic(em,mm).ex_expand().ex_deflate().ex_deflate()

rp  = src.ex_smooth(2,sharp=true)
pre = ex_merge(rp,msk)
As for the SD clip, you need to take out the fades. This is how you do it with ClipClop:
Code:
r0=SMPTE_legal(false) # to PC levels

r1    = ex_autolevels(      true ,true, true,Deflicker=true,tv_out=false)
PREl1 = ClipClop(r0,r1,sCmd="r1(0,45); r0(46,66); r1(67,165)")
r2    = ex_autolevels(PREl1,false,true,false,Deflicker=true,tv_out=false)
PREl2 = ClipClop(r0,r2,sCmd="r1(0,45); r0(46,66); r1(67,165)")
Maybe you can get away with the second ClipClop only. I know there's a way to put any var name into ClipClop, but the docs are hard to grasp, so I conformed to r#.

Last edited by Dogway; 24th March 2023 at 03:07.
Old 24th March 2023, 08:36   #2208  |  Link
madey83
Registered User
 
Join Date: Sep 2021
Posts: 136
@Dogway,

You mentioned to LeXXuz that IQMST is too strong; is there any option to reduce its strength, some additional parameter or something like that?

pre=ex_Median(mode="IQMST",thres=155)
ex_blend(pre,"blend",opacity=0.3).ex_sbr(1,UV=3)

Last edited by madey83; 24th March 2023 at 08:52.
Old 24th March 2023, 09:51   #2209  |  Link
LeXXuz
21 years and counting...
 
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Quote:
Originally Posted by madey83 View Post
@Dogway,

You mentioned to LeXXuz that IQMST is too strong; is there any option to reduce its strength, some additional parameter or something like that?

pre=ex_Median(mode="IQMST",thres=155)
ex_blend(pre,"blend",opacity=0.3).ex_sbr(1,UV=3)
ex_Median does not have a strength option in general. That is why you blend it with the original clip via ex_blend: you can modify the strength with the opacity setting. In that example you already use only 30% of the filter strength.
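In other words (a minimal Python sketch, assuming the "blend" mode is a plain linear mix of the two clips; this is not taken from the ExTools source):

```python
# Sketch of an opacity blend (assumed linear mix, per-pixel):
# opacity=0.25 means 25% filtered clip, 75% original clip.
def blend(orig, filt, opacity):
    return orig * (1.0 - opacity) + filt * opacity

# original pixel 100, median-filtered pixel 60:
print(blend(100.0, 60.0, 0.25))  # 90.0 -> only a quarter of the denoising applied
```

So opacity=0 returns the original untouched, opacity=1.0 returns the full median output, and anything in between scales the effective strength linearly.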

What Dogway meant was that IQMST as prefilter may be too strong for the sample I provided and suggested a different approach for that.

In general IQMST is very strong but still beats BM3D with a sigma of 10 or more in my opinion. But that's just my personal taste.
Old 24th March 2023, 09:58   #2210  |  Link
madey83
Registered User
 
Join Date: Sep 2021
Posts: 136
Quote:
Originally Posted by LeXXuz View Post
ex_Median does not have a strength option in general. That is why you blend it with the original clip via ex_blend: you can modify the strength with the opacity setting. In that example you already use only 30% of the filter strength.

What Dogway meant was that IQMST as prefilter may be too strong for the sample I provided and suggested a different approach for that.

In general IQMST is very strong but still beats BM3D with a sigma of 10 or more in my opinion. But that's just my personal taste.
Hi,

so if I understood you correctly, to reduce the strength of IQMST I can change the value of "opacity" from 0.3 to 0.1, for example?
Old 24th March 2023, 10:20   #2211  |  Link
LeXXuz
21 years and counting...
 
LeXXuz's Avatar
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Quote:
Originally Posted by Dogway View Post
A fade in or fade out is the same, a smooth flicker-like behavior. If you need to deflicker a high freq luma flicker over some part it's better to apply trims or simply ClipClop.
...
As for the SD clip, you need to take out the fades. This is how you do it with ClipClop:
Code:
r0=SMPTE_legal(false) # to PC levels

r1    = ex_autolevels(      true ,true, true,Deflicker=true,tv_out=false)
PREl1 = ClipClop(r0,r1,sCmd="r1(0,45); r0(46,66); r1(67,165)")
r2    = ex_autolevels(PREl1,false,true,false,Deflicker=true,tv_out=false)
PREl2 = ClipClop(r0,r2,sCmd="r1(0,45); r0(46,66); r1(67,165)")
Maybe you can get away with the second ClipClop only. I know there's a way to put any var name into ClipClop, but the docs are hard to grasp, so I conformed to r#.
Thanks again for that deflicker explanation. I was afraid you might say that and that I'd have to invest a little more time to handle fades.

But if I want to be lazy on some less important encodes, it would help if you could disable those overlay error messages, or give an option to do so.
That way those fades would look a little better, because the artefacts from the overlay messages won't appear, leaving just that kind of posterization from ex_autolevels.

Or are these messages happening at the AviSynth level, so that I'd have to ask pinterf to add an option to disable them? If there isn't one already, of course.

Quote:
Originally Posted by Dogway View Post
Yes that's a great trick, I have used it myself before, but nowadays I'm too lazy. I simply raise thSAD until I like it, then add mfilter if deemed necessary.
I'm relieved that I was on the right track with this for once. I really have trouble judging my scripts by watching for residual noise; this way it's much easier for me to see.

Quote:
Originally Posted by Dogway View Post
For the 'thres' in ex_median(), play with it visually. You can see that below 30 it starts to recover some edges, and the lower you go the more details are recovered, until you start to recover noise as well. Basically it reads as "only allow changes below a difference of 'thres'", or "block all changes above a diff of 'thres'".
I did. And I think I know what you meant by now, and that my old thresholds were too high and merely for the trash can.

Quote:
You can get fancy with the masks; it's entertaining once you get the grasp of it. I simply mean that you can go as entropic as you want; I didn't want to confuse you further.

For example:
Code:
em  = ex_edge().extracty()
mm  = motionmask()
msk = ex_logic(em,mm).ex_expand().ex_deflate().ex_deflate()

rp  = src.ex_smooth(2,sharp=true)
pre = ex_merge(rp,msk)
I know. I've played around with it since the first time you provided me that sample with MotionMask().
But I think this leaves the area of simple denoising and enters the field of scene- or even frame-based restoration, as you have to treat scenes differently depending on their motion content.
And I don't have the time for that, sadly.

Last edited by LeXXuz; 24th March 2023 at 10:30.
Old 24th March 2023, 10:26   #2212  |  Link
LeXXuz
21 years and counting...
 
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Quote:
Originally Posted by madey83 View Post
Hi,

so if I understood you correctly, to reduce the strength of IQMST I can change the value of "opacity" from 0.3 to 0.1, for example?
Yes, you can adjust the "strength" by adjusting the opacity of the blend somewhere between 0 (no denoise filter effect) and 1.0 (full denoise filter effect).

But if you just have some fine grain, better use something different like STTWM. Dogway provided examples for that some posts above, and they can also be found in his ExTools package, around line 2194.

Last edited by LeXXuz; 24th March 2023 at 10:32.
Old 24th March 2023, 10:51   #2213  |  Link
madey83
Registered User
 
Join Date: Sep 2021
Posts: 136
Quote:
Originally Posted by LeXXuz View Post
Yes, you can adjust the "strength" by adjusting the opacity of the blend somewhere between 0 (no denoise filter effect) and 1.0 (full denoise filter effect).

But if you just have some fine grain, better use something different like STTWM. Dogway provided examples for that some posts above, and they can also be found in his ExTools package, around line 2194.
Thank you, for sure I will try it as well.
Old 24th March 2023, 12:27   #2214  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Finland
Posts: 5,703
Is there a fundamental difference (in HBD) between using ex_blend and ex_limitchange to limit the amount of change in the result? I suppose the granularity is better with the former?
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Old 24th March 2023, 15:40   #2215  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,351
Quote:
Originally Posted by Boulder View Post
Is there a fundamental difference (in HBD) between using ex_blend and ex_limitchange to limit the amount of change in the result? I suppose the granularity is better with the former?
ex_blend(mode="blend") just mixes the two clips (similar or different); it works as an opacity layer, with the top layer blended into the bottom one at the 'opacity' ratio.

ex_limitchange() is different: you provide a clip and a filtered version of it, then based on the diff (the limit and bias args) those differences are merged into the original clip. It's like ex_merge(orig,filt,msk) but with the mask embedded. ex_limitdif() is probably more useful, as you can control the threshold and elasticity for a non-linear merge.
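As a rough numeric sketch of the threshold/elasticity idea (illustrative Python; the argument names and the ramp shape are assumptions, not the actual ex_limitdif signature, and a linear ramp is used here for simplicity where the real merge is non-linear):

```python
# Hypothetical threshold + elasticity limited merge, per pixel:
# small changes are accepted fully, changes beyond thr*elast are
# rejected fully, and in between the accepted change ramps down.
def limit_dif(orig, filt, thr, elast):
    diff = filt - orig
    adiff = abs(diff)
    upper = thr * elast                   # rejection point
    if adiff <= thr:
        return filt                       # small change: accept fully
    if adiff >= upper:
        return orig                       # large change: reject fully
    w = (upper - adiff) / (upper - thr)   # weight ramps from 1 down to 0
    return orig + diff * w

print(limit_dif(100.0, 103.0, thr=4.0, elast=2.0))  # 103.0 (accepted)
print(limit_dif(100.0, 106.0, thr=4.0, elast=2.0))  # 103.0 (half-accepted)
print(limit_dif(100.0, 112.0, thr=4.0, elast=2.0))  # 100.0 (rejected)
```

Compared to a plain opacity blend, which scales every difference by the same factor, this kind of merge is selective: it distinguishes noise-sized changes from large ones, which is why the granularity of the two approaches differs.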
Old 24th March 2023, 16:13   #2216  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Finland
Posts: 5,703
Quote:
Originally Posted by Dogway View Post
ex_blend(mode="blend") just mixes the two clips (similar or different); it works as an opacity layer, with the top layer blended into the bottom one at the 'opacity' ratio.

ex_limitchange() is different: you provide a clip and a filtered version of it, then based on the diff (the limit and bias args) those differences are merged into the original clip. It's like ex_merge(orig,filt,msk) but with the mask embedded. ex_limitdif() is probably more useful, as you can control the threshold and elasticity for a non-linear merge.
My use case is applying the proposed spatial filter, like ex_minblur(), to the clip fed to MDegrain, but often the blurring becomes a bit too visible in flat background areas, as I tend to like a grainier image. I don't know why MDegrain is not applied there, since those should be low-SAD areas, but maybe there's just some weighting involved.
Old 24th March 2023, 16:16   #2217  |  Link
LeXXuz
21 years and counting...
 
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Hmm... I can't get KNLMeansCL to work properly on Win11 with 3.7.3 test9 on an RTX 30 card unless I set it to minimal settings. 1080p source.

That means (a=1, s=def, d=0, wmode=0, chroma=false), and even then it eats up 97% of my 8GB VRAM.
With the settings SMDegrain uses for prefilter=6 it won't even start.

Is there a recommended version to use?
The latest I could find is pinterf's version of Nov 2020:
https://github.com/pinterf/KNLMeansC...es/tag/v1.1.1e
Old 24th March 2023, 16:50   #2218  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,351
Quote:
Originally Posted by Boulder View Post
My use case is applying the proposed spatial filter, like ex_minblur(), to the clip fed to MDegrain, but often the blurring becomes a bit too visible in flat background areas, as I tend to like a grainier image. I don't know why MDegrain is not applied there, since those should be low-SAD areas, but maybe there's just some weighting involved.
I also noticed that on the Lucky Luke sample, even if I apply a strong median prefilter like IQMV. This is probably because of grain/noise temporal convergence; the solution is to increase 'tr' and/or use stronger modes like 'TemporalSoften'. I certainly don't like spatial filters applied directly because they destroy texture detail, but you can apply ex_minblur(sharp=true) and pass it through ex_limitdif() and see how it looks, especially by tuning its args. Another option is to pass it through FlatMask masking, in case it's 2D animation.

@LeXXuz: What are the source details? I'm not having any issues; might it be driver-related? Also, pure GPU scripts don't like Prefetch(>1). That one is the latest version. Debug a little and I will have a look at it, alongside SceneStats, during the weekend.

Last edited by Dogway; 24th March 2023 at 16:53.
Old 24th March 2023, 17:09   #2219  |  Link
LeXXuz
21 years and counting...
 
LeXXuz's Avatar
 
Join Date: Oct 2002
Location: Germany
Posts: 716
Quote:
Originally Posted by Dogway View Post
@LeXXuz: What are the source details? I'm not having any issues; might it be driver-related? Also, pure GPU scripts don't like Prefetch(>1). That one is the latest version. Debug a little and I will have a look at it, alongside SceneStats, during the weekend.
The source was that HD anime clip. I have latest Nvidia Studio drivers installed.

Uhm... now that you mention it, it may be a good time to check my mtmodes.avsi again. Haven't done that in quite a while.
ex_DGDenoise() works just fine, but is not my first choice obviously.

Thanks for taking a look into SceneStats.
Old 24th March 2023, 19:59   #2220  |  Link
madey83
Registered User
 
Join Date: Sep 2021
Posts: 136
Quote:
Originally Posted by LeXXuz View Post
The source was that HD anime clip. I have latest Nvidia Studio drivers installed.

Uhm... now that you mention it, it may be a good time to check my mtmodes.avsi again. Haven't done that in quite a while.
ex_DGDenoise() works just fine, but is not my first choice obviously.

Thanks for taking a look into SceneStats.

Lucky Luke, hmm, a nice cartoon from my youth.

It would be nice to have it in this quality...

Where can I find it?