Old 15th May 2022, 07:35   #1161  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
Quote:
Originally Posted by Dogway View Post
with limitS=false you are using temporal limiting, therefore having more temporal coherence
Ah now I get it. Yes of course, that makes sense.

Quote:
Originally Posted by Dogway View Post
Also I understand you are using limitS=true with contrasharp=true right?
EDIT: Yes. I updated the results above and added:
contrasharp=true, limitS=false: 1.73GB
contrasharp=true, limitS=true: 1.65GB

I swear I double-checked those numbers. It behaves the other way round compared with LSFplus. I'm confused again.

And the question remains why the strength setting of LSFplus has little to no effect.
And which is the better choice, contrasharp or LSFplus? Speed is of no concern; quality is.

Last edited by LeXXuz; 15th May 2022 at 09:26.
Old 15th May 2022, 10:37   #1162  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,280
Quote:
Originally Posted by LeXXuz View Post
Okay I did a few encodes with the new version. Source was an old movie on DVD in black & white.

This is roughly the filtering I used:

SMDegrain(tr=3, thSAD=600,
Is this what is to be expected with these settings or is there still something wrong maybe?
If you need more degraining while keeping quality, you need to use as high a tr value as possible (memory/threads/speed may limit it) and as low thX values as possible. thSAD=600 may be too high for all-frame processing even on medium-grain DVD footage and may blur more detail. Typical mvtools builds support tr up to 128 with MDegrainN. That is not a physical limit but an old programming limit, something like an 'as high as never used' value from when such values were too slow on old CPUs. Unfortunately it is a static array allocation limit, so it would be good to make it dynamic in new versions and remove the cap. Typically the visible effect comes from doubling the tr value, e.g. 2, 4, 8, 16, ...; changing from 10 to 11 may be close to invisible.

Typically you set the th-values just high enough to start catching the noise level of the footage (but low enough to keep valuable details, for example in skin textures), and then raise the tr-value to get the most practical degraining. If you are not limited by hardware and processing time, there is still a bit-depth + encoder-settings limit: above some high enough value, the MPEG output bitrate and file size almost stop decreasing.
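As a rough illustration of that tuning order, a minimal sketch (the values are placeholders, not recommendations): fix thSAD just above the noise floor first, then double tr and compare until the visible gain and the bitrate drop flatten out.

Code:
SMDegrain(tr=6,  thSAD=250)    # pass 1: thSAD just above where noise starts to be caught
# SMDegrain(tr=12, thSAD=250)  # pass 2: same thSAD, doubled tr - compare output and file size
# SMDegrain(tr=24, thSAD=250)  # pass 3: double again; stop when the change becomes negligible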

The new alt versions of MDegrain have motion-adaptive th-settings - https://forum.doom9.org/showthread.p...62#post1962762 . The idea is to increase the th-values for static and large coherently moving patches and use the 'base' th-value for the other blocks, which decreases blurring while increasing degraining in static areas.

Quote:
Originally Posted by Dogway View Post
Theoretically yes, maybe for true static like anime, but for live action half a pixel is the difference between sharp and blurry, MAnalyse might have a hard time picking half a pixel change more so in noisy clips and the higher the 'tr' the blurrier the output.
To reduce noise-induced false motion in the MAnalyse output for noisy footage, the new MVLPF processing was added in the latest builds of MDegrainN - https://forum.doom9.org/showthread.p...88#post1968988
Currently two filter kernels are available: a hard cut-off sinc-based one and an 'LPF-too' gauss kernel. I currently use the gauss kernel because it is simple to control and does not create over/undershoots at any setting. But I hope to read a DSP filter design book some day and finish a filter design that defines both the cut-off frequency and a controllable roll-off in the pass-band, so that it neither over/undershoots nor rings. After the MVs are filtered, the SAD of the new positions is re-checked, so new positions that are too bad SAD-wise are skipped and replaced with the original MAnalyse output.

Quote:
Originally Posted by LeXXuz View Post
In theory any movie/source is different and needs special adjustment on its own for optimum results.
Practically every scene (cut) in a movie requires its own best settings. It may be shot on a different film stock (ISO), with different camera gain, or with other manual or automatic noise-reduction settings in the camera.

"What about the temporal radius? "

The theoretical limit of the tr-value for an 'ideal engine' sits somewhere between the typical scene duration (several seconds to dozens of seconds x FPS / 2 = 100..1000+ frames) and the total movie length (about 100000 frames). This theoretical engine would find all occurrences of a patch within the scene or the whole movie, compensate for all possible transforms, and reconstruct how the patch looks in every frame while removing the random component, which is typically noise. It is really a sort of total rebuild of the movie using an object-oriented approach. Practically, an MPEG encoder does something close to this, but it encodes the residual noise as an additional component that cannot be compensated with our current, very limited computing capabilities. So the 'ideal engine' has no tr parameter at all: it analyses all input frames to find every possible occurrence of an object. And to prevent blurring (and other distortions, such as colour shifts) it simply has 'ideal protection' against applying imperfectly compensated transforms. Currently, all protection against blending a badly back-transformed block is based on simple SAD checking. So to make the protection better, we need to set thSAD as low as possible.

The protection against blending blocks that are too bad also has two parts:
1. A hard limit by thSAD: blocks whose SAD is too high are skipped from blending entirely.
2. A soft function that weights a block's contribution to the blend over the SAD range 0..thSAD. The shape of this function is hardcoded in the old MDegrain* and is partly controllable via the wpow param in the new builds. The old hardcoded default corresponds to wpow=2. The new builds allow values from 1 to 7, where 7 is simple equal weighting below thSAD (fastest). The actual shape of this weighting function is also a subject of ongoing research for some types of footage. Setting wpow above 2 gives more weight inside the thSAD range and more degraining, but may cause more distortion in some cases. In my encodings I typically use wpow=4 (see the sketch below).
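As an illustration only, here is a hedged sketch of a raw MDegrainN call exposing this weighting (wpow and thSAD2 exist in the newer builds discussed here; names, ranges and defaults may differ in your binary, and all values are placeholders):

Code:
super = MSuper(pel=2)
# one multi-vector clip covering tr frames backward and forward
multi = MAnalyse(super, multi=true, delta=12, blksize=8, overlap=4)
# wpow=4: more weight for blocks inside the 0..thSAD range (more degraining)
# thSAD2: threshold for the most distant frames, interpolated from thSAD
MDegrainN(super, multi, 12, thSAD=250, thSAD2=200, wpow=4, plane=4)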

Also planned, but not yet fully debugged, is a statistical analysis based on the idea of not checking the SAD of the current ref block against the current src block in the tr-scope, but first performing a relative statistical analysis of all frames in the tr-scope to find the most-likely-looking block, and then checking thSAD of the current ref block relative to that most likely block. This should avoid skipping ref blocks whose SAD is close to the edge of the normal SAD distribution when the current source block lies on the other side of the distribution and the SAD distance is close to 2x the width of the distribution. A successful implementation would allow a lower 'base' thSAD value with the same degraining level and better protection from blurring.

Currently mvtools implements only a very limited part of this: it finds occurrences of a patch in the tr-scope of frames around the current one and compensates only the translation (shift) transform. But for static parts of a scene it already works very well, so the tr-scope for static parts of the frame may cover the entire scene.
At the script level it is also possible to create a motion mask with mvtools (MMask, for example) and process the static and moving parts of a scene with different tr-values (and th-values), so that moving (complexly transformed) areas of the frame are blurred less - a sketch follows below. This sometimes produces the effect of a very clean static background with still-noisy moving areas. It still lowers the average MPEG bitrate, but some users may prefer an image evenly covered by noise; ideally, the required noise would then be re-added at playback.
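Something like this, assuming masktools2 for mt_merge (kind/ml and the tr/thSAD values are placeholders and need tuning per source):

Code:
src    = last
sup    = MSuper(src, pel=2)
vec    = MAnalyse(sup, isb=false, delta=1, blksize=16, overlap=8)
momask = MMask(src, vec, kind=0, ml=50)       # brighter = more motion
static = SMDegrain(src, tr=12, thSAD=300)     # long radius for static areas
moving = SMDegrain(src, tr=3,  thSAD=300)     # short radius where motion is complex
mt_merge(static, moving, momask, luma=true)   # take the short-radius result where the mask is bright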

Last edited by DTL; 15th May 2022 at 12:19.
Old 15th May 2022, 12:15   #1163  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,372
@DTL, a simple low-pass is going to distort edges too much; you want to invert the gauss weighting, just like I pointed out with IQM. Similar but more complex filters are MinBlur, guidedblur, ANguidedblur and of course bilateral (with some slight blur on top).

AFAIK whatever filtering you apply in the spectrum domain translates to the spatial domain but with a smaller required radius (hence faster), so convert with FFT, apply a gaussian LPF and convert back. Gaussian (or binomial) roll-off is smooth, so it translates well to the image. For prefiltering I don't know if it's worth it at all; for things like Retinex it is, because you apply a high sigma radius.
That's unless you want to do passband filtering; in that case have a look at the ex_MFR() diagram in ExTools: since you do subtraction, the spectrum energy is maintained.

The problem with a high temporal radius is that the farther frames are rarely correlated, and since MDegrain performs a temporal mean average this looks risky. Maybe it would be worth testing a weighted average instead; it would lead to less denoising by default, so a somewhat higher 'tr' should be used.

Did you mention before that 'pzero' or 'pnew' were not relevant for higher than tr=1? I tried to find your post without luck.

@LeXXuz: Yes, those numbers are surprising, but in any case they are two different monsters; I would compare them visually and see what each does and doesn't do. If I recall correctly, temporal limiting required a bit more sharpening, although some places are never going to sharpen due to the temporal nature.
After LSFplus v5.0 I would say that LSFplus should give more quality, as I chained the Smode algorithm directly into the LSFplus contrasharpening code block, but check visually anyway. I think the lower effect of the strength setting is because you are contrasharpening, which is totally different from plain LSFplus() sharpening; I also set over/undershoot to 0 to avoid ringing at any cost, so the sharpening is conservative. If you want stronger or very strong sharpening, add it after SMDegrain().
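For example, a minimal sketch of that last suggestion, assuming LSFplus keeps an LSFmod-style strength parameter (the values are placeholders):

Code:
SMDegrain(tr=6, thSAD=300, contrasharp=false)  # denoise only
LSFplus(strength=150)                          # stronger sharpening applied afterwards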
Old 15th May 2022, 12:37   #1164  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,280
"a simple low-pass is going to distort edges too much"

It is filtering of a sequence of MVs in the 'time' domain, not in 2D space. 'Ringing or over/undershoot' distortions of the edge values would decrease its efficiency, and the system simply falls back to the 'old MAnalyse output MVs' if the SAD of the new expected (filtered) position is too bad (hard-limited above thSAD; a new filtered position is still accepted if its SAD is somewhat higher than the MAnalyse output, because that output may itself be distorted by noise). So I do not like a hard-cutoff sinc-based LPF without some smoothing and use a gauss kernel now (its cut-off of noise frequencies is not as sharp, but it is easier to implement and control, and possibly better keeps the edge values from running away from good positions because of filtering issues). The noise-affected MV coordinates of static blocks really jump like hell here and there, and their sequence definitely does not follow any Nyquist-limiting or anti-Gibbs conditioning, so all the nice filtering practices developed for good 2D images are not applicable to this kind of input data. To see how they jump, look at the MShow output on consecutive frames.

" so convert with FFT"

Currently it uses a simple C-based convolution and runs fast enough with a kernel size of about 10. I do not think an FFT of such a small size would be faster. The only real benefit of FFT filtering is that the required response curve can be applied in 'graphical' form, instead of creating an impulse kernel with the desired response.

"The problem with high temporal radius is that the further frames are rarely correlated"

It depends greatly on the content: a static background with static lighting is about ideally correlated over the whole scene duration. Example - static camera, static lighting and a small narrator - https://imageban.ru/show/2022/05/13/...e872601c0c/png . About 90..90+% of the blocks are close to ideally correlated over the whole scene. In theory it would be possible to mix MShow functionality into MDegrain to enable a 'debug output' text overlay with statistics that help adjust parameters, for example the percentage of totally skipped blocks in the current tr-scope for each frame.
Also, with a slow enough camera pan we have lots of frames for translation-only compensation and again get very good correlation for as long as a block stays in view. With a 'high enough' tr-value this creates the effect of uneven degraining between static and slowly moving areas (with no other transforms) and areas with fast, complex transforms.

"Did you mention before that 'pzero' or 'pnew' were not relevant for higher than tr=1? "

I do not think so. They apply to every current+ref pair of frames sent to MAnalyse, and those pairs are processed completely independently (if the temporal predictor is disabled). These penalties concern the search steps inside each level within a frame pair, not anything between different pairs of frames. So the influence of the penalty values should not be connected to the tr-value in any way.

"Maybe it might be worth to test a weighted average instead, it would lead to less denoising by default so some higher 'tr' should be used."

MDegrainN, already in the 'old' 2.7.45 version, creates a weighted blend based on both tr-distance and SAD value. For tr-distance it is controlled by the thSAD2 param; the interpolation is roughly cosine-shaped https://github.com/DTL2020/mvtools/b...ipFnc.cpp#L171 . For the SAD value there is a hardcoded, non-user-controllable weighting function https://github.com/pinterf/mvtools/b...egrain3.h#L880 . In the newer builds the SAD weighting is controlled by the wpow param.

Last edited by DTL; 15th May 2022 at 13:22.
Old 15th May 2022, 13:45   #1165  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
Thank you very much DTL and Dogway for your valued input.

I did a lot of encodes with SMDegrain 3.5.0.d recently, before I started to test out Dogway's current build. And I really mean A LOT.

My findings with that old build clearly back up what you just wrote, DTL. I had presets from (tr=2 & thSAD=150) up to (tr=16 and/or thSAD up to 600) for really, really bad sources, depending on the kind of grain/noise I had to deal with.

And of course you're right, technically every scene can be different. Some scenes looked awesome while others felt worse to the eye than their original counterpart. So it was/is always a compromise to use filtering. I kept some encodes to replace their originals. But I also discarded many encodes because I was not satisfied with the results. I have to admit I'm picky and if the results look (even just partially) bad in any way, I'd rather stay with the untouched source.

Adjusting the filtering anew for every movie is already tiresome work if you plan to transcode a vast collection. Adjusting filtering on a per-scene basis clearly exceeds the time I would and could spend on my hobby. I just want to reduce the size of my movie collection and optimize the visuals a little in the process, so I end up with a smaller file of equal or even better visual quality.

However, with Dogway's current build my old presets don't work anymore. With his build there has been some great improvement on tricky scenes and objects, with much less ghosting or smearing. From what I can see so far, I can use much higher thSAD values than before without destroying detail that would have been lost with the old version. Right now I'm very happy on that front. All I need now is a good setting for the sharpener, as the cleaned output is a little softer than the original without it. It's also funny how noise can be perceived as detail that really isn't there: the cleaned picture 'feels' dull and a little blurry, but in a direct comparison it isn't.

I hadn't really bothered with transcoding in years. With storage space being dead cheap I just remuxed the movies I bought onto my NAS and was done with it.
But through other circumstances I now have quite a lot of CPU power at my disposal, which made me revive my old hobby, much to the burden of my electricity bill.
Old 15th May 2022, 14:07   #1166  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,372
Yeah, actually I don't know why I still develop SMDegrain lol. I do like my films grainy, but I still use it when grain is blocky or uneven/dirty, then add grain back.

You can automate scene-based SMDegrain filtering with SceneStats; you need to enclose it in a ScriptClip block and tune your values according to the stats.
Old 15th May 2022, 14:12   #1167  |  Link
tormento
Acid fr0g
 
Join Date: May 2002
Location: Italy
Posts: 2,913
Quote:
Originally Posted by Dogway View Post
You can automate scene based SMDegrain filtering with SceneStats, you need to enclose it in a ScriptClip block and tune your values according to the stats.
Show us.

And please don't stop developing. GPU MVTools are arriving and perhaps a new native CUDA BM3D.
Old 15th May 2022, 14:49   #1168  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
Quote:
Originally Posted by Dogway View Post
Yeah, actually I don't know why I still develop SMDegrain lol.
Don't you dare and stop your amazing work!

Looking at the early days of this forum and the filters/tools available back then, it's amazing how these evolved over the past two decades thanks to each and everyone involved in this process.

And with filters like these some movie gems finally get the treatment they deserve.
Old 15th May 2022, 15:42   #1169  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,372
By the way, Asd-g just released the AGM (AdaptiveGrainMask) plugin. I didn't know what it did until I read that it's some kind of exposure reading, so there is less overall grain in bright scenes, as would happen naturally on film.
I'm tempted to try this with cretindesalpes' plugin; I haven't had time to check it.

Now that I think about it... it only looks at AverageLuma. Maybe I can implement the '_SceneExposure' algo described here on a per-frame basis, and use that for modulation in GrainFactory3mod. I mean, AGM is just an opacity mask, it won't tune grain size (among other things) based on pixel value.

Scene based SMDegrain:

Code:
    SceneStats(mode="Range+Stats+Motion+Detail+EV", interval=0.5, th=0.5, dFactor=3.7, show=false)

    ScriptClip(function [] () {

        sts  = propGetAsArray("_SceneStats")
        mo   = propGetFloat  ("_SceneMotion")
        de   = propGetFloat  ("_SceneDetail")
        ev   = propGetInt  ("_SceneExposure")
        mo1  = min(6,round(exp(0.54*pow(mo,-0.44)+0.7)-2.9))
        mo2  = round(-125*mo+112)
        ev1  = ev==5?0:max(0,round(7.7*pow(ev,-0.44)-2.7))

        SMDegrain(mo1, 400, prefilter=-1, ContraSharp=mo2, Str=ev1, RefineMotion=true)
    } )

Probably for performance reasons it would be better to precompute the prefilter clip outside the ScriptClip.
Here I only plugged in the motion and exposure variables, but you are free to use other stats or mix them to configure the SMDegrain settings.
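For instance, a hedged variation of the block above with the prefilter built once outside the runtime call (the sneo_dfttest settings are just the ones posted earlier in the thread; note the [PRE] capture so the local variable is visible inside the runtime function):

Code:
PRE = sneo_dfttest(Y=3, U=3, V=3, tbsize=1, slocation="0.0:4 0.5:16 0.9:32 1.0:64", chrslocation="0.0:2 0.5:8 0.9:16 1.0:32")
SceneStats(mode="Range+Stats+Motion+Detail+EV", interval=0.5, th=0.5, dFactor=3.7)

ScriptClip(function [PRE] () {
    mo  = propGetFloat("_SceneMotion")
    mo1 = min(6, round(exp(0.54*pow(mo,-0.44)+0.7)-2.9))
    SMDegrain(mo1, 400, prefilter=PRE, RefineMotion=true)
} )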

Last edited by Dogway; 15th May 2022 at 15:58.
Old 15th May 2022, 17:10   #1170  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
And he just pulls that out of the hat. Just like that.

However, integrating that successfully into my scripts somewhat exceeds my AviSynth knowledge, which is not much more than loading filters and setting parameters here and there.
Old 16th May 2022, 07:26   #1171  |  Link
tormento
Acid fr0g
 
Join Date: May 2002
Location: Italy
Posts: 2,913
Quote:
Originally Posted by LeXXuz View Post
However, integrating that successfully into my scripts somewhat exceeds my AviSynth knowledge, which is not much more than loading filters and setting parameters here and there.
We share the same boat, pal.
Old 16th May 2022, 17:34   #1172  |  Link
kedautinh12
Registered User
 
Join Date: Jan 2018
Posts: 2,169
I get the error "there is no function named 'ZoptiResize'" when using it with this video:
https://drive.google.com/file/d/1b0W...ew?usp=sharing

my scripts:
LoadPlugin("C:\Megui\MeGUI-2924-64\tools\lsmash\LSMASHSource.dll")
LWLibavVideoSource("C:\Users\ADMIN\Downloads\Yuuri - Betelgeuse [1440x1080i MPEG2 SSTV HD].ts")
AnimeIVTC(mode=1)
deep_resize(1920.0, 1080.0,edge="Zopti2",flat="SoftCubic100",show=false,th=0.6,elast=9)
Old 16th May 2022, 17:43   #1173  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,372
Yes, sorry, Zopti2 has been deprecated; it's now called "Zopti", its original name, and the old Zopti is now "ZoptiN", for Neutral.
Old 18th May 2022, 07:34   #1174  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
I've been playing around with different settings.

And I don't get why this setup:
Code:
ConvertBits(16)
PRE=(sneo_dfttest(Y=3, U=3, V=3, tbsize=1, slocation="0.0:4 0.5:16 0.9:32 1.0:64", chrslocation="0.0:2 0.5:8 0.9:16 1.0:32"))
SMDegrain(tr=6, thSAD=600, contrasharp=50, limitS=true, LFR=150, DCTFlicker=true, refinemotion=true, truemotion=true, Str=5.0, pel=2, subpixel=3, prefilter=PRE, chroma=true, plane=4)
neo_f3kdb(range=15, Y=64, Cb=32, Cr=32, grainY=32, grainC=16, sample_mode=4)
Prefetch(16,16)
Return(Last)
is much slower than this one:
Code:
ConvertBits(16)
PRE=(sneo_dfttest(Y=3, U=3, V=3, tbsize=1, slocation="0.0:4 0.5:16 0.9:32 1.0:64", chrslocation="0.0:2 0.5:8 0.9:16 1.0:32"))
PRE2=SMDegrain(tr=3, thSAD=600, blksize=32, contrasharp=false, limitS=true, LFR=150, DCTFlicker=true, refinemotion=true, truemotion=true, Str=5.0, pel=2, subpixel=3, prefilter=PRE, chroma=true, plane=4)
SMDegrain(tr=6, thSAD=300, blksize=32, contrasharp=50, limitS=true, LFR=false, DCTFlicker=false, refinemotion=true, truemotion=true, Str=5.0, pel=2, subpixel=3, prefilter=PRE2, chroma=true, plane=4)
neo_f3kdb(range=15, Y=64, Cb=32, Cr=32, grainY=32, grainC=16, sample_mode=4)
Prefetch(16,16)
Return(Last)
Or this one:
Code:
ConvertBits(16)
PRE=(sneo_dfttest(Y=3, U=3, V=3, tbsize=1, slocation="0.0:4 0.5:16 0.9:32 1.0:64", chrslocation="0.0:2 0.5:8 0.9:16 1.0:32"))
PRE2=(sneo_dfttest(Y=3, U=3, V=3, tbsize=1, slocation="0.0:2 0.5:8 0.9:16 1.0:32", chrslocation="0.0:1 0.5:4 0.9:8 1.0:16"))
SMDegrain(tr=6, thSAD=600, blksize=32, contrasharp=false, limitS=true, LFR=150, DCTFlicker=true, refinemotion=true, truemotion=true, Str=5.0, pel=2, subpixel=3, prefilter=PRE, chroma=true, plane=4)
SMDegrain(tr=3, thSAD=300, blksize=32, contrasharp=50, limitS=true, LFR=false, DCTFlicker=false, refinemotion=true, truemotion=true, Str=5.0, pel=2, subpixel=3, prefilter=PRE2, chroma=true, plane=4)
neo_f3kdb(range=15, Y=64, Cb=32, Cr=32, grainY=32, grainC=16, sample_mode=4)
Prefetch(16,16)
Return(Last)
Results:


Filesizes don't differ that much, so the encoder has about the same workload. What am I missing here?
Old 18th May 2022, 07:58   #1175  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,372
Sure, you are using blksize=32, so 4 times fewer blocks than, say, 16, but the MV precision is not going to be very good, so visually check that everything is fine.
Old 18th May 2022, 08:58   #1176  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
Quote:
Originally Posted by Dogway View Post
Sure, you are using blksize=32, so 4 times less blocks than say 16, but precision in MV is not going to very good so visually check everything is fine.
Totally missed that. Thanks Dogway.

EDIT: So an increase in block size decreases MV precision. 16 is the default, right? Would it be beneficial to go down to 8 if speed is of less concern, or is the possible visual gain negligible?

Last edited by LeXXuz; 18th May 2022 at 09:07.
Old 18th May 2022, 09:42   #1177  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,280
Mvtools default blocksize is 8. In some scripts I see recalculation down to 4 for best quality.

"increase in blocksize decreases mv precision"

Not only that. With a large enough thSAD it will blur areas containing small, differently moving objects, and with a low thSAD it will not degrain them. It will also distort object edges more, even with overlap enabled.

Last edited by DTL; 18th May 2022 at 09:48.
Old 18th May 2022, 16:22   #1178  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
That's interesting and could explain some effects I've seen during my tests.
I'll give it a try with 4 then, if there is no penalty other than speed.
Old 18th May 2022, 16:35   #1179  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,280
"give it a try with 4"

4 may be too low to search directly. It may be better to start the search from 8x8 or bigger and recalculate (refine) down to 4x4.

MRecalculate: Refines and recalculates motion data of previously estimated (by MAnalyse) motion vectors with different super clip or new parameters set (e.g. lesser block size), after divide, etc. The two-stage method may be also useful for more stable (robust) motion estimation. The refining is at finest hierarchical level only. Interpolated vectors of old blocks are used as predictors for new vectors, with recalculation of SAD. Only bad quality new vectors with SAD above threshold thSAD will be re-estimated by search. thSAD value is scaled to 8x8 block size. Good vectors are not changed, but their SAD will be re-calculated and updated.

As far as I can see on GitHub, the SMDegrain script can do one step of half-size refining with MRecalculate in refinemotion=true mode. So if you use an initial blocksize of 8 and set refinemotion=true, it refines down to 4x4.

More complex scripts can do more than one refining step, so you can start from a larger block size like 32x32 to decrease the probability of gross search errors and make three refining steps down to 4x4 for best quality.
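If it helps, a hedged sketch of that multi-step refinement with raw mvtools calls instead of SMDegrain (whether MRecalculate accepts a multi-vector clip plus the tr argument depends on the build; all block sizes and thresholds are placeholders):

Code:
sup = MSuper(pel=2)
vec = MAnalyse(sup, multi=true, delta=6, blksize=32, overlap=16)       # coarse search, fewer gross errors
vec = MRecalculate(sup, vec, tr=6, blksize=16, overlap=8, thSAD=200)   # refine step 1
vec = MRecalculate(sup, vec, tr=6, blksize=8,  overlap=4, thSAD=200)   # refine step 2
vec = MRecalculate(sup, vec, tr=6, blksize=4,  overlap=2, thSAD=200)   # refine step 3
MDegrainN(sup, vec, 6, thSAD=250, thSAD2=200)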

Last edited by DTL; 18th May 2022 at 16:59.
Old 18th May 2022, 17:14   #1180  |  Link
LeXXuz
21 years and counting...
 
Join Date: Oct 2002
Location: Germany
Posts: 724
Quote:
Originally Posted by DTL View Post
So if you use initial blocksize 8 and set refinemotion=true it makes refining to 4x4.
Thanks DTL, I'll give that a try.

Quote:
Originally Posted by DTL View Post
More complex scripts can do >1 step refining so you can start from larger block size like 32x32 to decrease probability of gross search error and make 3 steps refining to 4x4 for best quality.
Maybe Dogway will add that one day to SMDegrain.