Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
#1781 | Link
Guest
Posts: n/a
Quote:
Hi LeXXuz, what is the benefit of using globals in denoising? Could it improve the denoising? Could it replace prefilters? Do you have a script to share so I can run some comparison tests against my current setup? Thank you in advance.
#1782 | Link
21 years and counting...
Join Date: Oct 2002
Location: Germany
Posts: 724
Quote:
The globals parameter can be helpful if you want to run more than one instance of SMDegrain for denoising. The benefit is that you don't have to recalculate motion vectors for your second instance of SMDegrain, which can improve performance. Some forms of noise are better reduced by two instances of SMDegrain (each with weaker settings) than by a single instance with stronger settings. This doesn't replace prefiltering in any way; appropriate prefiltering is always important to get the best results with SMDegrain.

Last edited by LeXXuz; 3rd December 2022 at 09:48.
#1783 | Link |
Registered User
Join Date: Jan 2018
Posts: 2,169
Hi Dogway, your scripts' requirements list only Transforms Pack. I wonder: do they need all 3 parts of it, or only the main version? Of course I load all of them; I'm asking for my friend (old man).

Last edited by kedautinh12; 7th December 2022 at 06:05.
#1784 | Link |
Guest
Posts: n/a
@Dogway,

"Old Man" here, just to give a little more info on kedautinh12's post above. I asked him for some help a little earlier today, and he mentioned the 3 Transforms Packs:

Transforms Pack-Main
Transforms Pack-Models
Transforms Pack-Transfers

I have ONLY been using the "Main" one with my scripts that need Transforms Pack, and it works. Adding the other 2 didn't make any difference to the script I was working on.

Respect.
#1785 | Link |
Registered User
Join Date: Mar 2012
Location: Texas
Posts: 1,675
Transforms Pack-Main has lots of functions, and some of those depend on functions included in the Models/Transfers packs. If you're using the Main pack and it's not complaining about a missing function, then it doesn't need anything from the other two packs. It doesn't hurt one bit to have all of the scripts.

Last edited by Reel.Deel; 7th December 2022 at 07:54.
#1786 | Link
Registered User
Join Date: Jan 2018
Posts: 2,169
Hi, I'd like to report a new issue: when I use these scripts, I get the error below (tested with several videos and I still get the same issue, so I don't think it comes from my videos):
Quote:
#1787 | Link |
Registered User
Join Date: Nov 2009
Posts: 2,372
|
Sure, you are comparing the original with a scaled clip in ex_LFR().
By the way, among other things I finished the CAT refactor, so I'm uploading TransformsPack v1.0 final in a few moments. I'll write a full-fledged post later in the day with examples and benchmarks.
__________________
[i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread
#1789 | Link |
Registered User
Join Date: Nov 2009
Posts: 2,372
|
I had time to run some speed benchmarks and some examples for ConvertFormat().

Mind you, I still need to work on the Preset system and Model conversion within ConvertFormat(). In my tests all filters run better at Prefetch(4). P8 indicates the speed at 8-bit for comparison. The template script is as follows:

Code:
setmemorymax(2048*4)
DGSource("src.dgi")
ConvertBits(16)
[...] # Filters
Prefetch(4)

YUV to RGB

Code:
100 329 P8 520  z_ConvertFormat(pixel_type="RGBP16",colorspace_op="709:709:709:l=>rgb:709:709:l",resample_filter="spline36")
 65 213 P8 251  fmtc_resample(kernel="spline36", css="444")
                fmtc_matrix(mat="709", col_fam="RGB", fulld=false)
 64 209 P8 372  ConvertFormat(1,1,"YUV","RGB",kernel="spline36",tv_out=true)
 59 194 P8 368  ConverttoPlanarRGB(matrix="709:f",chromaresample="spline36")
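The left-hand column in these tables appears to be each filter's fps normalized to the fastest entry. A quick Python sketch (my own reconstruction of the arithmetic, not Dogway's code) reproducing the YUV to RGB table:

```python
# Relative speed = fps normalized to the fastest filter, rounded.
# fps values taken from the 16-bit YUV -> RGB benchmark above.
fps = {
    "z_ConvertFormat": 329,
    "fmtconv": 213,
    "ConvertFormat": 209,
    "ConverttoPlanarRGB": 194,
}
best = max(fps.values())
rel = {name: round(100 * v / best) for name, v in fps.items()}
print(rel)
```

This matches the 100 / 65 / 64 / 59 column above.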
RGB to YUV

Code:
100 216  z_ConvertFormat(pixel_type="YUV420P16",colorspace_op="rgb:709:709:f=>709:709:709:l",resample_filter="spline36")
 76 163  ConverttoYUV420(matrix="709:l",chromaresample="spline36")
 75 161  fmtc_matrix(mat="709", col_fam="YUV")
         fmtc_resample(kernel="spline36", css="420")
 64 138  ConvertFormat(1,1,"RGB","YUV",kernel="spline36")
Resize (to 3840x2072)

Code:
100 253 P8 351  z_ConvertFormat(3840,2072,resample_filter="spline36")
 83 210 P8 243  fmtc_resample(3840,2072,kernel="spline36")
 63 158 P8 246  ConvertFormat(3840,cs_out="",kernel="spline36")
Full chain (601 to 709)

Code:
100 294  z_ConvertFormat(pixel_type="YUV420P16",colorspace_op="601:709:170m:l=>709:709:170m:l",resample_filter="spline36")
 26  77  fmtc_resample (css="444",kernel="spline36")
         fmtc_matrix (mats="601", matd="rgb")
         fmtc_transfer (transs="1886", transd="linear")
         fmtc_primaries(prims="170m", primd="709")
         fmtc_transfer (transs="linear", transd="1886")
         fmtc_matrix (mats="rgb", matd="709")
         fmtc_resample (css="420",kernel="spline36")
 13  38  ConvertFormat(cs_in="601",cs_out="709",kernel="spline36")

Some useful examples for ConvertFormat():

No Moiré

Code:
ConvertFormat(0.5,nomoiree=true,kernel="RobiSharp")

Ringing

Plain lanczos4 x4 upscale. With sigmoid scaling, a bit better:

Code:
ConvertFormat(4,cs_out="",kernel="lanczos4",scale_space="sigmoid")

With noring=true:

Code:
ConvertFormat(4,cs_out="",kernel="lanczos4",noring=true)

Code:
ConvertFormat(4,cs_out="",kernel="lanczos4",noring=true,scale_space="sigmoid")

And with nnedi3:

Code:
ConvertFormat(4,cs_out="",kernel="nnedi3")

Code:
ConvertFormat(4,cs_out="",kernel="nnedi3",noring=true)

UHD upscale

Talking about nnedi3, you can upscale to UHD with decent speed at 72 fps (nnedi3 GPU):

Code:
ConvertFormat(3840,cs_out="",kernel="nnedi3")

Code:
deep_resize(3840,grain=0) # 12 fps

Downscaling

Now for the opposite. With typical downscale kernels you can't go too far. See this example with 'Zopti1080', a bicubic with -0.99 and 0.06 coefficients.

Source / Spline64:

Code:
ConvertFormat(0.25,cs_out="",kernel="spline64")

Code:
ConvertFormat(0.25,cs_out="",kernel="Zopti1080")

Code:
ConvertFormat(0.25,cs_out="",kernel="spline64",scale_space="linear")

Code:
ConvertFormat(0.25,cs_out="",kernel="SSIM2")

Code:
deep_resize(0.25)
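A side note on 'Zopti1080': assuming those two coefficients are the b and c parameters of the standard two-parameter (Mitchell-Netravali) bicubic family, the kernel can be inspected numerically. This sketch (my own illustration, not TransformsPack code) shows the oversharp center tap and the negative lobes, and checks that integer-shifted taps still sum to 1, i.e. flat areas stay flat despite the aggressive lobes:

```python
def bicubic(x, b=-0.99, c=0.06):
    # Mitchell-Netravali two-parameter bicubic kernel (support |x| < 2).
    x = abs(x)
    if x < 1.0:
        return ((12 - 9*b - 6*c) * x**3
              + (-18 + 12*b + 6*c) * x**2
              + (6 - 2*b)) / 6.0
    if x < 2.0:
        return ((-b - 6*c) * x**3
              + (6*b + 30*c) * x**2
              + (-12*b - 48*c) * x
              + (8*b + 24*c)) / 6.0
    return 0.0

print(round(bicubic(0.0), 4))  # 1.33: center tap overshoots 1 -> sharp, ring-prone
print(round(bicubic(1.3), 4))  # -0.0654: negative lobe
# Partition of unity: taps at any phase sum to 1 (no DC shift)
print(round(sum(bicubic(0.3 + n) for n in range(-2, 3)), 9))  # 1.0
```

The strongly negative b buys sharpness at the cost of ringing, which is why such kernels are tuned for a specific downscale target rather than used everywhere.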
Last edited by Dogway; 10th December 2022 at 12:23.
#1791 | Link
Acid fr0g
Join Date: May 2002
Location: Italy
Posts: 2,913
|
Quote:
Moreover, if I am not wrong, avsresize lets you compact the conversion process into a single call instead of two. I can't remember about fmt_

Perhaps you could gift us with some more tests.
__________________
@turment on Telegram

Last edited by tormento; 10th December 2022 at 11:25.
#1792 | Link |
Registered User
Join Date: Nov 2009
Posts: 2,372
|
Well, I can't compete with compiled programs, and yet ConvertFormat() manages to equal fmtconv and the internal Convert functions in some modes. avsresize is another thing; I don't know if they handcrafted the code in assembly or what, but it's insanely fast.

In any case that was never my purpose. The beauty of it is that you can, for example, scale up with nnedi3, using noring, in linear scale, with correct chroma placement, and change transfer or color space with gamut compression, all in "one step" and with a self-explanatory one-liner. Moreover, all the coefficients are derived, so you will probably get more accurate* results (though not standard). It also serves me and others as a notebook for color management, as I write notes here and there, and the code isn't populated with CPU/SIMD-specific intrinsics.

*Yeah, I realize CIE 1931 is far from accurate, that's why I keep working on the CMFs.

Running these tests gave me some ideas for further optimizations; namely, while fixing several bugs deep_resize() saw a near 50% speed decrease, so I might give it a look for a refactor.
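As a sketch of what "derived coefficients" means in practice (my own stdlib-Python illustration, not TransformsPack code): the Rec.709 luma weights are not magic numbers, they fall out of the primaries' chromaticities and the D65 white point by solving a 3x3 system.

```python
# Derive RGB->Y (luma) weights from chromaticities: solve M*s = white_XYZ,
# where M's columns are the primaries' XYZ directions scaled so Y = 1.
def xy_to_xyz(x, y):
    # xyY with Y=1 -> XYZ
    return [x / y, 1.0, (1.0 - x - y) / y]

def solve3(m, v):
    # Cramer's rule for a 3x3 linear system
    def det(a):
        return (a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
              - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
              + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]))
    d = det(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det(mi) / d)
    return out

# BT.709 primaries and D65 white point (CIE 1931 xy)
r, g, b = xy_to_xyz(0.64, 0.33), xy_to_xyz(0.30, 0.60), xy_to_xyz(0.15, 0.06)
white   = xy_to_xyz(0.3127, 0.3290)
m = [[r[i], g[i], b[i]] for i in range(3)]    # columns = primaries
kr, kg, kb = solve3(m, white)  # scale factors = luma weights (Y rows are 1)
print(round(kr, 4), round(kg, 4), round(kb, 4))  # 0.2126 0.7152 0.0722
```

The familiar 0.2126/0.7152/0.0722 of BT.709 drop out directly; change the primaries or white point and the weights follow.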
Last edited by Dogway; 10th December 2022 at 22:01.
#1793 | Link |
Registered User
Join Date: Jul 2018
Posts: 1,280
|
"With typical downscale kernels you can't go too far."

Scaling for moving pictures is a very complex task if you are after the sharpest result. Static scalers may give nice results on stills but can cause significant artifacts in motion.

So maybe next-generation scalers can be scene-adaptive: apply the best static scaler to the static areas of a scene (very common in real footage, e.g. a locked-off camera for a whole scenecut) and a softer, motion-friendly scaler to the moving parts. 'Classic' non-content-adaptive scalers for moving pictures are therefore not as sharp as the best static scalers, but they are fast and produce an average result over both static and moving parts of the frame across the whole movie.

If possible, it would be nice to see a 'motion' test of scalers using synthetic or external slow-motion content with typical motion speeds of about 0.1..0.3 samples per frame, to exercise different content positions relative to the sampling grid. Scalers that are 'as sharp as a single sample' may significantly fail this test, because moving a single sample while keeping its amplitude and 'width' at arbitrary real-number positions relative to the sampling grid is not possible. An animated .GIF file may be the natural way to show a 'motion test' result on a text forum.

The typical issue for too-sharp and/or non-linear scalers in motion is 'temporal aliasing', which looks like flickering of small details and sharp edges. So selecting a single scaler for all moving pictures is a balance between temporal artifacts, spatial artifacts and sharpness; skipping temporal tests may lead to selecting a scaler that is too sharp and produces too many temporal artifacts on natural moving pictures. The 'classic' AviSynth convolution-based linear scalers are the result of such a balance.

Last edited by DTL; 10th December 2022 at 13:38.
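The single-sample point above can be made concrete with a quick numeric sketch (my own illustration, not from the thread): resample a one-sample impulse with a lanczos3 kernel at integer versus half-sample positions and compare the peak amplitude the grid can represent.

```python
import math

def lanczos3(x):
    # lanczos3 kernel: sinc(x) * sinc(x/3), windowed to |x| < 3
    if x == 0.0:
        return 1.0
    if abs(x) >= 3.0:
        return 0.0
    px = math.pi * x
    return 3.0 * math.sin(px) * math.sin(px / 3.0) / (px * px)

def peak_after_shift(phase):
    # A single white sample moved by 'phase' samples, then re-sampled on
    # the integer grid: the new sample values are just the kernel
    # evaluated at (n - phase). Return the largest one.
    return max(lanczos3(n - phase) for n in range(-3, 4))

print(round(peak_after_shift(0.0), 3))  # 1.0: impulse lands on the grid
print(round(peak_after_shift(0.5), 3))  # 0.608: impulse between samples
```

A detail one sample wide loses roughly 40% of its peak whenever it sits halfway between samples, so as it moves it pulses between full and reduced amplitude: exactly the temporal flicker described above.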
#1794 | Link |
Pig on the wing
Join Date: Mar 2002
Location: Finland
Posts: 5,817
|
Can you do a test with that astronomical image: linear-light downscaling with avsresize (using HBD for all steps)? I'd be interested in seeing the result, as I've always understood that linear light is better for such source material.
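For intuition on why linear light matters for that kind of high-contrast source: a tiny numeric sketch (my own, using a pure power-2.2 gamma as a stand-in for the real transfer curve) of a 2:1 box downscale of a bright star against black sky.

```python
# Box-downscale two pixels (black sky, bright star) by 2:1,
# once directly on gamma-encoded values, once in linear light.
GAMMA = 2.2  # simple power-law stand-in for the display transfer

def to_linear(v):
    return v ** GAMMA

def to_gamma(v):
    return v ** (1.0 / GAMMA)

sky, star = 0.0, 1.0

gamma_avg  = (sky + star) / 2                               # naive average
linear_avg = to_gamma((to_linear(sky) + to_linear(star)) / 2)

print(gamma_avg)              # 0.5
print(round(linear_avg, 3))   # ~0.73
```

Averaged in the gamma domain the star drops to 0.5; averaged in linear light it comes back at about 0.73, so point sources keep more of their energy, which is why astronomy material is a worst case for gamma-domain downscaling.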
__________________
And if the band you're in starts playing different tunes I'll see you on the dark side of the Moon...
#1796 | Link
Registered User
Join Date: Nov 2009
Posts: 2,372
|
Quote:
Quote:
#1798 | Link
Registered User
Join Date: Nov 2009
Posts: 2,372
Quote:
Since you are keeping Dolby Vision (you need to attach the metadata to the re-encode), the metrics might change, but not as much as if you downscaled further. I did a test, and scaling in Dolby Vision 'log' with SSIM2 has a similar effect to scaling with Zopti1080 in 'gamma' (because you lose some sharpness in log).

That is true; it really depends on whether your downscale is display-referred or scene-referred. For display-referred you want something softer and motion-aware; maybe a PP filter like Soothe() or ex_reduceflicker() can help, it is also content-dependent. On the other side, with scene-referred downscales you are better off relying on smart guest-device upscalers/PP, similar to basicvsr and such.
Last edited by Dogway; 10th December 2022 at 22:24.