Old 2nd June 2023, 10:02   #2601  |  Link
DTL
But the last release from pinterf was in mid-2021. About two years have passed now, and there is still no sign of the expected merging of my new features into pinterf's builds, nor of new features from pinterf himself. As I see it, he simply does not have the time: he is busy with the AVS core, and understanding the 'complex-200x' mvtools is not easy for a programmer spread across many projects.

The processing in my builds up to 2023 in MAnalyse/MDegrainN has become even more complex, so it may be even harder for a generalist programmer to take over its support. That was pinterf's original idea in 2021: keep the old mvtools design unchanged, because almost no one understands how it works, so any new feature makes the project even harder to support.

So I ask Asd-g not to simply take the pinterf version, but to make a combined port from my branch and the stable pinterf branch, carrying over as many new features as possible, at least those that matter for denoising quality. The really development-heavy multi-block search modes based on AVX2 and AVX512 SIMD computing may be skipped given today's very limited developer resources: they are only a performance boost, and we now have hardware-accelerated DX12-ME modes for MAnalyse, which can be used at least for the first generations of multi-generation MV search.

Since mvtools is currently the main AVS API for motion search and MC denoising, and is used by many scripts, it would be good to have at least a few active developers per year, or maybe per decade (as things get slower and slower compared with the 'singularity of digital civilization' from the too-fast, constantly accelerating progress expected in the early 2000s).
Old 2nd June 2023, 15:05   #2602  |  Link
salvo00786
Yes, I started with StaxRip 2.19.0, and after that I tried going backwards.
Yes, I tried with your script, but the error is the same. I don't know why this doesn't work. It's strange.
Old 2nd June 2023, 15:49   #2603  |  Link
Guest
Quote:
Originally Posted by salvo00786 View Post
Yes, I started with StaxRip 2.19.0, and after that I tried going backwards.
Yes, I tried with your script, but the error is the same. I don't know why this doesn't work. It's strange.
Well, that's not good.

Have you asked in the StaxRip thread? (I haven't seen your name there.)

Emulgator left an interesting post there...

...worth a look, since the suggestions given here haven't worked either.
Old 2nd June 2023, 17:11   #2604  |  Link
anton_foy
So here are the two problem clips I'm using; the "man" clip is noisier and more problematic than the "sheep" clip.

IMAGES
Raw Clips UHD 55mb

Speed/Quality:
On my slow computer I get the fastest speed with smdegrain uhdhalf=true, but with much less detail / more blur than with it set to false.

The DTL script with DX12-ME is a little slower than the above, but a bit faster than smdegrain uhdhalf=false, although the DTL script denoises the least and leaves too much temporal instability. I think I am not using DTL's version correctly, since I did not understand all the new params, so I'm sorry if I'm doing it an injustice. Also, I should raise the tr, but then it gets too slow. Note that I do not use any prefiltering with DTL's version, only its internal mvlpfGauss.

The mdegrain script is pretty close to the smdegrain call but a little slower, yet it denoises more and still keeps a lot of detail.

Here is the code for testing the three approaches: SMDegrain (uhdhalf=true / false), DTL's mvtools version, and an mdegrain script:
Code:
SetMemoryMax(30200)
LSMASHVideoSource("C:\Users\era\Videos\man.mp4", decoder="h264_cuvid")
propclearall()
convertbits(16)
Levels(0, 1, 255*256, 0, 235*256, coring=false)
converttoYUV444()

cd=convertbits(8,dither=-1).convertToYv12()
hq=cd.hqdn3d(0,0,12,12,13,mt=true).merge(cd,0.33).removegrain(12).coloryuv(levels="tv->pc",gamma_y=190)#.invert("y").levels(0,0.5,255,0,255)#

# Run only ONE of the three denoise blocks below; as written they would chain on top of each other.

#smdegrain#
smdegrain(tr=6,prefilter=hq,uhdhalf=true)

#mdegrain-script#
mdgr(tr=6,auxclip=hq,chroma=true)

#DTL#
tr=6 # tr must be defined; 6 matches the tr used in the other two calls
super=MSuper(last, mt=false, chroma=true, pel=1, levels=1)
sup=MSuper(cd, mt=false, chroma=true, pel=1) # do not create refined subplanes for pel > 1
multi_vec=MAnalyse (sup, multi=true, blksize=8, delta=tr, overlap=0, chroma=true, optSearchOption=5, mt=false, levels=1, UseSubShift=1)
mdegrainn(last,super, multi_vec, tr, thSAD=350, thSAD2=340, IntOvlp=3,mt=false, wpow=4, thSCD1=400, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, 
\ MVLPFGauss=0.9, thMVLPFCorr=50, MPBthSub=10, MPBthAdd=20, MPBNumIt=2)


deblock(planes="UV")
neo_F3KDB(grainY=0,grainC=0,mt=false,output_depth=16,range=30).mergechroma(last)

Convertbits(10,dither=1)
prefetch(4)
# prefetch(2) #for DTL's version 

# MDegrain-function:
function mdgr(clip Input, clip "auxclip", int "thSAD", int "thSADC", int "TR", int "BLKSize", int "Rb", int "Overlap", int "Pel", bool "Chroma", int "Shrp", int "Rad", float "thr", bool "tm", float "Falloff")
{
    thSAD   = default(thSAD,    150)
    thSADC  = default(thSADC,   230)
    TR      = default(TR,         6)
    BLKSize = default(BLKSize,   32)
    Overlap = default(Overlap,   16)
    Rb      = default(Rb,         8)
    Pel     = default(Pel,        1)
    Chroma  = default(Chroma, false)
    Tm      = default(Tm,     false)
    Falloff = default(Falloff,  0.9)
    Shrp    = default(Shrp,      56)
    Rad     = default(Rad,        1)
    Thr     = default(Thr,     1.21)
    aux     = defined(auxclip)

    w     = width(Input)
    h     = height(Input)
    isUHD = (w > 2599 || h > 1499)
    nw    = round(w/2.0)
    nh    = round(h/2.0)

    inputA = aux ? auxclip : Input

    super        = Input.unsharpmask_hbd(Shrp, Rad, Thr, uv=1).MSuper(pel=Pel, hpad=0, vpad=0, chroma=Chroma, mt=false, levels=1, sharp=1)
    superfilt    = MSuper(inputA, pel=Pel, hpad=0, vpad=0, mt=false, sharp=1)
    multi_vector = superfilt.MAnalyse(multi=true, delta=TR, blksize=BLKSize, overlap=Overlap, chroma=Chroma, truemotion=Tm, global=true)
    vmulti2      = MRecalculate(superfilt, multi_vector, thSAD=thSAD, truemotion=false, tr=TR, blksize=Rb, overlap=Rb/2, mt=false, chroma=true)

    Input.MDegrainN(super, vmulti2, TR, thSAD=thSAD, thSAD2=Int(thSAD*Falloff), thSADC=thSADC, thSADC2=Int(thSADC*Falloff))
}
EDIT: To really remove the noise the way I'd like, I usually add neo_vd(1.5,1,6,96) or neo_vd(2.0,1,6,96), but that smears the finer details too much. And if I limit it with ex_limitdif or some edge filtering, it gets slow.

If you'd like to see how the chroma is affected by my B/W 3DLUT, try this out:
OrtoChromatic-LUT

Old 2nd June 2023, 17:57   #2605  |  Link
DTL
For really slow processing and possibly better MVs you can try the multi-generations approach of MAnalyse/MDegrain sequence described in https://forum.doom9.org/showthread.p...52#post1984152 . My tests shows already 2nd generation produces much better MVs. But it require to play with much more params (for MAnalyse and MDegrain at each stage). And it require updated MAnalyse with 2-inputs. MVLPF processing can not do too much with very noisy MVs from single (first) generation MVs search. Also MVLPF currently is FIR-only so to be more effective require not very small tr-setting (>10 is recommented). IIR-type of MVLPF with possible better results at small tr-values is planned but not yet implemented.
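
Schematically (my notation here, not the exact implementation), the FIR MVLPF smooths each block's MV trajectory with Gaussian-like weights over neighbouring frames, while the planned IIR type would be a recursive one-pole filter:

$$\hat{v}_n=\sum_{k=-r}^{r} w_k\,v_{n+k},\qquad w_k\propto e^{-k^2/(2\sigma^2)}\qquad\text{(FIR)}$$

$$\hat{v}_n=\alpha\,v_n+(1-\alpha)\,\hat{v}_{n-1}\qquad\text{(IIR)}$$

The FIR form needs enough taps (so a not-too-small tr) to be effective, while the IIR form can accumulate history even at small tr values.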

An example of using DX12-ME and on-CPU MAnalyse for multi-generation MV refining is in https://forum.doom9.org/showthread.p...76#post1984176 .

Also, an additional MDegrain with pmode=1 (from https://forum.doom9.org/showthread.p...27#post1984527 ) may be used as a final non-linear cleanup.

And yes, even a small improvement in denoising may require a lot of computation (a sequence of at least 3 MDegrain calls is the minimum recommended in 2023: a first-generation pre-degrain, the main (second-generation) MDegrain, and a final non-linear MEL-mode statistical refinement, each expected to be better with tr>10). Each generation of MAnalyse/MDegrain has its own set of params; at the very least, the thSAD values differ completely between the first and second generations.
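
A condensed sketch of that three-stage chain (assuming one of my test builds: SuperCurrent and pmode are not in classic mvtools2; all values are placeholders taken from the full example later in this thread):

Code:
# assumes the noisy source is already loaded as last
# Generation 1: ordinary search on the noisy source, pre-degrain with a high thSAD
super_src = MSuper(last, chroma=true, pel=2, mt=false)
vec_g1    = MAnalyse(super_src, multi=true, delta=12, overlap=0, chroma=true, mt=false)
g1        = MDegrainN(last, super_src, vec_g1, 12, thSAD=450, thSAD2=430, mt=false)

# Generation 2: search on the pre-degrained clip, but match blocks against the
# original source via the 2-input MAnalyse (SuperCurrent)
super_g1 = MSuper(g1, chroma=true, pel=2, mt=false)
vec_g2   = MAnalyse(super_g1, SuperCurrent=super_src, multi=true, delta=12, chroma=true, mt=false)
g2       = MDegrainN(last, super_src, vec_g2, 12, thSAD=250, thSAD2=240, mt=false)

# Final non-linear MEL-mode statistical pass (pmode=1)
super_g2 = MSuper(g2, chroma=true, pel=2, mt=false)
MDegrainN(g2, super_g2, vec_g2, 12, thSAD=250, thSAD2=240, pmode=1, mt=false)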

Old 2nd June 2023, 20:18   #2606  |  Link
anton_foy
Quote:
Also, an additional MDegrain with pmode=1 (from https://forum.doom9.org/showthread.p...27#post1984527 ) may be used as a final non-linear cleanup.
I tried this, but it takes several minutes to even start, and I get this.
When I try to move forward in the timeline, VirtualDub crashes after several more minutes. I tried the FIR mode in vsttempsmooth, which seemed very interesting, so I think it has potential. Is it the same mode as in MDegrain, or am I wrong?

EDIT: Sorry Dogway, I realized this does not belong in your thread. I just thought SMDegrain could hopefully benefit from DTL's version in the future, once it is optimized.

Old 3rd June 2023, 00:07   #2607  |  Link
DTL
Oh - it looks like 16-bit (and YUV444) really does not work in my builds for many processing modes. Sorry. Covering the many bit-depth/block-size combinations in real use is a real task for good programmers. I still live with YV12 only. If that chroma error also happens with YV12, post the complete script example in the DX12 thread and I will try to see what is happening, at least in that format.

The general idea for 2023, after some years of testing: if the user wants the best possible denoise quality, a single MAnalyse/MDegrain call is not enough (even with builds after 2.7.45). In theory it is possible to arrange several MAnalyse/MDegrain stages into a single compiled 'filter' doing the same processing as the multi-generation examples, but it would be much more complex to program, the number of params for a single filter call would become huge, and it still would not help performance. So it will still require some 'complex scripting', like SMDegrain. And yes, to process UHD frames at a not-very-slow speed, a powerful enough machine is required to run at least 2 MAnalyse (+MDegrain) stages in sequence (better 3), possibly with at least one DX12-ME hardware accelerator (though in my experience with a GTX 1060 its quality is typically somewhat lower than on-CPU MAnalyse).

Also, for quality tests with better performance it is not necessary to process the whole UHD frame: you can first crop just the most interesting region, for example HD-sized or even SD-sized.

" It is the same mode for MDegrain or am I wrong?"

The implementation of the statistical analysis is similar, but vsTTempSmooth is sample-based while MDegrainN is block-based. More tests are needed to compare quality on different footage and noise/distortion types. Block-based statistical analysis is expected to be better for noisy sources, but that is still only an idea. Also, some new ideas found while adding the IIR mode to vsTTempSmooth (updating the memory with the better block/sample too, so the memory keeps not only the best block/sample but also the 'statistical metric' of the currently stored one) have not yet been transferred to MDegrainN. So the next builds of MDegrainN may work a bit better.

Here is a working comparison of your posted single-MDegrain params against 2-generation processing, for the YV12 sample clip man.mp4 from https://we.tl/t-0CS13HCzLO . I think your tr is 6?

https://imgsli.com/MTgzNjIx - frames 32 and 33 of the interleaved clips.

Code:
# Input plugins
LoadPlugin("ffms2.dll")
LoadPlugin("mvtools2.dll")

SetFilterMTMode("DEFAULT_MT_MODE", 3)

my_thSAD=450
my_thSAD2=my_thSAD-20

my_thSAD_mg=250
my_thSAD2_mg=my_thSAD_mg-10

my_thSCD=my_thSAD+100

my_pzero=10
my_pnew=10
my_pglobal=10

my_pel=2
my_thCohMV=5 # 5..8 for pel=2, 10..16 for pel=4 ?
my_trymany=true

my_oPT=1
my_overlap=0
my_IntOvlp=3
my_searchparam=2

my_MPBNumIt=2

my_init_tr=12
my_refine_tr=12

Function RefineMV(clip mvclip, clip super_ref, clip src, int _thSAD, int _thSAD2, int in_tr, int refine_tr, int my_thSCD, int my_pel, bool my_trymany, int my_pnew, int my_pzero, int my_pglobal, \
int my_oPT, int my_overlap, int my_searchparam, int my_IntOvlp, int my_thCohMV)
{
 g_next=MDegrainN(src, super_ref, mvclip, in_tr, thSAD=_thSAD, thSAD2=_thSAD2, mt=false, wpow=4, thSCD1=my_thSCD, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=my_thCohMV, \
MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=my_IntOvlp)
 super_g_next=MSuper(g_next,chroma=true, mt=false, pel=my_pel)
 return MAnalyse(super_g_next, SuperCurrent=super_ref, multi=true, delta=refine_tr, search=3, searchparam=my_searchparam, trymany=my_trymany, overlap=my_overlap, chroma=true, mt=false,\
 optSearchOption=1, truemotion=false, pnew=my_pnew, pzero=my_pzero, pglobal=my_pglobal, global=true, optPredictorType=my_oPT)
}

Function RefineMV_HW(clip mvclip, clip super_ref, clip src, int _thSAD, int _thSAD2, int in_tr, int refine_tr, int my_thSCD, int my_pel, int my_thCohMV)
{
 g_next=MDegrainN(src, super_ref, mvclip, in_tr, thSAD=_thSAD, thSAD2=_thSAD2, mt=false, wpow=4, thSCD1=my_thSCD, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=my_thCohMV, \
MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=3, UseSubShift=1)
 super_g_next=MSuper(g_next,chroma=true, mt=false, pel=my_pel, levels=1, pelrefine=false)
 return MAnalyse(super_g_next, SuperCurrent=super_ref, multi=true, delta=refine_tr, chroma=true, mt=false, optSearchOption=5, levels=1)
}

FFmpegSource2("man.mp4")

Crop(700, 540, 1300, 540)

noproc=last

#super_hwa=MSuper(last, mt=false, chroma=true, pel=my_pel, hpad=8, vpad=8, levels=1, pelrefine=false)
super_cpu=MSuper(last, mt=false, chroma=true, pel=my_pel, hpad=8, vpad=8, levels=0, pelrefine=true)

#multi_vec_hwa=MAnalyse(super_hwa, multi=true, blksize=8, delta=my_init_tr, overlap=0, chroma=true, optSearchOption=5, mt=false, levels=1)
multi_vec_cpu=MAnalyse(super_cpu, multi=true, delta=my_init_tr, search=3, searchparam=my_searchparam, trymany=my_trymany, overlap=my_overlap, chroma=true, mt=false, \
optSearchOption=1, truemotion=false, pnew=my_pnew, pzero=my_pzero, pglobal=my_pglobal, global=true, optPredictorType=my_oPT)

multi_vec_cpu2=RefineMV(multi_vec_cpu, super_cpu, last, my_thSAD, my_thSAD2, my_init_tr, my_refine_tr, my_thSCD, my_pel, my_trymany, my_pnew, my_pzero, my_pglobal, my_oPT, \
my_overlap, my_searchparam, my_IntOvlp, my_thCohMV)
#multi_vec_hybr2=RefineMV(multi_vec_hwa, super_cpu, last, my_thSAD, my_thSAD2, my_init_tr, my_refine_tr, my_thSCD, my_pel, my_trymany, my_pnew, my_pzero, my_pglobal, my_oPT, \
#my_overlap, my_searchparam, my_IntOvlp, my_thCohMV)
#multi_vec_hwa2=RefineMV_HW(multi_vec_hwa, super_hwa, last, my_thSAD, my_thSAD2, my_init_tr, my_refine_tr, my_thSCD, my_pel, my_thCohMV)

cpu2=MDegrainN(last,super_cpu, multi_vec_cpu2, my_refine_tr, thSAD=my_thSAD_mg, thSAD2=my_thSAD2_mg, mt=false, wpow=4, UseSubShift=1, thSCD1=my_thSCD, adjSADzeromv=0.7, \
adjSADcohmv=0.7, thCohMV=my_thCohMV, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=my_IntOvlp, MPBthSub=5, MPBthAdd=20, MPBNumIt=my_MPBNumIt, \
MPB_SPCsub=0.5, MPB_SPCadd=1.5, MPBthIVS=2200, showIVSmask=false)
#hwa2=MDegrainN(last,super_hwa, multi_vec_hwa2, my_refine_tr, thSAD=my_thSAD_mg, thSAD2=my_thSAD2_mg, mt=false, wpow=4, UseSubShift=1, thSCD1=my_thSCD, adjSADzeromv=0.7, \
#adjSADcohmv=0.7, thCohMV=my_thCohMV, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=my_IntOvlp, MPBthSub=5, MPBthAdd=20, MPBNumIt=my_MPBNumIt, \
#MPB_SPCsub=0.5, MPB_SPCadd=1.5, MPBthIVS=2200, showIVSmask=false).Weave().Subtitle("hwa2")
#hybr2=MDegrainN(last,super_hwa, multi_vec_hybr2, my_refine_tr, thSAD=my_thSAD_mg, thSAD2=my_thSAD2_mg, mt=false, wpow=4, UseSubShift=1, thSCD1=my_thSCD, adjSADzeromv=0.7, \
#adjSADcohmv=0.7, thCohMV=my_thCohMV, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=my_IntOvlp, MPBthSub=5, MPBthAdd=20, MPBNumIt=my_MPBNumIt, \
#MPB_SPCsub=0.5, MPB_SPCadd=1.5, MPBthIVS=2200, showIVSmask=false).Weave().Subtitle("hybr2")

super_3=MSuper(cpu2, mt=false, chroma=true, pel=2, hpad=8, vpad=8, levels=0, pelrefine=true)
pnew_mel=MDegrainN(cpu2,super_3, multi_vec_cpu2, my_refine_tr, thSAD=250, thSAD2=240, mt=false, thSCD1=350, pmode=1, TTH_thUPD=100, IntOvlp=3)
#pnew=cpu2

tr_s=6
super_s=MSuper(noproc, mt=false, chroma=true, pel=1, levels=0)
multi_vec_s=MAnalyse (super_s, multi=true, blksize=8, delta=tr_s, overlap=0, chroma=true, optSearchOption=1, mt=false, levels=0)
p_s=mdegrainn(last,super_s, multi_vec_s, tr_s, thSAD=350, thSAD2=340, IntOvlp=3,mt=false, wpow=4, thSCD1=400, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, 
\ MVLPFGauss=0.9, thMVLPFCorr=50, MPBthSub=10, MPBthAdd=20, MPBNumIt=2)

#Interleave(pnew_mel.Sharpen(0.7).Subtitle("pnew_mel"),cpu2.Sharpen(0.7).Subtitle("cpu2"), p_s.Sharpen(0.7).Subtitle("p_s"))
Interleave(cpu2.Sharpen(1.0).Subtitle("cpu2"), p_s.Sharpen(1.0).Subtitle("p_s"))

Prefetch(2) # for intel E7500 2 cores CPU
Using release https://github.com/DTL2020/mvtools/r.../r.2.7.46-a.22 . CPU processing only (the DX12-ME paths are commented out; they may be tested for quality/performance too where Win10 and the required hardware are available). I do not use the final MDegrainN with pmode=1 because it is mostly for better MPEG compression; for denoising and detail it may not show a visible difference in a static-frame comparison. It also really does seem to cause a crash somewhere in VirtualDub at some frames - I need to debug that later. Sharpen(1.0) was added to show small details better.

Also, this example of multi-generation MV refining is 'very simple', using only MDegrain itself as the denoise engine feeding the next-generation MAnalyse. More advanced scripts could try different intermediate denoisers (fft3d/bm3d/vsttempsmooth and others), or even a masked overlay of several denoise engines driven by some additional mask-generation algorithm. Also, the number of quality-defining params to adjust easily reaches 50 or even 100, so it is better to use computer-aided optimizing runs (zopti?). The example params are only quickly adjusted values based on some old test footage, and they are not best/placebo settings but somewhat performance-oriented: for better quality, all predictors may be recommended at all generations (oPT=0), together with searchparam > 2 for esa search. The penalties (pzero/pglobal/pnew) in on-CPU MAnalyse may also deserve individual adjustment at each generation (at least between generations 1 and 2). Full real-search overlap of blksize/2 may also be recommended (instead of the interpolated simulation that is just 'deblocking'). All of that will make processing several times slower still.

Old 3rd June 2023, 16:21   #2608  |  Link
Dogway
Sorry, I was a bit busy:

Quote:
Originally Posted by madey83 View Post
Regarding my CPU, it is a shame to admit, but it is an i7-10750H....

example:
https://mega.nz/file/8QgQSTbQ#Y2XXFl...3oSclOZDl1LoPs

this is my entire call, which gives me a bitrate of ~3500 Kbps, and yes, I know an HDR/DV source should use DGHDRtoSDR, but I can't properly find the white point of my source and I think some colors are shifted / don't match the source. I will get back to this next time I look at it...
No shame there, it's an i7; only its timing was a bit off due to the lack of AVX2. But CPU development, unlike GPU, has only gotten worse (diminishing returns).

You cannot match MaxFALL because DGHDRtoSDR() doesn't understand the source. The colors are not shifted; they are simply encoded differently, namely in IPTPQc2, Dolby's proprietary color model. This is good and bad at the same time: it is a huge improvement over the 40-year-old YCbCr color model (wrongly known as YUV). It represents the most efficient method of encoding video, so you get more (quality) with less (size), and decoding it has been my goal for the last 2 years. It's based on the IPT color model, a very good perceptually uniform color model, but adapted for HDR: IPT-PQ with 2% crosstalk.
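
The 2% crosstalk step itself is small (schematically, using the commonly cited reverse-engineered form rather than Dolby's official spec): a mixing matrix applied to the LMS signal before PQ encoding,

$$\begin{pmatrix}L'\\M'\\S'\end{pmatrix}=\begin{pmatrix}1-2c&c&c\\c&1-2c&c\\c&c&1-2c\end{pmatrix}\begin{pmatrix}L\\M\\S\end{pmatrix},\qquad c=0.02$$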

This is where the bad news arises. Some folks at ffmpeg and other places have been reverse-engineering this model, and part of it depends on something called the RPU, which is nothing more than embedded metadata. This metadata not only specifies the peak (programme or scene) white and the colorimetry, but also a few things that make it difficult to decode without hurdles, namely scene-based metadata transfer functions (MMR or polynomial, depending on the DV profile) and a trim pass (check the color tint jump from frame 191 to 192).

I managed to decode the static parts, that is, the HDR and the colors, but not the metadata, since I haven't researched enough how to do that. Probably DoVi_Baker or the DoVi_tools can feed the RPU into frame properties, but I haven't had time to look into it.
Code:
ConvertBits(16)

deep_resize(1920)

z_ConvertFormat(pixel_type="YUV444PS",colorspace_op="auto:auto:auto:l=>same:same:same:f",resample_filter_uv="bicubic",filter_param_a_uv=0.262015,filter_param_b_uv=0.368993,use_props=0) # use_props=0 speeds up

# Manually found the 1000 MasterLevel, and scale=1. GC (Gamut Compression) is also enabled here
ICtCp_to_RGB("709",1000,DoVi=true,GC=true,scale=1,tv_out=false)

# Tonemapping (LinWh=11.2 is default, but commonly 30 will make more sense and varies depending on title, better tonemappers will be implemented in the future)
TM_Hable(mode="Default",filmic=false,LinWh=31.2)

# avsresize or below block
z_ConvertFormat(pixel_type="YUV420P8",colorspace_op="rgb:linear:709:f=>709:709:same:l", approximate_gamma=false, use_props=0)

# Alternative to avsresize
CCTF("1886",false,tv_in=false,tv_out=false)
RGB_to_YUV()
[Comparison screenshots: DolbyVision IPTPQc2 vs. Rec709 - 1886]

This looks way better, and for most IPTPQc2 content it is enough, since not all titles use the RPU for the poly shaping or trim pass.
If you don't want to process and re-encode the clip, I made a GPU pixel shader to decode/watch the content on the fly.

For my GPU (GTX 1070) it's barely playable, with some frame drops here and there, but I suspect something is wrong in the MPC-HC shader pipeline, since I see two shader instances.

So, back to your original question: unless you want to keep the IPTPQc2 model, you would apply SMDegrain() to the output of the above script as usual. The following call worked well for me:

Code:
SMDegrain(4,thSAD=250,mode="temporalsoften",refinemotion=true,prefilter=2)
Old 3rd June 2023, 16:27   #2609  |  Link
Dogway
Quote:
Originally Posted by anton_foy View Post
So here are the two problem clips I'm using; the "man" clip is noisier and more problematic than the "sheep" clip.

IMAGES
Raw Clips UHD 55mb

Speed/Quality:
On my slow computer I get the fastest speed with smdegrain uhdhalf=true, but with much less detail / more blur than with it set to false.

The DTL script with DX12-ME is a little slower than the above, but a bit faster than smdegrain uhdhalf=false, although the DTL script denoises the least and leaves too much temporal instability. I think I am not using DTL's version correctly, since I did not understand all the new params, so I'm sorry if I'm doing it an injustice. Also, I should raise the tr, but then it gets too slow. Note that I do not use any prefiltering with DTL's version, only its internal mvlpfGauss.

The mdegrain script is pretty close to the smdegrain call but a little slower, yet it denoises more and still keeps a lot of detail.
I remember those clips from long ago. I don't remember my solution back then, but now I came up with something like this:

Code:
ConvertBits(16)
pre=CCD(13).ex_median("IQMV",Y=2,UV=3).ex_sbr(2)
SMDegrain(4,thSAD=270,mode="temporalsoften",UHDHalf=false,chroma=false,mfilter=ex_minblur(1,UV=2),refinemotion=true,prefilter=pre,LFR=true)

# If you want to remove chroma vertical stripes
src1=last
RatioResize(1/3.,"%",kernel="RobiSharp")
src=last
DeStripeV(3,4,2,UV=3)
ex_LFR(src,last,LFR=400,UV=3)
RatioResize(3,"%",kernel="RobiSharp")
MergeLuma(src1)

ConvertBits(8,dither=1)
I wouldn't use hqdn3d() as it's really slow. Your mdgr() looks legitimately fine in its denoise/sharpness balance, but for some reason it warps a little, which is annoying; maybe that is due to the prefiltering and not really the filter.

If my result looks a bit soft you can add ex_unsharp() afterwards to bring back some sharpness. And as you can already see, I used UHDHalf=false; I also noticed the output to be softer when it is disabled.
Old 3rd June 2023, 16:32   #2610  |  Link
Dogway
Quote:
Originally Posted by LeXXuz View Post
You know, you have a point there, I think.
SMDegrain is an excellent script, but after many months and hundreds of tested configurations, it always came down to the very same conclusion: mvtools is the weak spot here, and in its current state it couldn't satisfy my needs and expectations for a high-class motion-based denoising workflow on more demanding sources.

I'm almost at the point of saying I'd better buy more hard drives instead of wasting that money on energy costs for half-satisfying 'remastering' jobs of my collection, which I may regret one day.
Even having developed SMDegrain for years, I'm a purist: I like my grain in films, and whenever possible I like to keep things as in the source, with the exception of bad transfers/encodes/releases. It's in that case, where the grain is no more than a mess of blobs and other artifacts, that I go, degrain, and reapply some grain back ("remastering", basically), and to be honest I think the result is miles better than the source; one example being one of your last samples, the outdoor bath with the red sky, or the recent 16mm film clip.

I think MVTools2 is in a pretty good state. To be honest I don't want more knobs; I'm a firm believer that more knobs to adjust doesn't make a better filter but the contrary. A well-designed filter should have only a handful of important parameters to play with, which adjust to most if not all clips. Everything else should be reparametrized and best-fitted internally. And I think DTL's work on taking MVTools2 to the next level is superb, but it's still a long path, and I'm quite burned out after the last 2 years of work, currently only in maintenance mode. To be honest, everything related to MVTools2 feels kind of placebo or highly subjective improvement, so you will only see me paint in broad strokes regarding settings.

I do think AI is the way to go for the foreseeable future: a highly specialized AI-based MV filter whose output you can then feed to MVTools or whatever degraining tool. I can see GPU APIs offering some kind of support in this regard (hopefully), and if not, there are always good papers and people taking AI where it hasn't been reached yet. You could do this in AviSynth today by manually crafting a mini neural network; it's just not practical or worth it performance-wise.
Quote:
Originally Posted by DTL View Post
(as things get slower and slower compared with the 'singularity of digital civilization' from the too-fast, constantly accelerating progress expected in the early 2000s).
Yeah, for some reason young people (mostly) have no interest in getting their hands dirty, either literally or figuratively speaking - that is, in understanding how things work and taking them to the next level - at least judging from the demographics of Doom9, which are mostly millennials, Gen X, or Boomers. In Asia things are a bit better, but not by much.
Old 3rd June 2023, 20:13   #2611  |  Link
madey83
Quote:
Originally Posted by Dogway View Post
[full post #2608 quoted above]
Hi Dogway,

Thanks for explaining all these things regarding colors, but as I mentioned, I'm a rookie in AviSynth and all this is too hard for me to understand at the moment.

Based on your knowledge and experience, I would like to know if it is possible to use your awesome scripts to denoise Dolby Vision profile 5/8.1 without changing the color space like you did (I hope I understood that correctly).

If so, what should I change in my call?

Also, could you tell me how you found the thSAD value without many encodes?

Thanks :-)
Old 3rd June 2023, 20:37   #2612  |  Link
anton_foy
Quote:
Originally Posted by Dogway View Post
[full post #2609 quoted above]
Thanks man! Pretty stable temporally, even with tr=4, although the chroma still flickers a bit. Sharp too, so no need for further sharpening I think. I could not have thought of the destripe stuff - neat! I did not see the warping you mentioned in the mdgr script, but I will look closer at that. The prefiltering you put in works great too.
Old 3rd June 2023, 21:20   #2613  |  Link
Dogway
Quote:
Originally Posted by madey83 View Post
Based on your knowledge and experience, I would like to know if it is possible to use your awesome scripts to denoise Dolby Vision profile 5/8.1 without changing the color space like you did (I hope I understood that correctly).

If so, what should I change in my call?
So you want to undo the IPTPQc2 model, but still keep the original Rec2020 color space and the original PQ transfer function?

This is the code:
Code:
ConvertBits(16)

deep_resize(1920)
z_ConvertFormat(pixel_type="YUV444PS",colorspace_op="auto:auto:auto:l=>same:same:same:f",resample_filter_uv="bicubic",filter_param_a_uv=0.262015,filter_param_b_uv=0.368993,use_props=0)

ICtCp_to_RGB("2020",10000,DoVi=true,GC=false,scale=1,tv_out=false)

z_ConvertFormat(pixel_type="YUV420P16",colorspace_op="rgb:linear:2020:f=>2020:st2084:same:l", approximate_gamma=false, nominal_luminance=203.0, use_props=0)

# 600 was found manually
pre=DGHDRtoSDR(mode="pq",white=600,gamma=1/2.4,tm=1.0).ex_minblur(2,UV=3)
SMDegrain(4,thSAD=250,mode="temporalsoften",refinemotion=true,prefilter=pre)

ConvertBits(10,dither=1)
@anton_foy: Yes, the vertical chroma stripes might or might not be visible, so it depends. If you don't see them in normal playback, remove that block so the filter runs faster.
Old 4th June 2023, 09:42   #2614  |  Link
DTL
"I think MVTools2 is in a pretty good state, to be honest I don't want more knobs, and I'm a firm believer that more knobs to adjust is not a better filter but the contrary. A well designed filter should only have a few handful but important parameters to play with which adjust to most if not all clips. Everything else should be reparametrized and best-fitted internally."

There are several levels of processing and control:

1. Level of the compiled binary (or program source) - typically not reachable by the end user without a full program recompile/debug.
2. Level of the filter as exposed to the AVS environment (exposes all design control params).
3. Level of a script function from a script library (may have some params internally defaulted, or some not-very-complex math interconnecting them).
4. Level of the end user's script, for rare or everyday usage.
5. Level of GUI software using some AVS scripts internally.

Each level may expose fewer and fewer user-controllable params, hiding more and more of them behind internal defaults (at each level). But to find the best defaults and/or param interconnections we need many, many test/production runs (or computer-aided optimizer runs). In the old decades, when there were many users and developers, user activity around new features quickly revealed the best params, and some 'advanced/power' users provided a Level 3 API (a script function from a script library, like SMDegrain()) to the upper-level content-processing users. Now that some new processing modes and features have been added to mvtools, they exist only at Level 2 and need lots of testing to select the best params for typical use cases (if a single best param for all use cases is found, it can be hardcoded as an internal default at Level 1). Designing a Level 3 API like SMDegrain on top of the old mvtools feature core (mvtools-200x style) took about 10..15 years.
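
As a toy illustration (hypothetical function name and defaults, not an existing library function), a Level 3 wrapper over the Level 2 mvtools API is essentially this:

Code:
# SimpleDegrain: hypothetical Level 3 wrapper - the script user sees two knobs,
# everything else is hidden behind internal defaults of the lower levels.
function SimpleDegrain(clip src, int "tr", int "thSAD")
{
    tr    = default(tr,      6)
    thSAD = default(thSAD, 300)
    sup   = MSuper(src, pel=2, chroma=true, mt=false)
    vec   = MAnalyse(sup, multi=true, delta=tr, blksize=16, overlap=8, chroma=true, mt=false)
    return MDegrainN(src, sup, vec, tr, thSAD=thSAD, thSAD2=thSAD*9/10, mt=false)
}

Finding which params are safe to hide like this is exactly what needs the many test runs described above.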

So with the many (and not few) new features added to the mvtools core degrain filters, it may take a long time to design even a reasonably easy-to-use Level 3 API for script users. And the number of practical users has shrunk greatly, so the total user pool's capacity for designing Level 3 and higher APIs is significantly reduced. Expect long waits now, *as things become slower and slower*.

In the more active civilization of the 2000s, development was a dual-stream process:
1. Developers added new features to the compiled binary and provided the Level 2 API to AVS users.
2. AVS users ran tests and gave developers feedback about the best params, or even steered development resources toward the processing modes found best in practice, and so on.
This two-sided interaction made plugin development faster and more effective.

Most of the remaining and new users would like a Level 5, or maybe some Level 4, API, and find it very hard to understand how the lower levels work, let alone the processing algorithms in the compiled binaries. But the path to a nice Level 5 API for end users in freeware, open-source, non-commercial software may be very long. As far as I can see, typical Level 5 software like xvid4psp is not free at all (in its pro version).

Old 4th June 2023, 19:38   #2615  |  Link
madey83
Quote:
Originally Posted by Dogway View Post
[full post #2613 quoted above]
@Dogway,

thank you for that.
Is this part static, with no need to change it regardless of the source?
z_ConvertFormat(pixel_type="YUV444PS",colorspace_op="auto:auto:auto:l=>same:same:same:f",resample_filter_uv="bicubic",filter_param_a_uv=0.262015,filter_param_b_uv=0.368993,use_props=0)

ICtCp_to_RGB("2020",10000,DoVi=true,GC=false,scale=1,tv_out=false)

z_ConvertFormat(pixel_type="YUV420P16",colorspace_op="rgb:linear:2020:f=>2020:st2084:same:l", approximate_gamma=false, nominal_luminance=203.0, use_props=0)

Also, "# 600 was found manually" - could you guide me on how I should find that per source?

And is there any way to check the thSAD value without encoding...?
Old 4th June 2023, 19:53   #2616  |  Link
Dogway
Quote:
Originally Posted by madey83 View Post
Is this part static, with no need to change it regardless of the source?
Code:
z_ConvertFormat(pixel_type="YUV444PS",colorspace_op="auto:auto:auto:l=>same:same:same:f",resample_filter_uv="bicubic",filter_param_a_uv=0.262015,filter_param_b_uv=0.368993,use_props=0)

ICtCp_to_RGB("2020",10000,DoVi=true,GC=false,scale=1,tv_out=false)

z_ConvertFormat(pixel_type="YUV420P16",colorspace_op="rgb:linear:2020:f=>2020:st2084:same:l", approximate_gamma=false, nominal_luminance=203.0, use_props=0)
Also, "# 600 was found manually" - could you guide me on how I should find that per source?

And is there any way to check the thSAD value without encoding...?
Yes, that part is a roundtrip concerning HDR, so it only decodes IPTPQc2. 600 was found manually because I don't know what units DGHDRtoSDR() uses for its 'white' argument; it should be related to MaxFALL, I guess, but I don't know.

'thSAD' was also found manually, under inspection in AvsPmod.
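
If it helps, this is the kind of throwaway comparison snippet (hypothetical, for illustration only) that can be stepped through in AvsPmod to pick a value by eye:

Code:
# interleave a few thSAD candidates and step frame by frame in AvsPmod
src = last
Interleave(src.Subtitle("source"), \
           src.SMDegrain(4, thSAD=200, mode="temporalsoften", refinemotion=true, prefilter=2).Subtitle("thSAD=200"), \
           src.SMDegrain(4, thSAD=250, mode="temporalsoften", refinemotion=true, prefilter=2).Subtitle("thSAD=250"), \
           src.SMDegrain(4, thSAD=300, mode="temporalsoften", refinemotion=true, prefilter=2).Subtitle("thSAD=300"))
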
Old 4th June 2023, 20:56   #2617  |  Link
madey83
Quote:
Originally Posted by Dogway View Post
Yes, that part is a roundtrip concerning HDR, so it only decodes IPTPQc2. 600 was found manually because I don't know what units DGHDRtoSDR() uses for its 'white' argument; it should be related to MaxFALL, I guess, but I don't know.

'thSAD' was also found manually, under inspection in AvsPmod.
I've never done this, but when I try to open sample.mkv in AvsPmod with your script, I get this:

https://imgur.com/a/dXfPcrq

Old 4th June 2023, 21:21   #2618  |  Link
Dogway
It seems you are not loading the RGTools plugin. MVTools2, MaskTools2, etc. are also missing...

By the way, I realized you can also downscale with z_ConvertFormat(), so performance can be increased further.
Code:
z_ConvertFormat(1920,1080,pixel_type="YUV444PS",colorspace_op="auto:auto:auto:l=>same:same:same:f",use_props=0,\
resample_filter="bicubic",filter_param_a=-0.990000,filter_param_b=0.06000)
Old 5th June 2023, 08:41   #2619  |  Link
madey83
Quote:
Originally Posted by Dogway View Post
[full post #2618 quoted above]
Hi Dogway,

thank you for your patience with me and for your help.

I was able to open example.mkv in AvsPmod.

Regarding tr and thSAD: did you manually try different values, or is there an option in AvsPmod to calculate an average value for the entire file?

Sorry for so many questions, but all this is new to me...

Old 5th June 2023, 17:13   #2620  |  Link
Dogway
Quote:
Originally Posted by madey83 View Post
Regarding tr and thSAD: did you manually try different values, or is there an option in AvsPmod to calculate an average value for the entire file?
No, just inspection and experience/preference. If you use good prefilters you can lower thSAD.