18th February 2023, 09:36   #201
DTL
I understand that having 50+ different adjustments in mvtools for simple denoising work is not very easy for the end user. This is still in development, and new complex processing like multi-pass blending (only about half of all the existing ideas are implemented - spatial checks only, nothing temporal yet) requires an additional (and not very small) set of params. And it cannot be separated into an external 'filter', because it is all runtime single-pass-over-the-frame processing, for the best performance.

I had expected that some active users would find time to make tests and report good sets of settings for some use cases. But as time shows, there are fewer and fewer users of AVS, so it may take longer and longer before someone puts in the time to make a wrapper script with 'presets' (like SMDegrain). In the 202x years the overall situation on this planet and around my living place has changed significantly (with a much worse outlook), so I currently put more time into preparing for a possibly not very nice future and have less time to put into the development of this completely free project. So I already had the idea of writing to pinterf about fixing some state of development and listing the most useful features implemented more or less completely in 2021..2022 (not the ones still left in a debug state, like optSearchOption=3 and 4 - multi-block search for on-CPU processing in MAnalyse), for transfer to his 'main branch' and finally making an 'official' release of a version after 2.7.45.

Now that I see a working MC RIFE version for tr up to 12, I have new ideas about adding one more mode to MDegrainN so it can work as a blending engine for an external source of motion-compensated frames, providing protection from too-bad blends using the same SAD-based (or any other implemented dissimilarity metric) block analysis method. Though performance on a GTX 1060 card with tr=12 is about 20x slower compared with DX12-ME and the current MDegrainN.
18th February 2023, 10:48   #202
anton_foy
Quote:
Originally Posted by DTL View Post
(full quote of post #201 snipped)
Yes! I have to say it: you are really brilliant, DTL!!! Plenty of progress in such a small amount of time! Now I have to try your implementation of diagonal blocks for MDegrain. Seriously, I think many of your implementations are things users here on the forum (and elsewhere) have wanted for a long time. Best cheers to you again!!!

EDIT: maybe your "best" and "fastest" settings should be put as "DTL=TRUE/FALSE"?

Last edited by anton_foy; 18th February 2023 at 10:51.
18th February 2023, 20:37   #203
magnetite
No worries DTL. I appreciate what you do. Your work has helped me a lot.

If you need some testing done, let me know. I'm not super experienced with testing, but I can help with some things.

Last edited by magnetite; 18th February 2023 at 20:49.
27th February 2023, 13:38   #204
DTL
I found a VS attempt to use NVOF for motion compensation - https://bitbucket.org/mystery_keeper...ter/readme.txt . I am trying to e-mail Asd-g about porting it to AVS for testing (not sure if the solution from https://stackoverflow.com/questions/...ithub-com-user will work). I am also not sure what the minimum NVIDIA chip required for NVOF to work is.

Support from NVIDIA looks very limited: https://forums.developer.nvidia.com/...nsation/218981

Last edited by DTL; 27th February 2023 at 13:41.
27th February 2023, 13:49   #205
anton_foy
Quote:
Originally Posted by DTL View Post
(full quote of post #204 snipped)
Wow, amazing, DTL! Great to see - although it is pretty rough that support is so limited. But is it ported and working for VS now?
27th February 2023, 13:55   #206
kedautinh12
I don't think so. I remember Asd-g's GPU doesn't support CUDA (which is NVIDIA-only), so he only releases plugins that support Vulkan.
27th February 2023, 14:22   #207
DTL
It is not just NVIDIA-only - it may also be limited to new enough (and expensive) cards:
https://docs.nvidia.com/video-techno...-me/index.html
NVIDIA Turing and above GPUs
27th February 2023, 15:51   #208
Reel.Deel
Quote:
Originally Posted by DTL View Post
I found a VS attempt to use NVOF for motion compensation - https://bitbucket.org/mystery_keeper...ter/readme.txt .
The author of the plugin said it sucks: https://github.com/dubhater/vapoursy...ools/issues/60
28th February 2023, 19:57   #209
DTL
I do not see tests with real-world shot content from the author - only tests with a black background on the NVIDIA site.

The definitely good side of NVIDIA's optical flow motion compensation is that it is the product of a professional full-time development team (I hope) at a hardware manufacturer. I expect it to have better quality compared with the 'simple' motion search engine of the MPEG encoder that also sits in NVIDIA chips. It may also use much more compute hardware for better quality, and it may have some settings to tweak.

So the plugin is only a thin interface from the optical flow API to AVS and is expected to be simple and stable. With updates to hardware and drivers it is also expected to get better in the future, while still working with AVS via the same plugin.
Though I still do not have Turing or later hardware to test with.
2nd March 2023, 09:06   #210
anton_foy
I found https://github.com/open-mmlab/mmdetection3d and https://github.com/bharathgs/Awesome-pytorch-list, which lists the mm3d optical flow .pth files. Maybe something interesting?

Edit: this link describes converting to ONNX for use with the mlrt plugin: https://github.com/open-mmlab/mmdete...ytorch2onnx.md

A lighter (faster?) approach: https://github.com/twhui/LiteFlowNet

Last edited by anton_foy; 2nd March 2023 at 10:12.
2nd March 2023, 11:48   #211
DTL
There is also the newer https://github.com/twhui/LiteFlowNet3 .

As I see it, the main weak point of all these implementations is that they use only 2 input images (I1 I2, or I0 I1). That is a very poor approach for the noise-deformed real image sets we have as input to a temporal denoising process. Many motion-estimation / optical-flow algorithms may have been designed to work on clean images.

With only 2 input images, the algorithm cannot distinguish truly static (or slowly moving) low-contrast, low-detail areas deformed only by noise, and it starts to produce lots of erroneous motion vectors (sometimes of very large length). A better approach to temporal denoising of motion pictures needs to take as large a set of input frames as possible into the analysis, to try to understand the shape and position of objects across a sequence of frames. Maybe some enthusiast could compose and send a message (e-mail?) to the designers of the many known motion-estimation or optical-flow engines, requesting a multi-input-frame version that works better with image sequences of a single scene's objects that are more or less significantly degraded by natural noise. On GitHub this could be done by opening a feature-request issue in each repository. And the training of the models should be performed on datasets with added natural (Gaussian) noise, with the clean sources before noise was added as the target result.

Last edited by DTL; 2nd March 2023 at 11:52.
2nd March 2023, 12:32   #212
anton_foy
Here he is asking about using more input images with optical flow for video. I don't know whether it corresponds to what you are looking for, though: https://discuss.pytorch.org/t/classi...a-videos/68922

Edit: Maybe still need to prefilter before optical flow then?

Last edited by anton_foy; 2nd March 2023 at 12:42.
2nd March 2023, 13:23   #213
DTL
Quote:
Originally Posted by anton_foy View Post
Edit: Maybe still need to prefilter before optical flow then?
No, no. The motion search must be 'intelligent' itself, and based only on the true, unmodified source frames (they carry the most useful, undistorted data). This is the same idea that was started with the MPB feature (but that still does not go into a 'temporal' process - it works on the single current frame only).

The process is really simple, but resource-consuming:

Imagine you have some low-contrast image in several copies printed on paper. Each copy has added random Gaussian-distributed noise with zero mean at each image point in the 'temporal' dimension (i.e., across the different printed copies).

Now you cut each image into small blocks (like 8x8 samples on a regular grid) with scissors and put the pieces of each copy into a separate box (marked 'pieces of copy 1', 'pieces of copy 2', ..., 'pieces of copy N'). The cut pieces go into each box in randomly shuffled order, like a broken mosaic.

Next you call an AI robot with a Tera/PetaFLOPS engine and gigabytes of memory into your room and ask it to take the most similar-looking pieces from each box and arrange them at the same location on the arrangement grid.

After solving that, ask it to calculate the Average() of all the pieces at each grid location and output the resulting denoised image.
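
As a side note on why the averaging step works (standard zero-mean noise statistics, my addition, not from the original post): if each of the $N$ aligned pieces is the same clean block $B$ plus independent zero-mean noise $n_i$ of standard deviation $\sigma$, the average keeps the block and shrinks the noise:

$$\frac{1}{N}\sum_{i=1}^{N}(B + n_i) = B + \frac{1}{N}\sum_{i=1}^{N} n_i, \qquad \sigma_{\mathrm{avg}} = \frac{\sigma}{\sqrt{N}}$$

So a pool of 13 frames (tr=6, as in the scripts below) cuts the temporal noise by a factor of about 3.6 - but only for pieces that were matched correctly.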

For denoising a moving scene, you print the noisy sequential frames of a cutscene onto different sheets of paper, cut them up, and ask the robot to find equal-looking parts and produce a single denoised frame at the output. To get a correctly moving denoised picture sequence, you keep the 'current' frame out of the boxes, put its cut pieces onto the arrangement grid as the reference, and ask the robot to find the most similar-looking pieces in all the other boxes (the before and after frames).

That 'find the pieces most similar to the current frame' task is the key to the process. But in real life you do not have a clean, noise-free 'current' frame as the ideal reference for the search. So the better process is iterative in 'time':

The robot creates a first version of the denoised frame using motion compensation onto the 'current noisy frame'. Next it creates several first-generation denoised frames around the current one and checks the resulting motion vectors that were used (for real-life action, the motion vectors are not random - their Fourier spectrum along the time axis is not white noise). If the MVs do not look good, they are corrected in some way (like MVLPF, in some 'linear processing' way, for example). After correction of the MVs, a new set of frames is created:

'denoised frames, generation 1'. Next, for each frame, new MVs are searched for and applied using 'denoised frames, generation 1' as the reference, with the initial source frames as the source. Then it is analysed again whether the motion looks 'natural' and whether all blocks found as similar across the sequence of frames look mostly equal (so the temporal noise with zero mean is removed).

A truly 'esa'-style exhaustive search/process would really use no MV search at all, but a brute-force try of every possible MV for the current block in a frame (the total count of possible MVs is limited) through the degraining (averaging) iterations, analysing the target conditions over the total cutscene frame pool for the best match.

Target conditions of denoising for the frame pool of a cutscene (a sequential set of frames from a single scene or single movie):
1. All MVs look natural (the main energy components of the Fourier spectrum are at low enough frequencies; the energy spectrum is not flat like that of random noise, for example).
2. All found objects in the frame pool look mostly equal in each frame (lowest dissimilarity metric - lowest SAD, highest SSIM and so on), that is, after the backward transforms for each frame; a compact formalization follows below.
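
One compact way to write condition 2 (my notation, not from the post): for a block at grid position $b$ in the current frame $0$, with candidate vectors $v_i$ into the other frames of the pool, the refinement seeks

$$\{\hat{v}_i\} = \arg\min_{\{v_i\}} \sum_{\substack{i=-tr \\ i \neq 0}}^{tr} \mathrm{SAD}\big(B_0(b),\ B_i(b+v_i)\big),$$

subject to condition 1 on the $v_i$ sequences over time.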

So a neural network would be expected to perform very multi-pass, iterative processing over a supplied input pool of frames (+-tr from the current frame, for a long movie) to create each denoised or motion-compensated output frame.

The best final MVs for the final MC and blending of the current frame are the product of a many-generation search over a large set of frames - not of the simple 2 input frames used by typical motion search engines.

Motion search and denoising form an iterative, multi-generation process: better denoising provides better MVs, and better MVs provide better denoising. But all iterations must use the same source set of frames as the 'mostly true reference', to prevent errors from accumulating in both the denoising and the MVs across the generations.

Last edited by DTL; 2nd March 2023 at 13:46.
2nd March 2023, 19:48   #214
DTL
I made a test of the multi-generation MAnalyse+MDegrainN search with the version from the https://github.com/DTL2020/mvtools/r.../r.2.7.46-a.19 release (it has the SuperCurrent optional input for MAnalyse):

The script is:
Code:
tr=6
super=MSuper(last,chroma=true, mt=false, pel=4)
src=last
mv_1_g0=MAnalyse(super, delta=1, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
show1=MShow(super, mv_1_g0).Subtitle("input MAnalyse")

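# Generation 1: full-tr search over the original (noisy) super clip, then degrain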
multi_vec=MAnalyse(super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
g1=MDegrainN(last,super, multi_vec, tr, thSAD=250, thSAD2=240, mt=false, wpow=4, thSCD1=500, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=3)

gen1=g1

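# Generation 2: rebuild the super clip from the gen-1 degrained output and search it,
# while SuperCurrent=super keeps the untouched source super clip as the 'current' reference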
super_g1=MSuper(gen1,chroma=true, mt=false, pel=4)
multi_vec_g2=MAnalyse (super_g1, SuperCurrent=super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)

mv_1_g1=MAnalyse(super_g1, SuperCurrent=super, delta=1, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
show2=MShow(super_g1, mv_1_g1).Subtitle("gen1 MAnalyse")

g2=MDegrainN(src,super, multi_vec_g2, tr, thSAD=250, thSAD2=240, mt=false, wpow=4, thSCD1=500, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=3)

gen2=g2

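# Generations 3..5 repeat the same refinement: super clip from the latest degrained output,
# search against SuperCurrent=super, degrain the original src with the refined vectors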
super_g2=MSuper(gen2,chroma=true, mt=false, pel=4)
multi_vec_g3=MAnalyse(super_g2, SuperCurrent=super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)

mv_1_g2=MAnalyse(super_g2, SuperCurrent=super, delta=1, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
show3=MShow(super_g2, mv_1_g2).Subtitle("gen2 MAnalyse")

g3=MDegrainN(src,super, multi_vec_g3, tr, thSAD=250, thSAD2=240, mt=false, wpow=4, thSCD1=500, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=3)

gen3=g3

super_g3=MSuper(gen3,chroma=true, mt=false, pel=4)
multi_vec_g4=MAnalyse(super_g3, SuperCurrent=super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)

mv_1_g3=MAnalyse(super_g3, SuperCurrent=super, delta=1, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
show4=MShow(super_g3, mv_1_g3).Subtitle("gen3 MAnalyse")

g4=MDegrainN(src,super, multi_vec_g4, tr, thSAD=250, thSAD2=240, mt=false, wpow=4, thSCD1=500, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=3)

gen4=g4

super_g4=MSuper(gen4,chroma=true, mt=false, pel=4)
multi_vec_g5=MAnalyse(super_g4, SuperCurrent=super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)

mv_1_g4=MAnalyse(super_g4, SuperCurrent=super, delta=1, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
show5=MShow(super_g4, mv_1_g4).Subtitle("gen4 MAnalyse")

g5=MDegrainN(src,super, multi_vec_g5, tr, thSAD=250, thSAD2=240, mt=false, wpow=4, thSCD1=500, adjSADzeromv=0.5, adjSADcohmv=0.5, thCohMV=16, MVLPFGauss=0.9, thMVLPFCorr=50, adjSADLPFedmv=0.9, IntOvlp=3)

gen5=g5

super_g5=MSuper(gen5,chroma=true, mt=false, pel=4)
#multi_vec_g5=MAnalyse(super_g5, SuperCurrent=super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=2)

mv_1_g5=MAnalyse(super_g5, SuperCurrent=super, delta=1, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
show6=MShow(super_g5, mv_1_g5).Subtitle("gen5 MAnalyse")



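# Tile the six MShow outputs into a 2x3 grid to compare the MV field of each generation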
row1=StackHorizontal(show1, show2)
row2=StackHorizontal(show3, show4)
row3=StackHorizontal(show5, show6)
StackVertical(row1, row2, row3)


All the MDegrainN calls accept the same input 'current' and super clips; only the MAnalyse in each generation takes one source from the previous generation's MDegrainN output and one from the input super clip.

The result shows that across several generations of MV refinement, as the generation number increases, the number of significantly erroneous MVs in low-contrast, mostly-noise static areas slowly decreases. The effect is most visible in the first 1..2 generations.

Last edited by DTL; 2nd March 2023 at 19:58.
3rd March 2023, 08:08   #215
anton_foy
Quote:
Originally Posted by DTL View Post
(full quote of post #214, including the test script, snipped)
So this is like prefiltering with MDegrain itself? Refining the MVs by using denoising/filtering only for MAnalyse.
Edit: Would the "SuperCurrent" be used to compare against the prior super clip (prefiltered with MDegrain) to get a better estimation? But would MRecalculate not do something similar when using the original super clip,
like this:

Code:
    super     = MSuper()
    Vec       = MAnalyse(super,...)
    prefilt   = MDegrainN(last,super,...)

    superfilt = MSuper(prefilt)
    Vec1      = MAnalyse(superfilt,...)
    Mvec      = MRecalculate(super,Vec1)
    MDegrainN(src,super,mvec,...)

Last edited by anton_foy; 3rd March 2023 at 09:55.
3rd March 2023, 12:23   #216
DTL
"Would the "SuperCurrent" be used to compare against the prior superclip (prefiltered with Mdegrain) to get a better estimation? "

I think not. The whole idea of the 2-input MAnalyse is to search a 'previous generation degrained' block against the full, true, undistorted (but still grainy) input block. This should protect against the rapid accumulation of errors that can come from 'prefilters'.

If you would like to use MRecalculate, the SuperCurrent input could easily be added there too. But typically MRecalculate is used to refine the block size or to use different search params; for a simple multi-generation search, a single MAnalyse is enough. Also, in each generation the search params (and those of the intermediate and final MDegrain) may be changed. A sketch of the conventional MRecalculate usage follows below.
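
For reference, the conventional single-super-clip refinement mentioned above looks roughly like this - a minimal sketch with illustrative parameter values (standard mvtools usage, not the 2-input SuperCurrent scheme discussed here):

Code:
super = MSuper(last, pel=2)
# coarse search with large blocks
vec   = MAnalyse(super, isb=false, delta=1, blksize=32, overlap=16)
# refine the same vectors on the same super clip with smaller blocks
vec16 = MRecalculate(super, vec, blksize=16, overlap=8, thSAD=200)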

Old mvtools only allows searching inside a single input clip (or you would need to try merging the different clips into a single input with something like Interleave() and check whether the 'src' and 'ref' frames of the different sources are fetched correctly). Searching inside a single clip causes errors to accumulate after 'prefiltering'.
3rd March 2023, 13:55   #217
anton_foy
Quote:
Originally Posted by DTL View Post
The whole idea of the 2-input MAnalyse is to search a 'previous generation degrained' block against the full, true, undistorted (but still grainy) input block. This should protect against the rapid accumulation of errors that can come from 'prefilters'.
Sorry, I am a little slow. I just understood it as prefiltering with MDegrain and only using that data to compare against the unaltered data, via this line:

Code:
multi_vec_g2=MAnalyse (super_g1, SuperCurrent=super, multi=true, delta=tr, search=3, searchparam=2, overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, pnew=0, pzero=0, levels=4)
But anyway, great to see the progress, even if I did not understand it correctly.

Edit: Yes, I thought wrong about MRecalculate. I think I now understand what you mean with the last line of your explanation.

Last edited by anton_foy; 3rd March 2023 at 14:03.
3rd March 2023, 17:46   #218
DTL
Playing with the settings of MAnalyse, I found the trymany option, which is disabled by default. After enabling it, it looks like the best search mode, quickly providing a stable enough MV field in the multi-generation search, even with the thresholding penalties and new predictors set to zero. Without trymany, even over 5 generations no convergence to a stable MV field is observed (maybe some quantization noise plays a role in the instability).

But it currently works well only with the old SAD dissimilarity metric, and it crashes with a divide-by-zero error somewhere when using the VIF metric - that needs debugging and a new version.

Though enabling trymany in MAnalyse significantly degrades performance, because it enables a refining search around each predictor (and the total number of predictors is around 6 or 7). So it should probably be enabled only when the highest MV quality is required; see the sketch below.
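
For anyone who wants to try it: trymany is just an MAnalyse flag, so a minimal sketch based on the test script above (same illustrative parameters) is simply:

Code:
super=MSuper(last, chroma=true, mt=false, pel=4)
# trymany=true runs a refining search around every predictor: slower, but more stable MVs
multi_vec=MAnalyse(super, multi=true, delta=6, search=3, searchparam=2, trymany=true, \
          overlap=0, chroma=true, mt=false, optSearchOption=1, truemotion=false, levels=4)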

Multi-generation search with trymany enabled and the default SAD dissimilarity metric for the best-matching-block search:
3rd March 2023, 18:01   #219
anton_foy
Wow, great find! Have you considered trying Zopti with your version of mvtools? I can't wait to try your new version.
8th March 2023, 11:41   #220
DTL
An important note for multi-generation MV refinement: the thSAD for MDegrain needs to be significantly reduced after the 1st generation of MAnalyse that uses the first-generation MDegrain output. This is because the SAD of a mostly cleaned 'current' block against an input noisy block becomes about 2 times lower. So the thSAD for the intermediate generations and the final output MDegrain needs to be reduced to about 0.5 of the initial value.

So a better multi-generation MV refinement looks something like:
Code:
tr=6
init_thSAD=400

s1=MSuper(last)
mv1=MAnalyse(s1, multi=true, delta=tr)
dg1=MDegrainN(last, s1, mv1, tr, thSAD=init_thSAD)

gen1_thSAD=Int(init_thSAD/1.8) # divisor - subject to Zopti refinement?

s2=MSuper(dg1)
mv2=MAnalyse(s2, SuperCurrent=s1, multi=true, delta=tr) # or (s1, SuperCurrent=s2) - maybe no visible difference
dg2=MDegrainN(last, s1, mv2, tr, thSAD=gen1_thSAD)
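
With init_thSAD=400 as above, the second-generation threshold comes out to Int(400/1.8) = 222, in line with the 'about 0.5 of the initial value' rule.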
It was also found that enabling trymany=true in MAnalyse, while it refines zero MVs well, may also add some significantly bad MVs. So the plan is to add flags for the predictors used in trymany mode, to allow skipping possibly bad predictors and to make performance visibly better.

Last edited by DTL; 8th March 2023 at 11:44.