Old 18th November 2016, 21:02   #421  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: void
Posts: 2,633
SATD is broken yet again; it's not even being activated.
Code:
import vapoursynth as vs
core = vs.get_core()

clp = rule6
clp = core.fmtc.bitdepth(clp,bits=16,fulls=False,fulld=True)
clp = core.std.ShufflePlanes(clp,0,vs.GRAY)

sup = core.mv.Super(clp)
bv1a = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=5,isb=True)
bv2a = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=5,isb=True)
bv3a = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=5,isb=True)
fv1a = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=5,isb=False)
fv2a = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=5,isb=False)
fv3a = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=5,isb=False)

bv1b = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=0,isb=True)
bv2b = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=0,isb=True)
bv3b = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=0,isb=True)
fv1b = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=0,isb=False)
fv2b = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=0,isb=False)
fv3b = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=0,isb=False)

clpa = core.mv.Degrain3(clp, sup, bv1a, fv1a, bv2a, fv2a, bv3a, fv3a,thscd1=16320,thsad=2000)
clpb = core.mv.Degrain3(clp, sup, bv1b, fv1b, bv2b, fv2b, bv3b, fv3b,thscd1=16320,thsad=2000)

clp = core.std.Expr([clpa,clpb],"x y - abs 10000 *")

clp.set_output()
I got a blank black clip, which means dct=5 is actually doing SAD...
feisty2 is offline   Reply With Quote
Old 9th February 2017, 10:01   #422  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: void
Posts: 2,633
I added floating-point support to MMask; you can back-port it to your branch and make your version of MMask work at higher precision if you want to.

https://github.com/IFeelBloated/vapo...src/MVMask.cpp

Last edited by feisty2; 9th February 2017 at 10:21.
feisty2 is offline   Reply With Quote
Old 15th February 2017, 17:21   #423  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Finland
Posts: 5,718
I've been trying to track a weird problem in which VapourSynth Editor will slowly use up all the memory after refreshing the script enough times. I use a custom denoising function to process the videos.

I was able to find out that feeding an external clip to mv.Super causes a memory leak:

Code:
Core freed but 12 filter instances still exist
Core freed but 12 filter instances still exist
Core freed but 458496000 bytes still allocated in framebuffers
Core freed but 458496000 bytes still allocated in framebuffers
This is the part where it happens:
Code:
prefilt = core.dfttest.DFTTest(feed, tbsize=1, sigma=5, sigma2=5, sbsize=16, sosize=8)

pelmdg = core.fmtc.resample(clip=clp, scale=2, kernel='spline64', center=False)
pelprefilt = core.fmtc.resample(clip=prefilt, scale=2, kernel='spline64', center=False)

superanalyse = core.mv.Super(clp, pel=2, chroma=True, rfilter=4, pelclip=pelprefilt)
supermdg = core.mv.Super(clp, pel=2, chroma=True, rfilter=4, levels=1, pelclip=pelmdg)
One question: is it even sensible to use a denoised external super clip upsized with a sharp method? Or is it generally better to let the internal functions do things?
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is offline   Reply With Quote
Old 15th February 2017, 17:55   #424  |  Link
feisty2
I'm Siri
 
 
Join Date: Oct 2012
Location: void
Posts: 2,633
Quote:
Originally Posted by Boulder View Post
I've been trying to track a weird problem in which VapourSynth Editor will slowly use up all the memory after refreshing the script enough times. I use a custom denoising function to process the videos.

I was able to find out that feeding an external clip to mv.Super causes a memory leak:

Code:
Core freed but 12 filter instances still exist
Core freed but 12 filter instances still exist
Core freed but 458496000 bytes still allocated in framebuffers
Core freed but 458496000 bytes still allocated in framebuffers
This is the part where it happens:
Code:
prefilt = core.dfttest.DFTTest(feed, tbsize=1, sigma=5, sigma2=5, sbsize=16, sosize=8)

pelmdg = core.fmtc.resample(clip=clp, scale=2, kernel='spline64', center=False)
pelprefilt = core.fmtc.resample(clip=prefilt, scale=2, kernel='spline64', center=False)

superanalyse = core.mv.Super(clp, pel=2, chroma=True, rfilter=4, pelclip=pelprefilt)
supermdg = core.mv.Super(clp, pel=2, chroma=True, rfilter=4, levels=1, pelclip=pelmdg)
no, it should be
Code:
superanalyse = core.mv.Super(prefilt, pel=2, chroma=True, rfilter=4, pelclip=pelprefilt)
Quote:
One question: is it even sensible to use a denoised external super clip upsized with a sharp method? Or is it generally better to let the internal functions do things?
The difference is generally very small, but yes, it is sensible; NNEDI is (kind of) noticeably better than the internal functions.
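For example, a rough sketch of what a sharper external super clip could look like (assuming the nnedi3_resample helper script is installed; clp and prefilt are placeholders for the clip to degrain and a prefiltered copy of it, and note that the pelclip must keep the original samples on the even pixel grid, so the exact shift handling depends on your resizer):
Code:
import vapoursynth as vs
import nnedi3_resample as edi  # helper script, assumed to be available

core = vs.get_core()

# clp: the clip to degrain; prefilt: a denoised copy of it (placeholders here).
# Double the prefiltered clip with NNEDI3 and hand it to Super as the pel=2
# refinement instead of letting Super upsample internally.
pelprefilt = edi.nnedi3_resample(prefilt, prefilt.width * 2, prefilt.height * 2)

superanalyse = core.mv.Super(prefilt, pel=2, rfilter=4, pelclip=pelprefilt)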
feisty2 is offline   Reply With Quote
Old 15th February 2017, 18:40   #425  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Finland
Posts: 5,718
Quote:
Originally Posted by feisty2 View Post
no, it should be
Code:
superanalyse = core.mv.Super(prefilt, pel=2, chroma=True, rfilter=4, pelclip=pelprefilt)
Sorry, I got that the wrong way around when investigating. The reference clip is 'prefilt' in the function itself.

Quote:
The difference is generally very small, but yes, it is sensible; NNEDI is (kind of) noticeably better than the internal functions.
OK, I think I'll keep things intact for now.
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is offline   Reply With Quote
Old 15th February 2017, 20:41   #426  |  Link
jackoneill
unsigned int
 
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Quote:
Originally Posted by Boulder View Post
I've been trying to track a weird problem in which VapourSynth Editor will slowly use up all the memory after refreshing the script enough times. I use a custom denoising function to process the videos.

I was able to find out that feeding an external clip to mv.Super causes a memory leak:
Yes, there is a memory leak in Super. Does this DLL work better?
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 16th February 2017, 16:48   #427  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Finland
Posts: 5,718
Works great, no more leaks. Thanks a lot!
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is offline   Reply With Quote
Old 12th April 2017, 22:57   #428  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
jackoneill: the fixed libmvtools.dll (with the memory-leak fix) has some unwanted dependencies, like libgcc_s_seh1.dll and libstdc++6.dll.
Since I don't know what the fix for the memory leak is, and I don't see any fix for it on GitHub ( https://github.com/dubhater/vapoursynth-mvtools ) either, recompiling from the same unfixed source would not make much sense.

Could you please make a fixed (no-memory-leak) version of it?

Last edited by Pat357; 12th April 2017 at 23:20.
Pat357 is offline   Reply With Quote
Old 14th April 2017, 14:41   #429  |  Link
jackoneill
unsigned int
 
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Quote:
Originally Posted by Pat357 View Post
jackoneill: the fixed libmvtools.dll (with the memory-leak fix) has some unwanted dependencies, like libgcc_s_seh1.dll and libstdc++6.dll.
Since I don't know what the fix for the memory leak is, and I don't see any fix for it on GitHub ( https://github.com/dubhater/vapoursynth-mvtools ) either, recompiling from the same unfixed source would not make much sense.

Could you please make a fixed (no-memory-leak) version of it?
Sorry about that. I totally forgot.

Here is v18.
Code:
   * Super: Fix memory leak when pelclip is used.
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 24th April 2017, 22:21   #430  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
The MVTools documentation says:
"Block sizes of 64x32, 64x64, 128x64, and 128x128 are supported."

Are smaller blksize values (like blksize=16) just ignored? I get no error.
Why support only the bigger blocks?
Would you consider also supporting blksize=16x16, 32x16, 32x32, or even 8x8, 16x8?
Pat357 is offline   Reply With Quote
Old 24th April 2017, 22:30   #431  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 203
Quote:
Originally Posted by Pat357 View Post
The MVTools documentation says:
"Block sizes of 64x32, 64x64, 128x64, and 128x128 are supported."

Are smaller blksize values (like blksize=16) just ignored? I get no error.
Why support only the bigger blocks?
Would you consider also supporting blksize=16x16, 32x16, 32x32, or even 8x8, 16x8?
They are supported; the documentation only lists the differences between the VapourSynth version and the original AviSynth one.
MonoS is offline   Reply With Quote
Old 5th June 2017, 21:16   #432  |  Link
jackoneill
unsigned int
 
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Quote:
Originally Posted by feisty2 View Post
SATD is broken yet again; it's not even being activated.
Code:
import vapoursynth as vs
core = vs.get_core()

clp = rule6
clp = core.fmtc.bitdepth(clp,bits=16,fulls=False,fulld=True)
clp = core.std.ShufflePlanes(clp,0,vs.GRAY)

sup = core.mv.Super(clp)
bv1a = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=5,isb=True)
bv2a = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=5,isb=True)
bv3a = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=5,isb=True)
fv1a = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=5,isb=False)
fv2a = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=5,isb=False)
fv3a = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=5,isb=False)

bv1b = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=0,isb=True)
bv2b = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=0,isb=True)
bv3b = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=0,isb=True)
fv1b = core.mv.Analyse(sup,delta=1,blksize=32,overlap=16,search=3,dct=0,isb=False)
fv2b = core.mv.Analyse(sup,delta=2,blksize=32,overlap=16,search=3,dct=0,isb=False)
fv3b = core.mv.Analyse(sup,delta=3,blksize=32,overlap=16,search=3,dct=0,isb=False)

clpa = core.mv.Degrain3(clp, sup, bv1a, fv1a, bv2a, fv2a, bv3a, fv3a,thscd1=16320,thsad=2000)
clpb = core.mv.Degrain3(clp, sup, bv1b, fv1b, bv2b, fv2b, bv3b, fv3b,thscd1=16320,thsad=2000)

clp = core.std.Expr([clpa,clpb],"x y - abs 10000 *")

clp.set_output()
I got a blank black clip, which means dct=5 is actually doing SAD...
It is indeed doing SAD. I accidentally made dct=5..10 behave like dct=0 in v17. I think v19 will happen soon.
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 7th June 2017, 20:37   #433  |  Link
jackoneill
unsigned int
 
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
v19 is here.

Code:
   * Super: Fix small bug in SSE2 code used with rfilter=3 and 8 bit input.
   * Super: Fix bug in SSE2 code used with sharp=0 and 8 bit input.
   * Analyse, Recalculate: Fix bug that made dct=5..10 behave like dct=0 (bug introduced in v17).
   * Store SAD in 64 bit integers instead of 32 bit integers. This is required
     because YUV444P16 video with 128x128 blocks could produce SADs
     too large for 32 bit integers. Motion vectors produced by v18 or older
     will not work with this version. Only users who stored the motion vectors
     from Analyse/Recalculate on disk have to worry about this.
   * Degrains: Put an upper limit on the legal values of thsad/thsadc to avoid
     an overflow. The exact value of the limit depends on bit depth,
     subsampling, and block size. It's probably fairly high.
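As a rough illustration of why the SADs had to move to 64-bit integers (a back-of-the-envelope check, assuming YUV444P16 input and a 128x128 block with chroma included):
Code:
samples  = 128 * 128 * 3   # luma plus two full-resolution chroma planes per block
max_diff = 65535           # largest per-sample absolute difference at 16 bits
print(samples * max_diff)  # 3221176320
print(2**31 - 1)           # 2147483647 -> the worst-case SAD no longer fits in int32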
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 14th June 2017, 21:07   #434  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 203
Hi, using a script like this:
Code:
import vapoursynth as vs
import nnedi3_resample as edi
import havsfunc as has

core = vs.get_core()

def Denoise2(src, denoise, blksize, fast, truemotion):
	overlap = int(blksize / 2)
	pad = blksize + overlap
	
	src = core.fmtc.resample(src, src.width+pad, src.height+pad, sw=src.width+pad, sh=src.height+pad, kernel="point")
	
	super = core.mv.Super(src)
	
	rep = has.DitherLumaRebuild(src, s0=1)
	superRep = core.mv.Super(rep)
	
	bvec2 = core.mv.Analyse(superRep, isb = True, delta = 2, blksize=blksize, overlap=overlap, truemotion=truemotion)
	bvec1 = core.mv.Analyse(superRep, isb = True, delta = 1, blksize=blksize, overlap=overlap, truemotion=truemotion)
	fvec1 = core.mv.Analyse(superRep, isb = False, delta = 1, blksize=blksize, overlap=overlap, truemotion=truemotion)
	fvec2 = core.mv.Analyse(superRep, isb = False, delta = 2, blksize=blksize, overlap=overlap, truemotion=truemotion)
	
	fin = core.mv.Degrain2(src, super, bvec1,fvec1,bvec2,fvec2, denoise)
	
	fin = core.std.CropRel(fin, 0, pad, 0, pad)
	
	return fin

src = core.lsmas.LWLibavSource("").fmtc.bitdepth(bits=16)

den = Denoise2(src, 200, blksize=16, fast=False, truemotion=False)

den.set_output()
I get poor denoising in bright areas of the image. This doesn't happen when I first upscale the clip to 4:4:4 using
Code:
 res = edi.nnedi3_resample(src, src.width ,src.height, sigmoid=True, invks=True, csp=vs.YUV444P16, curves="709")
I've checked the luma plane and there are no differences from the original (tried with a MakeDiff and an fmtc.histluma() call).

Is this a "known issue", or am I doing something wrong?
MonoS is offline   Reply With Quote
Old 15th June 2017, 01:26   #435  |  Link
VS_Fan
Registered User
 
Join Date: Jan 2016
Posts: 98
Quote:
Originally Posted by MonoS View Post
Is this a "known issue" or am i doing something wrong?
It is most probably related to the luma:chroma SAD ratio weighting.

Pinterf explained it in this mvtools for avisynth forum post. He also recently released a version (2.7.18.22 – 2017-05-12) of mvtools for avisynth with a new parameter, "scaleCSAD", to fine-tune this ratio. See this forum post.

Both in AviSynth and VapourSynth, instead of converting to 4:4:4, you could filter the planes separately (MDegrain, etc.) and then combine them again with ShufflePlanes.
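A minimal sketch of that per-plane idea (assuming an MDegrain-based helper such as the Denoise2 function above, and that it accepts GRAY input; src stands for the 16-bit source clip and the thSAD values are just placeholders):
Code:
# Split the planes, denoise each with its own settings, then rejoin them.
y = core.std.ShufflePlanes(src, planes=0, colorfamily=vs.GRAY)
u = core.std.ShufflePlanes(src, planes=1, colorfamily=vs.GRAY)
v = core.std.ShufflePlanes(src, planes=2, colorfamily=vs.GRAY)

y = Denoise2(y, 200, blksize=16, fast=False, truemotion=False)  # stronger on luma
u = Denoise2(u, 100, blksize=8, fast=False, truemotion=False)   # gentler on chroma
v = Denoise2(v, 100, blksize=8, fast=False, truemotion=False)

den = core.std.ShufflePlanes([y, u, v], planes=[0, 0, 0], colorfamily=vs.YUV)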
VS_Fan is offline   Reply With Quote
Old 15th June 2017, 18:40   #436  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 203
Quote:
Originally Posted by VS_Fan View Post
It is most probably related to the luma:chroma SAD ratio weighting.

Pinterf explained it in this mvtools for avisynth forum post. He also recently released a version (2.7.18.22 – 2017-05-12) of mvtools for avisynth with a new parameter, "scaleCSAD", to fine-tune this ratio. See this forum post.
I took a look at those two posts, and I can't understand how the thSAD of the chroma planes should influence the luma plane.

Quote:
Originally Posted by VS_Fan View Post
Both in AviSynth and VapourSynth, instead of converting to 4:4:4, you could filter the planes separately (MDegrain, etc.) and then combine them again with ShufflePlanes.
Whether I denoise by passing the planes parameter as 0 or by using ShufflePlanes to denoise only the luma, I get the same poor performance in bright spots.

I should also mention that I'm talking about the luma plane; the chroma planes, AFAIK, are fine.

EDIT: I'll send a sample ASAP.
MonoS is offline   Reply With Quote
Old 16th June 2017, 03:31   #437  |  Link
VS_Fan
Registered User
 
Join Date: Jan 2016
Posts: 98
Quote:
Originally Posted by MonoS View Post
I took a look at those two posts, and I can't understand how the thSAD of the chroma planes should influence the luma plane.
It’s not the thSAD parameter (well, not only). It’s related to the mvtools internal SAD calculations made during ‘analyze’ or ‘recalculate’ to find the motion vectors. The luma:chroma weighting is:
  • 4:2 for YV12 (4:2:0 subsampling)
  • 4:4 for YV16 (4:2:2 subsampling)
  • 4:8 for YV24 (4:4:4 subsampling)
That means: with 4:4:4 subsampling, mvtools will base its calculations on twice as much chroma data as luma data. That’s why you get “cleaner” results. The chroma planes typically have less noise than the luma plane, so with such a low value for thSAD (denoise=200), mvtools picks up the right vectors more easily for the chroma planes than it can with the luma data.
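A quick way to see where those ratios come from, just counting samples per 2x2 block of luma pixels (a sketch of the bookkeeping, not mvtools' actual code):
Code:
def chroma_samples_per_4_luma(subsampling_w, subsampling_h):
    # two chroma planes, each reduced by 2**subsampling in each direction
    return 2 * (4 >> (subsampling_w + subsampling_h))

print(chroma_samples_per_4_luma(1, 1))  # 4:2:0 -> 2 chroma samples per 4 luma (4:2)
print(chroma_samples_per_4_luma(1, 0))  # 4:2:2 -> 4 (4:4)
print(chroma_samples_per_4_luma(0, 0))  # 4:4:4 -> 8 (4:8)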

I saw three ways to improve your script:
  • The ‘denoise’ value (thSAD parameter) is half of the default value. Leave it at default (400)
  • You are resizing the clip prior to processing with mvtools, with the very basic ‘point’ kernel, and at the end you are cropping the borders. You should avoid that; use the hpad & vpad parameters of Super instead. And if you really want to crop the borders, you can resize after MDegrain with a better kernel and then crop.
  • DitherLumaRebuild “allows tweaking for pumping up the darks” (comment by the author). This may be leading you to oversaturate the bright areas. Try without it. You could use some other prefilter.

Like this:
Code:
def Denoise2(src, denoise, blksize, fast, truemotion):
    overlap = int(blksize / 2)
    pad = blksize #+ overlap
    
    #src = core.fmtc.resample(src, src.width+pad, src.height+pad, sw=src.width+pad, sh=src.height+pad, kernel="point")
    
    super = core.mv.Super(src, hpad=pad, vpad=pad)
    
    #rep = has.DitherLumaRebuild(src, s0=1)
    # Optional - Some other prefilter:
    rep = core.dfttest.DFTTest(clip=src, tbsize=1, sigma=2.0)
    superRep = core.mv.Super(rep, hpad=pad, vpad=pad)
    
    bvec2 = core.mv.Analyse(superRep, isb = True, delta = 2, blksize=blksize, overlap=overlap, truemotion=truemotion)
    bvec1 = core.mv.Analyse(superRep, isb = True, delta = 1, blksize=blksize, overlap=overlap, truemotion=truemotion)
    fvec1 = core.mv.Analyse(superRep, isb = False, delta = 1, blksize=blksize, overlap=overlap, truemotion=truemotion)
    fvec2 = core.mv.Analyse(superRep, isb = False, delta = 2, blksize=blksize, overlap=overlap, truemotion=truemotion)
    
    fin = core.mv.Degrain2(src, super, bvec1,fvec1,bvec2,fvec2, denoise)
    
    #fin = core.std.CropRel(fin, 0, pad, 0, pad)
    
    return fin

src = core.lsmas.LWLibavSource("").fmtc.bitdepth(bits=16)

den = Denoise2(src, 400, blksize=16, fast=False, truemotion=False)

den.set_output()

Last edited by VS_Fan; 16th June 2017 at 03:39.
VS_Fan is offline   Reply With Quote
Old 16th June 2017, 22:39   #438  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 203
So, if I understand correctly, those weightings are used during the Analyse function to search for the proper motion vectors, and doing the analysis in 4:4:4 changes some values and "improves" the denoising on the luma plane. Am I correct?

Regarding your suggestions:
I usually use a thSAD of around 150 up to 500 to obtain different levels of denoising; 200 is, for me, a low-to-mid denoise.
I'm not resizing the clip, I'm simply padding it. When I did extensive tests some years ago, I noticed bad denoising at the bottom and right edges even with padding, so I started padding the clip myself; this method achieved very nice results.
AFAIK DitherLumaRebuild is commonly used to prefilter the clip before doing motion analysis; I found this trick in one of cretindesalpes' posts and in the VS QTGMC port.

Anyway, I think you are right to suggest stronger denoising: using 400 on the 4:2:0 clip, I obtain similar results in those areas that had weak denoising, but I would prefer to avoid such a strong (in my opinion) thSAD.
MonoS is offline   Reply With Quote
Old 17th June 2017, 18:33   #439  |  Link
VS_Fan
Registered User
 
Join Date: Jan 2016
Posts: 98
Quote:
Originally Posted by MonoS View Post
So, if I understand correctly, those weightings are used during the Analyse function to search for the proper motion vectors, and doing the analysis in 4:4:4 changes some values and "improves" the denoising on the luma plane. Am I correct?
Right, but resampling to 444 doesn’t necessarily “improve” denoising. It just gives different results: For YUV colorspaces the amount of data used to represent luma and chroma for any pixel in each frame depends on the chroma subsampling. Mvtools’ analyze filter uses all data for each pixel to construct the blocks, unless you specify chroma=False.

From the mvtools doc at avisynth’s site: At analysis stage plugin divides frames by small blocks and try to find for every block in current frame the most similar (matching) block in second frame (previous or next). The relative shift of these blocks is motion vector. The main measure of block similarity is sum of absolute differences (SAD) of all pixels of these two blocks compared. SAD is a value which says how good the motion estimation was.
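The measure itself is simple; here is a toy version of the SAD of two equally sized blocks (using numpy purely for illustration, this is not how mvtools computes it internally):
Code:
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences between two blocks of pixels.
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

a = np.array([[10, 20], [30, 40]])
b = np.array([[12, 18], [33, 40]])
print(sad(a, b))  # |10-12| + |20-18| + |30-33| + |40-40| = 7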
Quote:
Originally Posted by MonoS View Post
I usually use a thSAD of around 150 up to 500 to obtain different levels of denoising; 200 is, for me, a low-to-mid denoise.
This is my personal preference: For my 4:2:2 video sources I process each plane separately. I consider luma and chroma very different animals, so I tweak the corresponding thSAD and even thscd1 & thscd2 to lower values for chroma.
Quote:
Originally Posted by MonoS View Post
I'm not resizing the clip, I'm simply padding it. When I did extensive tests some years ago, I noticed bad denoising at the bottom and right edges even with padding, so I started padding the clip myself; this method achieved very nice results.
I can see now. There could have been a bug in earlier versions of the plugin, but you don’t need to do that any more.
Quote:
Originally Posted by MonoS View Post
Anyway, I think you are right to suggest stronger denoising: using 400 on the 4:2:0 clip, I obtain similar results in those areas that had weak denoising, but I would prefer to avoid such a strong (in my opinion) thSAD.
You could try mdegrain1, which will risk a lot less detail destruction while you use larger values for thsad.
VS_Fan is offline   Reply With Quote
Old 19th June 2017, 22:16   #440  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 203
So what may be happening is this: after upscaling the chroma planes, mvtools thinks that less of the image has changed, because we have 2x2 chroma pixels that are very similar, so the same thSAD results in more "similar" blocks and therefore stronger denoising. Am I right?
MonoS is offline   Reply With Quote