Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
|
27th May 2013, 16:13 | #2 | Link |
Registered User
Join Date: Sep 2007
Location: Europe
Posts: 602
|
I use it quite a lot to clean up DSLR-shot footage (from a hacked Panasonic GH2). The results are very good and the GUI is intuitive for adjusting the levels of spatial NR and the various ways in which the filters respond.
The only downside is that, on content with a lot of motion (e.g. handheld camera footage), small areas of the image can sometimes appear to stick in place. It's difficult to describe, but perhaps some part of the temporal filtering is too aggressive at locking things into position. It's a minor complaint, though. It relies on a noise profile to work effectively. For footage where I can't generate one of those, I use the 3rd example script on this page: http://avisynth.org/mediawiki/Denoisers - which I'm quoting below in case that wiki changes in the future. This is not my code:
Code:
source = last
pred = source # to get stronger denoising, put denoisers here, they will change how motion vectors are predicted
backward_vec2 = pred.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=2, idx = 1, truemotion=true)
backward_vec1 = pred.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=2, idx = 1, truemotion=true)
forward_vec1 = pred.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=2, idx = 1, truemotion=true)
forward_vec2 = pred.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=2, idx = 1, truemotion=true)
maskp1 = mvmask(kind=1, vectors=forward_vec1, ysc=255).UtoY()
maskp2 = mvmask(kind=1, vectors=forward_vec2).UtoY()
maskp3 = mvmask(kind=1, vectors=backward_vec1, ysc=255).UtoY()
maskp4 = mvmask(kind=1, vectors=backward_vec2).UtoY()
maskf = average(maskp1, 0.25, maskp2, 0.25, maskp3, 0.25, maskp4, 0.25).spline36resize(source.width, source.height)
smooth = pred.fft3dfilter(bw=16, bh=16, ow=8, oh=8, bt=1, sigma=4, plane=0)
source2 = maskedmerge(source, smooth, maskf)
source3 = source2.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=2)
source3
ttempsmooth(maxr=7)
gradfun2db(1.51)

Those two NR solutions are my go-to tools for that kind of content. I find NeatVideo, with its profiling, has a slight edge in terms of speed and, from memory, can sometimes keep more temporal detail, but the script above is also very good and has saved me many times in cases where I can't generate a noise profile. |
29th May 2013, 09:06 | #4 | Link |
Registered User
Join Date: Sep 2007
Location: Europe
Posts: 602
|
The method I've had most success with to lessen the denoising is to add this to the end:
Code:
denoised = last
merge(source, denoised, 0.5) # 0.5 = 50% mix of both, 0 = fully source, 1 = fully denoised
Code:
maskf = average(maskp1, 0.25, maskp2, 0.25, maskp3, 0.25, maskp4, 0.25).spline36resize(source.width, source.height)
|
30th May 2013, 12:54 | #5 | Link |
Registered User
Join Date: Feb 2009
Location: USA
Posts: 676
|
I have the demo for Neat Video, which I played around with in After Effects. Just based on previewing individual frames, it seems to be really good at degraining. I played around with it using my Jurassic Park BD source. I wasn't trying to be particularly thorough, but even just futzing around it took out a lot of grain, especially in dark scenes, and I'm sure that if someone learned to use it and understood how to tweak it really well, it would be a very good tool.
Those are my thoughts on the matter.

edit: I posted a link in another thread to some before/after shots from my Jurassic Park Blu-ray, for those interested in seeing what the filter can do for grainy sources. http://forum.doom9.org/showthread.ph...60#post1631160

Last edited by osgZach; 1st June 2013 at 12:28. |
1st June 2013, 22:26 | #6 | Link |
Registered User
Join Date: Jan 2011
Location: Donetsk
Posts: 58
|
My opinion: NeatVideo smooths very strongly and the image becomes waxy, like a figure from Madame Tussauds' wax museum. I use, for example:
Code:
function dfttestMCmod (clip input, bool "Y", bool "chroma", int "ftype", float "sigma", float "sigma2", float "pmin", float "pmax", \
 int "sbsize", int "smode", int "sosize", int "dither", int "tmode", int "tosize", int "swin", int "twin", float "sbeta", \
 float "tbeta", bool "zmean", float "f0beta", int "tr", bool "mdg", int "pp", int "thSAD", int "mdgthSAD", \
 int "thSCD1", int "thSCD2", int "limit", int "blksize", int "pel", int "dct", int "search", bool "lsb")
{
o = input
chroma = default(chroma, true)

# dfttest-related options
Y      = default(Y, true)
U      = chroma?true:false
V      = chroma?true:false
ftype  = default(ftype, 0)
sigma  = default(sigma, 2.0)
sigma2 = default(sigma2, 2.6)
pmin   = default(pmin, 0.0)
pmax   = default(pmax, 500.0)
sbsize = default(sbsize, 12)
smode  = default(smode, 1)
sosize = default(sosize, 9)
dither = default(dither, 1)
tmode  = default(tmode, 0)
tosize = default(tosize, 0)
swin   = default(swin, 0)
twin   = default(twin, 7)
sbeta  = default(sbeta, 2.5)
tbeta  = default(tbeta, 2.5)
zmean  = default(zmean, true)
f0beta = default(f0beta, 1.0)
lsb    = default(lsb, true)

# mvtools-related options
tr  = default(tr, 2)
tr  = (tr>5) ? 5 : (tr<1) ? 1 : tr
mdg = default(mdg, true)
pp  = default(pp, 2)
mdgthSAD = default(mdgthSAD, 135) # the original post read default(thSAD, 135) here, an apparent typo
thSAD    = default(thSAD, 10000)
thSCD1   = default(thSCD1, 150)
thSCD2   = default(thSCD2, 64)
limit    = default(limit, 96)
blksize  = default(blksize, 8) # 32:16:8:4
pel      = default(pel, 2)
overlap  = blksize/2
dct      = default(dct, 0)
search   = default(search, 5)
planes   = chroma?4:0

# Pre-ME denoising
pp = (pp >= 2) ? o.dfttest(sstring="0.0:4.0 0.18:9.0 0.36:7.0 1.0:15.0",tbsize=1,lsb=true).DitherPost(mode=6) : \
     (pp == 1) ? o.DeGrainMedian(limitY=3,limitUV=4,mode=1) : o

# MSuper
pp_super = pp.MSuper(pel=pel,chroma=chroma)

# Motion vector search
b5vec = (tr>=5) ? \
 MAnalyse(pp_super,isb=true,search=search,delta=5,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
b4vec = (tr>=4) ? \
 MAnalyse(pp_super,isb=true,search=search,delta=4,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
b3vec = (tr>=3) ? \
 MAnalyse(pp_super,isb=true,search=search,delta=3,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
b2vec = (tr>=2) ? \
 MAnalyse(pp_super,isb=true,search=search,delta=2,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
b1vec = MAnalyse(pp_super,isb=true, search=search,delta=1,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct)
f1vec = MAnalyse(pp_super,isb=false,search=search,delta=1,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct)
f2vec = (tr>=2) ? \
 MAnalyse(pp_super,isb=false,search=search,delta=2,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
f3vec = (tr>=3) ? \
 MAnalyse(pp_super,isb=false,search=search,delta=3,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
f4vec = (tr>=4) ? \
 MAnalyse(pp_super,isb=false,search=search,delta=4,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP
f5vec = (tr>=5) ? \
 MAnalyse(pp_super,isb=false,search=search,delta=5,overlap=overlap,blksize=blksize,truemotion=false,chroma=chroma,dct=dct) : NOP

# Optional MDegrain
o_super = mdg ? o.MSuper(pel=pel,chroma=chroma,levels=1) : o
mdegrained = (tr>=3 && mdg) ? o.MDegrain3(o_super,b1vec,f1vec,b2vec,f2vec,b3vec,f3vec,thSAD=mdgthSAD,thSCD1=thSCD1,thSCD2=thSCD2,limit=limit,plane=planes,lsb=true).DitherPost(mode=6) : \
             (tr==2 && mdg) ? o.MDegrain2(o_super,b1vec,f1vec,b2vec,f2vec,thSAD=mdgthSAD,thSCD1=thSCD1,thSCD2=thSCD2,limit=limit,plane=planes,lsb=true).DitherPost(mode=6) : \
             (mdg)          ? o.MDegrain1(o_super,b1vec,f1vec,thSAD=mdgthSAD,thSCD1=thSCD1,thSCD2=thSCD2,limit=limit,plane=planes,lsb=true).DitherPost(mode=6) : o
degrained = (mdg) ? mdegrained : o

# Motion Compensation
degrained_super = degrained.MSuper(pel=pel,chroma=chroma,levels=1)
b5clip = (tr>=5) ? \
 degrained.MCompensate(degrained_super,b5vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
b4clip = (tr>=4) ? \
 degrained.MCompensate(degrained_super,b4vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
b3clip = (tr>=3) ? \
 degrained.MCompensate(degrained_super,b3vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
b2clip = (tr>=2) ? \
 degrained.MCompensate(degrained_super,b2vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
b1clip = degrained.MCompensate(degrained_super,b1vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2)
f1clip = degrained.MCompensate(degrained_super,f1vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2)
f2clip = (tr>=2) ? \
 degrained.MCompensate(degrained_super,f2vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
f3clip = (tr>=3) ? \
 degrained.MCompensate(degrained_super,f3vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
f4clip = (tr>=4) ? \
 degrained.MCompensate(degrained_super,f4vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP
f5clip = (tr>=5) ? \
 degrained.MCompensate(degrained_super,f5vec,thSAD=thSAD,thSCD1=thSCD1,thSCD2=thSCD2) : NOP

# Create compensated clip
interleaved = (tr>=5) ? Interleave(f5clip,f4clip,f3clip,f2clip,f1clip,degrained,b1clip,b2clip,b3clip,b4clip,b5clip) : \
              (tr==4) ? Interleave(f4clip,f3clip,f2clip,f1clip,degrained,b1clip,b2clip,b3clip,b4clip) : \
              (tr==3) ? Interleave(f3clip,f2clip,f1clip,degrained,b1clip,b2clip,b3clip) : \
              (tr==2) ? Interleave(f2clip,f1clip,degrained,b1clip,b2clip) : \
                        Interleave(f1clip,degrained,b1clip)

# Perform dfttest
filtered = interleaved.dfttest(Y=Y,U=U,V=V,ftype=ftype,sigma=sigma,sigma2=sigma2,pmin=pmin,pmax=pmax,sbsize=sbsize,smode=smode,sosize=sosize, \
 tbsize=tr*2+1,dither=dither,tmode=tmode,tosize=tosize,swin=swin,twin=twin,sbeta=sbeta,tbeta=tbeta,zmean=zmean,f0beta=f0beta,lsb=lsb)

output = filtered.SelectEvery(tr*2+1,tr)
return(output)
}
Code:
#avstp.dll
#RemoveGrainSSE2.dll
#mvtools2.6.0.5.dll
#RepairSSE2.dll
#degrainmedian.dll
#flash3kyuu_deband.dll
#dfttest.dll
#dither.dll
#WarpSharp.dll
#mt_masktools-26.dll
#AddGrainC.dll
#dither.avsi
#mt_xxpand_multi.avsi
#dfttestMCmod.avsi
#minblur.avs
#Contrasharpening.avs
#LSFmod.avsi

setmtmode(2)
setmemorymax(768)
source = last
den = source.dfttestMCmod(chroma=true,tr=2,sigma=1.0,sigma2=1.5,f0beta=1.0,pp=2,mdg=true,\
 blksize=16,pel=2,mdgthSAD=135,thSAD=300,thSCD1=256,thSCD2=96,limit=102,lsb=true)

# ===== DeBanding =====
db1 = den.GradFun3 (thr=1.4*0.3, smode=0, radius=16, lsb_in=true, lsb=true)
DB = db1.Dither_add_grain16 (var=0.1, uvar=0, soft=2) # AddGrain
# DB = den.f3kdb(16, 56, 48, 48, 0, 0, input_mode=1, output_mode=1)
output = DB.DitherPost(mode=-1)

# ===== Sharpening =====
sharp8 = Contrasharpening(output, source)
# sharp8 = output.LSFmod(defaults="slow", preblur="ON", strength=100)
lsbctr = Dither_merge16_8 (DB, sharp8.Dither_convert_8_to_16(), DitherBuildMask(sharp8, output))
lsb_out = lsbctr.DitherPost(mode=7, ampo=1.0, ampn=0.7)
lsb_out
Code:
#avstp.dll
#RemoveGrainSSE2.dll
#RepairSSE2.dll
#Warpsharp.dll
#mvtools2.6.0.5.dll
#degrainmedian.dll
#AddGrainC.dll
#dither.dll
#dfttest.dll
#TEdgeMask.dll
#mt_masktools-26.dll
#flash3kyuu_deband.dll
#SmoothAdjust-ICL-x86.dll
#dither.avsi
#f3kgrain_v0.4.avsi
#GrainFactoryLite_v1.2.avsi
#LumaDBLite_v0.7.avsi
#mt_xxpand_multi.avsi
#dfttestMCmod.avsi
#HighPassSharp.avs
#FineSharp.avs
#O16mod_v1.6.1.avsi
#LSFmod.avsi

setmtmode(2)
setmemorymax(1536)
W = width(last)
H = height(last)
dfttestMCmod(chroma=true,tr=2,sigma=1.0,sigma2=1.5,f0beta=1.0,dither=1,pp=2,mdg=true,thSAD=300,\
 blksize=16,pel=2,mdgthSAD=135,thSCD1=256,thSCD2=96,limit=102,lsb=true)

# ==== DeBanding ====
# SetMTMode(3)
# DB = last.LumaDBL(g1str=7, g2str=4, g3str=2, g1soft=5, g2soft=10, g3soft=20, g1size=1.0, g2size=0.6, g3size=0.1, lsb=true, lsb_in=true)
# DB = last.LumaDBL(g1str=7, g2str=4, g3str=2, g1soft=5, g2soft=10, g3soft=20, g1size=1.0, g2size=0.6, g3size=0.1, lsb=true, lsb_in=true, ditherC="f3kdb_dither()")
# SetMTMode(2)
f3kdb(16, 64, 48, 48, 0, 0, dynamic_grain=true, input_mode=1, output_mode=1)
# GradFun3(smode=1, thr=0.5, radius=16, lsb=true, lsb_in=true)
# GradFun3(smode=0, thr=0.35, radius=12, lsb=true, lsb_in=true)
# GradFun3 (thr=1.4*0.3, smode=0, radius=16, lsb_in=true, lsb=true)
DB = last #.Dither_add_grain16 (var=0.1, uvar=0, soft=2) # add grain
f = DB.DitherPost (mode=-1) # 16 -> 8 bits

# ==== Sharpening ====
# s = f.LSFmod(defaults="slow",preblur="ON",strength=100) # 8 -> 8 for resize < 1280x720
# s = f.FineSharp() # 8 -> 8 for resize > 1280x720
s = f.HighPassSharp(r=0.15) # 8 -> 8 for resize > 1280x720
mask = mt_lutxy (s, f, "x y != 255 0 ?", u=0, v=0) # 8, 8 -> 8
s16 = s.U16 () # 8 -> 16
Dither_merge16_8 (DB, s16, mask) # 16, 16, 8 -> 16
Dither_Resize16(W, H)

OUTPUT_BIT_DEPTH = 10
(OUTPUT_BIT_DEPTH == 16) ? Eval("""
 Dither_convey_yuv4xxp16_on_yvxx() # 16-bit
""") : (OUTPUT_BIT_DEPTH == 10) ? Eval("""
 Down10(10, stack=false, dither=-2) # 10-bit
""") : Down10(8) # 8-bit
Code:
#avstp.dll
#RemoveGrainSSE2.dll
#RepairSSE2.dll
#mvtools2.6.0.5.dll
#flash3kyuu_deband.dll
#mt_masktools-26.dll
#dither.dll
#medianblur.dll
#AddGrainC.dll
#GradFun2DB.dll
#minblur.avs
#dither.avsi
#Gradfun2dbmod.avs
#ContraHD.avs
#ContraSharpening.avs

setmtmode(2)
setmemorymax(640)
source = last
blksize = 8 # or 16
overlap = blksize/2
chroma = true
lambda = 768
planes = chroma?4:0
chroma_threshold = chroma?7:0
search = 5
psuper = source.removegrain(11).MSuper(pel=2, sharp=2, chroma=chroma)
ssuper = source.MSuper(pel=2, sharp=2, chroma=chroma, levels=1)
vb3 = MAnalyse(psuper, isb=true, truemotion=false, delta=3, blksize=blksize, overlap=overlap, search=search, chroma=chroma, lambda=lambda)
vb2 = MAnalyse(psuper, isb=true, truemotion=false, delta=2, blksize=blksize, overlap=overlap, search=search, chroma=chroma, lambda=lambda)
vb1 = MAnalyse(psuper, isb=true, truemotion=false, delta=1, blksize=blksize, overlap=overlap, search=search, chroma=chroma, lambda=lambda)
vf1 = MAnalyse(psuper, isb=false, truemotion=false, delta=1, blksize=blksize, overlap=overlap, search=search, chroma=chroma, lambda=lambda)
vf2 = MAnalyse(psuper, isb=false, truemotion=false, delta=2, blksize=blksize, overlap=overlap, search=search, chroma=chroma, lambda=lambda)
vf3 = MAnalyse(psuper, isb=false, truemotion=false, delta=3, blksize=blksize, overlap=overlap, search=search, chroma=chroma, lambda=lambda)
cf3 = MCompensate(source, ssuper, vf3, thSAD=200, thSCD1=256, thSCD2=96)
cf2 = MCompensate(source, ssuper, vf2, thSAD=200, thSCD1=256, thSCD2=96)
cf1 = MCompensate(source, ssuper, vf1, thSAD=200, thSCD1=256, thSCD2=96)
cb1 = MCompensate(source, ssuper, vb1, thSAD=200, thSCD1=256, thSCD2=96)
cb2 = MCompensate(source, ssuper, vb2, thSAD=200, thSCD1=256, thSCD2=96)
cb3 = MCompensate(source, ssuper, vb3, thSAD=200, thSCD1=256, thSCD2=96)
interleave(cb3, cb2, cb1, \
 source.MDegrain3(ssuper, vf1,vb1,vf2,vb2,vf3,vb3,thSAD=125,thSCD1=160,thSCD2=90,limit=72,lsb=true).DitherPost(mode=6), \
 cf1, cf2, cf3)
Temporalsoften(3,7,chroma_threshold,15,2)
selectevery(7,3)

# SHARPENING
# ContraHD(last,source,cf1,cb1,3)
ContraSharpening(last,source)

# DeBanding
f3kdb(sample_mode=2,dynamic_grain=false,keep_tv_range=false,dither_algo=3,y=52,cb=36,cr=36,grainY=0,grainC=0)
# GradFun2DBmod(thr=1.4,thrC=1.6,mode=2,str=0.3,strC=0.0,temp=50,adapt=64)

Last edited by Tempter57; 13th June 2013 at 19:40. |
1st June 2013, 23:57 | #7 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
Quote:
One of the things I learned when I first came to this forum a decade ago and started denoising videos (among other things) is that when you use your first denoiser, you are so happy to no longer see the noise that you may not notice all the bad things that have been done at the same time. This is especially true because sometimes the bad things don't show up on a short test clip. Also, sometimes the bad things don't show up on a freeze frame, but become evident only when you play the resulting video. This is especially true of temporal denoising.

About every six months for the past five years I've tried Neat on various test clips after someone raves about the results, but I almost always find that by the time you adjust Neat to the point where you begin to get decent noise reduction, you also start losing too much detail, at least for my tastes.

Of course no single noise reduction plugin works well all the time, so you have to be prepared to do something different with each new video source. It is therefore quite possible that on the OP's source Neat performs really well.

Last edited by johnmeyer; 1st June 2013 at 23:58. Reason: didn't see typo until after posting |
|
2nd June 2013, 00:10 | #8 | Link | |
Registered User
Join Date: Sep 2007
Location: Europe
Posts: 602
|
Quote:
|
|
2nd June 2013, 00:24 | #9 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
Quote:
There is no free lunch. Having said that, for my tastes the various motion-compensated noise reduction techniques pioneered in these forums often do a remarkable job of reducing noise without introducing artifacts large enough to call attention to themselves.

Finally, everyone sees things differently. I grew up watching B&W TVs and playing 78 rpm records (LPs had just been invented), so I am not as bothered by a little noise, but I really value the advances that have brought more detail to video and I hate to lose that.

Last edited by johnmeyer; 2nd June 2013 at 00:25. Reason: added last sentence to first paragraph |
|
2nd June 2013, 09:30 | #10 | Link | |
Registered User
Join Date: Feb 2009
Location: USA
Posts: 676
|
Quote:
Seriously. Just turn on Advanced mode and you can tweak all kinds of stuff: expected noise, strength of denoising, high/low/medium frequency. I could have left my JP shots a lot more grainy/noisy than I did, and I don't think they really look like wax. I have to reserve my final judgement for the actual video at full playback speed. I haven't applied any sharpening yet, so it may be a little soft/blurry in some places, but my final encode will use --tune film (-1:-1), so maybe that will help. Still, I have plenty of room to knock down the settings and still have good results.

Last edited by osgZach; 2nd June 2013 at 09:37. Reason: stuff |
|
2nd June 2013, 01:26 | #11 | Link | |
Registered User
Join Date: Sep 2007
Location: Europe
Posts: 602
|
Interesting - I've usually had very good results, but then again, a lot of what I'm using it for is quite sedate interviews, which would explain it.
Quote:
For me, it's always a case of: film grain, leave it alone; video noise, reduce if necessary. (Especially if it's DSLR-originated heavy chroma noise - yeuch). |
|
2nd June 2013, 03:59 | #12 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
Quote:
I've also done hundreds of hours of film restoration and have come to the same conclusion about film grain: reducing film grain is generally not a good thing and leads to bad outcomes. There are some exceptions, such as a standard 8mm amateur film shot under available light on Tri-X emulsion (very, very large grain) that I did three years ago; some smoothing definitely made it look better: Graduation. The grain reduction artifacts are quite obvious, but the size of the original grain made the viewing experience obnoxious. I also did motion stabilization, gamma adjustment, and a host of other things, although I left the frame rate at 16 fps, which does look bad during pans but is exactly how it would have looked when the film was projected.

Last edited by johnmeyer; 2nd June 2013 at 04:02. Reason: Added last sentence |
|
3rd June 2013, 17:35 | #14 | Link |
Registered User
Join Date: Jul 2009
Posts: 111
|
I did some testing. I found that denoising the colour channels Cr and Cb works really well. I need to leave the Y channel at 30% or the picture becomes too blurry or waxy.
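A chroma-first pass like the one described above can be sketched in AviSynth. This is only a rough illustration, not the poster's actual settings: the SMDegrain parameters are placeholders, and the 0.3 weight simply mirrors the "30% Y" idea.
Code:
src = last
den = src.SMDegrain(tr=1, thSAD=300) # full denoise pass; parameters are illustrative
mix = src.MergeChroma(den)           # take the cleaned Cb/Cr, keep the original luma
MergeLuma(mix, den, 0.3)             # blend only 30% of the denoised luma back in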
In advanced mode I can set the filters separately, but I think the noise reduction on heavy grain is not very good. I leave it at 30% combined with SMDegrain at tr=1 or even 0. Neat Video runs really fast on my i7 3770K; it's faster than SMDegrain at tr=3 or higher.

feisty2, can you give me your code for MDegrain+TNLMeans please! Thanks johnmeyer, I am using your script. I made a few modifications for my own purposes but I really like it.

I started this thread to see people's opinions about Neat Video. The new version can use an AMD GPU to render. BTW, I tried Red Giant - Products - Magic Bullet Denoiser II 1.4 in After Effects. It does a very good job, but it lacks control and is painfully slow. Neat Video does a good job, I think. Is there a denoiser in AviSynth or VirtualDub that performs very well at denoising heavy grain that I could give a try? |
4th June 2013, 02:18 | #15 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
Code:
# super_search is assumed to be an MSuper() clip prepared earlier (not shown in the post)
bv3 = super_search.MAnalyse(isb=true,  delta=3, blksize=16, overlap=8, chroma=false)
bv2 = super_search.MAnalyse(isb=true,  delta=2, blksize=16, overlap=8, chroma=false)
bv1 = super_search.MAnalyse(isb=true,  delta=1, blksize=16, overlap=8, chroma=false)
fv1 = super_search.MAnalyse(isb=false, delta=1, blksize=16, overlap=8, chroma=false)
fv2 = super_search.MAnalyse(isb=false, delta=2, blksize=16, overlap=8, chroma=false)
fv3 = super_search.MAnalyse(isb=false, delta=3, blksize=16, overlap=8, chroma=false)
MDegrain3(super_search, bv1, fv1, bv2, fv2, bv3, fv3, plane=0)

Use MDegrainN if you want a stronger effect; use SMDegrain for a safer result.

Here's what I use for grainy sources (extremely slow...):
Code:
a = last
prefilter = TNLMeans(ax=16, ay=16, az=0)
a.smdegrain(tr=6, pel=4, thSAD=400, Search=5, lsb=true, mode=-1, blksize=8, prefilter=prefilter)

before and after denoise

Last edited by feisty2; 4th June 2013 at 02:35. |
|
5th June 2013, 01:51 | #16 | Link | |
Registered User
Join Date: Jul 2009
Posts: 111
|
Quote:
|
|
5th June 2013, 03:12 | #17 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
You can find MDegrainN in the mvtools2 build that ships with the Dither package. In MDegrainN there are three parameters that control the strength of denoising: "thSAD", "tr", and "blksize"; larger values all mean a stronger effect. |
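As a rough illustration of those three knobs, a minimal MDegrainN call might look like the sketch below. The exact call signature is from memory of the Dither-package mvtools2 mod, so treat every name and value here as an assumption and check the package's documentation before using it:
Code:
src = last
sup = src.MSuper(pel=2)
# one multi-frame search; delta acts as the temporal radius ("tr")
vmulti = sup.MAnalyse(multi=true, delta=3, blksize=16, overlap=8)
# thSAD, the temporal radius and blksize are the three strength controls
src.MDegrainN(sup, vmulti, 3, thSAD=400)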
|
4th June 2013, 08:46 | #18 | Link |
Registered User
Join Date: Feb 2009
Location: USA
Posts: 676
|
Both those look nearly identical. Posting scaled previews doesn't do much to show what the effects are.
But let's be honest about something here. While some of us are here because, technically, we are more capable users who can actually learn to do stuff for ourselves, I don't have all day to sit around coming up with multi-line filter calls, trying to do stuff I truly do not and never will understand, a lot of it involving math and other calculations that are simply beyond my time or patience. And I am loath to have someone write a script for me without a very good reason; people don't help those who don't help themselves, after all.

There is a fine line between saying "well, we can do this in Avisynth so everything else is useless" and "functional simplicity". Neat Video gives me "functional simplicity". I only have to worry about a few sliders, and if temporal denoising is really so bad and evil for a particular clip, I can move the slider to the "disabled" state (magic, I know!). I honestly cannot think of any Avisynth equivalent (very short, succinct and easy to tweak) that has given me close to the kind of results I was looking for, and I tried them before looking at Neat Video. It literally took a minute or so of finding out how certain tweaks altered the image to get the result I wanted; mostly, carefully filtering the Y channel got rid of a lot of the noise and still left a significant amount of detail.

I actually went back and made a more noisy clip, one that really knocked down the grain but still left a lot in. I ended up with a muxed clip of around 15GB (my original filter came in at just 8GB), which is just about acceptable for me as well. It was certainly better than the 20-ish GB clips I was getting from an unfiltered or HQDN3D'd clip with the same x264 settings. I can also render my clips at anywhere from 5 to 15 fps depending on how much I want to process, put it on the GPU only, or use a combination of GPU/CPU. This is a significant advantage for me.
Even if it's a little slower in Avisynth than VirtualDub, I could probably still pipe a script through to x264 and filter+encode in one go, leaving the CPU free to encode, and still be done before a similar script with AVS filters.

Avisynth and its various filters are an amazing set of tools, and I could not live without them for certain things. But that doesn't change the reality of "the right tool for the right job". Trying to pimp AVS scripts in a thread asking for opinions on a retail filtering product isn't going to change that, nor does said product diminish Avisynth either.

Last edited by osgZach; 4th June 2013 at 08:49. |
4th June 2013, 09:11 | #19 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
MDegrain really is an almost magical denoiser; it produces very nice results (no artifacts) with little detail loss. I tried Neat Video and Magic Bullet Denoiser and the detail loss is far worse than with MDegrain, especially Neat Video, and they can only remove grain, while MDegrain can remove almost all kinds of noise: grain, blocking, Gibbs ringing... So generally I think MDegrain is better than commercial denoisers in most cases. |
|
5th June 2013, 01:15 | #20 | Link |
Registered User
Join Date: Sep 2012
Posts: 156
|
Co-sign wholeheartedly. I never use dedicated denoising filters on their own any more (I sometimes still use them for pre-filtering when really required) since I learned how to employ MAnalyse/MDegrain (or MAnalyse+MCompensate+MDegrain).
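For readers new to the thread, the basic MAnalyse/MDegrain chain referred to here is quite short (mvtools2; parameter values below are only placeholders):
Code:
src = last
sup = src.MSuper(pel=2)
bv1 = sup.MAnalyse(isb=true,  delta=1, blksize=16, overlap=8)
fv1 = sup.MAnalyse(isb=false, delta=1, blksize=16, overlap=8)
# MDegrain1 blends the motion-compensated neighbour frames into each frame
src.MDegrain1(sup, bv1, fv1, thSAD=300)

The MAnalyse+MCompensate+MDegrain variant instead feeds motion-compensated frames to a separate temporal filter, as in the longer scripts earlier in the thread.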
|
|
|