Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
15th December 2007, 05:48 | #61 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
[continued]
Four zoomed-in crops to quickly point out the differences: [four comparison screenshots]. Hope this is sufficient to make my point: removing grain is okay if it annoys, but I'd want my result to keep the "natural" look. Sharp soup I like to eat, but not to watch.

Edit - the script (compact version):

Code:
# LoadPlugins: FFT3DFilter.dll / mt_masktools.dll / MVTools.dll / RemoveGrain.dll / Repair.dll

AVCSource("X:\original.dga",deblock=true)
o = last

fft  = o.fft3dfilter(sigma=16,sigma2=10,sigma3=6,sigma4=4,bt=5,bw=16,bh=16,ow=8,oh=8)
fftD = mt_makediff(o,fft)

b3vec1 = fft.MVAnalyse(isb = true,  delta = 3, pel = 2, overlap=4, sharp=2, idx = 1)
b2vec1 = fft.MVAnalyse(isb = true,  delta = 2, pel = 2, overlap=4, sharp=2, idx = 1)
b1vec1 = fft.MVAnalyse(isb = true,  delta = 1, pel = 2, overlap=4, sharp=2, idx = 1)
f1vec1 = fft.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=2, idx = 1)
f2vec1 = fft.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=2, idx = 1)
f3vec1 = fft.MVAnalyse(isb = false, delta = 3, pel = 2, overlap=4, sharp=2, idx = 1)

NR1  = o.MVDegrain3(b1vec1,f1vec1,b2vec1,f2vec1,b3vec1,f3vec1,thSAD=300,idx=2)
NR1D = mt_makediff(o,NR1)
DD   = mt_lutxy(fftD,NR1D,"x 128 - abs y 128 - abs < x y ?")
NR1x = o.mt_makediff(DD,U=2,V=2)
NR2  = NR1x.MVDegrain3(b1vec1,f1vec1,b2vec1,f2vec1,b3vec1,f3vec1,thSAD=300,idx=3)

s    = NR2.minblur(1,1)
allD = mt_makediff(o,NR2)
ssD  = mt_makediff(s,s.removegrain(11,-1))
ssDD = ssD.repair(allD,1).mt_lutxy(ssD,"x 128 - abs y 128 - abs < x y ?")
NR2.mt_adddiff(ssDD,U=2,V=2)
return(last)

#------------------------------------------
# Taken from MCBob.avs:
function MinBlur(clip clp, int r, int "uv")
{
uv   = default(uv,3)
uv2  = (uv==2) ? 1 : uv
rg4  = (uv==3) ? 4  : -1
rg11 = (uv==3) ? 11 : -1
rg20 = (uv==3) ? 20 : -1
medf = (uv==3) ? 1  : -200
RG11D = (r==1) ? mt_makediff(clp,clp.removegrain(11,rg11),U=uv2,V=uv2)
\     : (r==2) ? mt_makediff(clp,clp.removegrain(11,rg11).removegrain(20,rg20),U=uv2,V=uv2)
\     :          mt_makediff(clp,clp.removegrain(11,rg11).removegrain(20,rg20).removegrain(20,rg20),U=uv2,V=uv2)
RG4D  = (r==1) ? mt_makediff(clp,clp.removegrain(4,rg4),U=uv2,V=uv2)
\     : (r==2) ? mt_makediff(clp,clp.medianblur(2,2*medf,2*medf),U=uv2,V=uv2)
\     :          mt_makediff(clp,clp.medianblur(3,3*medf,3*medf),U=uv2,V=uv2)
DD = mt_lutxy(RG11D,RG4D,"x 128 - y 128 - * 0 < 128 x 128 - abs y 128 - abs < x y ? ?",U=uv2,V=uv2)
clp.mt_makediff(DD,U=uv,V=uv)
return(last)
}

Note #2: The script uses MVDegrain3. If you don't have MVTools v1.9.0, you can either use MVDegrain2 instead (with slightly worse results), or make a donation to Fizick (recommended!) to get the goodies earlier.
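The mt_lutxy expressions above are written in reverse-Polish notation and are easy to misread. As a hedged illustration (plain Python modeling single pixel values, written by the editor; it is not AviSynth code and not from the plugins), here is what the two selection expressions compute. Difference clips store "no change" as 128, so both expressions pick the difference with the smaller deviation from neutral:

```python
def limit_diff(x: int, y: int) -> int:
    # "x 128 - abs y 128 - abs < x y ?":
    # keep whichever difference deviates less from neutral 128
    return x if abs(x - 128) < abs(y - 128) else y

def minblur_select(x: int, y: int) -> int:
    # "x 128 - y 128 - * 0 < 128 x 128 - abs y 128 - abs < x y ? ?"
    # (from MinBlur): if the two differences disagree in sign, output
    # neutral 128; otherwise keep the smaller-magnitude one
    if (x - 128) * (y - 128) < 0:
        return 128
    return x if abs(x - 128) < abs(y - 128) else y

print(limit_diff(150, 135))      # the milder diff (135) wins
print(minblur_select(150, 100))  # opposite signs -> neutral 128
```

In the main script, this per-pixel minimum is what keeps the MV-denoising stage from ever removing more than the FFT3DFilter reference would remove at the same spot.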
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) Last edited by Didée; 16th December 2007 at 03:03. |
15th December 2007, 07:39 | #62 | Link |
Registered User
Join Date: Jan 2007
Posts: 24
|
Didée, both download links for your clip are... mis-pasted, shall we say? The results look great, though, and the zoomed-in screen-captures definitely show that your encode removes grain at a lower cost to detail—especially facial detail and skin features.
|
15th December 2007, 07:41 | #63 | Link |
Registered User
Join Date: Jan 2007
Posts: 943
|
Didée, please, could you:
1) correct the links to your samples
2) post the final script you used for encoding

EDIT, added:
3) the x264 commandline you used

Thank you.

Last edited by jeffy; 15th December 2007 at 07:45. Reason: 3) |
15th December 2007, 15:51 | #64 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
The links are correct now. Sorry, it was in the middle of the night ...
Quote:
Code:
l = AviSource("processed_left_half.avi").crop(0,0,640,528)
r = AviSource("processed_right_half.avi").crop(64,0,640,528)
stackhorizontal( l, r )

The commandline was

Code:
--bframes 3 --b-pyramid --ref 4 --deblock -3:-3 --crf 18.75 --ipratio 1.25 --partitions all --direct temporal --weightb --me hex --subme 5 --mixed-refs --bime --8x8dct --no-fast-pskip --aq-strength 0.2 --aq-sensitivity 12.5 --deadzone-inter 5 --deadzone-intra 9 -o "X:\easy.mkv" "X:\glue.left.right.avs"
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
|
15th December 2007, 15:57 | #65 | Link | |
*Space Reserved*
Join Date: May 2006
Posts: 953
|
Use avinaptic to get the info.
Quote:
Settings that aren't, and shouldn't be, used for this type of processing are b-rdo, subme 7, and trellis 2, for one. Also, CPU usage sometimes plays a part in this as well, so mixed references are usually a no-no. If you'd like to see what the experts tend to lean towards, you can get an idea from this commandline:

--pass 2 --bitrate xxxx --keyint 250 --min-keyint 25 --ref 5 --no-fast-pskip --bframes 3 --b-pyramid --bime --weightb --filter -2,-2 --subme 6 --trellis 1 --partitions all --8x8dct --direct auto --ratetol 1.5 --me umh --threads auto --thread-input --cqmfile "Prestige CQM.cfg" --progress --no-psnr --no-ssim --output "C:\Output Video.mkv" "C:\Input Video.avs"

Of course, to find out what bitrate should be used, a CRF encode is usually done first, at a rate between 18.0 and 22.0 - mostly 20.0 gives a nice balance of grain/quality/CPU usage/encoding speed. Adaptive quantization most of the time isn't needed with the above.

Last edited by Terranigma; 15th December 2007 at 16:00. |
|
15th December 2007, 16:39 | #66 | Link | |
Registered User
Join Date: Jan 2007
Posts: 943
|
Quote:
Could you please post *cough* all the scripts you used until you reached these final encodes, in their final form? (I don't think I will find a better wording.) Last edited by jeffy; 15th December 2007 at 17:01. Reason: Quote code tag |
|
15th December 2007, 21:00 | #67 | Link |
Angel of Night
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,559
|
The interesting thing about yours, Didée, is that it seems to meld the grain into what looks like fine detail. It's probably not the detail that was on the film prints, but it's essentially the same fake detail that the eye creates out of the grain.
Frankly, I'd love to use whatever you did on some detail-free videos, by first overlaying them with layers of grain. I've been trying to do that for years, but I could never find the right add/remove cycle to keep it from dancing and swimming. |
16th December 2007, 00:47 | #68 | Link |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
Very nicely done, Didée. Your script looks like it retains much more detail than Zep's does; that said, I have to hand it to both of you. Both are an amazing improvement over the original - well done.
I'd really love to see the filter chain you used to degrain the movie like that, Didée! Also, it seems the reason yours ended up at a slightly higher bitrate could be the extra detail it retains. I noticed in the close-ups of faces that yours had more skin detail than Zep's. |
16th December 2007, 00:56 | #69 | Link |
*Space Reserved*
Join Date: May 2006
Posts: 953
|
Perhaps because zep's was oversharpened? Sure, it may look great while playing, but when zoomed, you can see the detail that was destroyed. It's never good to oversharpen; there's a detail analysis out there that talks about edge-enhancement; maybe you could find it.
|
16th December 2007, 04:41 | #70 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Quote:
@ Terranigma: Thanks for the tips. In fact it's not too surprising ... there are a few details which are hard to judge from the theoretical side, while practice may then differ even more. (And doing a lot of testing is no fun for me ... if you had my PC, you would understand why...) BTW, the latest plaything of Dark Shikari (new AQ) seems to be coming along very nicely. I like it a lot. Hope it's "generic" enough to become an official feature...

* * *

OK, I added the script to post #61. The only change is that I added a small check that ultimately should have been there, but wasn't ...

In my opinion, the script's processing is rather easy, technically. It's kind of slow because quite some MV stuff is used - but the mere processing chain is in no way fancy; it's pretty straightforward. (It's a stripped-down-to-bare-bones version of really fancy grain removal scripts...)

The script again, blown up with some comments and hook-in points for easier modification:

Code:
AVCSource("X:\original.dga",deblock=true)
o = last

fft = o.fft3dfilter(sigma=16,sigma2=10,sigma3=6,sigma4=4,bt=5,bw=16,bh=16,ow=8,oh=8)

# "srch" is a prefiltered clip on which the motion search is done.
# Here, we simply use FFT3DFilter. There are lots of other possibilities. Basically, you shouldn't use
# a clip with "a tiny bit of filtering". The search clip has to be CALM. Ideally, it should be "dead calm".
srch = fft

# "spat" is a prefiltered clip which is used to limit the effect of the 1st MV-denoise stage.
# For simplicity, we just use the same FFT3DFilter. There are lots of other possibilities.
spat  = fft
spatD = mt_makediff(o,spat)

# motion vector search (with very basic parameters - add your own parameters as needed)
b3vec1 = srch.MVAnalyse(isb = true,  delta = 3, pel = 2, overlap=4, sharp=2, idx = 1)
b2vec1 = srch.MVAnalyse(isb = true,  delta = 2, pel = 2, overlap=4, sharp=2, idx = 1)
b1vec1 = srch.MVAnalyse(isb = true,  delta = 1, pel = 2, overlap=4, sharp=2, idx = 1)
f1vec1 = srch.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=2, idx = 1)
f2vec1 = srch.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=2, idx = 1)
f3vec1 = srch.MVAnalyse(isb = false, delta = 3, pel = 2, overlap=4, sharp=2, idx = 1)

# 1st MV-denoising stage. Usually there's some temporal-median filtering going on here.
# For simplicity, we just use MVDegrain.
NR1  = o.MVDegrain3(b1vec1,f1vec1,b2vec1,f2vec1,b3vec1,f3vec1,thSAD=300,idx=2)
NR1D = mt_makediff(o,NR1)

# limit NR1 to not do more than what "spat" would do
DD   = mt_lutxy(spatD,NR1D,"x 128 - abs y 128 - abs < x y ?")
NR1x = o.mt_makediff(DD,U=2,V=2)

# 2nd MV-denoising stage. We use MVDegrain.
NR2 = NR1x.MVDegrain3(b1vec1,f1vec1,b2vec1,f2vec1,b3vec1,f3vec1,thSAD=300,idx=3)

# contra-sharpening: sharpen the denoised clip, but don't add more to any pixel than what was removed previously.
# (Here: a simple area-based version with relaxed restriction. The full version is more complicated.)
s    = NR2.minblur(1,1)                                      # damp down remaining spots of the denoised clip
allD = mt_makediff(o,NR2)                                    # the difference achieved by the denoising
ssD  = mt_makediff(s,s.removegrain(11,-1))                   # the difference of a simple kernel blur
ssDD = ssD.repair(allD,1)                                    # limit the difference to the max of what the denoising removed locally
ssDD = ssDD.mt_lutxy(ssD,"x 128 - abs y 128 - abs < x y ?")  # abs(diff) after limiting may not be bigger than before
NR2.mt_adddiff(ssDD,U=2,V=2)                                 # apply the limited difference (sharpening is just inverse blurring)
return(last)

In plain words:

1) Do a pre-filtering with FFT3DFilter. This one will be used for analysis and probability comparisons.
2) Search motion vectors on the (calm) pre-filtered clip.
3) Cut down the grain's amplitude:
- 3a) Do a motion-compensated denoising (optimally by temporal median filtering with temporal radius = 2, but I'm having serious issues with the [only] available filter)
- 3b) Limit the effect of 3a) by that of a spatial filter
4) Do a motion-compensated temporal averaging of the clip with the flattened grain.
5) Sharpen the denoised clip. This is done in such a way that the sharpener is not allowed to "add" more than what was previously "removed" by the denoising.

... and that's all, nothing spectacular.

The biggest difference from Zep's denoising is - most probably, since we haven't seen his script up to now - the usage of FFT3DFilter. It seems that he used too much of it, or at too strong settings. All the ringing and texture echoing is a typical side-effect of FFT3DFilter when not used with enough caution. Judging from the overall look, I'd guess that Zep first did an initial filtering with FFT3DFilter, and then continued to further process the result with MV-denoising. (Some places look like fft3d banding that has been temporally averaged, but it's hard to judge after x264 compression.) In comparison, my script never applies FFT3DFilter in the chain that produces the output clip. Instead, it is only used as a "brake" to keep the 1st temporal filter within reasonable bounds. (Which is essential in the bigger scripts, where the 1st stage is done with median filtering.)

@ foxyshadis

Yeah, the effect of producing "fixed grainage" ... it's interesting that this effect is almost impossible to avoid when strictly using only temporal averaging. Earlier I tried to get rid of it (it looks poor when the grainage is too strong), but that always comes at the cost of serious detail loss. I settled for keeping the effect low; then it looks quite convincing, mostly. Can't tell if you get the same effect with synthetically generated noise. I suspect the effect comes from the fact that compressed film grain has pretty much invariance in its variance (hope this makes sense), whereas the gaussian noise of the usual noise generators seems to be much more regular. So, it could be you'd need a way to make the added noise, well, more "dirty".
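The contra-sharpening in step 5 can be modeled in a few lines. The sketch below is the editor's own rough 1-D Python illustration of the idea, not the script's exact math (a simple magnitude clamp stands in for the removegrain/repair/mt_lutxy combination): sharpen the denoised signal by inverse blurring, but never add back more at a sample than the denoiser removed there.

```python
def contra_sharpen(orig, denoised):
    """Sharpen `denoised`, limiting each added value to what denoising removed."""
    def blur(sig):  # simple 1-2-1 kernel, standing in for removegrain(11)
        n = len(sig)
        return [(sig[max(i - 1, 0)] + 2 * sig[i] + sig[min(i + 1, n - 1)]) / 4.0
                for i in range(n)]
    removed = [o - d for o, d in zip(orig, denoised)]            # what denoising took away
    sharp_d = [d - b for d, b in zip(denoised, blur(denoised))]  # inverse-blur detail
    out = []
    for d, s, r in zip(denoised, sharp_d, removed):
        lim = min(abs(s), abs(r))                # clamp |added| to |removed|
        out.append(d + (lim if s >= 0 else -lim))
    return out

# A smoothed-out spike gets sharpened back toward the original,
# but is bounded by the amount the "denoiser" removed at each sample:
print(contra_sharpen([0, 0, 100, 0, 0], [0, 25, 50, 25, 0]))
# -> [0, 25.0, 62.5, 25.0, 0]
```

The key property, as in the AviSynth version, is that areas the denoiser left untouched receive no sharpening at all, so grain that was never removed cannot be amplified.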
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) Last edited by Didée; 16th December 2007 at 04:44. |
|
16th December 2007, 05:00 | #71 | Link |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
How do you convert the parameters from MVDegrain3 to MVDegrain2? I'm looking at the code and the documentation, but I'm having a very hard time converting it.
Edit: Figured it out. For anyone else confused, just replace the MVDegrain3 call with the following in both places it appears:

MVDegrain2(b1vec1,f1vec1,b2vec1,f2vec1,thSAD=300,idx=2)

Edit 2: Love how the filter works - I got a 43% reduction in bitrate when I used the filter chain. I also skewed the results slightly because I had Dark Shikari's new AQ running in both the degrained and straight-from-source encodes. Last edited by Sagekilla; 16th December 2007 at 05:41. |
16th December 2007, 17:32 | #72 | Link |
dumber every day
Join Date: Dec 2006
Location: Planet Earth
Posts: 154
|
Didée
As always, a very nice script. I really like the contra-sharpening section; I think I will put that to some use in my overly complicated script. Seeing the results of Degrain3 is also very nice. There were a couple of things I did not quite understand, so if you could be so kind as to help me:

1) On the NR1= line you use idx=2. Should that be idx=1, or does the call to MVDegrain3 need to point to a different cache than the idx=1 vectors?

2) On the NR2= line you reuse the original motion vectors in MVDegrain3, but on a different (denoised) frame. At what point do motion vectors become invalid for reuse? I was under the impression that once you changed the center frame, new wing vectors should be computed. I'm also confused by idx in this line, but the answer to 1 should help here.

Thanks!

EDIT: Ah, now I see what you did - that center frame is not changed but limited. I should have followed the NR1x creation a bit more closely before I typed in question number 2; now I see why those vectors can be used. Still confused by idx. Last edited by Spuds; 16th December 2007 at 17:53. |
16th December 2007, 17:36 | #73 | Link |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
Also, I'd love to see this more complicated version if you know how to/have it! As slow as the script runs with my insane x264 settings (0.8 fps - 20 hours for 60k frames!), I have no qualms about running a several-day-long encode to degrain 300 at a higher quality. Even a simple little suggestion on how to improve it more would be great. I wish I could test out MVDegrain3 to see how much better it works, but there's no way for me to make a donation to get it (seeing as I'm only 17 = no way for me to make that donation)
|
16th December 2007, 18:40 | #74 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Spuds: don't think too complicated!
Motion vectors never become "invalid" in that sense. With idx, it's always the same game:

vec = clip1.MVAnalyse( ..., idx = _idx1 )
foo = clip2.MVToolsFunction( vec, ..., idx = _idx2 )

As long as clip1 == clip2 (identical), you can (and should) use identical values for _idx1 and _idx2. As soon as clip1 != clip2 (not identical), you have to use different values for _idx1 and _idx2.

The vectors stored in 'vec' stay valid as long as clip1 and clip2 are of the same size and are in sync. (E.g. if one of the two clips got trim()'ed, then you need new vectors, obviously.)
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
16th December 2007, 20:27 | #75 | Link |
AviSynth plugger
Join Date: Nov 2003
Location: Russia
Posts: 2,183
|
Note in addition: if you use a single MVTools function with some input clip, idx may simply be omitted (it will be set to a unique number internally for every such call).
Didée, you must provide some name for your new script! LowFrequencyFlickerRemover (LFFR), or something like MiddleFreqDegrain? Sagekilla, be patient. |
16th December 2007, 21:26 | #76 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Some things get posted in the form of a Function(). Some things get posted in the form of a plain script.
Functions have names. Plain scripts are just nameless plain scripts. Usually, there is a reason for the decision whether to post something as a function or as a plain script. (If it absolutely must have a name, how about YAMCGC_EV - "Yet Another Motion-Compensated Grain Cruncher (Easy Version)" ...) :P
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
16th December 2007, 21:27 | #77 | Link | |
Registered User
Join Date: Jul 2002
Posts: 587
|
Quote:
|
|
16th December 2007, 21:31 | #78 | Link | |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
Quote:
|
|
16th December 2007, 21:33 | #79 | Link | |
Registered User
Join Date: Jul 2002
Posts: 587
|
Quote:
Also, I use pretty high x264 settings. Once I get the best script from this thread and all the great ideas from everyone, I plan to run an HQ-INSANE (every setting maxed out) MEGA final run, which should look awesome and only take about a week to encode. Going by what Didée said, I have done more tests today, and I need to be better able to selectively mask areas so that the cons he was talking about are not so bad. |
|
16th December 2007, 21:35 | #80 | Link | |
x264aholic
Join Date: Jul 2007
Location: New York
Posts: 1,752
|
Quote:
But, back on topic: was your script (generally speaking) using about the same methodology that Didée was using for degraining? |
|
|
|