Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
31st May 2008, 17:50 | #61 | Link |
Registered User
Join Date: Jan 2005
Location: cz
Posts: 704
|
An idea; maybe it's worth something, maybe not.
Notation: o = original lines, b = bobbed (computed) lines, t = original lines after temporal gauss, T = bobbed lines after temporal gauss, - = no lines.

orig:                 o-o  -o-
bobbed:               obo  bob
bobbed + tempgauss:   tTt  TtT
easy to do:           oTo  ToT
n = new way of using TG, to do:  ono  non

The usual way of using TempGaussMC for deinterlacing is to keep the original lines and use the TempGauss'ed lines for the deinterlaced part. But because TGMC changes the original lines too, joining o with bt improves sharpness at the cost of more flicker. So let the TG change the b lines more, and the o lines less. I've seen somewhere that for MC denoising a duplication of frames is used to increase their weight; it was something like ooobOOO, with o = (1st frame) and O = (2nd frame), so that b, the bobbed line, gets changed more than the originals o = O. |
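A numeric sketch of the duplication trick Terka describes (toy values of my own, not from the thread): stacking the lines as o,o,o,b,O,O,O before an equal-weight temporal average means the bobbed value b carries only 1/7 of the total weight, so smoothing changes b a lot while barely moving the originals.

```python
# Hypothetical line values; o and O are "original" lines, b is the bobbed one.
o, O = 10.0, 20.0
b = 40.0

# Duplication pattern from the post: the originals appear three times each.
stack = [o, o, o, b, O, O, O]
avg = sum(stack) / len(stack)   # what an equal-weight smoother would produce
print(avg)                      # (3*o + b + 3*O) / 7, i.e. b weighted 1/7
```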
1st June 2008, 04:42 | #62 | Link |
Registered User
Join Date: Jan 2004
Location: Here, there and everywhere
Posts: 1,197
|
@Terka, I can't say I fully understand what you are suggesting ... assuming it was directed at me.
Basically, this is what I was thinking of as a working basis for progressive sources (noting the disclaimer at the end of this post). Either this: Code:
function TempGaussMC_mod(clip clp, int "tr0", int "tr1", int "tr2", float "sharpness", bool "rep0", bool "rep1",
 \                       int "border", int "blocksize", int "overlap", string "EdiMode", int "draft")
{
tr0       = default( tr0, 2 )         # temporal radius for temporal Gauss before motion compensation (1 or 2)
tr1       = default( tr1, 2 )         # temporal radius for temporal Gauss with motion compensation (1 or 2)
tr2       = default( tr2, 1 )         # temporal radius for final MVDegrain (1, 2 or 3)
rep0      = default( rep0, true )     # repair temporalsoften defects for searchclip
rep1      = default( rep1, true )     # repair MVDegrain defects for output
border    = default( border, 1 )      # 1 = pad borders internally to catch "half scanlines" at top + bottom (broadcast material)
bs        = default( blocksize, 8 )   # blocksize for motion search
ovlp      = default( overlap, bs/2 )  # overlap size for ME blocks
sharpness = default( sharpness, 0.25+(tr1+tr2)/8. ) # "inloop" sharpening to counteract softening, 0.0 to 1.0, or more if you like
EdiMode   = default( EdiMode, "NNEDI" ) # interpolator to use: "NNEDI", "EEDI2"
draft     = default( draft, 0 )       # '1' outputs a quick draft, and '2' is even more draft'ier :p

trmax    = (tr1 > tr2) ? tr1 : tr2
nullclip = blankclip(clp,width=16,height=16)

clpo = (border==0) ? clp
 \   : clp.pointresize(clp.width(),clp.height()+8, 0,-4,-0,clp.height()+8.001 )
clps = clpo.AssumeBFF().SeparateFields()

eeddbl = mt_average(clps.SelectEven().EEDI2(field=0, maxd=4),
 \                  clps.SelectOdd().EEDI2(field=1, maxd=4), U=3,V=3 )
nnedbl = mt_average(clpo.nnedi(field=0),clpo.nnedi(field=1),U=3,V=3)
edidbl = (EdiMode=="NNEDI") ? nnedbl
 \     : eeddbl
dbdbl  = mt_average(clpo.Bob().SelectEven(),clpo.Bob().SelectOdd(),U=3,V=3)

t1 = dbdbl.temporalsoften(1,255,255,32,2)
t2 = dbdbl.temporalsoften(2,255,255,32,2)
t  = (tr0==0) ? dbdbl
 \ : (tr0==1) ? t1.merge(dbdbl,0.25)
 \ :            t1.merge(t2,0.357).merge(dbdbl,0.125)

tD  = mt_makediff(dbdbl,t,U=3,V=3)
tD1 = tD.mt_inpand(mode="vertical",U=3,V=3).mt_deflate(U=3,V=3).mt_expand(U=3,V=3)
tD2 = tD.mt_expand(mode="vertical",U=3,V=3).mt_inflate(U=3,V=3).mt_inpand(U=3,V=3)
tDD = tD.mt_lutxy(tD1,"x 129 < x y 128 < 128 y ? ?",U=3,V=3).mt_lutxy(tD2,"x 127 > x y 128 > 128 y ? ?",U=3,V=3)
t2  = (rep0==true) ? t.mt_adddiff(tDD,U=3,V=3)
 \  : t

searchclip = t2.removegrain(11).removegrain(11)
searchclip = (rep0==true) ? searchclip
 \         : searchclip.mt_lutxy(edidbl,"x 2 + y < x 2 + x 2 - y > x 2 - y ? ?",U=3,V=3)

#bs   = 16
#ovlp = 4
tm   = false
pel  = 2
shrp = 2
bvec3 = (trmax>=3) ? searchclip.MVAnalyse(isb=true, delta=3,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip
bvec2 = (trmax>=2) ? searchclip.MVAnalyse(isb=true, delta=2,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip
bvec1 =              searchclip.MVAnalyse(isb=true, delta=1,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6)
fvec1 =              searchclip.MVAnalyse(isb=false,delta=1,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6)
fvec2 = (trmax>=2) ? searchclip.MVAnalyse(isb=false,delta=2,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip
fvec3 = (trmax>=3) ? searchclip.MVAnalyse(isb=false,delta=3,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip

mvdg1 = edidbl.MVDegrain1(bvec1,fvec1,thSAD=800,idx=7)
mvdg2 = (tr1>1) ? edidbl.MVDegrain1(bvec2,fvec2,thSAD=800,idx=7) : nullclip
stage1 = (tr1==0) ? edidbl
 \     : (tr1==1) ? mvdg1.merge(edidbl,0.25)
 \     :            mvdg1.merge(mvdg2,0.2).merge(edidbl,0.0625)

stage1a = stage1.mt_lutxy(stage1.removegrain(11),"x x y - "+string(sharpness)+" * +",U=3,V=3)
stage1b = (sharpness==0.0) ? stage1 : stage1a.repair(stage1a.repair(edidbl,12),1)

stage2 = (tr2==0) ? stage1b
 \     : (tr2==1) ? stage1b.MVDegrain1(bvec1,fvec1,thSAD=400,idx=8)
 \     : (tr2==2) ? stage1b.MVDegrain2(bvec1,fvec1,bvec2,fvec2,thSAD=400,idx=8)
 \     :            stage1b.MVDegrain3(bvec1,fvec1,bvec2,fvec2,bvec3,fvec3,thSAD=400,idx=8)

tD_2  = mt_makediff(dbdbl,stage2,U=3,V=3)
tD1_2 = tD_2.mt_inpand(mode="vertical",U=3,V=3).mt_deflate(U=3,V=3).mt_expand(U=3,V=3)
tD2_2 = tD_2.mt_expand(mode="vertical",U=3,V=3).mt_inflate(U=3,V=3).mt_inpand(U=3,V=3)
tDD_2 = tD_2.mt_lutxy(tD1_2,"x 129 < x y 128 < 128 y ? ?",U=3,V=3).mt_lutxy(tD2_2,"x 127 > x y 128 > 128 y ? ?",U=3,V=3)

stage3 = (draft==2)   ? t .subtitle("Draft 2")
 \     : (draft==1)   ? t2.subtitle("Draft 1")
 \     : (rep1==true) ? stage2.mt_adddiff(tDD_2,U=3,V=3)
 \     : stage2

(border==0) ? stage3
 \          : stage3.crop(0,4,-0,-4)
return( last )
}
Code:
function TempGaussMC_mod2(clip clp, int "tr0", int "tr1", int "tr2", float "sharpness", bool "rep0", bool "rep1",
 \                        int "border", int "blocksize", int "overlap", string "EdiMode", int "draft")
{
tr0       = default( tr0, 2 )         # temporal radius for temporal Gauss before motion compensation (1 or 2)
tr1       = default( tr1, 2 )         # temporal radius for temporal Gauss with motion compensation (1 or 2)
tr2       = default( tr2, 1 )         # temporal radius for final MVDegrain (1, 2 or 3)
rep0      = default( rep0, true )     # repair temporalsoften defects for searchclip
rep1      = default( rep1, true )     # repair MVDegrain defects for output
border    = default( border, 1 )      # 1 = pad borders internally to catch "half scanlines" at top + bottom (broadcast material)
bs        = default( blocksize, 8 )   # blocksize for motion search
ovlp      = default( overlap, bs/2 )  # overlap size for ME blocks
sharpness = default( sharpness, 0.25+(tr1+tr2)/8. ) # "inloop" sharpening to counteract softening, 0.0 to 1.0, or more if you like
EdiMode   = default( EdiMode, "NNEDI" ) # interpolator to use: "NNEDI", "EEDI2"
draft     = default( draft, 0 )       # '1' outputs a quick draft, and '2' is even more draft'ier :p

trmax    = (tr1 > tr2) ? tr1 : tr2
nullclip = blankclip(clp,width=16,height=16)

clpo = (border==0) ? clp
 \   : clp.pointresize(clp.width(),clp.height()+8, 0,-4,-0,clp.height()+8.001 )
clps = clpo.AssumeBFF().SeparateFields()

eeddbl = mt_average(clps.SelectEven().EEDI2(field=0, maxd=4),
 \                  clps.SelectOdd().EEDI2(field=1, maxd=4), U=3,V=3 )
nnedbl = mt_average(clpo.nnedi(field=0),clpo.nnedi(field=1),U=3,V=3)
edidbl = (EdiMode=="NNEDI") ? nnedbl
 \     : eeddbl

t1 = clpo.temporalsoften(1,255,255,32,2)
t2 = clpo.temporalsoften(2,255,255,32,2)
t  = (tr0==0) ? clpo
 \ : (tr0==1) ? t1.merge(clpo,0.25)
 \ :            t1.merge(t2,0.357).merge(clpo,0.125)

tD  = mt_makediff(clpo,t,U=3,V=3)
tD1 = tD.mt_inpand(mode="vertical",U=3,V=3).mt_deflate(U=3,V=3).mt_expand(U=3,V=3)
tD2 = tD.mt_expand(mode="vertical",U=3,V=3).mt_inflate(U=3,V=3).mt_inpand(U=3,V=3)
tDD = tD.mt_lutxy(tD1,"x 129 < x y 128 < 128 y ? ?",U=3,V=3).mt_lutxy(tD2,"x 127 > x y 128 > 128 y ? ?",U=3,V=3)
t2  = (rep0==true) ? t.mt_adddiff(tDD,U=3,V=3)
 \  : t

searchclip = t2.removegrain(11).removegrain(11)
searchclip = (rep0==true) ? searchclip
 \         : searchclip.mt_lutxy(edidbl,"x 2 + y < x 2 + x 2 - y > x 2 - y ? ?",U=3,V=3)

#bs   = 16
#ovlp = 4
tm   = false
pel  = 2
shrp = 2
bvec3 = (trmax>=3) ? searchclip.MVAnalyse(isb=true, delta=3,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip
bvec2 = (trmax>=2) ? searchclip.MVAnalyse(isb=true, delta=2,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip
bvec1 =              searchclip.MVAnalyse(isb=true, delta=1,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6)
fvec1 =              searchclip.MVAnalyse(isb=false,delta=1,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6)
fvec2 = (trmax>=2) ? searchclip.MVAnalyse(isb=false,delta=2,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip
fvec3 = (trmax>=3) ? searchclip.MVAnalyse(isb=false,delta=3,truemotion=tm,pel=pel,sharp=shrp,blksize=bs,overlap=ovlp,idx=6) : nullclip

mvdg1 = edidbl.MVDegrain1(bvec1,fvec1,thSAD=800,idx=7)
mvdg2 = (tr1>1) ? edidbl.MVDegrain1(bvec2,fvec2,thSAD=800,idx=7) : nullclip
stage1 = (tr1==0) ? edidbl
 \     : (tr1==1) ? mvdg1.merge(edidbl,0.25)
 \     :            mvdg1.merge(mvdg2,0.2).merge(edidbl,0.0625)

stage1a = stage1.mt_lutxy(stage1.removegrain(11),"x x y - "+string(sharpness)+" * +",U=3,V=3)
stage1b = (sharpness==0.0) ? stage1 : stage1a.repair(stage1a.repair(edidbl,12),1)

stage2 = (tr2==0) ? stage1b
 \     : (tr2==1) ? stage1b.MVDegrain1(bvec1,fvec1,thSAD=400,idx=8)
 \     : (tr2==2) ? stage1b.MVDegrain2(bvec1,fvec1,bvec2,fvec2,thSAD=400,idx=8)
 \     :            stage1b.MVDegrain3(bvec1,fvec1,bvec2,fvec2,bvec3,fvec3,thSAD=400,idx=8)

tD_2  = mt_makediff(clpo,stage2,U=3,V=3)
tD1_2 = tD_2.mt_inpand(mode="vertical",U=3,V=3).mt_deflate(U=3,V=3).mt_expand(U=3,V=3)
tD2_2 = tD_2.mt_expand(mode="vertical",U=3,V=3).mt_inflate(U=3,V=3).mt_inpand(U=3,V=3)
tDD_2 = tD_2.mt_lutxy(tD1_2,"x 129 < x y 128 < 128 y ? ?",U=3,V=3).mt_lutxy(tD2_2,"x 127 > x y 128 > 128 y ? ?",U=3,V=3)

stage3 = (draft==2)   ? t .subtitle("Draft 2")
 \     : (draft==1)   ? t2.subtitle("Draft 1")
 \     : (rep1==true) ? stage2.mt_adddiff(tDD_2,U=3,V=3)
 \     : stage2

(border==0) ? stage3
 \          : stage3.crop(0,4,-0,-4)
return( last )
}
I can't see what parameters, other than maybe the temp-gauss weightings, might be open for further tweaking, and I didn't touch them. Any thoughts, Didée? (Hopefully other than "why are you still blindly screwing around with a script that wasn't created for this purpose".)

Writes hasty disclaimer: the above scripts are in no way intended to be taken as accepted modifications of the TempGaussMC function created by Didée for interlaced sources. They merely serve to illustrate possible approaches for applying a similar treatment to progressive/frame-based material that exhibits various forms of aliasing artifacts.
__________________
Nostalgia's not what it used to be Last edited by WorBry; 1st June 2008 at 20:57. |
5th June 2008, 01:54 | #63 | Link |
Registered User
Join Date: Jan 2004
Location: Here, there and everywhere
Posts: 1,197
|
Didee,
After playing around with the first of the two scripts I posted above ("TempGaussMC_Mod"), I discovered that setting Tr0=0 (i.e. disabling the pre-mocomp temp gauss) appeared to better preserve the definition and integrity of high-contrast line patterns (like the striped shirt in the 25p clip I provided), whilst still retaining a good anti-aliasing effect and minimal flickering/shimmering of objects that were fairly stable in the original clip; more or less the outcome I have been aiming for. I hadn't realized that Tr0=0 was possible until I followed the script's process path.

So, in this case the searchclip that derives the motion vectors for the mocomp gauss becomes entirely modulated by the kernel blur (the contra-sharpen bit), which I assume is entirely spatial; possibly the reason why the edges of the stripes still appeared a bit rough.

So, starting with Tr0=1, I tried decreasing the strength of the pre-mocomp gauss by incrementally increasing the weighting of the 'un-softened' dbdbl from the default 0.25 through to 0.95. To my eyes, the optimum seems to be around 0.55 - 0.60. Changing the EEDI2 maxd value from 4 to 6 (originally 8) also seemed optimal.

Question is: a) How does one derive the weightings for the tr0=2 gauss? Simple extrapolation would give: Code:
t1 = dbdbl.temporalsoften(1,255,255,32,2)
t2 = dbdbl.temporalsoften(2,255,255,32,2)
t  = (tr0==0) ? dbdbl
 \ : (tr0==1) ? t1.merge(dbdbl,0.6)
 \ :            t1.merge(t2,0.8568).merge(dbdbl,0.3)

b) Is it valid to make such changes without impacting the post-mocomp gauss weightings, which seem OK as they are?

Thanks a lot. BTW, how's the new update coming on? I'm keen to start using TempGaussMC on my interlaced DV stuff, but will hold off if there are going to be further improvements.
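For what it's worth, the gaussian origin of the default tr0=2 weights can be checked numerically (my own sketch in plain Python, not part of the AviSynth script): modelling temporalsoften as an equal-weight frame average, the chain t1.merge(t2,0.357).merge(dbdbl,0.125) lands almost exactly on the binomial weights [1,4,6,4,1]/16, which is what "gaussian" means here. The same algebra could be reused to derive weightings for other strengths.

```python
# Per-frame weight kernels across 5 frames (centre = current frame).
t1   = [0.0, 1/3, 1/3, 1/3, 0.0]   # temporalsoften(1,...): 3 frames, equal weight
t2   = [0.2, 0.2, 0.2, 0.2, 0.2]   # temporalsoften(2,...): 5 frames, equal weight
orig = [0.0, 0.0, 1.0, 0.0, 0.0]   # the unsoftened dbdbl, centre frame only

# merge(a, b, w) = (1-w)*a + w*b, applied element-wise to the kernels
step   = [(1 - 0.357) * a + 0.357 * b for a, b in zip(t1, t2)]
result = [(1 - 0.125) * a + 0.125 * b for a, b in zip(step, orig)]
print([round(w, 4) for w in result])   # → [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

That is binomial [1,4,6,4,1]/16 to within rounding, so 0.357 and 0.125 are not arbitrary.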
__________________
Nostalgia's not what it used to be Last edited by WorBry; 8th June 2008 at 00:44. |
8th June 2008, 02:19 | #64 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
@ Terka:
Well, where to start ... [deep breath] ...

What you bring up there is by all means valid. But do you realize that, in the last instance, the result is something totally different? You can't just hop in and "change the o's less, and give more weight to the o's when doing the averaging for the b's". Consequently, one would have to leave the o's perfectly untouched, and give ALL the weight to the o's when doing the averaging for the b's. But then this is nothing else but putting the compensations of the neighbour frames into the "b" lines of the current frame, and that's basically just what MVBob or MCBob are already doing.

The "strength" of TempGaussMC comes from the fact that changing the original data is allowed. This also allows it to handle the problem of "the data missing in this frame is also missing in the neighbour frames" (vertical motion by an odd number of pixels). For a lossless MV-bobber this case is problematic, whereas TempGaussMC inherently uses a "prophet-and-mountain" way to handle it: if the prophet won't come to the mountain, then the mountain has to come to the prophet.

Also, when using the strict way of giving 100% weight to the compensations, new problems arise on the motion-compensation side: you then need to be *very* careful in checking whether the compensation is valid or erroneous, and ideally you should also check whether the compensation for the current "b" line actually delivers an "o" from the neighbour, or just the neighbour's "b" in turn. In contrast, the way TempGaussMC works is much more fuzzy; that's why it can get away with much less checking of all those things.

And that's the bottom line: TempGaussMC works surprisingly well with a rather simplistic approach, and it should be left that way. Trying to make any improvements to the basic methodology will only result in something that we already have, in the shape of MV/MC-Bob.

@ WorBry: Regarding the weightings, I'm not quite sure ... 
perhaps the easiest way is to just use the "repeating" way (which is a valid way to get gaussian weightings for arbitrary radii), just with your preferred weighting instead: Code:
t1 = dbdbl.temporalsoften(1,255,255,32,2).merge(dbdbl,0.6)  # for gauss, this weight is 0.25
t2 = t1   .temporalsoften(1,255,255,32,2).merge(t1,   0.6)  # for gauss, this weight ALSO is 0.25
t  = (tr0==0) ? dbdbl
 \ : (tr0==1) ? t1
 \ :            t2

Still, for your case of AA'ing progressive input this is a trial-and-error thing without obvious justification. For the case of bobbing, the temp-gauss'ing serves a very specific and necessary purpose: to average away the bob jumping. For your case, the effect (if any) is more or less ~random~. What has to be avoided in your case is that the motion search between frames snaps in to the aliasing. And in the case of zero motion, the effect of the initial temporal averaging is exactly zero. When there is motion, it may or may not help; IMHO that's unpredictable. My guess is that with exhaustive investigation, you might find that a weight of X is ideal for this spot, weight Y is ideal for that spot, weight Z is ideal for other spots ...

Lately I started looking at something else: let EDI do a self-validation by using a "back-interpolation" instance, in order to judge whether a given EDI interpolation at a given place seems reasonable or not. Method: construct a frame consisting ONLY of EDI interpolation. From this constructed frame, again construct a frame consisting ONLY of EDI interpolation. In places where EDI does "good" interpolation, the 2nd interpolation should be reasonably close to the original frame. In places where EDI does "bad" interpolation, the 2nd interpolation can be expected to be fairly different from the original frame. Quickly knocking together the first part (full interpolation & back-interpolation) and looking at the resulting differences, it seems pretty reasonable, but I've yet to see if one can get a tight enough grip on the problematic areas.
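The back-interpolation idea can be sketched in one dimension (a toy of my own: a simple neighbour-average interpolator stands in for NNEDI/EEDI2, and 1D samples stand in for scanlines). Interpolate everything, interpolate the result again, and compare to the original: smooth areas round-trip exactly, while fine detail the interpolator cannot reproduce shows a large difference.

```python
def interp_only(x):
    # Replace every interior sample by its neighbour average, i.e. a
    # "frame consisting ONLY of interpolation"; endpoints are kept as-is.
    return [x[0]] + [(x[i - 1] + x[i + 1]) / 2 for i in range(1, len(x) - 1)] + [x[-1]]

def validation_error(x):
    # Interpolate twice, then measure how far we drifted from the original.
    twice = interp_only(interp_only(x))
    return [abs(a - b) for a, b in zip(x, twice)]

ramp  = [float(i) for i in range(9)]       # smooth gradient: interpolates perfectly
spike = [0.0] * 4 + [8.0] + [0.0] * 4      # fine detail the interpolator misses

print(max(validation_error(ramp)))   # → 0.0  (interpolation judged "good" here)
print(validation_error(spike)[4])    # → 4.0  (large error flags the bad spot)
```

Of course the real scheme interpolates across scanlines with an edge-directed interpolator, but the validation logic is the same shape.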
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
8th June 2008, 04:06 | #65 | Link | |
Registered User
Join Date: Jan 2004
Location: Here, there and everywhere
Posts: 1,197
|
Quote:
This inconsistent and seemingly erratic pattern of aliasing in frame-mode DV material has been noted by others, and some have suggested that it stems from the offset timing of the green CCD in the pixel shift, resulting in some color/hue edges being more affected than others; although how they know this for certain is unclear, given that the process involved in constructing the 'pseudo-progressive' frame has never been fully disclosed (at least for the GS400). A similar explanation has also been offered for a slight loss in chroma resolution observed in frame mode compared to normal interlaced.

I was thinking of playing around with chroma thresholds to see if I could get a better result, but even if this 'green CCD timing' is the source, given the complexities involved in transforming an analogue RGB footprint into a digital YUV image, not to mention all of the other electronic jiggery-pokery going on inside the cam (internal sharpening for one), it seems a bit pointless.

So, thanks for the advice on the gauss weightings; I'll persist a bit more with my tests, more to satisfy my curiosity, but I agree that the best outcome will probably be a subjective compromise. Probably also what the Panasonic engineers had in mind when they developed the technology, maximum vertical luma resolution and film-like motion cadence being the highest priorities.

This EDI self-validation concept sounds interesting. Does this mean that you'll likely be holding off on the pending TempGaussMC update that you indicated earlier?

Just going back to the 'super-resolution' temporal filter that you alluded to (not sure if the smiley meant 'cool' or 'yeah, right'): I see Scharfis_Brain experimented with 'super-resolution' for image enhancement some time back, and am curious as to whether he or anyone else has developed this further. As I recall there is a commercial package out there (Video Enhancer) that claims to use super-resolution. I looked at it briefly for up-scaling but was not particularly impressed.
__________________
Nostalgia's not what it used to be |
9th June 2008, 01:34 | #67 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Quote:
Temporal super-resolution is an over-hyped concept. Basically it works, but it only works well on footage with one very particular characteristic, namely "decimated" resolution. On "blurred" resolution it does not work. The latter case is the usual one; the former is hard to find at all, except for one special case: interlacing. And truly, filters like MVBob, MCBob and, in a fuzzy way, TempGaussMC are in fact REAL temporal super-resolution filters.

Put shortly: with "decimated" resolution there is information missing in a frame, like the missing scanlines in interlaced footage, or due to point-sampling in the initial frame-scanning process. In this case it is ~relatively~ easy to pull the missing information from the neighbour frames and put it into the current frame.

In the case of "blurred" resolution, like e.g. a DVD source or such, there is no information "missing". The original master had a bigger resolution, and by downscaling to the final production resolution you get "blurring", because several pixels of the bigger master are sampled together to get one pixel of the smaller output. In this case it's basically so that the same blurred information is present in consecutive frames, and because of that there is no "new" or "additional" information in the neighbours that you could put into the current frame. NB, the presence of compression artefacts (even very small ones) doesn't exactly help the matter.

And that's why I had put the cool-smiley in that place: just because of "what's in a name". Something might basically be a TSR operation, but that doesn't mean it's worthwhile to name it so. But for reasons of PR and possibly making money, it's clever to have an impressive name for the child ... If you tried to sell a primitive sharpen filter in the name of "a 3x3 pixel matrix filter", then people are bored. But if you sell it as "Deconvolution filter, man!", then it goes "Aah!" and "Ooh!" ...
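The decimated-vs-blurred distinction is easy to see on a toy 1D "scanline" (my own illustration with made-up numbers, nothing from the actual filters): two point-sampled frames of the same detail, offset by one sample of motion, still hold ALL the original samples between them, so interleaving recovers the source exactly; two blurred frames of the same detail are each the same low-pass, so the neighbour adds nothing new.

```python
hires = [0, 9, 1, 8, 2, 7, 3, 6]   # detailed original signal

# Decimated case: each "frame" keeps every 2nd sample, offset by the motion.
dec_a, dec_b = hires[0::2], hires[1::2]
recovered = [s for pair in zip(dec_a, dec_b) for s in pair]
print(recovered == hires)          # → True: the detail is fully recoverable

# Blurred case: each output sample averages two source samples.
blur_a = [(hires[i] + hires[i + 1]) / 2 for i in range(0, 8, 2)]
blur_b = [(hires[i] + hires[i + 1]) / 2 for i in range(1, 7, 2)]
print(blur_a, blur_b)              # the alternating detail is gone in BOTH frames
```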
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
13th June 2008, 21:47 | #69 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
I'm fiddling. The new stuff grew more than I had thought, and now there's need to check lots of things.
A small appetizer of what things will look like. (Interlaced source not included, construct with "source50p.SeparateFields().SelectEvery(4,0,3).Weave()" )
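For anyone puzzling over the construction line: separating the fields of a 50p clip gives the field stream t0,b0,t1,b1,...; SelectEvery(4,0,3) keeps items 0 and 3 of every group of 4, i.e. the top field of one frame and the bottom field of the next, which weaves into a proper 25i interlace of the 50p source. A quick index sketch (plain Python, not AviSynth itself):

```python
# tN/bN = top/bottom field of 50p frame N, in SeparateFields() order.
fields = ["t0", "b0", "t1", "b1", "t2", "b2", "t3", "b3"]

# SelectEvery(4, 0, 3): keep positions 0 and 3 of each group of four.
kept = [f for i, f in enumerate(fields) if i % 4 in (0, 3)]
print(kept)   # → ['t0', 'b1', 't2', 'b3']
```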
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
18th June 2008, 21:31 | #73 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Almost there, it's almost there. It's a pleasure to see what it pulls out of some difficult content. Especially with really sharp & detailed sources, the differences are astounding.
Wanna see some filters trying their luck at Stockholm? Amazing! (Sorry, forgot to reset the framerate from RawSource's default of 25fps to the real 59.94fps ... but hey, the slo-mo is good to make the bobbing issues even more obvious) Just a bit more twiddling with a few smaller things, and TempGauss should go beta.
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
18th June 2008, 23:34 | #74 | Link |
Registered User
Join Date: Jan 2004
Location: Here, there and everywhere
Posts: 1,197
|
Lookin’ good; hey whilst you’re still fiddling with the new version, why not also give it a shot with the ‘CrowdRun’ HD sequence from:
ftp://vqeg.its.bldrdoc.gov/HDTV/SVT_MultiFormat/

Plenty of moving detail there, plus some conspicuous bob-shimmer (e.g. the tree tops). I've put up matched (1-second) 1080/50i, 1080/50p and 720/50p HuffYUV-YV12 samples:

http://rapidshare.com/files/12322283..._YV12.avi.html (Note: Top Field First)
http://rapidshare.com/files/12322665..._YV12.avi.html
http://rapidshare.com/files/12323093...p_YV12.avi.htm

Intrigued by the 'new question' posed in Terka's last post, I was in the middle of doing a little comparison (purely subjective) of bob-deinterlacing 1080/50i (TempGauss_alpha3, MCBob, YadifMod-NNEDI) versus upsizing 720/50p (NNEDIResize_YV12/LimitedSharpen) to 1080/50p, but if you're nearly there with alpha4 or a beta, it seems a bit redundant now.

Suffice it to say that encoding time for TempGaussMC_alpha3 at defaults (Tr0=2, Tr1=2, Tr2=1, edimode="NNEDI") tipped one hour (for 50 output frames) on my AMD XP2800+ with 1GB DDR RAM; that would be more than 60 hours to process 1 minute of 1080/50i footage. Imagine if the source was AVCHD. For HD sources at least, the faster Yadif edi mode (with say Tr0=1, Tr1=1, Tr2=1) would seem a more practical proposition (encoding time 9 min 51 sec). MCBob 0.3u ran at 28 min 47 sec and YadifMod-NNEDI at 3 min 2 sec (... plus the bob-shimmer).

On the upscaling side, NNEDIResize_YV12 plus a simple LanczosResize took 7 min 42 sec; not bad quality, but more blurred than the bob-deinterlaced outputs. Hadn't got around to LimitedSharpen. (Edit: NNEDIResize_YV12 plus LimitedSharpen (dest_x=1920, dest_y=1080, Smode=1, strength=40) took 8 min 43 sec.)

Edit: Oops, noticed that I'd labelled the uploaded samples ParkRun instead of CrowdRun. Ah well, what's in a name. 
Edit2: If anyone's interested, here are sample outputs (first 5 frames) from the comparative tests:

http://rapidshare.com/files/12369150...tests.zip.html
http://rapidshare.com/files/12362834...tests.zip.html

Since TempGaussMC incorporates MVDegrain (limited to Tr2=1 in these tests), I also tested the other methods with and without MVDegrain1. Maybe folks more experienced with LimitedSharpen, or maybe SeeSaw, could improve on the up-scaled outputs.

Edit3: Here also is a small (file size) direct (cropped) comparison of:

Reference 1080 50p
TempGaussMC_alpha3 (221 NNEDI)
MCBob 0.3u
NNEDIResize_YV12 + LimitedSharpen

http://rapidshare.com/files/12370236...izeLS.avi.html

Perhaps not the best example to demonstrate shimmer/flicker suppression, but it highlights the relative preservation of definition.
__________________
Nostalgia's not what it used to be Last edited by WorBry; 20th June 2008 at 04:52. |
19th June 2008, 11:15 | #75 | Link | ||
Registered User
Join Date: Dec 2002
Location: UK
Posts: 1,673
|
Quote:
Quote:
Cheers, David. |
20th June 2008, 22:49 | #76 | Link |
Registered User
Join Date: Sep 2002
Location: Germany
Posts: 352
|
@Didée,
your sample is absolutely stunning! Really looking forward to getting my hands on the beta! However, besides removing nearly all the flicker and showing much more detail than the other contenders, there are some minor areas with errors. Here are 2 frames for illustration: I hope this will help you to eliminate them. Greetings, Malcolm |
24th June 2008, 10:27 | #77 | Link |
LaTo INV.
Join Date: Jun 2007
Location: France
Posts: 701
|
@Didée
TempGaussMC is amazing! Thanks a lot for this stuff. But I have noticed one problem: TTMC produces some strange lines... It looks like residual combing, but it isn't, because it's both horizontal and... vertical. Screenshots of my problem: http://latoninf.free.fr/lines/ Maybe you could fix this, I hope... |
24th June 2008, 11:45 | #78 | Link |
Registered User
Join Date: Dec 2002
Location: UK
Posts: 1,673
|
Ouch.
There's no source clip to be sure, but I think this is the inherent problem with deinterlacing: very fine vertical detail can be very fine vertical detail, or it can be fast movement (including a flash of light!). The two cannot be disambiguated. MCBob errs on one side, TTMC on the other: MCBob messes up very fine detail, TTMC messes up very fast changes. Still, magician Didée to the rescue! Maybe there's a threshold to tweak or a blur weighting to adjust somewhere... (A source clip might help.) Cheers, David. |
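The ambiguity described above can be shown in miniature (my own toy, lists of pixel rows standing in for frames): a static image with 1-pixel stripes and a flat image flashing between frames produce IDENTICAL interlaced fields, so no deinterlacer can tell them apart from the fields alone.

```python
def interlace(frame0, frame1):
    # top field = even rows of frame 0, bottom field = odd rows of frame 1
    return (frame0[0::2], frame1[1::2])

stripes = [255, 0, 255, 0]        # fine vertical detail, static across frames
flash0  = [255, 255, 255, 255]    # flat frame, bright
flash1  = [0, 0, 0, 0]            # flat frame, dark (a "flash" in between)

print(interlace(stripes, stripes))   # → ([255, 255], [0, 0])
print(interlace(flash0, flash1))     # → ([255, 255], [0, 0])  -- identical!
```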
24th June 2008, 12:37 | #79 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
|
Those effects are known, and mostly resolved.
Malcolm: The detail loss is because too much structure may get lost during the 1st (not MC'ed) gaussian stage. (BTW, it wasn't only the pole ... much of those little waves on the water was smeared away, too.) => Solved, by giving a bit more bias towards the original.

LaTo/2BDecided: What you're seeing there is MVTools when it actually misses scenechanges (light flashes can be considered as such, in this context). => Probably solved, by lowering SAD thresholds in some MVTools filters. I hope it'll work out reliably enough ... generally, SAD is only a rough (and relatively poor) measurement: in high-detail/high-contrast areas even related blocks can have a rather big SAD, whereas in low-detail/low-contrast areas even unrelated blocks may have a rather low SAD. Well, what to do. SAD really is not a versatile measure, yet in MVTools it is *THE* decisive element. TempGaussMC actually has to play one or two severe tricks to sail around the darn SAD cliffs...
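The point about SAD being a poor measure is easy to demonstrate on two toy 1D "blocks" (my own illustration with made-up pixel values, not MVTools code): a correct match on a high-contrast edge scores a huge SAD, while two unrelated flat blocks score a tiny one.

```python
def sad(a, b):
    # Sum of Absolute Differences between two blocks of pixel values.
    return sum(abs(x - y) for x, y in zip(a, b))

# High-contrast edge: genuinely the same content, shifted by one pixel.
edge    = [0, 0, 255, 255, 0, 0, 255, 255]
edge_mc = [0, 255, 255, 0, 0, 255, 255, 0]

# Flat areas: unrelated blocks that merely have similar brightness.
flat_a = [60] * 8
flat_b = [63] * 8

print(sad(edge, edge_mc))   # → 1020: large, although the match is correct
print(sad(flat_a, flat_b))  # → 24:   small, although the blocks are unrelated
```

Hence a single SAD threshold can simultaneously reject good matches in detailed areas and accept bad ones in flat areas.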
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) Last edited by Didée; 24th June 2008 at 15:53. |
24th June 2008, 13:57 | #80 | Link | |
LaTo INV.
Join Date: Jun 2007
Location: France
Posts: 701
|
Quote:
With TempGaussMC_alpha3M(scd1=100) the lines go away. Last edited by LaTo; 16th November 2008 at 16:20. |
Tags |
deinterlace, flickering |