Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
29th July 2006, 05:28 | #1 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
NTSC tools: auto NTSC to PAL with 24p, 30p, 60i detection (v 0.93)
okay, i re-read this thread:
http://forum.doom9.org/showthread.php?t=67161 a few days ago after being linked from the recent monty-python thread. i remembered trying it out and seeing 30p pans get decimated as film, so i came up with this: Code:
####
##
## NTSC tools 0.93, by Sal, aka Mug
##
## takes an NTSC clip, finds Film, 30p and 60i parts and gives them special treatment
## use it to convert to PAL with 4% speed-up (ie 23.976 to 25 fps)
##
####

function AutoPAL (clip c, bool "speedup", float "th_film", int "th_film2", int "th_prog", int "th_bob", float "tol", bool "show", int "mode", string "filter", bool "blend", int "deblocker", int "IVTCtype", int "precision")
{
speedup=default(speedup,false)
blend=default(blend,true)
th_film=default(th_film,.4)#.15)
th_film2=default(th_film2,5)
th_prog=default(th_prog,4)
th_bob=default(th_bob,8)
tol=default(tol,1.5)
show=default(show,false)
mode=default(mode,1)
filter=default(filter,"last")
deblocker=default(deblocker,0)
IVTCtype = default(IVTCtype,4) # 1 = telecide, 2 = tfm, 3 = tfm(pp=0), 4 = tdeint(mode=2), 5 = blindmatch ...formerly 2
precision = default(precision,1)

data=NTSCanalyse(c,th_film=th_film,th_prog=th_prog,th_film2=th_film2,tol=tol,precision=precision)
NTSCconvert(c,data,th_bob=th_bob,speedup=speedup,show=show,mode=mode,filter=filter,blend=blend,deblocker=deblocker, IVTCtype=IVTCtype)

c.height==576? deblocker==0? c : c.deblocker(quant=deblocker,ipp=true) : last
}

function NTSCanalyse (clip c, float "th_film", int "th_film2", int "th_prog", float "tol", int "precision", bool "show")
{
# output clip is teeny coloured clip in same space as input
# red = FILM
# green = progressive
# blue = interlaced

show=default(show,false)
order = c.getparity()==true? 1 : 0
global isfade=true
global filmnum=-4
global progtele=0
global th_film=default(th_film,.3)
global th_film2=default(th_film2,10)
#global th_prog=default(th_prog,2)
global th_prog=default(th_prog,30)
global tol=default(tol,.1)
global precision=default(precision,1)

#d = c.horizontalreduceby2().bob(1,0,height=c.height/2).converttoyv12()
d = c.bicubicresize(32*precision,c.height,1/3.,1/3.,16,0,c.width-32,c.height).separatefields().bicubicresize(32*precision,32*precision,1/3.,1/3.,0,8,32*precision,(c.height/2.)-16).converttoyv12()

global film_c = d
global film_d = d.selectevery(2,0)

global f01 = mt_lutxy(film_c.selecteven(),film_c.selectodd(),expr="x y - abs")#.mt_inpand(mode=mt_rectangle(0,2))
global f02 = mt_lutxy(film_c.selectevery(2,0), film_c.selectevery(2,2),expr="x y - abs")
global f13 = mt_lutxy(film_c.selectevery(2,1), film_c.selectevery(2,3),expr="x y - abs")
global b1f1 = mt_lutxy(film_c.selectevery(2,-1), film_c.selectevery(2,1),expr="x y - abs")
global b02 = mt_lutxy(film_c.selectevery(2,0), film_c.selectevery(2,-2),expr="x y - abs")
global f24 = mt_lutxy(film_c.selectevery(2,2), film_c.selectevery(2,4),expr="x y - abs")
global b13 = mt_lutxy(film_c.selectevery(2,-1), film_c.selectevery(2,-3),expr="x y - abs")
global b24 = mt_lutxy(film_c.selectevery(2,-2), film_c.selectevery(2,-4),expr="x y - abs")
global f35 = mt_lutxy(film_c.selectevery(2,3), film_c.selectevery(2,5),expr="x y - abs")
global b35 = mt_lutxy(film_c.selectevery(2,-3), film_c.selectevery(2,-5),expr="x y - abs")

global NTSCanalyse_red = blankclip(film_d,width=8,height=4,color=$ff0000)
global NTSCanalyse_green = blankclip(film_d,width=8,height=4,color=$00ff00)
global NTSCanalyse_blue = blankclip(film_d,width=8,height=4,color=$0000ff)

film_d1 = scriptclip(NTSCanalyse_blue,"isfilm==1? NTSCanalyse_red : isprog==1? NTSCanalyse_green : NTSCanalyse_blue")
#film_d2 = frameevaluate(film_d1,"isprog= f01.yplanemax(tol) <= th_prog ? 1 : 0")
film_d2 = frameevaluate(film_d1,"isprog= f01.yplanemax() <= th_prog ? 1 : 0")
film_d3 = frameevaluate(film_d2,""" \
isfilm= filmnum!=current_frame? 0 : (b1f1.yplanemax() < th_film2) ? 1 : \
(f02.yplanemax() < th_film2) && (b35.yplanemax() < th_film2) ? 1 : \
(b02.yplanemax() < th_film2) && (f35.yplanemax() < th_film2) ? 1 : \
(b13.yplanemax() < th_film2) && (f24.yplanemax() < th_film2) ? 1 : \
(f13.yplanemax() < th_film2) && (b24.yplanemax() < th_film2) ? 1 : \
(f24.yplanemax() < th_film2) && (b13.yplanemax() < th_film2) ? 1 : \
(b24.yplanemax() < th_film2) && (f13.yplanemax() < th_film2) ? 1 : 0 """)
film_d4 = frameevaluate(film_d3,""" \
filmnum= (b1f1.averageluma() < th_film) ? current_frame : \
(f02.averageluma() < th_film) && (b35.averageluma() < th_film) ? current_frame : \
(b02.averageluma() < th_film) && (f35.averageluma() < th_film) ? current_frame : \
(b13.averageluma() < th_film) && (f24.averageluma() < th_film) ? current_frame : \
(f13.averageluma() < th_film) && (b24.averageluma() < th_film) ? current_frame : \
(f24.averageluma() < th_film) && (b13.averageluma() < th_film) ? current_frame : \
(b24.averageluma() < th_film) && (f13.averageluma() < th_film) ? current_frame : filmnum """)

show==true? overlay(c,film_d4) : film_d4
}

function NTSCconvert (clip c, clip data, bool "speedup", int "th_bob", int "mode", bool "show", string "filter", bool "blend", int "deblocker",int "IVTCtype")
{
# mode 1 = blend non-film frames to 50i
# mode 2 = try to motion-compensate non-film frames to 25p or 50i

speedup=default(speedup,true)
th_bob=default(th_bob,2)
blend=default(blend,true)
mode=default(mode,1)
show=default(show,false)
filter = default(filter,"last")
deblocker = default(deblocker,0)
IVTCtype = default(IVTCtype,4) # 1 = telecide, 2 = tfm, 3 = tfm(pp=0), 4 = tdeint(mode=2)

outnum = speedup==true? 48000 : 50000
outden = speedup==true? 1001 : 1000
order = c.getparity()==true? 1 : 0

c = deblocker==0? c : c.deblocker(quant=deblocker,ipp=true)
d = c

#global bobbed = d.leakkernelbob(order=order,threshold=th_bob,sharp=true,twoway=true).assumeframebased()
global bobbed = d.tdeint(1,order,expand=4,mthreshl=th_bob)

NTSCconvert_60i = mode==1? bobbed.blendfps(outnum/float(outden),1.25).changefps(outnum,outden).resizetopal(false,false) : bobbed.salfps(outnum/float(outden),protection2=60).changefps(outnum,outden).resizetopal(false,false)
NTSCconvert_30p = mode==1? d.blendfps(outnum/float(outden),1.25).changefps(outnum/2,outden).resizetopal(false,false) : d.salfps(outnum/float(2*outden),protection2=50).resizetopal(false,false).changefps(outnum/2,outden)

IVTCtype==1? d.telecide(order=order,guide=1).decimate().resizetopal(false,false) :\
IVTCtype==2? d.tfm(order=order,pp=6,chroma=true).decimate().resizetopal(false,false) :\
IVTCtype==3? d.tfm(order=order,pp=0,chroma=false).decimate().resizetopal(false,false) :\
d.tdeint(2,-1,expand=4,tryweave=true,cthresh=3,mi=24,mthreshl=6).decimate().resizetopal(false,false)
eval(filter)
NTSCconvert_24pfilter = last

global NTSCconvert_24p = speedup==true? \
NTSCconvert_24pfilter.changefps(outnum/2,outden) : \
(blend==true)? NTSCconvert_24pfilter.blendfps(50,1.25).changefps(outnum,outden).interlace(order=1): \
NTSCconvert_24pfilter.changefps(outnum,outden).interlace(order=1)

NTSCconvert_60i
eval(filter)
interlace(order=1)
global NTSCconvert_60i = last

#NTSCconvert_30p
#eval(filter)
#mode==1? last : last.interlace(order=1)
#global NTSCconvert_30p = last

#NTSCconvert_24p
#eval(filter)
#
#global NTSCconvert_24p = last

global NTSCconvert_data = data.assumetff().medianblurt(0,0,2).blendfps(outnum/float(outden*2)).changefps(outnum/2,outden)
#global NTSCconvert_data = data.changefps(outnum/2,outden)
global NTSCconvert_red = blankclip(NTSCconvert_data,color=$ff0000)
global NTSCconvert_green = blankclip(NTSCconvert_data,color=$00ff00)
global NTSCconvert_blue = blankclip(NTSCconvert_data,color=$0000ff)
global NTSCconvert_isred = mt_lutxy(NTSCconvert_red,NTSCconvert_data,yexpr="x y - abs",y=3,u=1,v=1)
global NTSCconvert_isgreen = mt_lutxy(NTSCconvert_green,NTSCconvert_data,yexpr="x y - abs",y=3,u=1,v=1)

scriptclip(NTSCconvert_24p,"yplanemax(NTSCconvert_isred)==0 ? NTSCconvert_24p : NTSCconvert_60i")
#scriptclip(NTSCconvert_24p,"yplanemax(NTSCconvert_isred)==0 ? NTSCconvert_24p : yplanemax(NTSCconvert_isgreen)==0 ? NTSCconvert_30p : NTSCconvert_60i")

speedup==true? last.assumefps(25,true).SSRC(48000) : last
show==true? overlay(last,data.changefps(outnum/2,outden)) : last
assumetff()
c.height==576? c : last
}

function NTSCfilter (clip c, clip data, string filter, bool "show")
{
show=default(show,false)
order = c.getparity()==true? 1 : 0

bobbed = c.assumeframebased().leakkernelbob(order=order,threshold=2)
bobbed.selecteven()
eval(filter)
evn=last
bobbed.selectodd()
eval(filter)
odd=last
interleave(evn,odd)
interlace(order=order)
global NTSCfilter_60i = last

c
eval(filter)
global NTSCfilter_30p = last

c.tdeint(2,1,tryweave=true,cthresh=2,mi=24,mtnmode=1,mthreshl=2).decimate()
eval(filter)
changefps(c.framerate*2).interlace(order=order)
global NTSCfilter_24p = last

global NTSCfilter_data = data
global NTSCfilter_red = blankclip(NTSCfilter_data,color=$ff0000)
global NTSCfilter_green = blankclip(NTSCfilter_data,color=$00ff00)
global NTSCfilter_isred = mt_lutxy(NTSCfilter_red,NTSCfilter_data,yexpr="x y - abs",y=3,u=1,v=1)
global NTSCfilter_isgreen = mt_lutxy(NTSCfilter_green,NTSCfilter_data,yexpr="x y - abs",y=3,u=1,v=1)

scriptclip(NTSCfilter_30p,"yplanemax(NTSCfilter_isred)==0 ? NTSCfilter_24p : yplanemax(NTSCfilter_isgreen)==0 ? NTSCfilter_30p : NTSCfilter_60i")
show==true? overlay(last,NTSCfilter_data) : last
}

function NTSC120 (clip c, clip data, bool "show")
{
show=default(show,false)
order = c.getparity()==true? 1 : 0
bobbed = c.tdeint(1,order,tryweave=true,cthresh=4,mi=24,mtnmode=1,mthreshl=8)#leakkernelbob(order=order,threshold=2)

global NTSCfilter_60i = bobbed.changefps(120000,1001)
global NTSCfilter_30p = c.changefps(120000,1001)
global NTSCfilter_24p = bobbed.decimate().changefps(120000,1001)
global NTSCfilter_data = data.changefps(120000,1001)
global NTSCfilter_red = blankclip(NTSCfilter_data,color=$ff0000)
global NTSCfilter_green = blankclip(NTSCfilter_data,color=$00ff00)
global NTSCfilter_isred = mt_lutxy(NTSCfilter_red,NTSCfilter_data,yexpr="x y - abs",y=3,u=1,v=1)
global NTSCfilter_isgreen = mt_lutxy(NTSCfilter_green,NTSCfilter_data,yexpr="x y - abs",y=3,u=1,v=1)

scriptclip(NTSCfilter_30p,"yplanemax(NTSCfilter_isred)==0 ? NTSCfilter_24p : yplanemax(NTSCfilter_isgreen)==0 ? NTSCfilter_30p : NTSCfilter_60i")
show==true? overlay(last,NTSCfilter_data) : last
}

function NTSCtoFPS (clip c, clip data, int "num", int "den", bool "show")
{
show=default(show,false)
num=default(num,25000)
den=default(den,1000)
order = c.getparity()==true? 1 : 0
bobbed = c.tdeint(1,order,tryweave=true,cthresh=4,mi=24,mtnmode=1,mthreshl=8)#leakkernelbob(order=order,threshold=2)
matched = c.tfm(order=order,pp=6,micmatching=0).decimate()

global NTSCtoFPS_60i = bobbed.salfps(num/float(den),protection2=30,iterate=4)
global NTSCtoFPS_30p = c.salfps(num/float(den),protection2=30,iterate=4)
global NTSCtoFPS_24p = bobbed.decimate().salfps(num/float(den),protection2=30,iterate=4)
global NTSCtoFPS_data = data.changefps(num,den)
global NTSCtoFPS_red = blankclip(NTSCtoFPS_data,color=$ff0000)
global NTSCtoFPS_green = blankclip(NTSCtoFPS_data,color=$00ff00)
global NTSCtoFPS_isred = mt_lutxy(NTSCtoFPS_red,NTSCtoFPS_data,yexpr="x y - abs",y=3,u=1,v=1)
global NTSCtoFPS_isgreen = mt_lutxy(NTSCtoFPS_green,NTSCtoFPS_data,yexpr="x y - abs",y=3,u=1,v=1)

scriptclip(NTSCtoFPS_30p,"yplanemax(NTSCtoFPS_isred)==0 ? NTSCtoFPS_24p : yplanemax(NTSCtoFPS_isgreen)==0 ? NTSCtoFPS_30p : NTSCtoFPS_60i")
show==true? overlay(last,NTSCtoFPS_data) : last
}

function resizetopal (clip c, bool "soft", bool "speedup")
{
soft=default(soft,false)
speedup=default(speedup,true)
soft==false? c.addborders(0,2,0,2).lanczosresize(c.width,576,0,2+(c.height-483.84)/2,c.width,483.84) : c.addborders(0,2,0,2).bicubicresize(c.width,576,1,0,0,2+(c.height-483.84)/2,c.width,483.84)
speedup==true? last.assumefps(25,true).SSRC(48000) : last
c.height==576? c : last
}

function interlace (clip interlace_c, int "order")
{
order=default(order,1)
interlace_d= interlace_c.framecount()%2==0? interlace_c : interlace_c.assumefps(interlace_c.framerate)++interlace_c.selectevery(interlace_c.framecount,interlace_c.framecount-1).assumefps(interlace_c.framerate)
order==1? interlace_d.assumeframebased().assumetff().separatefields().selectevery(4,0,3).weave() : \
interlace_d.assumeframebased().assumebff().separatefields().selectevery(4,0,3).weave()
}

function deblocker (clip c,int "quant",int "th",int "radius", bool "deblock",bool "depump",bool "conv",bool "ipp",int "modh", int "modv")
{
quant=default(quant,8)
th=default(th,3)
radius=default(radius,8)
deblock=default(deblock,true)
depump=default(depump,false)
conv=default(conv,false)
ipp=default(ipp,false)
modh=default(modh,20)
modv=default(modv,40)

c = (c.width*c.height%256==0)? c : mod16(c)#.limiter()

blurd = (conv==true)?c.wideblur(4,conv): (depump==true)? c.bilinearresize(4*(c.width/8),4*(c.height/8)) : c.bilinearresize(4*(c.width/8),4*(c.height/8)).bicubicresize(c.width,c.height,1,0)
highpass=(conv==true)?mt_makediff(blurd,c,y=3,u=3,v=3): (depump==true)? mt_makediff(blurd.bicubicresize(c.width,c.height,1,0),c,y=3,u=3,v=3) : mt_makediff(blurd,c,y=3,u=3,v=3)
deblocked=(deblock==true) ? highpass.blindpp(quant=quant,cpu=4,ipp=ipp,moderate_h=modh,moderate_v=modv) : highpass
nopump=mt_makediff(blurd,deblocked,y=3,u=3,v=3)
depumped=blurd.temporalsoften(2,th,th,32,2)#.removedirt()
pump=(conv==true)?mt_makediff(depumped,deblocked,y=3,u=3,v=3):mt_makediff(depumped.bicubicresize(c.width,c.height,1,0),deblocked,y=3,u=3,v=3)
(depump==true) ? pump : nopump
(c.width*c.height%256==0)? last : unmod16(last)
}

function combblur(clip c, int threshold)
{
mt_merge(c.mt_convolution("1","1 2 1",y=3,u=3,v=3).mt_convolution("1","-1 6 -1",y=3,u=3,v=3),c,c.combmask(threshold,threshold,y=3,u=3,v=3).mt_expand(mode=mt_rectangle(0,1),y=3,u=3,v=3).mt_inflate(y=3,u=3,v=3),y=3,u=3,v=3)
}
it probably needs to be optimised. while things are film it's quite fast (well, not really), but once things get 30p or 60i, things slow down a tad, especially when mocomp is on.

usage:

this is a client-server model, like mvtools and depan.
this makes things less efficient, but it's all i could think of as a way to framerate-convert the results of film detection - make 3 different colours in a tiny 8x4 clip that represent the footage type, then changefps those to get to 48fps, then compare colours and treat the footage accordingly on the client end. here's an example: Code:
...source clip...
data=last.NTSCanalyse()
NTSCconvert(last,data) # or it could be last.NTSCconvert(data)
...filters...
Code:
...source...
autoPAL()
Code:
...source...
data=last.ntscanalyse()
ntscfilter(last,data,"...filters...")
Code:
...source...
autoPAL(speedup=true,filter="...filters...")

requires:
-clouded's motionprotectedfps and motion package
-a nice deinterlacer like tdeint. field-matching is a good idea, as my script just works on the frames you give it.
-masktools v2

have fun y'all!
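to make the colour-signalling bit clearer, here's a stripped-down sketch of the client side (everything prefixed proto_ is made up for illustration - the real thing is the NTSCconvert_isred machinery above):

```avisynth
# hedged sketch of the colour protocol, not part of the package.
# the server (NTSCanalyse) emits a tiny clip that is solid red, green or blue
# per frame; the client finds out which colour it got by differencing at runtime.
global film_path   = blankclip(color=$ffffff)   # stand-ins for the real 24p/60i paths
global video_path  = blankclip(color=$000000)
global proto_data  = blankclip(width=8, height=4, color=$ff0000)  # pretend analysis output
global proto_red   = blankclip(proto_data, color=$ff0000)
# luma of |red - data| is zero exactly when the frame was flagged "film"
global proto_isred = mt_lutxy(proto_red, proto_data, yexpr="x y - abs", y=3, u=1, v=1)
# route each frame to the right processing path based on the flag
scriptclip(film_path, "yplanemax(proto_isred)==0 ? film_path : video_path")
```

because the flags travel as video frames, they can be changefps()'d to the output rate just like the footage itself, which is the whole point of the trick.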
__________________
sucking the life out of your videos since 2004 Last edited by Mug Funky; 26th October 2006 at 03:48. Reason: update - ver 0.93 |
30th July 2006, 16:16 | #2 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
updated to "ver 0.9". see first post...
i thought i'd give it a version number so it gives me a 1.0 to aim for, that does everything i want. for now, it's giving very good output on "clone high". i'll have to make a burn and watch it on a TV now, just to see how it goes. i hope to see no artefacts whatsoever (though mocomped mode will probably show some, blend mode is just fine for animation).
31st July 2006, 21:23 | #3 | Link |
Registered User
Join Date: Sep 2003
Location: Athens Greece
Posts: 84
|
Maybe a stupid question, but does this function also work on pure NTSC interlaced material?
Or shall I stick with motionprotectedFPS, which I am currently using? I am currently aiming for NTSC(i) -> PAL(i). If the above function also works on a 30i source, is this script good enough?

Oh... I named a file NTSCconvert.avsi and placed it into plugins. Then I wrote this script, and right now I am encoding the clip:

Mpeg2Source("test.d2v",idct=0)
TDeint(mode=1)
assumetff()
data=last.NTSCanalyse()
NTSCconvert(last,data)
ConvertToYUY2(interlaced=true)
lanczosresize(720,576)

Regards
George |
1st August 2006, 14:38 | #4 | Link |
Pig on the wing
Join Date: Mar 2002
Location: Finland
Posts: 5,717
|
So this one could be (ab)used to do the conversion I was looking for in the MP thread (restoring the PAL content, as is possible nowadays)? I don't know when I shall have the time to try things out, but IIRC R24 didn't like being called more than once by a script.
__________________
And if the band you're in starts playing different tunes I'll see you on the dark side of the Moon... |
1st August 2006, 15:05 | #5 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
eh... i probably posted that script too early. i've made quite a few changes over the last couple of days, on account of it apparently working completely differently on my home and work computers.
it's frustrating because if it gives a good result on something, i come back the next day and it's failing on the same sample. but it is getting there.

to recap:
- this script is for converting hybrid NTSC stuff to PAL in an optimal way.
- it converts everything to 48 or 24fps intermediate clips, then speeds up the final output (including audio) to 25fps, and of course resizes it to 720x576. this means audio needs to be re-encoded or it'll be out of sync.
- it'll attempt to detect 3:2 telecine and output 25p for those sections. it'll also try to find 30p parts and 60i parts, and convert them separately (though the most current version simply treats nonfilm as 60i for safety).
- you can tell it to either field-blend or motion-compensate (via motionprotectedfps with a little more protection) the nonfilm parts. when finished, 30p will become 25p and 60i will become 50i.

@ boulder: if you give it blended stuff, it'll detect it as 60i unfortunately. it's not meant for un-doing standards conversions, but rather for doing them in as progressive-friendly a way as possible. though i can see the use of unblending stuff, as i just got a pile of DVD extras that were shot in PAL, converted to NTSC, then given to us to convert back to PAL. grrr.

@ kle500: good luck with that encode. if it doesn't output script errors in grey subtitling, or simply crash avisynth, then you're lucky enough to have a computer it likes. however, it'll still act strangely, but you're probably fine using it if there's no random seeking going on. if it fails, it'll lean toward 60i and you'll hopefully just get a normal field-blend at half the speed...

i should probably have mentioned it's not quite ready for serious encodes yet - i'm having to tweak it too much for each clip i try, and really what i want is a "one size fits all" script that handles everything without artefacts or suboptimal conversion. it's not there yet, but it's close (the one i've got but haven't posted yet is closer, and when it's reliable enough i'll post it).

current issues are:
- my home computer behaves differently with it, and it seems every day i need to rethink and rewrite parts for scant reason (at least in my mind). this is getting better though, so hopefully in a week or so it'll be stable.
- 60i is being detected as 30p or sometimes even film, and ugly blurred fields result. only happens on the japanese intro to "desert punk" that i've seen, which has a mix of 30p and 60i and a little film. a workaround is to treat all nonfilm as 60i...
- film is sometimes detected too much, sometimes not enough, and curiously it doesn't change much when the film threshold is raised... i'm on it, but it sure is weird.
- scenechanges are very likely to be spotted as 60i even if they're not, and interlaced, blended scenechanges aren't exactly good for compression. should be theoretically easy to fix, but very fiddly in the current script.
- i know it can be sped up a lot - i've already replaced one of the mt_luts with a frameevaluate and it doesn't seem to have affected anything but speed (which is good). it may be possible to do all comparisons via frameevaluates, but it's all a matter of how long i can go without my brain imploding with those backward-forward inside-out conditional filters
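for reference, the 4% speed-up with audio mentioned in the recap is the usual avisynth idiom - a sketch only, not the exact code path (autoPAL does this internally via assumefps/SSRC):

```avisynth
# hedged sketch: NTSC film rate to PAL speed with audio kept in sync.
# SSRC is a separate plugin; the source name here is hypothetical.
mpeg2source("movie.d2v")   # assume 23.976fps progressive after IVTC
assumefps(25, true)        # 4% speed-up; true stretches the audio with the video
ssrc(48000)                # resample the now off-rate audio back to 48kHz for DVD
```

the audio pitch rises slightly (about 0.7 of a semitone), which is the standard trade-off of every film-to-PAL transfer.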
2nd August 2006, 11:04 | #6 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
ver 0.91 posted above... copypasta it into an avsi to run it. hopefully i didn't forget any helper functions, as i have hundreds and tend to forget others don't have them...
should be much much more stable, and quite a bit faster too (it's running at about 70% the speed of a standard deinterlace-resize-blendfps-reinterlace script, which is better than i expected).

converttopal() = 38fps
autopal() = ~26fps (speed varies between modes of course)

all with just mpeg2source(...) and assumetff in their paths.
Last edited by Mug Funky; 2nd August 2006 at 11:08. |
2nd August 2006, 16:16 | #8 | Link |
HDConvertToX author
Join Date: Nov 2003
Location: Cesena,Italy
Posts: 6,552
|
omg ! AWESOME script !!
BHH P.S me too with IanB
__________________
HDConvertToX: your tool for BD backup MultiX264: The quick gui for x264 AutoMen: The Mencoder GUI AutoWebM: supporting WebM/VP8 |
3rd August 2006, 01:09 | #10 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
PAL only has 2 major footage types - interlaced and progressive (well, there's field-shifted film, but that doesn't come up often and can be handled with telecide).
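for the field-shifted-film case mentioned above, a minimal sketch (parameter values are guesses for illustration - check the decomb docs for your source):

```avisynth
# hedged sketch: re-pair shifted fields in a PAL film transfer with decomb's telecide.
# guide=2 tells telecide to expect 25fps progressive material underneath.
mpeg2source("pal.d2v")       # hypothetical 25i source carrying shifted 25p film
telecide(order=1, guide=2)   # order=1 assumes TFF; use order=0 for BFF sources
```

no decimation step is needed here, since PAL field-shifted film already has one film frame per frame pair.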
but i'll consider modifying it to make it PAL friendly (and amending the current one to detect input height + fps to make sure it doesn't work on already-PAL stuff).

on the width-friendly script - i remember there was a way to break conditionalfilters over lines, but it involved inserting ascii codes into the script. maybe something i'll do later. for now ctrl+a and a text editor should do.

i'll have to document it too, as those metrics are sort of messy. i'm also considering giving it a more catchy name...
3rd August 2006, 02:11 | #11 | Link |
ангел смерти
Join Date: Nov 2004
Location: Lost
Posts: 9,556
|
Just need a triple quote. Like:
scriptclip(source, """(counter==0 && !sc ? final : (counter==1 ? dupframe : source))#.subtitle(string(fs)) \
.subtitle("Bdiff:"+string(b_diff,"%1.2f")).subtitle("Avg:"+string(AverageLuma(single),"%1.2f"),align=9) \
.subtitle("Cdiff:"+string(c_diff,"%1.2f"),align=4).subtitle("Avg:"+string(AverageLuma(single.trim(1,0)),"%1.2f"),align=6) """)

(example ripped out of my old debugging version of fixblendivtc...) |
3rd August 2006, 10:26 | #12 | Link | |
Pig on the wing
Join Date: Mar 2002
Location: Finland
Posts: 5,717
|
Chances are that it would work just fine. Still, I need to see a commented function first, and it might be that I still couldn't do the trick.

EDIT: just did a small test on that sample clip of mine (originally 25p, now a blended one?) and it looks like it is recognized as interlaced, so it doesn't work without modifying the analysing parts as well. Dang. However, the 60i stuff converted back to PAL (without modding your function) looks really good, at least on the PC - no jerkiness whatsoever.
Last edited by Boulder; 3rd August 2006 at 10:46. |
|
3rd August 2006, 10:34 | #13 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
haha! is it that long since i played with conditional filters? i swear the ascii thing was the only way to do it way back when (though i probably just didn't think of the triple-quotes).
now, one thing i'd like to crack is fade detection. i might have a go at it tonight - with any luck it wont slow the process down except in extreme cases, because it's already slower than i'd like it to be. another thing i'd like is to refine the 30p/60i metric. maybe use a good old fashioned spatial comb mask? i also noticed the few frames before a piece of interlaced footage come to a complete stop can sometimes be detected as film. not sure whether this is actually noticable as spoiled motion, but i suspect it is (though on a TV it may well not be due to the inherent jumpiness of the scanlines and the very small amount of motion involved). i've now tested this on about 10 different sources (anime, video, analog, digital, sharp, blurry, crawly, clean, etc) and it seems to be very robust. excellent . this could finally mean that film with edits can be sped-up automatically without having to split and treat separately. once i get fade detection going it'll be ready for public consumption (and production work, which is after all what i wrote it for).
Last edited by Mug Funky; 3rd August 2006 at 10:36. |
8th August 2006, 02:27 | #15 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
|
woot, you got fade detection stable, so no more combing between transitions (those ugly lines)? i have to test this. great work
autoPAL(speedup=false) would give me 24p right ?
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 Last edited by CruNcher; 8th August 2006 at 02:29. |
8th August 2006, 05:39 | #16 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
well, it'll give you PAL 50i...
speedup=true will give you 25p (for film). you could slow it back down to 24p, but then the hybrid parts would be in 48i, which is probably not what you want if you're maintaining NTSC. you could use NTSC120 though (its 120000/1001 fps intermediate, decimated 1-in-5 by the selectevery, lands back on 24000/1001): Code:
data=last.ntscanalyse()
ntsc120(last,data)
selectevery(5,0)
20th August 2006, 21:37 | #17 | Link |
Huh?
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
|
I'm currently trying to repair an extremely badly done DVD transfer made from my elder brother's Bar Mitzvah. The people who did it really screwed up, the source was NTSC (probably interlaced as it was filmed on a VHS camcorder) and they converted it to PAL (which, as far as I can tell, is interlaced too). Would Restore24 help me out? If so, where can I download the latest package? If not, then I'd like to request Mug Funky to create the reciprocal of this function.
P.S: upon ripping in IFO mode, what I got was 4 M2V's and a VOB, all of which had to be loaded in DGIndex for the D2V to be created. While creating the D2V, DGIndex reported "Picture Error" and at the end it told me that the field order was corrected. Should I expect nasty surprises when trying to re-encode or did DGIndex solve it?
__________________
Read Decomb's readmes and tutorials, the IVTC tutorial and the capture guide in order to learn about combing and how to deal with it. |
21st August 2006, 01:28 | #18 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
hmm. it's not really possible to recover 60i from 50i unless motion-compensation has been used (and even then it'll lose quite a bit).
i recently scripted a "restore24 lite" for my own use (sadly some dearly loved old cartoons from the 80's came in as NTSC which had been converted from PAL, and i have to make them PAL again), but it seems to be outputting duped frames in isolated situations that i can't quite tie down yet. it's got promise though basically it takes a bobbed clip, spots a blend via edgemasking, then chooses the adjacent frame that is most similar to the current frame, then decimates by 1 in 6 to get back to PAL. it's probably not post-worthy yet, and at any rate is only good for 25p-to-60i-and-back-again stuff.
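the "restore24 lite" recipe reads roughly like this as a script - purely my sketch of the description above, not the actual unposted code, and the edgemask blend-spotting step is omitted:

```avisynth
# hedged sketch: swap each (possibly blended) frame of a bobbed clip for its
# most similar neighbour, then decimate to drop the leftover duplicates.
bob(0.0, 0.5)                 # hypothetical start: 60i source bobbed to 59.94p
a   = last
prv = a.loop(2, 0, 0)         # same clip shifted back one frame
nxt = a.trim(1, 0)            # same clip shifted forward one frame
# take whichever adjacent frame the current frame resembles more
conditionalfilter(a, prv, nxt, "YDifferenceFromPrevious()", "lessthan", "YDifferenceToNext()")
# in the real thing an edgemask decides whether to swap at all; here every
# frame gets swapped, which is wrong for clean frames - illustration only
decimate(cycle=6)             # the 1-in-6 decimation from the description
```

this is only good for 25p-to-60i-and-back material, as the post says - true 60i has no clean frames to fall back on.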
21st August 2006, 03:14 | #19 | Link |
Huh?
Join Date: Sep 2003
Location: Uruguay
Posts: 3,103
|
I could post a short sample encoded in Lagarith for you to look.
21st August 2006, 04:32 | #20 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
not much point - i can tell you now there's not much hope of un-blending a 60i to 50i conversion.
clouded was working on something that looked promising (finding the blend pattern and using it to subtract frames from each other just the right amount to reverse the blending), but he's not been around for months. there are some experimental filters around that might help. i don't have a clue how to use them though...