Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
11th January 2004, 19:26 | #41 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
Oooops, there was a syntax error in the function TomsBobSpecial.
Fixed now, sorry. Interesting that no one mentioned it before ... oh well.

LiquidGhost: yes, it works under AviSynth 2.5.

In the meantime, the script has been greatly restructured. Bobbing is now performed by KernelBob, a small trick to educate SmartDecimate was added (helping it to avoid mismatches), and the h-u-g-e memory allocation has dropped by orders of magnitude (the posted version may eat up several hundred MB in the long run). And since I should get some free time from the office within the next two months (old holidays hanging over), there is hope to get that darn thing into a really usable state in the foreseeable future. Unless Murphy comes to visit again.

- Didée
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
12th January 2004, 08:39 | #44 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
Quote:
-> Try SetMemoryMax(160) at the beginning of your script. The next version has (will have) normal memory usage. BTW, timeline random access in VirtualDub is not recommended. Plus (for now), don't put a great load on your system while the script is working in the background. - Didée
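[Editor's note] A minimal sketch of that workaround, with the cache cap as the very first statement. The paths and the source call here are placeholders, not from this thread:

```avisynth
# Hedged sketch: cap AviSynth's frame cache *before* anything else runs.
SetMemoryMax(160)            # value suggested above for the current script version

Import("restore24.avs")      # placeholder path
MPEG2Source("movie.d2v")     # placeholder source; use your actual d2v
# ... rest of the script as usual
```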
1st March 2004, 01:49 | #46 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,653
I've played a lot with this great Restore24 and found it very useful.
But since it doesn't seem to handle BFF & TFF streams right, I've made some minor modifications to make it handier:

1st: I've made a bobber especially for blended clips: http://forum.doom9.org/showthread.php?s=&threadid=71510 - save its code to a file called bobmatcher.avs

2nd: the modified Restore24:

Code:
#
# Restore24 v0.4c
# This script tries to restore 24 progressive frames
# out of 25 interlaced frames with blended fields resulting
# from 24p -> TC -> NTSC -> FieldBlendDecimation -> PAL
#

Function Restore24(clip worki, clip outi)
{

###### WORKING STREAM ######-----------------------------------------------------------------------
global work = blankclip(worki).loop().trim(1,50) + worki

###### OUTPUT STREAM ######------------------------------------------------------------------------
global out  = blankclip(outi).loop().trim(1,50) + outi
global out0 = blankclip(out).trim(1,1) + out    # .subtitle("p")
global out1 = out.trim(1,0)                     # .subtitle(" n")
# global out = out.subtitle(" c")

###### WORKING STREAM ######-----------------------------------------------------------------------
work = work.greyscale().levels(0,1.75,255,16,235).crop(8,8,-8,-8)
   \ .BicubicResize(work.width/2, work.height/2-32, .0, .5)

###### BUILD EDGEMASK ######-----------------------------------------------------------------------
edgeRGB = work.ConvertToRGB32().\
          GeneralConvolution(0," -1 -1 -1 -2 10 -2 -1 -1 -1 ")

#####################################################################################
###### This following part is implemented with native AviSynth commands        ######
#####################################################################################
#
###### PRECEEDING & PROCEEDING ######
#global edge_c  = edgeRGB.ConvertToYUY2().levels(0,0.5,160,0,255).levels(0,0.5,160,0,255)
#global edge_p2 = BlankClip(edge_c).trim(1,2)+edge_c
#global edge_p1 = edge_p2.trim(1,0)
#global edge_n1 = edge_c .trim(1,0)
#global edge_n2 = edge_c .trim(2,0)

###### WEAKEN STATIC PARTS, BIAS TO MOTION ######
#global edge_p2_dark_n = layer(edge_p2, edge_p1.levels(0,1.0,255,255,0),"darken").ConvertToYV12()
#global edge_p1_dark_p = layer(edge_p1, edge_p2.levels(0,1.0,255,255,0),"darken").ConvertToYV12()
#global edge_n2_dark_p = edge_p1_dark_p.trim(3,0)
#global edge_n1_dark_n = edge_p2_dark_n.trim(3,0)
#global edge_n1_dark_p = edge_p1_dark_p.trim(2,0)
#global edge_c_dark_n  = edge_p2_dark_n.trim(2,0)
#global edge_c_dark_p  = edge_p1_dark_p.trim(1,0)
#global edge_p1_dark_n = edge_p2_dark_n.trim(1,0)

###### BACK'N FORTH, FORTH'N BACK ... ######
#global edge_p2 = edge_p2.ConvertToYV12()
#global edge_p1 = edge_p1.ConvertToYV12()
#global edge_c  = edge_c .ConvertToYV12()
#global edge_n1 = edge_n1.ConvertToYV12()
#global edge_n2 = edge_n2.ConvertToYV12()

#################################################
######    End of native implementation     ######
#################################################

#---------------------------------------------------------------------------------------------------
##########################################################################################
# ##### The following part is implemented with MaskTools v1.4.0. Only tested on Athlon! #####
######    To try it, un-comment it, and comment-out the above native part.          ######
##########################################################################################
##
# ###### PRECEEDING & PROCEEDING ######
global edge_c  = edgeRGB.ConvertToYV12()
global edge_c  = edge_c.levels(0,0.5,160,0,255)   #.levels(64,8.0,128,0,255)  #.levels(0,0.25,127,0,255)
global edge_p2 = BlankClip(edge_c).trim(1,2)+edge_c
global edge_p1 = edge_p2.trim(1,0)
global edge_n1 = edge_c .trim(1,0)
global edge_n2 = edge_c .trim(2,0)
#
####### WEAKEN STATIC PARTS, BIAS TO MOTION ######
global edge_p2_dark_n = YV12layer(edge_p2, edge_p1.invert(),"mul",chroma=false)
global edge_p1_dark_p = YV12layer(edge_p1, edge_p2.invert(),"mul",chroma=false)
#
global edge_n2_dark_p = edge_p1_dark_p.trim(3,0)
global edge_n1_dark_n = edge_p2_dark_n.trim(3,0)
global edge_n1_dark_p = edge_p1_dark_p.trim(2,0)
global edge_c_dark_n  = edge_p2_dark_n.trim(2,0)
global edge_c_dark_p  = edge_p1_dark_p.trim(1,0)
global edge_p1_dark_n = edge_p2_dark_n.trim(1,0)
#
########################################################
######    End of implementation with MaskTools    ######
########################################################

###### INITIALIZING MORE VARS ######
global btest_p  = 0
global btest_p1 = 0
global btest_pc = 0
global btest_c  = 0
global btest_cn = 0
global btest_n  = 0
global btest_n1 = 0
global btest_p2_n = 0
global btest_p2   = 0
global btest_p1_p = 0
global btest_p1   = 0
global btest_p1_n = 0
global btest_c_p  = 0
global btest_c    = 0
global btest_c_n  = 0
global btest_n1_p = 0
global btest_n1_n = 0
global btest_n2_p = 0
global IsBlend_n  = false
global IsBlend_c  = false
global IsBlend_p1 = false
global IsBlend_p2 = false
global frametype_p3 = 0
global frametype_p2 = 0
global frametype_p1 = 0
global frametype_c  = 0
global frametype_n  = 0
global single_ahead = false
global in_pattern   = false
global pattern_guidance = 0
global count_p = 0
global count_n = 0
global P2_motion_btest = 0
global P1_motion_btest = 0
global N1_motion_btest = 0
global N2_motion_btest = 0

##### DEBUGGING, CHANGES ALL TIME ######
function ShowAll()
{
stackvertical(stackhorizontal(edge_p2_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40), \
                              edge_n2_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40)  \
                             ), \
              stackhorizontal(edge_p1_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40), \
                              edge_n1_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40)  \
                             ), \
              stackhorizontal(edge_p1_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40), \
                              edge_n1_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40)  \
                             ), \
              stackhorizontal(edge_c_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40), \
                              edge_c_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40)  \
                             ), \
              Calculate(work).crop(0,0,-0,-16).ConvertToYV12(), \
              out.crop(0,16,-0,-16).ConvertToYV12() \
             )
}

function ShowMetrics( clip clip )
{
clip=clip.subtitle("detected is "+string(IsBlend_c), y=16)
clip=clip.subtitle("diff prev= "+string(btest_c_p-btest_p1_n) + " nxt ="+string(btest_c_n-btest_n1_p), y=32)
clip=clip.subtitle("ratio prv= "+string(btest_c_p/btest_p1_n) + " nxt ="+string(btest_c_n/btest_n1_p), y=48)
clip=clip.subtitle("pattern lock is "+string(in_pattern), y=64)
clip=clip.subtitle("pattern guideance = "+string(pattern_guidance), y=80)
clip=clip.subtitle("single ahead is "+string(single_ahead), y=96)
clip=clip.subtitle("frametype_p3 = "+string(frametype_p3) + " IsBlend_p3 = "+string(IsBlend_p3), y=112)
clip=clip.subtitle("frametype_p2 = "+string(frametype_p2) + " IsBlend_p2 = "+string(IsBlend_p2), y=128)
clip=clip.subtitle("frametype_p1 = "+string(frametype_p1) + " IsBlend_p1 = "+string(IsBlend_p1), y=144)
clip=clip.subtitle("frametype_c  = "+string(frametype_c)  + " IsBlend_c  = "+string(IsBlend_c),  y=160)
clip=clip.subtitle(                                         " IsBlend_n  = "+string(IsBlend_n),  y=176)
return( clip )
}

###### REPLACE FUNCTIONS ######
function PutCurr()
{
global count_p = count_p+1
global count_n = count_n+1
global frametype_c = 0
return( out )
}

function PutPrev()
{
global count_p = 0
global count_n = count_n+1
global frametype_c = -1
return( out0 )
}

function PutNext()
{
global count_p = count_p+1
global count_n = 0
global frametype_c = 1
return( out1 )
}

###### REPLACE DECISION, SAFETY CHECK TO AVOID SINGLES & TRIPLES ######
function PREV()
{
# it's no good idea to put a 'prev' if the decision two frames back was 'next'.
(frametype_p3 == 1) ? PutNext() : PutPrev()
return( last )
}

function NEXT()
{
# it's a bad idea to put a 'next' if two frames earlier we put a 'prev':
# this is very likely to leave a 'single' frame
(frametype_p2 == -1) ? PutPrev() : PutNext()
return( last )
}

function CURR()
{
PutCurr()
return( last )
}

###### REPLACE BLEND WITH MOST SIMILAR NEIGHBOR ######
function UseMostSimilar()
{
# The frame which blend-test's ratio is closer to 1 should be more similar
ratio_p = abs(btest_c_p/btest_p1_n)
ratio_n = abs(btest_c_n/btest_n1_p)
(ratio_p > ratio_n) ? PREV() : NEXT()   # putPrev() : putNext()
return( last )
}

###### REPLACE BLEND ACC. TO PATTERN GUIDANCE ######
# currently: only detect by pattern, but guidance not used (SOFT PATTERN)
function UsePattern()
{
pattern_guidance == 1 ? NEXT() : NOP   # PutNext() : NOP
pattern_guidance == 0 ? CURR() : NOP   # PutCurr() : NOP
pattern_guidance ==-1 ? PREV() : NOP   # PutPrev() : NOP
return( last )
}

###### CHECK IF A SINGLE FRAME IS PROBABLY AHEAD ######
function CheckSingleAhead()
{
# if brightness of *both* edges n+1 & n+2 is greater than past two frames, that means more motion
# in them, whilst n+1 can't be a double of n+2, cause it was bright enough to outrace n-1 & n-2
# Seems good theoretically, but it seems not to work quite as expected. No tragedy, 'cause false
# decisions should be caught by the safety check in NEXT() & PREV()
#global single_ahead = ( ( (btest_n1_p > btest_c_p)  && (btest_n2_p > btest_c_p)  )
#                 \   && ( (btest_n1_p > btest_p1_p) && (btest_n2_p > btest_p1_p) )
#                 \    )
global single_ahead = ( ( (btest_n1_n > btest_p2_n) && (btest_n2_p > btest_p2_n) )
                 \   && ( (btest_n1_n > btest_p1_p) && (btest_n2_p > btest_p1_p) )
                 \    )
### Is this second variant better??? ###
in_pattern = single_ahead ? false : true
return( last )
}

###### CHECK IF HISTORY-OF-BLENDS SHOWS A PATTERN ######
function CheckPattern()
{
#--- not bad, but still to improve
global in_pattern = \
     (    (IsBlend_c == false) && (frametype_p1 == 0) && (frametype_p2 != 0) && (frametype_p3 == 0) && (Isblend_n == false) )
#  \ || ( (IsBlend_c == false) && (frametype_p2 == 0) && ( (frametype_p1 != 0) || (frametype_p3 != 0) ) && (Isblend_n == true) )
# SOFT:   use pattern only to detect blends that arn't detected by metrics
#  \ (    (IsBlend_c == false) && (frametype_p1 == 0) && (frametype_p2 != 0) && (frametype_p3 == 0) && (Isblend_n == false) )
#  \ || ( (IsBlend_c == false) && (IsBlend_p2 == true) && (frametype_p1 == 0) && (frametype_p3 == 0) )  # && (Isblend_n == true) )
# STRONG: use pattern even if blend is detected at n+1: OVERRIDE! - requires atleast one 'real' detection in the past
global pattern_guidance = in_pattern ? frametype_p2 : 99   # currently not used
return( last )
}

###### REPLACE DECISION ######
function Replace()
{
# If next frame is detected as a single, replace current blend with it to double it. Else, use
# pattern guidance, if a pattern is actually locked. If not, simply use most similar neighbor.
CheckSingleAhead()
single_ahead ? NEXT() : UseMostSimilar()             # SOFT pattern:   replace undetected blends
# ( in_pattern ? UsePattern() : UseMostSimilar() )   # STRONG pattern: replace acc. to pattern
return( last )
}

###### DO WE HAVE A BLEND TO REPLACE ? ######
function Evaluate()
{
# If current frame is detected or predicted as a blend, replace it. Else it's clean, put it out
CheckPattern()
( IsBlend_c || in_pattern ) ? Replace() : PutCurr()
#( IsBlend_c || ( in_pattern && (pattern_guidance != 0)) ) ? Replace() : PutCurr()   # Pattern guidance is not yet final
debug ? ShowMetrics() : NOP
return( last )
}

###### CALCULATE METRICS ######
function Calculate( clip clip )
{
c99=scriptclip(out, "Evaluate()")
c16=FrameEvaluate(c99, "global IsBlend_n  = (btest_n1_p<btest_c_n)&&(btest_n1_n<btest_n2_p)")
c15=FrameEvaluate(c16, "global IsBlend_c  = IsBlend_n")
c14=FrameEvaluate(c15, "global IsBlend_p1 = IsBlend_c")
c13=FrameEvaluate(c14, "global IsBlend_p2 = IsBlend_p1")
c12=FrameEvaluate(c13, "global IsBlend_p3 = IsBlend_p2")
c11=FrameEvaluate(c12, "global frametype_p1 = frametype_c")
c10=FrameEvaluate(c11, "global frametype_p2 = frametype_p1")
c9 =FrameEvaluate(c10, "global frametype_p3 = frametype_p2")
c8 =FrameEvaluate(c9,  "global btest_n1_n = AverageLuma(edge_n1_dark_n)")   #
c7 =FrameEvaluate(c8,  "global btest_n2_p = AverageLuma(edge_n2_dark_p)")   #
c6 =FrameEvaluate(c7,  "global btest_p2_n = btest_p1_n")                    #
c5 =FrameEvaluate(c6,  "global btest_p1_p = btest_c_p")                     #
c4 =FrameEvaluate(c5,  "global btest_p1_n = btest_c_n")                     # enhanced blend-test -
c3 =FrameEvaluate(c4,  "global btest_c_p  = btest_n1_p")                    # test with darkened
c2 =FrameEvaluate(c3,  "global btest_c_n  = btest_n1_n")                    # static edges, making
c1 =FrameEvaluate(c2,  "global btest_n1_p = btest_n2_p")                    # motion more important
return(c1)
}

###### DO IT ######
function DoIt()
{
Calculate(work)
AlreadyBobbed = last
assumeframebased()
separatefields.selectevery(4,1,2).weave
SmartDecimate(24,50, bob=AlreadyBobbed, tel=0.9, t_max=0.0000050, console=false)
# assumefps(25)   # this should really be done externally, not within Restore24 ...
trim(24,0)
return(last)
}

###### DEBUG, OR NORMAL OUTPUT ? ######
global debug = false # true #

debug ? ShowAll() : DoIt()

return( last )
}
__________________
Don't forget the 'c'! Don't PM me for technical support, please.
Last edited by scharfis_brain; 1st March 2004 at 01:52.
1st March 2004, 01:50 | #47 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,653
To use the modified Restore24, write an AVS script similar to this:
Code:
setmemorymax(256)

import("C:\x\avs-scripts\restore24.avs")
import("C:\x\avs-scripts\bobmatcher.avs")

loadplugin("C:\x\masktools.dll")        # for restore24
loadplugin("C:\x\avisynth_c.dll")
loadcplugin("C:\x\smartdecimate.dll")
loadplugin("C:\x\tomsmocomp.dll")       # for tomsbobsoft
loadplugin("C:\x\kerneldeint140.dll")   # for bobmatcher
loadplugin("C:\x\mpeg2dec.dll")
loadplugin("C:\x\mpeg2dec3.dll")

mpeg2source("yousource.d2v",cpu=4,iPP=true)
converttoyuy2(interlaced=true)
restore24(tomsbobsoft(last),bobmatcher(last))

The first argument is the testing stream, and the second one is the working stream. The testing stream is used to get the edgemask; therefore it is required that no thresholded deinterlacing is applied to it - just pure bobbing. The working stream can be deinterlaced according to your wishes: you can use KernelBob, DGBob or bobmatcher, it is up to you.
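[Editor's note] Since the working stream accepts any bobber, the last line can be varied. A hedged sketch of one such variant (KernelBob is referenced elsewhere in this thread as coming with the KernelDeint plugin; treat the exact call and its parameters as an assumption and check the plugin's documentation):

```avisynth
# Hedged variant: same testing stream, different bobber for the working stream.
# kernelbob() is assumed to be provided by the kerneldeint plugin loaded above;
# dgbob or bobmatcher would slot in the same way.
restore24(tomsbobsoft(last), kernelbob(last))
```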
Last edited by scharfis_brain; 18th April 2004 at 22:05.
16th April 2004, 19:04 | #48 | Link |
Fascinated Lurker
Join Date: Feb 2002
Location: Durham, UK
Posts: 243
Ugh, this is incredibly hard to follow now that it has been edited and spread across different threads.
I cannot get Restore24 to work under any circumstances. I have all the plugins, and I believe they are the versions needed here. I am using AviSynth 2.55 alpha (March 30th) and have created restore24.avs and bobmatcher.avs as they exist in their final state in these two threads. The closest I get to this filter working is "invalid arguments to the function GeneralConvolution ... restore24.avs line 26". Can someone please tell me which gods I need to pray to in order to get this thing to work? I wouldn't mind so much if the original posts had been kept, but the way this thread has been edited makes it practically impossible to work out what changes have happened.

(Also, as an aside, wouldn't it be worth extracting the MaskTools code from the old mpeg2dec and making a new DLL with it, to save having all these different versions of filters around? I have so many mpeg2dec files I don't know which one is which anymore.)
17th April 2004, 15:59 | #49 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,391
Oh-oh ... indeed, Restore24 is currently broken.
That is because a bug in GeneralConvolution that had been present for a very long time got fixed some time back. I am planning to pick up the work on Restore24 again in the near future - the function posted in this thread is by far not the end of the story. The latest development versions were already much more sophisticated, but far too unstable to be made public.

For the time being, take the modified script that scharfis_brain posted above, and make the following changes manually. Replace this part:

Code:
###### BUILD EDGEMASK ######-----------
edgeRGB = work.ConvertToRGB32().\
          GeneralConvolution(0," -1 -1 -1 -2 10 -2 -1 -1 -1 ")

with this:

Code:
###### BUILD EDGEMASK ######-----------
edgeRGB = work.DEdgeMask(0,255,0,255,"5 0 -5 10 0 -10 5 0 -5",setdivisor=true,divisor=6)

And in the following part, add the "false" parameter to the levels() call:

Code:
###### PRECEEDING & PROCEEDING ######
global edge_c = edgeRGB.ConvertToYV12()
global edge_c = edge_c.levels(0,0.5,160,0,255,false)   #.levels( ...
...

With a little luck, this summer I can present a truly pattern-guided version of Restore24. But the developing and bugfixing of the advanced version is such a major P.I.T.A., you can hardly imagine.

- Didée

edit: changed the kernel for DEdgeMask(). I accidentally posted a 90° rotated kernel - it worked, but this one works better.
Last edited by Didée; 17th April 2004 at 19:35.
18th April 2004, 21:06 | #53 | Link |
Registered User
Join Date: Nov 2003
Location: Denmark
Posts: 38
The version I have is from 10 Jan 2004 (1.4.6);
it's the one from http://www.avisynth.org/warpenterprises/

edit: alright, found a newer version in the forums and it fixed the problem, but now it says it needs YV12 data :/

Last edited by Dallemon; 18th April 2004 at 21:11.
18th April 2004, 21:11 | #54 | Link |
Fascinated Lurker
Join Date: Feb 2002
Location: Durham, UK
Posts: 243
http://forum.doom9.org/showthread.php?s=&threadid=67232
It's in this forum, you just had to look a few threads down.
18th April 2004, 21:13 | #55 | Link |
Registered User
Join Date: Nov 2003
Location: Denmark
Posts: 38
Thx, but I found the search button :P

edit: argh - if I change the ConvertToYUY2 to YV12, it says it can't crop with uneven numbers; and if it stays at YUY2, it says it needs YV12 data :/

Last edited by Dallemon; 18th April 2004 at 21:17.
18th April 2004, 22:02 | #56 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,653
Please be sure you have read my long post above!
You have to load the plugins in the following order:

masktools.dll   # for restore24
mpeg2dec.dll    # for bobmatcher
mpeg2dec3.dll   # for loading the d2v
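[Editor's note] In script form, that load order might look like this (the folder path is a placeholder, not from this thread):

```avisynth
# Load order matters here: masktools first, then the two mpeg2dec variants.
LoadPlugin("C:\plugins\masktools.dll")   # for restore24
LoadPlugin("C:\plugins\mpeg2dec.dll")    # for bobmatcher
LoadPlugin("C:\plugins\mpeg2dec3.dll")   # for loading the d2v
```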
18th April 2004, 22:12 | #57 | Link |
Registered User
Join Date: Nov 2003
Location: Denmark
Posts: 38
I did.

edit: except I have no idea which version of mpeg2dec.dll to use :/
edit2: the version I have is the one from the AviSynth filter collection
edit3/5: the new script looks like so:

setmemorymax(256)
import("C:\AviSynth 2.5\scripts\restore24.avs")
import("C:\AviSynth 2.5\scripts\bobmatcher.avs")
loadplugin("C:\AviSynth 2.5\plugins\masktools.dll")
loadplugin("C:\AviSynth 2.5\plugins\avisynth_c.dll")
loadcplugin("C:\AviSynth 2.5\plugins\smartdecimate.dll")
loadplugin("C:\AviSynth 2.5\plugins\tomsmocomp.dll")
loadplugin("C:\AviSynth 2.5\plugins\kerneldeint.dll")
loadplugin("C:\AviSynth 2.5\plugins\mpeg2dec.dll")
loadplugin("C:\AviSynth 2.5\plugins\mpeg2dec3.dll")
mpeg2source("C:\KORN_LIVE_DISC1\VIDEO_TS\KoRn - Live.d2v",CPU=4,iPP=true)
converttoyuy2(interlaced=true)
restore24(tomsbobsoft(last),bobmatcher(last))

edit4: ok, I see you have changed the DLLs that needed to be loaded.

Last edited by Dallemon; 18th April 2004 at 22:22.
18th April 2004, 22:40 | #58 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,653
Okay, does that work for you?
I've just cleaned up the post above. I didn't change anything important; I've just knocked out some DLLs because they aren't needed anymore.

EDIT: be sure to set setmemorymax(x) correctly. x should be about 2/3 of your RAM. If you have 384 MB of RAM, use setmemorymax(256). (If x is set too high, the encoding process will end in a very funny harddrive swapping party.)
Last edited by scharfis_brain; 18th April 2004 at 22:43.