Registered User
Join Date: Oct 2011
Location: Germany
Posts: 39
Cross correlation between a real video border and an edge model
This is part 3, continuing postings 69 and 82.
Based on Border Detection 0.3 by jmac698 (same thread, here), I have modified the script so that it calculates the cross correlation between the border of a real video and an edge model whose parameters can be adjusted (edge blur, white level, and number of black pixels of the edge).
You are welcome to do your own tests.
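The principle can be illustrated outside AviSynth with a small numpy sketch (all names, sizes, and luma levels here are hypothetical, not taken from the script): cross-correlate one video line with a step-edge model, and read the border position off the lag of the CCF peak.

```python
import numpy as np

def edge_model(width, black_px, black=16.0, white=235.0):
    # Ideal border edge in limited-range luma: 'black_px' black pixels
    # followed by white, similar in spirit to the script's makeedge().
    model = np.full(width, white)
    model[:black_px] = black
    return model

def border_offset(line, model):
    # Cross-correlate a video line with the edge model (means removed);
    # the lag of the CCF maximum estimates how far the real border is
    # shifted relative to the model's edge.
    ccf = np.correlate(line - line.mean(), model - model.mean(), mode="full")
    return int(ccf.argmax()) - (len(model) - 1)

# Synthetic line whose border starts after 5 black pixels, matched
# against a model with 2 black pixels: the peak lands at lag 5-2 = 3.
line = edge_model(32, 5)
model = edge_model(32, 2)
print(border_offset(line, model))  # → 3
```

With a real, noisy border the peak broadens, which is what the CCF pictures further down show.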
PHP Code:
#AvsP script
#loadplugin("J:\plugins\Gscript.dll") # For y=0 ... loop
loadplugin("J:\plugins\GRunT.dll") # For use of Runtime functions inside user functions
loadplugin("J:\plugins\Corr2D.dll")
loadplugin("J:\plugins\VariableBlur.dll")
loadplugin("J:\plugins\NNEDI3.dll")
#Border Detection Sven_X Corr2D version based on 0.3 by jmac698
# see http://forum.doom9.org/showthread.php?p=1533363#post1533363
# A function to show cross correlation between the left border of a video and an edge model that can be tweaked
# Requirements: corr2d http://avisynth.org/vcmohan/Corr2D/Corr2D.html
# in fact we only need a Corr1D, which would be much faster, but none exists...
# GRunT (plus NNEDI3 and VariableBlur in some cases for the edge model)
SetMemoryMax(800) #set this to 1/3 of the available memory
[<separator="Clamping">]
clamp2=[<"Input clamping", 16, 255, 43>]
global clamp=[<"CCF out clamping", 16, 255, 27>]
mag2 = [<"Magnification", 1, 8, 1>]
[<separator="Edge model">]
bwidth=[<"Black px", 2, 16, 2>]
bblur=[<"Edge blur (0)", 0, 5, 1>]
bgrey=[<"Edge white (255)", 16, 255, 43>]
Directshowsource("J:\test\test.avi")
converttoyuy2
#nnedi3(-2) #deinterlace, double frame rate, to view a non-interlaced version
#return last #view input video
border=32#32
crop(0,0,-last.width+border,0)
#crop(0,0,-0,last.height*1/4) #1/2 for test purposes only, faster,
levels(0,1,clamp2,0,clamp2) #clamp bright areas
edge=makeedge(last,bwidth).converttoYV12.averageblur(bblur).converttoYUY2.levels(0,1,255,0,bgrey)
#make an edge model with minimum edge at x=2 px
#return edge
corrbyline3(last, edge) # input, reference
pointresize(last.width*mag2/2,last.height*mag2/2) #Magnify output
function SplitLines2(clip c, int n) {#duplicates then crops each copy in a different spot
Assert(c.height%n == 0, "Clip height not a multiple of 'n'")
Assert(!(c.IsYV12() && n%2==1), "'n' must be even for YV12 clips")
nStrips = c.height/n #= 576 lines PAL
c = c.ChangeFPS(nStrips*Framerate(c)).AssumeFPS(c) # Repeat each frame 'nStrips' times
BlankClip(c, height=n) # template for ScriptClip result
GScriptClip("c.Crop(0, (current_frame%nStrips-1)*n, 0, n)", args="c, nStrips, n") #(left, top, width, height)
}
function MergeLines(clip c, int n) {MergeLines2(c,n,n)}
function MergeLines2(clip c, int n,int i) {
i<2 ? c.SelectEvery(n) : stackvertical(MergeLines2(c,n,i-1),c.SelectEvery(n, i))
}
function makeedge(clip v, int x){
#Based on the properties of v, make an edge of x black + white pixels
x=x/2*2
v
blk=blankclip(last, width=x, color_yuv=$108080)
wht=blankclip(last, width=last.width-x, color_yuv=$EB8080)
stackhorizontal(blk, wht)
}
function corrbyline3(clip v1, clip v2){
#Correlate two videos, line by line, and return the horizontal cross correlation.
# when inputs 1 and 2 are edges (a square waveform), the length of the resulting CCF corresponds to the offset between the two edges
scale=2
v1
pointresize(v1.width, v1.height*scale) #scale line height; for YV12 each line must be at least 2 px high
h=last.height #PAL 2x576 lines = 1152
splitlines2(scale) #make n frames with one line (height=scale) from a frame
v2l=v2.crop(0,0,0,scale)
interleave(last, v2l)
pointresize(last, 400, 100) #to enlarge output of CCF, at least 200x100
corr=corr2d (last)
Crop(corr,100, 24, -0, -72).pointresize(corr.width,corr.height*4) #400x400
Crop(0, 126, -200, 2) #to get the interesting part of CCF (x=0...) 2 px height
selectevery(2,1) #drop the CCF output of every odd line
mergelines(h/2) #compose a frame from h/2 frames that contain a single line
v2=pointresize(v1,v1.width*2,v1.height*2)
stackhorizontal(v2,last,last.levels(clamp-2,1,clamp,0,255)) #clamp grey levels of the CCF output above 'clamp' to white
}
function corrbyline2(clip v1, clip v2){
#Correlate one video line by line, and return the cross correlation between two successive lines
scale=2
v1
pointresize(last.width, last.height*scale) #scale line height; for YV12 each line must be at least 2 px high
h=last.height #PAL 2x576 lines = 1152
splitlines2(scale) #make n frames with one line (height=scale) from a frame
pointresize(last, 400, 100) #to enlarge output of CCF, at least 200x100
corr=corr2d (last)
Crop(corr,50, 24, -0, -72).pointresize(corr.width,corr.height*4) #400x400
Crop(0, 126, -300, 2) #to get the interesting part of CCF (x=0...) 2 px height
mergelines(h/2) #compose a frame from h/2 frames that contain a single line
v2=pointresize(v1,v1.width*2,v1.height*2)
stackhorizontal(v2,last,last.levels(clamp-2,1,clamp,0,255)) #clamp grey levels of the CCF output above 'clamp' to white
}
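The pipeline inside corrbyline3 (SplitLines2 → Corr2D → MergeLines) amounts to computing one CCF per image row and stacking the rows back into a frame. A rough numpy analogue (hypothetical helper; the real Corr2D returns a 2-D correlation image that the script then crops down to the row of interest):

```python
import numpy as np

def corr_by_line(frame, model):
    # Cross-correlate every row of the frame with the edge-model row
    # (means removed) and stack the CCFs into an output image.
    rows = []
    for line in frame.astype(float):
        ccf = np.correlate(line - line.mean(), model - model.mean(), mode="full")
        rows.append(ccf)
    return np.vstack(rows)  # shape: (frame_height, 2*width - 1)
```

The peak position in each output row then tracks the border offset of that particular line.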
Here are a few first results.
It can be seen that the width of the cross correlation reacts to the luma of the input video: bright image parts produce a longer CCF output. The effect is smaller when the input contrast is lowered (done in this version by clamping all luma values above a certain threshold).
By looking at these pictures I am not so convinced anymore that cross correlation can provide a good border detection.
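One plausible explanation for the luma dependence: the CCF magnitude scales with the edge amplitude, and the output is displayed against a fixed grey threshold, so a brighter edge keeps more CCF pixels above it and paints a longer white stripe. A hypothetical numpy sketch (threshold and luma values invented for illustration only):

```python
import numpy as np

def stripe_length(line, model, threshold=10000.0):
    # Count CCF samples above a FIXED threshold, mimicking how a fixed
    # 'clamp' grey level decides which CCF pixels turn white on screen.
    ccf = np.correlate(line - line.mean(), model - model.mean(), mode="full")
    return int(np.count_nonzero(ccf > threshold))

model   = np.r_[np.full(2, 16.0), np.full(30, 235.0)]  # 2 black px + white
bright  = model.copy()                                 # high-contrast edge
dim     = np.r_[np.full(2, 16.0), np.full(30, 60.0)]   # low-contrast edge
clamped = np.clip(bright, 16, 43)                      # like levels(0,1,clamp2,0,clamp2)

print(stripe_length(bright, model),
      stripe_length(dim, model),
      stripe_length(clamped, model))  # → 3 1 1
```

The bright edge exceeds the fixed threshold over more lags than the dim or clamped one, consistent with the observation that clamping the input shrinks the effect.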
Last edited by sven_x; 31st October 2011 at 12:06.