30th October 2011, 17:58   #90
sven_x
Cross correlation between a real video border and an edge model

This is part 3, following postings 69 and 82.

Based on Border Detection 0.3 by jmac698 (same thread, here), I have modified the script so that it calculates the cross correlation between the border of a real video and an edge model whose parameters can be adjusted (edge blur, white level, and the number of black pixels of the edge).

You are welcome to do your own tests.
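To make the idea concrete outside of AviSynth, here is a minimal 1-D sketch in Python (all names and sizes are illustrative, not part of the script below): an edge model is slid along a scanline, and the shift with the highest correlation marks the border position.

```python
def edge_model(n, black_px):
    """Square-wave edge: black_px 'black' samples (-1), then 'white' (+1)."""
    return [-1.0] * black_px + [1.0] * (n - black_px)

def slide_corr(line, model):
    """Cross correlation of 'model' against 'line' at each non-wrapping shift."""
    return [sum(l * m for l, m in zip(line[s:], model))
            for s in range(len(line) - len(model) + 1)]

line = edge_model(16, 3)   # scanline whose black border is 3 px wide
model = edge_model(8, 1)   # reference edge with a 1 px black run
ccf = slide_corr(line, model)
shift = ccf.index(max(ccf))  # 2 = 3 - 1: offset between the two edges
```

With mean-free (±1) signals the CCF has a unique peak exactly at the offset between the two edges, which is what the script tries to visualize per line.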

PHP Code:
#AvsP script

#loadplugin("J:\plugins\Gscript.dll")  # for the y=0 ... loop (GScript)
loadplugin("J:\plugins\GRunT.dll")  # for use of runtime functions inside user functions
loadplugin("J:\plugins\Corr2D.dll")
loadplugin("J:\plugins\VariableBlur.dll")
loadplugin("J:\plugins\NNEDI3.dll")

# Border Detection, Sven_X Corr2D version, based on 0.3 by jmac698
# see http://forum.doom9.org/showthread.php?p=1533363#post1533363
# A function to show the cross correlation between the left border of a video and an edge model that can be tweaked.
# Requirements: Corr2D  http://avisynth.org/vcmohan/Corr2D/Corr2D.html
# (in fact we only need a Corr1D, which would be much faster, but there is none...)
# GRunT (NNEDI3, VariableBlur in some cases for the edge model)

SetMemoryMax(800)  #set this to 1/3 of the available memory  

[<separator="Clamping">]
clamp2=[<"Input clamping", 16, 255, 43>]
global clamp=[<"cff out clamping", 16, 255, 27>]
mag2=[<"Magnification", 1, 8, 1>]

[<separator="Edge model">]
bwidth=[<"Black px", 2, 16, 2>]
bblur=[<"Edge blur (0)", 0, 5, 1>]
bgrey=[<"Edge white (255)", 16, 255, 43>]

Directshowsource("J:\test\test.avi")
converttoyuy2()
#nnedi3(-2)  # deinterlace, double frame rate, to view a non-interlaced version
#return last  # view the input video

border=32
crop(0,0,-last.width+border,0)  # keep only the leftmost 'border' pixels
#crop(0,0,-0,last.height*1/4)  # for test purposes only, faster
levels(0,1,clamp2,0,clamp2)  # clamp bright areas
edge=makeedge(last,bwidth).converttoYV12().averageblur(bblur).converttoYUY2().levels(0,1,255,0,bgrey)
# make an edge model with a minimum edge at x=2 px
#return edge
corrbyline3(last,edge)  # input, reference
pointresize(last.width*mag2/2,last.height*mag2/2)  # magnify the output


function SplitLines2(clip c, int n) {
    # Duplicates the clip, then crops each copy in a different spot.
    Assert(c.height%n == 0, "Clip height not a multiple of 'n'")
    Assert(!(c.IsYV12() && n%2==1), "'n' must be even for YV12 clips")
    nStrips = c.height/n  # = 576 lines for PAL
    c.ChangeFPS(nStrips*Framerate(c)).AssumeFPS(c)  # repeat each frame 'nStrips' times
    BlankClip(c, height=n)  # template for the ScriptClip result
    GScriptClip("c.Crop(0, (current_frame%nStrips)*n, 0, n)", args="c, nStrips, n")  # (left, top, width, height)
}


function MergeLines(clip c, int n) {MergeLines2(c,n,n)}

function MergeLines2(clip c, int n, int i) {
    i<2 ? c.SelectEvery(n,0) : stackvertical(MergeLines2(c,n,i-1), c.SelectEvery(n,i-1))
}


function makeedge(clip v, int x) {
    # Based on the properties of v, make an edge of x black + (width-x) white pixels.
    x=x/2*2
    v
    blk=blankclip(last, width=x, color_yuv=$108080)
    wht=blankclip(last, width=last.width-x, color_yuv=$EB8080)
    stackhorizontal(blk,wht)
}

function corrbyline3(clip v1, clip v2) {
    # Correlate two videos line by line and return the horizontal cross correlation.
    # When inputs 1 and 2 are edges (square waveforms), the length of the resulting CCF corresponds to the offset between both edges.
    scale=2
    v1
    pointresize(v1.width, v1.height*scale)  # scale the line height; for YV12 a line must be at least 2 px high
    h=last.height  # PAL: 2x576 lines = 1152
    splitlines2(scale)  # make n frames, each with one line (height=scale) of a frame
    v2l=v2.crop(0,0,0,scale)
    interleave(last,v2l)
    pointresize(last, 400, 100)  # enlarge the input of the CCF, at least 200x100
    corr=corr2d(last)
    Crop(corr, 100, 24, -0, -72).pointresize(corr.width, corr.height*4)  # 400x400
    Crop(0, 126, -200, 2)  # get the interesting part of the CCF (x=0...), 2 px high
    selectevery(2,1)  # drop the CCF output of every odd line
    mergelines(h/2)  # compose a frame from h/2 frames that each contain a single line
    v2=pointresize(v1, v1.width*2, v1.height*2)
    stackhorizontal(v2, last, last.levels(clamp-2,1,clamp,0,255))  # clamps grey levels of the CCF output above 'clamp' to white
}


function corrbyline2(clip v1, clip v2) {
    # Correlate one video line by line and return the cross correlation between two succeeding lines.
    scale=2
    v1
    pointresize(last.width, last.height*scale)  # scale the line height; for YV12 a line must be at least 2 px high
    h=last.height  # PAL: 2x576 lines = 1152
    splitlines2(scale)  # make n frames, each with one line (height=scale) of a frame
    pointresize(last, 400, 100)  # enlarge the input of the CCF, at least 200x100
    corr=corr2d(last)
    Crop(corr, 50, 24, -0, -72).pointresize(corr.width, corr.height*4)  # 400x400
    Crop(0, 126, -300, 2)  # get the interesting part of the CCF (x=0...), 2 px high
    mergelines(h/2)  # compose a frame from h/2 frames that each contain a single line
    v2=pointresize(v1, v1.width*2, v1.height*2)
    stackhorizontal(v2, last, last.levels(clamp-2,1,clamp,0,255))  # clamps grey levels in the CCF output above 'clamp' to white
}


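What corrbyline3 renders visually can be sketched per line in plain Python (again only a hedged illustration with made-up names and sizes, not the AviSynth pipeline): each scanline is correlated with a fixed edge model, and the position of the CCF maximum gives that line's border width.

```python
def edge_model(n, black_px):
    """Square-wave edge: black_px 'black' samples (-1), then 'white' (+1)."""
    return [-1.0] * black_px + [1.0] * (n - black_px)

def slide_corr(line, model):
    """Cross correlation of 'model' against 'line' at each non-wrapping shift."""
    return [sum(l * m for l, m in zip(line[s:], model))
            for s in range(len(line) - len(model) + 1)]

def border_profile(frame, model):
    """One value per scanline: the shift of the CCF maximum."""
    profile = []
    for line in frame:
        ccf = slide_corr(line, model)
        profile.append(ccf.index(max(ccf)))
    return profile

# Three lines with black borders of 2, 3 and 4 px against a model whose
# black run is 1 px wide: the peak shifts are border - 1.
frame = [edge_model(16, b) for b in (2, 3, 4)]
profile = border_profile(frame, edge_model(8, 1))  # -> [1, 2, 3]
```

Stacking these per-line results back into one image is the role of SplitLines2/MergeLines in the script above.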

Here are a few first results.

[result images]

It can be seen that the width of the cross correlation reacts to the luma of the input video: bright image parts produce a longer CCF output. The effect is smaller when the input contrast is lowered, which this version does by clamping all luma values above a certain threshold.
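This luma dependence can be reproduced in a small Python sketch (hedged: the levels and the display threshold are made up). With raw, non-mean-removed correlation, bright content keeps the CCF near its maximum across all shifts, so more of the rendered CCF row exceeds a fixed display threshold; clamping the input luma first suppresses this.

```python
def slide_corr(line, model):
    """Raw (not mean-removed) cross correlation at each non-wrapping shift."""
    return [sum(l * m for l, m in zip(line[s:], model))
            for s in range(len(line) - len(model) + 1)]

model   = [16] + [235] * 7               # edge model in TV levels
dark    = [16, 16] + [60] * 14           # 2 px border, dark content
bright  = [16, 16] + [235] * 14          # 2 px border, bright content
clamped = [min(v, 100) for v in bright]  # input clamping, as in the script

def visible_shifts(line, threshold=200_000):
    """How many CCF samples would exceed a (hypothetical) display threshold."""
    return sum(1 for v in slide_corr(line, model) if v >= threshold)

# The bright line lights up the whole CCF row, not just the edge position;
# after clamping it behaves like the dark line again.
```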

Looking at these pictures, I am no longer so convinced that cross correlation can provide good border detection.

Last edited by sven_x; 31st October 2011 at 12:06.