Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Doom9's Forum > Capturing and Editing Video > Avisynth Usage
Old 22nd October 2011, 01:14   #61  |  Link
William.Lemos.BR
Registered User
 
Join Date: Oct 2010
Posts: 18
Just for the record: the problem I've reported before (with mt_binarize) had nothing to do with your script or masktools;
the problem was in my Avisynth (solved when I uninstalled and reinstalled it).
Sorry about that...
Old 22nd October 2011, 09:52   #62  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Testing the script v.05

I discovered that the thresh value in ScriptClip has an extreme influence on videos that have dark details in the pixels at the left and right edges: these are falsely treated as borders by the algorithm. (see picture)




Influence of the Thresh value

Thresh=72: much too large; it considers dark image details to be part of the borders and produces wrong results (see above)
Thresh=16: this should be the best value for standard-levels video, but I found the results to be extremely random, i.e. they react strongly to the noise superimposed on the ramp. Often gives very bad results (random line-offset fluctuations).
Thresh=12: much more stable, gives a nice vertical alignment of the lines. But in some videos the borders are no longer recognized.
Disadvantage: the algorithm assumes the real border lies somewhere in the deeper black border area, so it shrinks the whole line to match the standard line width and details are lost.


Conclusions and proposals
  • The tricky part of the algorithm is finding the border. A simple thresh reacts too strongly to noise in the transition area between the border and a line's video content; some kind of averaging should be applied.
  • Plausibility checks might be necessary to reject offset values that lie far outside the normal range. In a debug mode these checks could trigger detailed error information.
  • Perhaps the thresh value could somehow be adapted automatically to the luma levels (in other words: use autolevels on the frame being analyzed, but perform the stretching on the original frame).
  • The blur that occurs when shrinking a line could be avoided by stretching all lines to, say, 120% or even 200% instead of shrinking.
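A minimal numeric sketch of the first two proposals (in Python/NumPy rather than Avisynth; the function, its parameters and all numbers are hypothetical, not part of the actual script): average before thresholding, then sanity-check the resulting offset.

```python
import numpy as np

def find_border(line, thresh=12, avg=5, max_offset=24, prev_offset=0):
    # Average over `avg` pixels before thresholding, so a single noisy
    # pixel cannot fake an edge; reject implausible offsets by falling
    # back to the previous line's value.  All parameter names and
    # numbers here are hypothetical.
    smoothed = np.convolve(line, np.ones(avg) / avg, mode="same")
    above = np.nonzero(smoothed > thresh)[0]
    offset = int(above[0]) if above.size else 0
    if offset > max_offset:            # plausibility check
        offset = prev_offset
    return offset

# A noisy line: 8 px of near-black border, then picture content.
rng = np.random.default_rng(0)
line = np.concatenate([np.full(8, 2.0), np.full(56, 60.0)])
line += rng.normal(0, 3, line.size)
print(find_border(line))   # close to the true 8 px border
```

The averaging window trades a pixel or two of accuracy for stability against noise, which is exactly the Thresh=16 randomness described above.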

Last edited by sven_x; 22nd October 2011 at 10:34.
Old 22nd October 2011, 13:34   #63  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
This is a small part of the left border from the above frame, scaled to 800% with the luma range stretched to 0...128 (and 0...32). This gives an impression of how clever the algorithm has to be to find the real border. It is very difficult.
Old 22nd October 2011, 14:30   #64  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
@sven
Great post again! I implemented one of your suggestions already, it works for me.
I didn't design this for the "case #2", of black borders in existing video. But it depends on your video. My videos have huge jitter, yours seem quite smooth. I would need a sample of your video to work with.
Old 22nd October 2011, 15:11   #65  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Updated to a better resizer. You can also enlarge your clip before using the TBC.
Old 22nd October 2011, 15:20   #66  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Building a simple cross correlation in photoshop v0.2

What to do if one has no idea of programming languages, Mathematica and such things? Voilà: build a simple cross-correlation function in Photoshop!
(Edit: this is not as easy as I thought at first. This is meanwhile version 0.2 of this post.)
Below you see the result. The input was a frame whose lines had a bump on the upper left side.

Layer 1: border area from the frame (10 px black + border area + 10 px of the last value stretched to 10 px)
Layer 2: an ideal border (15 px black, 15 px 50% grey), mode = multiply
Then select all and copy all layers merged into one.
The following has to be applied only once and can be recorded with the "actions" panel in Photoshop:
- Insert the layer, copy it 9 times and set the opacity of each copy to 1/(1+n) (the first must have 100%, then 50%, 33%, 25%...)
- Shift the first 5 copies 1...5 pixels to the left, and copies 6...10 1...5 pixels to the right.
This way the middle row contains the sum of all layers. It is simply a trick to sum up the 10 pixels of each line of the product of layer 1 and layer 2.
- Then shift layer 2 (the ideal edge) 1 px to the left and run the same action. Do this for -2...-5 and +1...+5 px.
In a last step I copied the middle row of each layer set into one image, giving the cross correlation from tau = -5 px to +5 px as 10 rows of pixels. For a better look I inverted the result and enhanced the contrast using the levels menu.
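The 1/(1+n) opacity series works because a normal "over" blend at opacity 1/(n+1) turns the stack into a running arithmetic mean of all layers. A quick check of that arithmetic outside Photoshop (plain Python, hypothetical pixel values):

```python
# Compositing layer n at opacity 1/(n+1) over the stack so far yields
# the running arithmetic mean of all layers -- which is why the
# 50%, 33%, 25%... series sums the pixels correctly.
vals = [10.0, 20.0, 60.0, 30.0]        # hypothetical pixel values
acc = vals[0]                          # first layer at 100%
for n, x in enumerate(vals[1:], start=1):
    a = 1.0 / (n + 1)                  # 50%, 33%, 25%, ...
    acc = a * x + (1 - a) * acc        # normal "over" blend
print(round(acc, 6), sum(vals) / len(vals))   # 30.0 30.0
```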



It would be better to test such an algorithm in a mathematical programming language.

Problems:
Borders are dark areas whose luma levels are close to 0, so multiplying them with another value (the ideal edge) yields almost no information. We would obtain more information about the borders by inverting the image before multiplying.
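A hypothetical 1-D sketch of the same idea in Python/NumPy: using a zero-mean edge kernel as the "ideal edge" is equivalent to removing the mean (or inverting) before multiplying, so the near-zero border pixels carry weight in the products. All numbers are made up.

```python
import numpy as np

# The response of a zero-mean edge kernel peaks where the kernel
# straddles the real border, even under heavy noise.
rng = np.random.default_rng(1)
line = np.concatenate([np.full(20, 4.0), np.full(44, 80.0)])  # 20 px border
line += rng.normal(0, 8, line.size)                           # strong noise

k = np.concatenate([np.full(8, -1.0), np.full(8, 1.0)])       # zero-mean edge model

resp = np.correlate(line, k, mode="valid")
edge = int(np.argmax(resp)) + k.size // 2   # centre of best-matching window
print(edge)   # ~20
```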

Last edited by sven_x; 23rd October 2011 at 11:38.
Old 22nd October 2011, 15:39   #67  |  Link
Ghitulescu
Registered User
 
Ghitulescu's Avatar
 
Join Date: Mar 2009
Location: Germany
Posts: 5,773
Imagine doing this by hand 135,000 times for a regular VHS movie. Not to mention the interlacing issues...
__________________
Born in the USB (not USA)
Old 22nd October 2011, 16:44   #68  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
That's easy...
Code:
#Border Detection 0.3 by jmac698
#A function to find the left edge of a video with a black border
#Requirements:  corr2d  http://avisynth.org/vcmohan/Corr2D/Corr2D.html
#  GRunT
AVISource("D:\project001a\tbc2\vhs hysnc sample.avi")
border=16
crop(0,0,-last.width+border,0)
edge=makeedge(last,border/2)
corrbyline(last, edge)

function SplitLines(clip c, int n) {#duplicates then crops each copy in a different spot
    Assert(c.height%n == 0, "Clip height not a multiple of 'n'")
    Assert(!(c.IsYV12() && n%2==1), "'n' must be even for YV12 clips")
    nStrips = c.height/n
    c = c.ChangeFPS(nStrips*Framerate(c)).AssumeFPS(c) # Repeat each frame 'nStrips' times
    BlankClip(c, height=n) # template for ScriptClip result
    GScriptClip("c.Crop(0, (current_frame%nStrips)*n, 0, n)", args="c, nStrips, n")
}

function MergeLines(clip c, int n) {MergeLines2(c,n,n)}

function MergeLines2(clip c, int n,int i) {
  i<2?c.SelectEvery(n):stackvertical(MergeLines2(c,n,i-1),c.SelectEvery(n, i))
}

function makeedge(clip v, int x){
    #Based on the properties of v, make an edge of black/white, with white at x
    x=x/2*2
    v
    blk=blankclip(last, width=x, color_yuv=$108080)
    wht=blankclip(last, width=last.width-x, color_yuv=$EB8080)
    stackhorizontal(blk, wht)
}

function corrbyline(clip v1, clip v2){
    #Correlate two videos, line by line, and return the correlation surface.  The position of peak luma indicates maximum correlation position.
    scale=2
    interleave(v1, v2)
    pointresize(last.width, last.height*scale)
    h=last.height
    splitlines(scale)
    corr2d
    selectevery(2,1)
    pointresize(last.width, last.height*2)
    crop(0,0,0,-last.height/2)#get only the top line
    mergelines(h/2)
    pointresize(last.width, last.height/scale)
}

Last edited by jmac698; 22nd October 2011 at 17:40.
Old 23rd October 2011, 20:17   #69  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Testing cross correlation using Corr2D plugin

I ran a few tests with Corr2D, because its documentation says almost nothing about its implementation.

The origin of the output coordinate system lies at (128,128), which is 1/4 of the width and height of the input frame. The function is symmetrical in both the x and y directions.
Corr2D calculates the cross correlation between each two consecutive frames, so interleave(edge2,edge1) provides a test input that gives the cross correlation between two different edges.

Results


The first two rows show the auto correlation functions for a black 255 px border and a black 170 px border.
The next rows show the cross correlation for different edge combinations. One can see that the length of the correlation function corresponds to the pixel distance between the input edges.

With a very small distance of 3 px one cannot tell much about the distance, because the output does not go to zero at zero offset (see the ACF plots in the first two rows). A cross correlation between two identical edges is the same as the autocorrelation of that edge.

Edit: The bad resolution for small distances is an effect of scaling. If we blow up the input line and the reference line to, say, 400% length, then the CCF should be able to measure distances of a few pixels too.


Part 2: Getting closer to the real thing...

The next picture shows the cross correlation between an ideal edge model (256 px black + 256 px white) and a second edge (170 px black).

The second edge has been disturbed by
a) binomialblur(20)
b) binomialblur(80)
c) binomialblur(80) plus weaker contrast (black + dark grey with a luma of only 64)
d) all of the above plus strong superimposed noise

This was done to better simulate a real transition from the black border to the beginning of a line's video content.



The output of the CCF looks very stable. It does not react to noise, blur, or bad contrast.

Continued with a test of real video input in posting 90.

Last edited by sven_x; 30th October 2011 at 18:04. Reason: Part 2 of edge test, Link to part 3
Old 24th October 2011, 01:14   #70  |  Link
ChiDragon
Registered User
 
ChiDragon's Avatar
 
Join Date: Sep 2005
Location: Vancouver
Posts: 600
The technical details are over my head. Is this solution going to be limited to content without black or near-black as part of the active video image touching the dead black borders?
Old 24th October 2011, 04:13   #71  |  Link
vcmohan
Registered User
 
Join Date: Jul 2003
Location: India
Posts: 912
My plugin DeJitter may help under some circumstances.
__________________
mohan
my plugins are now hosted here
Old 24th October 2011, 04:56   #72  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
@sven,
Excellent analysis again; that's exactly what I was going to do to verify the calculations. One thing you should know: the location of the brightest pixel corresponds to the change in edge position between frames, so you are measuring the relative distance between the two edges. If one edge is fixed, this becomes the jitter. The coordinate origin (0,0) lies at (width/4, height/4).

@Chi
This is just an experiment proposed by sven to find a better way of detecting the edge of the video. The black/white has nothing to do with it; the correlation function is a way to detect a shape when there is noise. We make the ideal shape (a black/white border) and compare it against the video, and the location of the best match is the edge of the video (even though it's not black/white). The diagrams above are tests with fully artificial borders, to ensure we are getting the proper coordinates back.

@vc
Unfortunately I started this because I couldn't get your dejitter to work in my case, also it turns out we need not just shifting, but stretching to fix the video. If you could update your plugin it would probably be better than my script at this point.
Old 24th October 2011, 18:08   #73  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Look at my post above. I have updated it with part II of the test, which gets closer to the real thing (a blurred line with noise and low contrast).
Old 25th October 2011, 01:27   #74  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
@sven,
Great analysis again. I have analyzed correlation myself before; it is not affected by:
- the order of the pixels
- the mean of the two sets (brightness)
- noise
- multiplication (contrast) of one of the sets
- probably not blur either, since blur is a combination of multiplications
In short, it's a good way to match shapes, except for the order part. Anyhow, did you test my script on real video?
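Those invariances are easy to verify numerically. A small Python/NumPy check using the Pearson correlation coefficient on made-up data (the "order" caveat shows up as insensitivity to reordering, as long as both sets are reordered the same way):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(200)

def r(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_bright   = r(x, x + 50)                            # brightness offset: still 1
r_contrast = r(x, 3 * x)                             # contrast scaling: still 1
r_noisy    = r(x, x + rng.normal(0, 0.05, x.size))   # mild noise: slightly below 1
perm = rng.permutation(x.size)
r_shuffled = r(x[perm], (3 * x + 50)[perm])          # same reorder of both: still 1
print(r_bright, r_contrast, r_noisy, r_shuffled)
```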

Last edited by jmac698; 25th October 2011 at 01:32.
Old 26th October 2011, 14:59   #75  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
vcmohan, the author of Corr2D, has sent me some more comments that I am allowed to post here.

Thanks for using the Corr2D plugin. It was designed to find shift between two frames. [...]

We want to process each line separately. In fact we only need a Corr1D plugin (which would be much faster), but we have none :-)

So in a first step we are testing whether cross correlation can be used to find the offset at which the video signal starts in a line, compared with the edge position of an ideal line model.
If it works, the next step will be for someone to write a plugin.

I tested DeJitter and some other Avisynth scripts that were posted over the years. The results are very random, and it is not clear what produces the artefacts.

Do you have an example script where you use the output of Corr2D as input for another function?

Apart from the FFT display, the Corr2D plugin's output is textual, at the end; it cannot be used automatically by other plugins. It helps in arriving at parameters for the FExpanse plugin, or the UNFurl plugin.
1D FFT correlation is used in the UNFurl plugin, but it is averaged over a number of adjacent scan lines, so it will not help you. In the FFTQuiver plugin, the F1Quiver function does a 1D FFT and displays it in test mode.

For your specific need, an extension to one of these plugins would have to be coded. While making the DeJitter plugin I first tried 1D correlation, but gave up, as there was no way of separating the image characteristics from the scan-line distortion.

This is exactly the reason even FFT fails: unless the image has a measurable difference at the left edge, it is not possible to identify its start.


When there are other structures in the very first pixels of the line, they will contribute to the cross-correlation output.
In any case, I think a dejitter algorithm has to apply some plausibility checks.
Another idea is to clamp the video content of the line to a luma of, say, 30%, so that the jump between the black border pixels and the video content gets more weight.
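A deterministic toy demonstration of that clamping idea (Python/NumPy, everything synthetic and hypothetical): a strong edge inside the picture can out-correlate the border, but clamping content luma to ~30% squashes the internal contrast so the border step wins again.

```python
import numpy as np

# Limit content luma to ~30 % (76 of 255) so a bright structure inside
# the picture cannot out-weigh the border->content step.
line = np.concatenate([np.full(16, 4.0),     # 16 px black border
                       np.full(10, 60.0),    # dark content at line start
                       np.full(38, 230.0)])  # bright structure further in

k = np.concatenate([np.full(6, -1.0), np.full(6, 1.0)])  # matched edge filter

def edge_pos(sig):
    return int(np.argmax(np.correlate(sig, k, mode="valid"))) + k.size // 2

print(edge_pos(line))                          # 26: locks onto the bright structure
print(edge_pos(np.minimum(line, 0.30 * 255)))  # 16: the real border
```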
Old 26th October 2011, 15:16   #76  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
I already posted a working script for this, to get correlation on each line. See above.
Old 26th October 2011, 20:10   #77  |  Link
William.Lemos.BR
Registered User
 
Join Date: Oct 2010
Posts: 18
First of all I'd like to congratulate you on your good work. I've been following your efforts to make a software TBC for some time, and how you didn't give up even with so many people saying it would be impossible. Your tenacity brings to mind the quote attributed to Einstein: "Something is only impossible until someone doubts it and proves otherwise". I admire that!

I've tested your script and can say: it works wonderfully. Some adjustment of the threshold is necessary depending on the video, but it really works!

I intend to use your script to line up the fields of multiple copies of the same video in order to calculate their median (I've contacted you in the related thread, do you remember?). At that time, as I said, I had come to the conclusion that the shifts between each video's fields were equivalent, so it wouldn't be necessary to line them up.

But then I tried to apply the same principle to the audio. After much struggle I managed to do it using MATLAB, since I still don't have the needed programming skills (I can share the MATLAB program I made with you, if you want). As expected there was a noise reduction, but with a side effect: a residual, constant noise (a light "crackling"). I figured out the reason: the misalignment between the audio of the different copies. The audio of a videotape doesn't have a sync pulse, so even if you line up the beginning of each waveform, that will not fix the shift present some samples ahead. That was the cause of the constant crackling.

That made me think: if this happens with audio (which has a much smaller bandwidth), it happens all the more with video. So I took the time to compare the different videos again line by line, and noticed that there actually WERE small shifts between them. My (obvious) conclusion is that it is imperative to ensure strict alignment of the pixels between the copies in order to do a correct median; otherwise the resulting video will be full of noise.
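That conclusion is easy to reproduce numerically. A hypothetical Python/NumPy sketch (a sine stands in for picture content, circular shifts stand in for jitter): a per-sample median over three noisy copies only helps if the copies are aligned first.

```python
import numpy as np

rng = np.random.default_rng(4)
truth = np.sin(np.linspace(0, 40 * np.pi, 500))   # stand-in for one line's signal
shifts = [0, 3, 5]                                # per-copy jitter, in samples
copies = [np.roll(truth, s) + rng.normal(0, 0.1, truth.size) for s in shifts]

naive   = np.median(copies, axis=0)                              # no alignment
aligned = np.median([np.roll(c, -s) for c, s in zip(copies, shifts)], axis=0)

def rms_err(x):
    return float(np.sqrt(np.mean((x - truth) ** 2)))

print(rms_err(copies[0]), rms_err(naive), rms_err(aligned))
# the aligned median beats a single copy; the naive median is worse than both
```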

The problem is that at its present stage of development your script is extremely slooooooow, and I need to use it right now. I'd like to help somehow, but I still need time to learn and understand all the tech stuff.

So, when I was practically giving up, I remembered an old VirtualDub plugin that can do something similar and decided to give it a try. Maybe you know it...
http://midimaker.narod.ru/filters/vhsrest.html

It is in Russian (nothing Google Translate can't solve). Have you tested this one? It is impressive, much better than DeJitter! It is surprisingly fast and has reliable edge detection, once you make the needed adjustments to its parameters (though your automatic threshold detection algorithm beats it).

I've noticed it is important to set:

"Max offset" - big enough to reach the video's edge (increasing it too much may provoke wrong edge detection)
"Interline SR" - this limits the shift difference between neighbouring lines; don't increase it too much either (in my case I used 3, but it depends on the video)
"Additional offset" - OFF (it seems to be a way to avoid wrong detection caused by noise, but in my case it only provoked jitter in the top lines)

It can also be used with a kind of sub-pixel precision using this trick: stretch the horizontal resolution 2x before processing (it seems you need to select the "2x oversample for processing" option for that). I only tested it yesterday, but the results were promising. (Note: this option caused a crash in my VirtualDub 1.9.11, 32-bit. I had to use another program to run it, and then it worked.)

This plugin doesn't resize the lines (it only aligns them), so our approach is more robust. But I think you may get some new ideas from it, or even use it, as I will.

I'll let you know if I discover anything that can help. Sorry for my bad English...

Best regards!

Last edited by William.Lemos.BR; 26th October 2011 at 20:38. Reason: Typing mistake
Old 26th October 2011, 23:06   #78  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Thanks, that was very interesting.
Audio
I've completely ignored audio. In the case of multiple-pass VHS, I was thinking of taking just one version's audio. I'm surprised that combining multiple audio passes leads to crackling; I've mixed two tracks of audio before and it created a phasing effect or a hollow sound. Ideally, the audio should be stretched at the same time as the video, line by line. However, that amounts to only about 3 samples at 48 kHz per video line, so it's a very small effect.
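The 3-samples-per-line figure follows directly from the line rate (assuming PAL timing, which fits the sources discussed here):

```python
# "3 samples per line" at 48 kHz, checked for PAL timing
# (625 lines x 25 frames/s; NTSC gives a similar number):
lines_per_second = 625 * 25            # = 15625 lines/s
samples_per_line = 48000 / lines_per_second
print(samples_per_line)                # 3.072
```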

Multipass video
I had thought of a completely different technique for this: a "relative" TBC. Each copy is lined up, line by line, against the others, but the lines themselves are not straightened, so each copy is "wrong" in the same way, which is enough to perform the median. I had noticed that my lines looked approximately the same, but hadn't looked at them in detail, so this is good to know. I have noticed the mismatch effect on a large scale, when one of my copies was missing a frame; it's an odd effect. However, the resolution of VHS is low enough that I feel a few pixels of mismatch don't make a huge difference. There are a few ways to line up the lines: the Corr2D plugin; mvtools, especially with horizontal-only search; even line-by-line versions of DeJitter or vhs-restore. Did you notice any stretch as well as shift in your lines? I suggest doing a relative line-up, then the median, followed by a software TBC.

Improving Software TBC
I know my current version is slow; it would be even slower without the new plugin made for it. I'm sure I can improve it, but that will take time and work. I see this problem as broken down into a detection phase and a repair phase; the detection is fast, but the repair is slow right now. I can improve detection for "case #2", black borders in existing videos. I could even use vhs-restore.vdf in the detection: apply it normally and also to a mirror-image of the video, detect what actions it took, and then re-apply the repair myself. Quite a long way around it. Ideally I will eventually have to write a full plugin for this. The detection improvements are obvious; someone has suggested them before (such as limiting the per-line change). However, this depends on the source as well: others seem to have typically small change per line, but my current test video has large change per line.

Coming Soon - Perfect TBC
I am very close to the ultimate test which I've always wanted to do: a perfect TBC. I do this by making a test signal which I can line up perfectly. I put this signal on each edge and can then gather statistics of the true jitter, and also test the performance of any "blind" dejitter algorithm. This test signal is amazing: I can deduce the linearity of the digitizer, find the video-levels calibration, do dejittering and shifting, measure frequency response, and measure noise. Ultimately it can perfectly calibrate every aspect. I can't wait to see what such a video will look like!
Old 26th October 2011, 23:38   #79  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
For another topic: thanks for your compliments! On the contrary, why do people think it wouldn't work? I'm only guessing here, but perhaps their reasoning is that it's unheard of and no one has done it before, or that people have tried without real success; and theoretically they know a hardware TBC has special access to the signal, so a software one "can't" work. Why has no one noticed before that you need stretch? Has "mental baggage" really stopped many people from trying? All I've done is line up two edges of an image; it's really simple. I don't know that I "don't give up"; rather, I retained my interest. I love learning and like to explore something until I understand it for myself, so if someone tells me it's impossible I still want to know why, and maybe then I notice it's not impossible.
I generally like to deal with abstract ideas. In one personality theory, the "dreamer" dimension is predominant in me; this is common in about 1/3 of the population. Most people have the "practical doer" personality and are likely to give up much sooner, and, for example, either say it can't be done or just buy the hardware TBC. They tend to disregard basic research as impractical. I know the world needs each type of personality; many inventions really need a specific personality. After something is invented we lose interest, and that's where the doer might take over (perhaps a business partner) to get it out there.
Old 27th October 2011, 00:08   #80  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
I should point out that sven was the first to post about noticing stretch; though it was obvious in my tests.
Tags
tbc