Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 25th April 2011, 04:55   #21  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Wow, I think there's actually a good chance of fixing that: the static background can serve as a reference point. I'm working on something else at the moment, but I have a lot of the pieces I need to script this.
Old 6th May 2011, 22:47   #22  |  Link
pbristow
Registered User
 
 
Join Date: Jun 2009
Location: UK
Posts: 269
*CHEERS* for jmac698's ongoing efforts.

It occurs to me that one approach to this problem would be a tweak to MVTools: it already has a "horizontal only" motion detection mode; what it would need next is a way to define one or more complete video lines as a block, rather than a square-ish block as at present. (If we can do 32x32 pixels, then why not 1024x1?) Then you feed it frames of video, each of which is a single line of the source (probably resized to a height of 4 or 8, to get around clip format restrictions), and let it motion-compensate the entire line (frame) to an average of the previous and next lines' (or frames') positions. Re-stitch the lines back into frames, and see how it looks...
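For illustration, this per-line idea can be sketched outside AviSynth too. Below is a minimal Python sketch, assuming frames arrive as 2-D numpy arrays; `estimate_shift` is a hypothetical stand-in for MVTools/DePan motion estimation (a crude exhaustive sum-of-squared-differences search). Each line is shifted toward the average of its best alignment against the previous and next lines, then the lines are re-stitched into a frame.

```python
import numpy as np

def shift_line(a, s):
    """Shift a 1-D line by s pixels, filling the uncovered end
    with the nearest original edge pixel."""
    out = np.roll(a, s)
    if s > 0:
        out[:s] = a[0]
    elif s < 0:
        out[s:] = a[-1]
    return out

def estimate_shift(line, ref, search=8):
    """Hypothetical stand-in for MVTools/DePan motion estimation:
    the integer shift of `line` that best matches `ref`, found by
    exhaustive sum-of-squared-differences search."""
    best_s, best_err = 0, np.inf
    for s in range(-search, search + 1):
        err = np.sum((shift_line(line, s) - ref) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

def stabilize_frame(frame, search=8):
    """Move each line to the average of its alignment against the
    previous and next lines, then re-stitch the frame."""
    src = frame.astype(np.float64)
    out = src.copy()
    for y in range(1, src.shape[0] - 1):
        up = estimate_shift(src[y], src[y - 1], search)
        down = estimate_shift(src[y], src[y + 1], search)
        out[y] = shift_line(src[y], round((up + down) / 2))
    return out.astype(frame.dtype)
```

Note the catch with a purely relative reference: a single glitched line is pulled back into place, but its clean neighbours also get half-pulled toward the glitch, so errors smear rather than vanish.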

Is that anything like what you're doing?

EDIT: ...Or, of course, you could use DePan rather than MVTools, since that's already designed to track and move whole frames instead of little blocks. [SLAPS SELF FOR MISSING THE OBVIOUS]

Last edited by pbristow; 6th May 2011 at 22:49. Reason: ('cos I'm thick!)
Old 10th May 2011, 17:48   #23  |  Link
Ghitulescu
Registered User
 
 
Join Date: Mar 2009
Location: Germany
Posts: 5,773
We are not discussing HW TBCs here, but whether it is feasible to implement one in software.
__________________
Born in the USB (not USA)
Old 13th May 2011, 08:43   #24  |  Link
tenantfile
Registered User
 
Join Date: Apr 2011
Posts: 1
I didn't go through the whole paper, so I'm not sure if this was already covered, but how does it detect the edge of the active picture, especially in a dark scene or when there is a black object at the edge of the picture?
Old 13th May 2011, 16:33   #25  |  Link
NoX1911
Registered User
 
 
Join Date: Aug 2003
Location: Germany
Posts: 186
Just a guess, but I think the dedicated hardware devices do it by the 'h-sync pulse', which looks like a white vertical line in the offscreen area. The problem is that it's even outside the 53.33µs area (the area the drivers for capture cards allow access to), so you cannot see it. Theoretically it's just a driver issue...
Old 9th October 2011, 16:08   #26  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Suggesting a software TBC using cross-correlation

Has anybody tried a cross-correlation function to find out the actual line offset?

I have thought about this issue for a while. These are the results I came up with:

The cross-correlation function in this case is something like

F(tau) = 1/C * sum over x of f1(x - tau) * f2(x)

where
f1(x) are the pixels of the actual line, with x the pixel positions,
f2(x) are the pixels of a reference line,
C is a scaling constant,
tau is the candidate pixel offset, ranging from -range to +range.


The most important question is which function (i.e. which line) should be taken as the reference function f2.
The real goal is not to move the lines so that they best fit their neighbours or their predecessors/successors in time. The real goal is to get stable lines which all begin at the same point x.
That's why I suggest using a master pattern of an ideal line start as the reference function f2. It would be something like ____------ (a few black pixels followed by grey). Perhaps an averaged line (the average of all even or odd lines of a frame) would give a better cross-correlation, one less dependent on the average luma of that frame.
The grey level of the reference line can also be adjusted dynamically to the average grey level of the actual frame.

Because pixel values in videos are in the range 0…255, both functions f1 and f2 have to be inverted, so that the black levels present at the beginning of the line get more weight.

Speed enhancement
For a fast computation, the cross-correlation function is computed over only the first, say, 50 pixels of a line.
Since the result is a floating-point value and we are just looking for the maximum of the cross-correlation, the factor C is not needed. The value of tau at the maximum is the offset by which the line must be moved to the left or to the right.
For the cross-correlation, the missing pixels on the left and right can simply be filled with the value of the last pixel.
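As a concrete illustration of the search described above, here is a minimal Python sketch, with lines assumed to be 1-D numpy arrays of 0…255 values. The suggested inversion is left out because the sketch uses a zero-mean, normalised form of the correlation (which is unaffected by inversion); the normalisation also matters because a raw dot product of non-negative signals is biased toward whichever shift overlaps the most bright samples.

```python
import numpy as np

def shift_line(a, tau):
    """Shift a line by tau pixels; missing pixels on the left/right
    are filled with the value of the nearest edge pixel."""
    out = np.roll(a, tau)
    if tau > 0:
        out[:tau] = a[0]
    elif tau < 0:
        out[tau:] = a[-1]
    return out

def line_offset(line, ref, search=10, width=50):
    """Offset tau in [-search, +search] that best aligns `line` with
    the reference line `ref`, using a zero-mean normalised
    cross-correlation over the first `width` pixels only."""
    f2 = np.asarray(ref, dtype=np.float64)[:width]
    f2 = f2 - f2.mean()
    f1 = np.asarray(line, dtype=np.float64)
    best_tau, best_score = 0, -np.inf
    for tau in range(-search, search + 1):
        w = shift_line(f1, tau)[:width]
        w = w - w.mean()
        # normalise so the score is a correlation coefficient in [-1, 1]
        score = np.dot(w, f2) / (np.linalg.norm(w) * np.linalg.norm(f2) + 1e-12)
        if score > best_score:
            best_tau, best_score = tau, score
    return best_tau
```

With a ____------ reference (12 black pixels, then grey), a line whose border starts 3 pixels too late comes back as tau = -3, i.e. shift the line 3 pixels to the left to correct it.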

I think this might be a more stable approach than checking whether a pixel value crosses a certain threshold, which was suggested before in this thread, here, here, here, or with Dejitter. The stabilising effect of those plugins was fairly poor: random VHS noise adds an element of chaos to when a pixel value crosses a threshold. A cross-correlation function averages out the influence of the noise much better.

The resulting value must undergo additional plausibility checks:
a) The line offset must be within a permitted range.
b) Because jitter results from mechanical properties, the jitter between two successive lines is relatively small compared to the absolute jitter values possible within a single frame (the border where the lines begin is somewhat wavy).
b2) The borders of the even and of the odd frames look rather constant within each group. Perhaps the resulting line offset values for even or odd frames could be stabilized by a temporal average over some frames of the same group.
c) When there is no clear result, do nothing, or take the average value of the two neighbouring lines.
d) Use a greater part of the line for the correlation when the fast correlation of the border area alone gives no clear result (but then it must be ensured that the line contains no moving areas).
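Checks (a)-(c) amount to a post-processing pass over the per-line offsets. A rough Python sketch, assuming we already have one raw offset and one confidence value (e.g. the correlation peak) per line; the function name and thresholds are made up for illustration, and checks (b2) and (d), which need field grouping and image access, are left out.

```python
import numpy as np

def sanitize_offsets(raw, conf, max_abs=10, max_step=2, min_conf=0.5):
    """Plausibility checks on per-line offsets:
    (a) clamp each offset to the permitted range,
    (b) limit line-to-line change to max_step pixels (mechanical
        jitter between successive lines is small),
    (c) replace low-confidence lines by the average of their
        neighbours."""
    out = np.asarray(raw, dtype=np.float64).copy()
    conf = np.asarray(conf, dtype=np.float64)
    # (c) no clear result: take the average of the two neighbour lines
    for i in range(1, len(out) - 1):
        if conf[i] < min_conf:
            out[i] = (out[i - 1] + out[i + 1]) / 2.0
    # (a) permitted range
    out = np.clip(out, -max_abs, max_abs)
    # (b) successive lines may only jitter a little
    for i in range(1, len(out)):
        step = out[i] - out[i - 1]
        if abs(step) > max_step:
            out[i] = out[i - 1] + np.sign(step) * max_step
    return out
```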


What are your thoughts?

P.S.: Cross-correlation using fftw3.dll is not necessary, because the cross-correlation function only has to be computed for a few pixels, which might be done faster directly rather than via an FFT.

Last edited by sven_x; 9th October 2011 at 16:12.
Old 9th October 2011, 20:39   #27  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Hi,
I've managed to tweak a driver and can now capture the hsync, just like a hardware TBC, so I should be able to do anything a hardware TBC does in software.
http://screenshotcomparison.com/comparison/86066
I've also worked out all the math in the Nikolova paper except for one point, so I'm ready to code the algorithm. As for pbristow's idea, I've tried that too; it turns out that the captures I have (confirmed by one other person) are consistent between captures, so, as I suspected, there is already a line TBC active in the capture card; it just doesn't work correctly (however consistently). What I mean is, two captures of the same frame are lined up with each other.
I also have my splitbylines and correlation scripts to play with, and they could line some things up, most notably, say, a colorized VHS to a stable b/w DVD.
Anyhow, as you can see, even with a good bright line or bright image it's still not working for me. The problem is somewhere else, not the noise. I'm thinking I need to line up both edges of the picture.
I've also recreated that Russian's experiments and have digitized up to 1700 pixels per line. I can think of a way to digitize HD even with an ordinary capture card. It looks like noise when you try it, but there's a repeating pattern to the scrambled lines that you can fix. I've already captured the entire VBI area and 480p.

As to the question of how the mathematical paper deals with a black edge: it has nothing to do with looking at edges; it analyzes the content of the entire lines and matches the textures up to each other.

Last edited by jmac698; 9th October 2011 at 20:43.
Old 9th October 2011, 20:50   #28  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
sven,
As to your correlation idea, what you want is a robust edge finder, even better one with sub-pixel precision. There are many ways to do this, such as SUSAN: http://users.fmrib.ox.ac.uk/~steve/s...san/node8.html
I don't see that as the problem now, because even in a test case with a bright picture the jitter was quite bad, much worse than the single-pixel error that noise could have caused. Just look at my stills. (I have one aligned just by the active image edge, and it doesn't look much better, even in bright areas.)
However if that works on some of your samples, I could certainly make something for it.

@g
Ahh, I probably have the scripts to work on your sample now... but my experiments are more fun right now
Old 10th October 2011, 18:06   #29  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Now we're discussing in two threads.
Perhaps we should merge them into one, because things are becoming clearer now, and we're getting closer to finding a new algorithm.

With the Dejitter script I have analyzed a screenshot with some VHS jitter, which gives some answers to my own questions in the post above.

Quote:
Correction
I have imported the synced screenshot into Photoshop. Line height was increased to 300% to get a better view.
Using a fixed selection window (720px x 3px) I copied each line into a new layer and then manually shifted that line to a position that looked good.

The result is given below:


Conclusions
If a correction of the image can be done by eye, it must also be possible to put it into a computer algorithm.
  1. Lines from VHS grabs cannot be synced using what was regarded as the sync signal.
  2. Lines from VHS grabs can be synced using cross-correlation between subsequent lines.
  3. There is also a small jitter in line length that needs extra treatment.
  4. The line offset jitter does not correlate very well with the beginning of the lines (i.e. the start of the run of totally black pixels), though not as badly as with the sync signal.
This corrects my first suggestion: the line start is not a good reference for finding the beginning of a line. But a match can be done by content (it correlates to some degree with the previous line).
A correction of the line length should also be applied (which seems to be only about 1...2 pixels of jitter).

If the jitter has been reduced enough, further improvement can be achieved by using NNEDI3 interpolation on even and odd frames with a weighted merge, as Didée suggested here (example images here). His script works wonders.

Last edited by sven_x; 12th October 2011 at 17:55.
Old 10th October 2011, 18:32   #30  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
I posted Didée's comparisons here: http://screenshotcomparison.com/comparison/86447
That's just unbelievable...
Old 10th October 2011, 19:07   #31  |  Link
NoX1911
Registered User
 
 
Join Date: Aug 2003
Location: Germany
Posts: 186
The Snow White pictures look suspicious. The jitter is more or less constant in the 52µs area, but there's a variable offset to the hsync from line to line. It looks more like there are dropouts in the porch area and the card doesn't really brute-sample continuously.

Edit: Where are the download links for the video?

Last edited by NoX1911; 10th October 2011 at 21:17.
Old 11th October 2011, 09:24   #32  |  Link
Ghitulescu
Registered User
 
 
Join Date: Mar 2009
Location: Germany
Posts: 5,773
Quote:
Originally Posted by NoX1911 View Post
Snowwhite pictures look suspicious.
The non-software-TBCed video looks more "stable" than the TBCed one (mouse over).
The algorithm must start from the end of the line and work towards its beginning, not vice versa.
__________________
Born in the USB (not USA)
Old 11th October 2011, 13:59   #33  |  Link
Mounir
Registered User
 
Join Date: Nov 2006
Posts: 780
Perhaps THIS will help for a great software TBC.
I quote:
"Free AviSynth filter for fixing the situation when different lines of the source frame are placed in different output frames. This often occurs during movie capture, when odd lines are one frame later than even lines."
Old 11th October 2011, 19:29   #34  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Hi,
Try importing my test image and SelectOdd. You can see that interlacing isn't the problem.
A lot of people have been photoshopping! I feel sorry about that, because I have a way to automatically measure jitter/stretch and to align, with subpixel precision. This will tell us a lot.
I need to make a better test image. I'm going to include a diagram to show where the front/back porch, hsync, and color burst are. I can capture again without setup/pedestal, which will let me see this better. For now I will use straight lines, so if you want to play with lining things up, it's easy.
Old 11th October 2011, 20:53   #35  |  Link
Perepandel
Registered User
 
Join Date: Sep 2011
Posts: 37
Hi all!

Add me to the party... I came here some days ago after searching for "software TBC" on Google. I was so interested in the subject that I still want to program my own, even though I already transferred my VHS tapes using a VCR with an integrated TBC :P. But there's a lot of work I have to do yet.

I registered at first to contact jmac698 directly. Now I've realized the subject is hot enough to participate in, plus I've discovered some of us seem to be overlapping our efforts (read "align by hand with Photoshop" here ).

My current approach (after realizing that people more trained than me are trying post-capture video alignment :P) is at capture time. Let's see if we can get something useful!
Old 11th October 2011, 21:26   #36  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Welcome to the party!
Thanks for adding to the enthusiasm. I know this topic has been discussed many times and has never come to a good solution, and I'm not betting on the outcome. However, I think there is something different here: having access to the hsync through special chip access gives some hope. If this fails, I want to understand it completely and prove that it fails. But remember, one person's 'failure' is only relative to their skill level, and I never want to discourage someone trying a fresh approach. Don't give up until you feel you've done all you can, and enjoy the research and discovery in the meantime; after all, for programmers the challenge of the problem solving is the goal, and the practical side-effect is just a bonus

Interlacing
I agree that interlacing should be taken into account: different heads are involved, and each field is a separate period of time with possibly different jitter statistics, certainly not related to the lines in the other field. Within a field, though, I feel the jitter should be linked, due to the forces of momentum/rigidity. I have even measured the differences between heads in video signal quality before.
You bring an interesting resource to the mix; I'd like you to do some tests with your TBC as well.

Just hang on a few days; I'm working on a new set of tests. The really big analysis is going to take some time. I have an algorithm that works and is tested for sub-pixel alignment of a test signal; it's accurate under slight frequency changes (stretch) and under noise, and impervious to frequency response, but I haven't tested its robustness to signal distortion.
In plain English: I need to record this test signal and run it through an analysis to determine the statistics of jitter and stretch, and the alignment to the hsync and the picture edges. I need to examine how these statistics change over many generations of copies, by tape speed, by VCR, even by capture card. If it's possible at all to line anything up, I should have the answer; if not, I can at least quantify some parameters for "intrinsic jitter", that is, for techniques that guess at lining up the lines.
Old 11th October 2011, 23:27   #37  |  Link
NoX1911
Registered User
 
 
Join Date: Aug 2003
Location: Germany
Posts: 186
I've shifted some lines (comparison). The first faint yellow vertical line moves side-by-side with the global line jitter. I bet that's the hsync, not the big vertical line.

HSync is at 0V, below the reference black voltage (0.3V), so relative to black it is probably -0.3V. That's the reason why it is so weak.

Whatever the big yellow line is, it doesn't look like hsync at all. There is no relation to the global jitter, which is constant before and after that big line. It's also suspicious that it is full white even though it should be 0V.

If you fix the picture by that first line you indeed get a corrected picture.

I would really like to know what hardware and driver you modified. Where is the discussion of those mods?
Old 12th October 2011, 13:56   #38  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Quote:
If you fix the picture by that first line you indeed get a corrected picture.
That innocent-sounding statement tells me it actually works! I was just using the wrong line, and I need to enhance it more!
The 44-pixel-or-so area in green has been drastically brightened with Tweak; that's why it's confusing you. I did notice the weaker other line, and that would be the rising edge of hsync. If I capture in NTSC_J I can avoid the pedestal, in other words capture lower voltages.

I'm afraid I had to spend the last hours writing a script to help create test signals (drawrect) and making a faster script to line up picture edges (a fast line shifter). But I am almost ready to record another test. Have other things to do though...
I have a prediction: stretch won't be much of a factor, because that would change the color tint along the line, and I'm not seeing that kind of problem (or it's unnoticeable). In other words, lining up just the left edge should be sufficient.

This will be exciting!

Perepandel in http://forum.doom9.org/showthread.php?t=162726&page=2
Quote:
One of the problems would be the number of pixels we can get per raw line, i.e. the digitizing resolution. If we are limited to 800-and-some, and we have to acquire the active video from that and translate the result to 720 pixels, then maybe we get a lot of aliasing, lack of resolution, etc. But we should try it to see if it is acceptable or not. Also, jmac698 said somewhere that he's able to get about twice that many values per line, so in that case we'd probably have by far enough information to get good results.
All solved: we have the first report that it actually works, and we were just using the wrong line. About the resolution: I can immediately capture 754 pixels, which helps; this still doesn't give me 1:1 resolution for the active area, though. There are two workarounds: perhaps further driver tweaking can get more resolution, or I'd have to make two passes, one for the left side plus sync and one for the right side, and put them together via correlation or motion tools. Remember this is optional and is only for further sharpness. And also remember that VHS doesn't need such high resolution to begin with; the capture FAQ recommends a much lower resolution. On the other hand, I have measured the resolution of my S-VHS and I can distinguish 360 pixels (at least; I'd need higher capture resolutions to see the true limit).
Aliasing would be more a problem of the hardware scaler in the capture card; those use oversampling, and it's not at all likely that a tape-based machine could overwhelm it.

Last edited by jmac698; 12th October 2011 at 14:26.
Old 12th October 2011, 16:48   #39  |  Link
ronnylov
Registered User
 
Join Date: Feb 2002
Location: Borås, Sweden
Posts: 492
Really interesting stuff you are working on! What kind of hardware do you use and can you share your tweaked driver so more people can join and try tweaking it further?
__________________
Ronny
Old 12th October 2011, 18:54   #40  |  Link
smok3
brontosaurusrex
 
 
Join Date: Oct 2001
Posts: 2,392
So basically a TBC needs to make the lines the same length (starting them at the same point is then just a consequence), I guess?
__________________
certain other member