Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. Domains: forum.doom9.org / forum.doom9.net / forum.doom9.se |
#22 | Link |
Registered User
Join Date: Jun 2009
Location: UK
Posts: 269
*CHEERS* for jmac698's ongoing efforts.
It occurs to me that one approach to this problem would be a tweak to MVTools: it already has a "horizontal only" motion detection mode; what it would need next is a way to define one or more complete video lines as a block, rather than a square-ish block as at present. (If we can do 32x32 pixels, then why not 1024x1?) Then you feed it frames of video, each of which is a single line of the source (probably resized to a height of 4 or 8, to get around clip format restrictions), and let it mo-comp the entire line (frame) to an average of the previous and next lines' (or frames') positions. Re-stitch the lines back into frames, and see how it looks... Is that anything like what you're doing?

EDIT: ...Or, of course, use DePan rather than MVTools, since that's already designed to track and move whole frames instead of little blocks. [SLAPS SELF FOR MISSING THE OBVIOUS]

Last edited by pbristow; 6th May 2011 at 22:49. Reason: ('cos I'm thick!)
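For anyone who wants to experiment with this idea before wiring it into MVTools or DePan, the line-as-frame scheme can be mocked up directly. Below is a toy pure-Python sketch (all function names are mine, and a simple SSD matcher stands in for MVTools' real motion search): each line's horizontal offset is estimated against its two neighbours, and the line is then shifted to the average of their positions.

```python
# Toy per-line "mo-comp" sketch of the idea above. Illustrative only:
# frames are lists of rows of 8-bit luma values, and an SSD search
# stands in for a real motion estimator.

def best_shift(line, ref, search=4):
    """Integer shift s minimizing SSD between line[x + s] and ref[x]."""
    def px(seq, i):  # repeat the edge pixel outside the line
        return seq[min(max(i, 0), len(seq) - 1)]
    return min(range(-search, search + 1),
               key=lambda s: sum((px(line, x + s) - ref[x]) ** 2
                                 for x in range(len(ref))))

def shift_line(line, s):
    """Shift a line left by s pixels, repeating the edge pixel."""
    n = len(line)
    return [line[min(max(x + s, 0), n - 1)] for x in range(n)]

def stabilize(frame):
    """Move each line to the average of its neighbours' positions."""
    h, out = len(frame), []
    for y in range(h):
        prev_ref = frame[y - 1] if y > 0 else frame[y]
        next_ref = frame[y + 1] if y + 1 < h else frame[y]
        s = (best_shift(frame[y], prev_ref) +
             best_shift(frame[y], next_ref)) // 2
        out.append(shift_line(frame[y], s))
    return out
```

Note that lines which are already aligned get nudged slightly too, since averaging spreads a neighbour's jitter; that is one reason a proper motion-compensated filter is attractive over this naive version.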
#25 | Link |
Registered User
Join Date: Aug 2003
Location: Germany
Posts: 186
Just a guess, but I think the dedicated hardware devices do it via the h-sync pulse, which appears like a white vertical line in the off-screen area. The problem is that it lies outside even the 53.33 µs area (the area that capture-card drivers allow access to), so you cannot see it. Theoretically it's just a driver issue...
#26 | Link |
Registered User
Join Date: Oct 2011
Location: Germany
Posts: 39
Suggesting a software TBC using cross-correlation

Has anybody tried a cross-correlation function to find out the actual line offset? I have thought a while about this issue. These are the results I came up with:

The cross-correlation function in this case is something like

F(tau) = 1/C * integral( f1(x - tau) * f2(x) dx )

where f1(x) are the pixels of the actual line, x are the pixel positions, f2(x) are the pixels of a reference line, C is a scaling constant, and tau runs over the possible pixel offsets, ranging from -range ... +range.

The most important question is which function (say, which line) should be taken as the reference function f2. The real goal is not to move the lines so that they fit best to their neighbours or to their predecessors/successors in time; the real goal is to get stable lines which all begin at the same point x. That's why I suggest using a master pattern of an ideal line beginning as the reference function f2. It will be something like ____------ (some pixels of black followed by grey). Perhaps an averaged line (the average of all even or odd lines of a frame) would give a better cross-correlation that is less dependent on the average luma of that frame. The grey level of the reference line can also be adjusted dynamically to the average grey level of the actual frame. Because pixel values in video are in the range 0...255, both functions f1 and f2 have to be inverted, so that the black levels present at the beginning of the line get more weight.

Speed enhancement: to get a fast computation, the cross-correlation is computed for only the first, say, 50 pixels of a line. The scaling factor C is not needed, because we're just looking for the maximum value of the cross-correlation. The value of tau we get is the offset by which the line must be moved to the left or to the right. For the cross-correlation, the missing pixels on the left and right can simply be filled with the value of the last pixel.

I think this might be a more stable approach than checking whether a pixel value crosses a certain threshold, which was suggested before in this thread (here, here, here) or with Dejitter. The stabilising effect of those plugins was fairly poor: random VHS noise adds an element of chaos to when a pixel value crosses a threshold, whereas a cross-correlation function better averages out the influence of the noise.

The resulting value must undergo additional plausibility checks:
a) The line offset must be within a permitted range.
b) Because jitter results from mechanical properties, the jitter between two succeeding lines is relatively small compared to the absolute jitter values possible within a single frame (the border where the lines begin is somewhat wavy).
b2) The borders of the even and of the odd line groups each look rather constant. Perhaps the resulting line offsets for the even or odd groups could be stabilized by a temporal average over some frames of the same group.
c) Do nothing, or take the average value of the two neighbouring lines,
c1) when there is no clear result.
d) Use a greater part of the line for correlation when fast correlation of the border area alone gives no clear result (but here it must be ensured that the line contains no moving areas).
...

What are your thoughts?

P.S. Cross-correlation using fftw3.dll is not necessary, because the cross-correlation function must be computed for a few pixels only, which might be done faster directly than via an FFT.

Last edited by sven_x; 9th October 2011 at 16:12.
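A minimal sketch of the search described above, under the post's own assumptions (8-bit luma, a ____------ reference pattern, a ~50-pixel window, missing pixels filled with the edge value). One deliberate change: instead of maximizing a raw product correlation, which a DC offset can bias toward simply overlapping the darkest regions, this sketch minimizes the sum of squared differences, which finds the same best alignment without needing the inversion trick.

```python
# Sketch of the line-offset search, not a tested implementation.
# `line` and `reference` are lists of 8-bit luma values; the reference
# models an ideal line start: a black "porch" followed by grey picture.

def find_line_offset(line, reference, search_range=8):
    """Return tau such that line[x + tau] best matches reference[x];
    shift the line LEFT by tau pixels to align it."""
    def px(seq, i):  # clamp: repeat the edge pixel outside the line
        return seq[min(max(i, 0), len(seq) - 1)]
    n = len(reference)
    best_tau, best_err = 0, float("inf")
    for tau in range(-search_range, search_range + 1):
        # sum of squared differences over the first n pixels only
        err = sum((px(line, x + tau) - reference[x]) ** 2 for x in range(n))
        if err < best_err:
            best_err, best_tau = err, tau
    return best_tau

# reference pattern: 10 px of black, then grey (the ____------ idea)
reference = [0] * 10 + [128] * 40
```

A dynamically adjusted grey level and the plausibility checks (permitted range, small line-to-line change, temporal averaging per field group) would be layered on top of this inner search.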
#27 | Link |
Registered User
Join Date: Jan 2006
Posts: 1,869
Hi,
I've managed to tweak a driver and can now capture the hsync, just like a hardware TBC, so I should be able to do anything a hardware TBC does in software: http://screenshotcomparison.com/comparison/86066

I've also worked out all the math in the Nikolova paper, except for one point; I'm ready to code the algorithm.

As for pbristow's idea, I've tried that too. It turns out that the captures I have (confirmed by one other person) are consistent between captures, so, as I suspected, there is already a line TBC active in the capture card; it just doesn't work correctly (however consistently). What I mean is that two captures of the same frame are lined up to each other. I also have my splitbylines and correlation scripts to play with, and they could line some things up, most notably, say, a colorized VHS to a stable b/w DVD. Anyhow, as you can see, even with a good bright line or bright image it's still not working for me. The problem is somewhere else, not the noise. I'm thinking I need to line up both edges of the picture.

I've also recreated that Russian's experiments and have digitized up to 1700 pixels per line. I can think of a way to digitize HD even with an ordinary capture card. It looks like noise when you try it, but there's a repeating pattern to the scrambled lines that you can fix. I've already captured the entire VBI area and 480p.

As to the question of how the mathematical paper deals with a black edge: it has nothing to do with looking at edges; it analyzes the content of the entire lines and matches up the textures to each other.

Last edited by jmac698; 9th October 2011 at 20:43.
#28 | Link |
Registered User
Join Date: Jan 2006
Posts: 1,869
sven,
As to your correlation idea, what you want is a robust edge finder, ideally with sub-pixel precision. There are many ways to do this, such as SUSAN: http://users.fmrib.ox.ac.uk/~steve/s...san/node8.html

I don't see that as the problem now, because even in a test case with a bright picture the jitter was quite bad, much worse than the single-pixel error that noise could have caused. Just look at my stills. (I have one aligned just by the active image edge, and it doesn't look much better, even in bright areas.) However, if that works on some of your samples, I could certainly make something for it.

@g: Ahh, I probably have the scripts to work on your sample now... but my experiments are more fun right now
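On the sub-pixel point: whichever matching score is used (correlation, SSD, or an edge response), a common refinement is to fit a parabola through the score at the best integer offset and its two neighbours and take the vertex as the fractional offset. A sketch of that refinement follows; the quadratic model and the function name are my illustration, not a method from this thread.

```python
# Parabolic sub-pixel refinement: given the matching score at the best
# integer offset and at its two neighbours, the vertex of the parabola
# through the three points estimates the fractional offset. Works for
# score minima (SSD) and maxima (correlation) alike.

def subpixel_peak(score_left, score_best, score_right):
    """Return the fractional offset in (-1, 1) of the true extremum,
    given scores at offsets best-1, best, best+1."""
    denom = score_left - 2.0 * score_best + score_right
    if denom == 0:            # flat scores: no refinement possible
        return 0.0
    return 0.5 * (score_left - score_right) / denom
```

The total offset is then the integer offset plus this correction, which is how "sub-pixel precision" alignment is usually obtained without resampling the signal during the search.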
#29 | Link |
Registered User
Join Date: Oct 2011
Location: Germany
Posts: 39
Now we're discussing in two threads.
Perhaps we should unite into one, because things are becoming clearer now and we're getting closer to finding a new algorithm. Using the Dejitter script, I have analyzed a screenshot with some VHS jitter, which gives some answers to my own questions in the post above. Quote:
Also, a correction of line length should be applied (the length jitter seems to be only about 1...2 pixels). Once the jitter has been reduced to a certain amount, more improvement could be achieved by using NNEDI3 interpolation on the even and odd frames with weighted merging, as Didée suggested here (example images here). His script works wonders.

Last edited by sven_x; 12th October 2011 at 17:55.
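For readers who don't want to dig through the linked thread, here is my reading of the general field-interpolation idea. This is NOT Didée's actual script: simple vertical averaging stands in for NNEDI3, and all names are mine. The frame is rebuilt twice, once from the even lines and once from the odd lines, and the two reconstructions are merged so that jitter differing between the two line groups is averaged out instead of showing up as comb artifacts.

```python
# Illustrative sketch only: frames are lists of rows of luma values.
# The real technique interpolates each field with NNEDI3; here the
# missing lines are averaged from their vertical neighbours.

def rebuild_from_field(frame, parity):
    """Keep rows with index % 2 == parity; interpolate the others
    from the kept rows above and below."""
    h, out = len(frame), []
    for y in range(h):
        if y % 2 == parity:
            out.append(frame[y][:])
        else:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y + 1 < h else frame[y - 1]
            out.append([(a + b) // 2 for a, b in zip(above, below)])
    return out

def weighted_merge(frame, w_even=0.5):
    """Blend the even-line and odd-line reconstructions."""
    even = rebuild_from_field(frame, 0)
    odd = rebuild_from_field(frame, 1)
    return [[int(w_even * e + (1 - w_even) * o)
             for e, o in zip(er, orow)]
            for er, orow in zip(even, odd)]
```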
#30 | Link |
Registered User
Join Date: Jan 2006
Posts: 1,869
I posted Didée's comparisons here: http://screenshotcomparison.com/comparison/86447
That's just unbelievable...
#31 | Link |
Registered User
Join Date: Aug 2003
Location: Germany
Posts: 186
The "Snowwhite" pictures look suspicious. The jitter is more or less constant in the 52 µs area, but there's a variable offset to the hsync from line to line. It looks more like there are dropouts in the porch area and the card doesn't really brute-sample continuously.

Edit: Where are the download links for the video?

Last edited by NoX1911; 10th October 2011 at 21:17.
#32 | Link |
Registered User
Join Date: Mar 2009
Location: Germany
Posts: 5,773
Quote:
The algorithm must start from the end of the line and work towards its beginning, not vice versa.
__________________
Born in the USB (not USA)
#33 | Link |
Registered User
Join Date: Nov 2006
Posts: 780
Perhaps THIS will help toward a great software TBC
I quote: "Free AviSynth filter for fixing the situation when different lines of the source frame are placed in different output frames. This often occurs during movie capture, when odd lines are one frame later than even lines."
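The description quoted above amounts to re-pairing line groups across frames. A minimal sketch, assuming (as the quote says) that the odd lines lag the even lines by exactly one frame; frames are modelled as lists of rows, and the function name is mine, not the filter's:

```python
# Re-pair line groups across frames: build each output frame from the
# even lines of frame N and the odd lines of frame N+1, undoing a
# capture where "odd lines are one frame later than even lines".
# The last input frame is dropped, since its odd-line partner is missing.

def repair_field_pairing(frames):
    fixed = []
    for n in range(len(frames) - 1):
        cur, nxt = frames[n], frames[n + 1]
        out = [cur[y] if y % 2 == 0 else nxt[y] for y in range(len(cur))]
        fixed.append(out)
    return fixed
```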
#34 | Link |
Registered User
Join Date: Jan 2006
Posts: 1,869
Hi,
Try importing my test image and using SelectOdd. You can see that interlacing isn't the problem. A lot of people have been photoshopping! I feel sorry for that, because I have a way to automatically measure jitter/stretch and to align, with sub-pixel precision. This will tell us a lot.

I need to make a better test image. I'm going to include a diagram to show where the front/back porch, hsync, and color burst are. I can capture again without setup/pedestal, which will let me see this better. For now I will use straight lines, so if you want to play with lining things up, it's easy.
#35 | Link |
Registered User
Join Date: Sep 2011
Posts: 37
Hi all!
Add me to the party... I came here some days ago after searching for "software TBC" on Google. I was so interested in the subject that I still want to program my own, even though I already transferred my VHS tapes using a VCR with an integrated TBC :P. But there's a lot of work I have to do yet.

I registered at first to contact jmac698 directly. Now I've realized the subject is hot enough to participate, plus I've discovered that some of us seem to be overlapping our efforts (read "align by hand with Photoshop" here). My current approach (after realizing that people more trained than me are trying post-capture video alignment :P) is at capture time. Let's see if we can get something useful!
#36 | Link |
Registered User
Join Date: Jan 2006
Posts: 1,869
Welcome!

arty: Thanks for adding to the enthusiasm. I know that this topic has been discussed many times and never come to a good solution, and I'm not betting on the outcome. However, I think there is something different here: having access to the hsync through special chip access gives some hope. If this fails, I want to understand it completely and prove that it fails, but remember that one person's 'failure' is only relative to their skill level. I never want to discourage someone trying a fresh approach. Don't give up until you feel you've done all you can do, and enjoy the research and discovery in the meantime. After all, for programmers it's the challenge of the problem solving that is the goal, and the practical side-effect is just a bonus.

Interlacing
I agree that interlacing should be taken into account; certainly different heads are involved, and each field is a separate period of time with possibly different jitter statistics, certainly not related to the lines in the other field. I feel that the jitter should be linked due to the forces of momentum/rigidity. I have even measured the differences between heads in video signal quality before.

You bring an interesting resource to the mix; I'd like you to do some tests with your TBC as well. Just hang on a few days, I'm working on a new set of tests. The really big analysis is going to take some time. I have an algorithm that works and is tested for sub-pixel alignment of a test signal; it's accurate under slight frequency changes (stretch) and noise, and impervious to frequency response, but I haven't tested its robustness to signal distortion. In plain English, I need to record this test signal and run it through an analysis to determine the statistics of jitter and stretch, and the alignment to the hsync and picture edges. I need to examine how these statistics change over many generations of copies, by tape speed, by VCR, even by capture card.

If it's possible at all to line anything up, I should have the answer; if not, I can at least quantify some parameters for "intrinsic jitter", that is, for techniques that guess at lining up the lines.
#37 | Link |
Registered User
Join Date: Aug 2003
Location: Germany
Posts: 186
I've shifted some lines (comparison). The first faint yellow vertical line moves side-by-side with the global line jitter; I bet that's the hsync, not the big vertical line.

HSync is 0 V and below the reference black voltage (0.3 V), so it probably is -0.3 V relative to black; that's the reason why it is so weak. Whatever the big yellow line is, it doesn't look like hsync at all: there is no relation to the global jitter, which is constant before and after that big line. It's also suspicious that it is full white even though it should be 0 V. If you fix the picture by that first line, you indeed get a corrected picture.

I would really like to know what hardware and driver you modified. Where is the discussion place for those mods?
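A rough back-of-the-envelope mapping from signal voltage to 8-bit code values shows why a sync tip at or below 0 V is normally invisible in a capture. The levels used here (black at 0.3 V mapped to code 16, peak white at 1.0 V mapped to code 235, BT.601-style studio range) are my illustrative assumptions, not measurements from this thread:

```python
# Map an analog video voltage to an 8-bit luma code, assuming black at
# 0.3 V -> code 16 and peak white at 1.0 V -> code 235 (BT.601-style).
# Anything near the 0 V sync level maps far below code 0 and is clipped,
# which is why the sync region is invisible without shifting the ADC range.

def volts_to_code(v, black_v=0.3, white_v=1.0, black_code=16, white_code=235):
    code = black_code + (v - black_v) * (white_code - black_code) / (white_v - black_v)
    return max(0, min(255, round(code)))
```

Under these assumptions, 0.3 V lands at code 16, 1.0 V at code 235, and the 0 V sync level would land around code -78, so it clips to 0.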
#38 | Link |
Registered User
Join Date: Jan 2006
Posts: 1,869
Quote:
The 44-pixel-or-so area in green has been drastically brightened with Tweak; that's why it's confusing you. I did notice the weaker other line, and that would be the rising edge of hsync. If I capture in NTSC-J I can avoid the pedestal; in other words, capture lower voltages.

I'm afraid I had to spend the last hours writing a script to help create test signals (drawrect) and making a faster script to line up picture edges (fast line shifter), but I am almost ready to record another test. Have other things to do though... I have a prediction: stretch won't be much of a factor, because that would change the color tint along the line, and I'm not seeing that kind of problem (or it's unnoticeable). In other words, lining up just the left edge should be sufficient. This will be exciting!

Perepandel in http://forum.doom9.org/showthread.php?t=162726&page=2
Quote:

Aliasing would be more of a problem for the hardware scaler in the capture card; they use oversampling, and it's not at all likely that a tape-based machine could overwhelm it.

Last edited by jmac698; 12th October 2011 at 14:26.
#39 | Link |
Registered User
Join Date: Feb 2002
Location: Borås, Sweden
Posts: 492
Really interesting stuff you are working on! What kind of hardware do you use, and can you share your tweaked driver so that more people can join in and try tweaking it further?
__________________
Ronny