Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 12th October 2011, 07:45   #21  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Good point.
My new version is faster; I'm still working on it.
Old 12th October 2011, 09:57   #22  |  Link
Gavino
Avisynth language lover
 
Join Date: Dec 2007
Location: Spain
Posts: 3,442
Quote:
Originally Posted by StainlessS View Post
How about moving GScript(" ... ") # GScript outside of ScriptClip,
I think that there is some kind of initialiser involved [Executed on every frame???]...
A call to GScript has a relatively small overhead, equivalent to calling Eval, in which the code string is parsed and evaluated. It's all done at compile-time, so there is a per-frame penalty only when inside ScriptClip.

Doing it the other way round, calling ScriptClip inside a GScript string, has a problem in that GScript constructs cannot be used directly inside the ScriptClip part (without calling GScript again), as ScriptClip creates a new standard parser instance for each frame, bypassing the GScript parser.

What you can do is extract the GScript code into a function defined inside GScript (yes, entire functions can be GScript'ed), and then call the function from within ScriptClip. This reduces the per-frame parsing overhead to a minimum. Using this scheme, the original code could be rewritten as:
Code:
GScript("
  function f(clip src) {
        out=getline(src,0)
        for (y=0, src.height-1, 1) {
            line=getline(src, y)
            shiftx=0
            for (x=0, 25, 1) {
                if (getpixel(src, x, y)>200) {
                    shiftx=x
                }#if
            }#for x
            out=stackvertical(out,line.shift(shiftx-20,0))
        }#for y
        out.crop(0,1,0,0)
  } # end f
")#GScript

ScriptClip("""
    f()
""")#ScriptClip
I'm not sure how much difference this would make - probably most of the overhead is coming from the pixel manipulation anyway.
__________________
GScript and GRunT - complex Avisynth scripting made easier
Old 12th October 2011, 11:05   #23  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Sorry, it was close, but pixel manipulation in any kind of loop is too slow, so I don't have a first use for GScript after all. If you could add a pixel reading/writing feature, that would make it extremely useful!

Update: See first page for a new version of a line shifter script.

Last edited by jmac698; 12th October 2011 at 12:56.
Old 12th October 2011, 14:07   #24  |  Link
Perepandel
Registered User
 
Join Date: Sep 2011
Posts: 37
Quote:
Originally Posted by sven_x View Post
The subject has been discussed several times. Please take a look at this thread.
Wow Sven, you're referencing a thread in which jmac698, the user you're replying to, has already participated xD


Quote:
Originally Posted by johnmeyer
As has been pointed out in previous posts on this subject, a good time base corrector, used properly, and inserted at the appropriate point in the analog signal path, will probably completely eliminate this problem.
John, you seem to miss the point: he's trying to make a time base corrector.

Quote:
Originally Posted by johnmeyer
Software is wonderful, but some things can only be done in hardware, especially when dealing with an analog signal. You really cannot create a digital TBC because the sync signal is only available in the analog domain and is not available once the analog video has been digitized. You cannot do the same thing simply by looking at the resulting badly captured video and then trying to figure out what to do with the bad video.

If you want really good video from your old VHS analog video, then you MUST use a TBC. I admire your desire to do many things in software, but this is one situation where you are not going to get very good results.
Maybe we're talking past each other, then. The truth is that it would be a hardware+software TBC. A bt878-based capture card is used to capture the (raw?) video signal, and then a software algorithm is used to correctly horizontally align the video lines.

A friend of mine lent me an S-VHS deck with an integrated TBC. I already captured my tapes with it and the horizontal jitter went away; it really worked miracles with some badly jittered tapes. But I still have an interest in this project, since I share jmac698's motivations.

The label on the S-VHS deck says, about the TBC, "Digital 3-D circuit" and "Digital 3R picture system". If the advertising is true, then it means that the signal is digitized somewhere, an algorithm is applied to the digital signal to restore stability, etc. That is the same principle as what we are trying to achieve with the so-called "software TBC". So John, I wouldn't be that radical in my opinion.

Quote:
Originally Posted by ronnylov
The time between two line sync pulses should in theory be a constant but on a recorded tape it may fluctuate a little bit during playback because of tape tension. So instead of just syncing the start maybe you need to stretch back the length of each line to the nominal value. Check the time distance between each pair of hsync pulses and readjust the signal back to normal. Then you can align the lines.
I'm glad ronnylov had the same idea I shared with jmac698 in one of my previous private messages (before seeing this thread and posting in the forum). That's where I would focus my efforts right now.

One of the problems would be the number of samples we can get per raw line, i.e. the digitizing resolution. If we are limited to 800-odd samples, and we have to extract the active video from them and translate the result to 720 pixels, then maybe we'd get a lot of aliasing, lack of resolution, etc. But we should try it to see whether it is acceptable or not. Also, jmac698 said somewhere that he can get about twice that many samples per line, in which case we'd probably have more than enough information to get good results.
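If the capture really does deliver about twice 720 samples per line, getting down to 720 active pixels is a clean 2:1 decimation, which also tames aliasing if each output pixel averages a pair of input samples. A minimal sketch (the function name and the 1440-sample figure are assumptions for illustration, not anyone's actual capture chain):

```python
import numpy as np

def decimate_2to1(raw_line):
    """Reduce a raw line sampled at ~2x the target rate to half its
    length by averaging adjacent sample pairs (a crude box filter
    that also suppresses some aliasing)."""
    pairs = raw_line[: len(raw_line) // 2 * 2].reshape(-1, 2)
    return pairs.mean(axis=1)

raw = np.arange(1440, dtype=np.float64)  # stand-in oversampled line
out = decimate_2to1(raw)                 # 720 output samples
```

Averaging pairs is only a box filter; a proper polyphase resampler would do better, but it shows why the oversampled capture would leave plenty of headroom.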

Quote:
Originally Posted by ronnylov
Just an idea you can try if you have captured the complete video signal. I don't know how a real hardware TBC works.
That's also another thing I wanted to say: we need more information on how a stand-alone TBC works, in order to make ours.
Old 16th October 2011, 06:02   #25  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Hi,
I made a new sample, it's quite interesting
http://screenshotcomparison.com/comparison/87833
This seems to be a partial success; overall it's much better, but it seems to need lining up on both sides. That will be tough to script.
This time I tried to capture as much of the video as I could: it's brighter, there's no filtering, I kept as much of the left border and as much resolution as possible, and even all of the right-hand side of the picture.
The bright green line turns out to not be relevant; I can move it with a register called AGC delay so it's not part of the video.
I've also made a package for you to analyze;
http://www.sendspace.com/file/jdh9j9

I don't think the cause of the jitter is what we think at all! It seems to be related to picture content, and it can vary dramatically between lines. It's also common to both fields. It seems to happen more in bright areas, but it's not consistent. Anyhow, does anyone have any observations?

Last edited by jmac698; 16th October 2011 at 06:19.
Old 16th October 2011, 11:43   #26  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
Great job!
The corrected image points out that there is also jitter in line length (which might be a delay in head rotation, tape stretch, a delay in tape speed, or even rippled tape). This jitter is very fast (i.e. 1/30 of the duration of a frame, about 900 Hz). A VHS player uses several control loops for frame synchronisation, so the source of this effect could also be ringing or a disturbance in the electrical signal path.

I do not have the impression that the jitter is related to picture content. It has an amplitude envelope that looks more like a sine curve. For me that speaks for electrical or mechanical "ringing".

Using nnedi3(field=-2,nsize=2), one notices that
a) the jitter is different in the even and odd field
b) in both fields it is more present in the upper part of the frame and also in the same regions (line numbers). That is strange.

The AGC of the video signal reacts to the negative sync pulse. The AGC produces an overshoot because it tries to compensate for the negative pulse (the AGC tries to amplify the video amplitude to 16...235). The AGC delay should be set fast enough that sync jitter does not affect the luma of the first video pixels in a line. On the other hand, a very fast AGC delay might lead to less stable synchronisation. Have you noticed any influence of the value on the quality of synchronisation, jitter frequency or luma modulation?

Another aspect: the problem that appears in your grabs can be described, in other words, as "the video grabber card does not synchronize to a proper line synchronisation signal". Have you ever tried to grab the same tapes with another card?

Last edited by sven_x; 16th October 2011 at 18:15.
Old 16th October 2011, 23:33   #27  |  Link
Mounir
Registered User
 
Join Date: Nov 2006
Posts: 780
I'm not sure you have chosen the right video sample, because anime is supposed to be 24 fps, right? Perhaps a video with live content (a concert), which is filmed at 29.97 fps and truly interlaced, would be better suited for the test? The challenge would be to find one that has defects (line shifts).
Old 17th October 2011, 01:07   #28  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
@mounir
I don't follow your reasoning. I like this content because it's fairly steady and you can tell if the picture itself is not lined up; I'm really looking for straight edges in the video. With animation, there are a few frames that aren't interlaced, so I can view the frame as a whole (though we could also analyze through SeparateFields; I don't care about picture quality/interlacing at all, just the jitter). The video is still 29.97 btw; it's just that entire pictures repeat themselves.

@sven
The AGC register seems to set the point in time where the amplitude is measured. I've positioned it in the hsync area. The picture brightness changes if I move it into various zones. The line it causes to be drawn on the video seems to be, as you say, ringing as the AGC reacts to the amplitude. The video was far brighter than it had to be too, but that doesn't matter for now.

Notice the first scene change - the jitter area instantly changes. We also have to consider if macrovision could be influencing our analysis, I should try another example. Anyhow I'll try lining up both sides and some other experiments. I am still hopeful this can be done based on the partial improvement.
Btw, I've seen "bowed" video which bent to the right on lines with high average brightness; I suspect that's an electrical effect, possibly due to aged components making the circuit operate improperly.
Old 17th October 2011, 04:51   #29  |  Link
*.mp4 guy
Registered User
 
*.mp4 guy's Avatar
 
Join Date: Feb 2004
Location: USA
Posts: 1,348
I have not read the entirety of the thread; however, I do not think this has been mentioned. It looks like instead of shifting only, you must shift and stretch so that both ends of the signal space line up. If you think about how the jitter errors are created (stretching of the tape / time signal), it makes sense that the entire "time" (horizontal) dimension would need to be resampled to enforce consistency.

So what must be done is:
1. shift the signal so one edge is aligned
2. calculate the stretch/squash (resample factor) needed to maintain the correct signal length
3. resample the signal accordingly
4. hope that most time-scale variance is between lines rather than within lines
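As a rough illustration of steps 1-3: once both edge positions of a line are known, the shift and the stretch collapse into a single interpolation pass. A Python sketch, assuming the left/right edge positions have already been measured per line (the names and numbers here are hypothetical):

```python
import numpy as np

def align_line(line, left, right, out_width=720):
    """Shift and stretch one scanline so its measured left/right
    edges land on a fixed output grid. Shifting (step 1) and
    resampling (steps 2-3) collapse into one linear interpolation."""
    src_x = np.linspace(left, right, num=out_width)
    return np.interp(src_x, np.arange(len(line)), line)

raw = np.arange(910, dtype=np.float64)   # stand-in raw scanline
fixed = align_line(raw, 75.0, 840.0)     # edges measured elsewhere
```

Step 4 remains a matter of luck: within-line wow would need a nonlinear time warp, which a single linear resample can't express.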
Old 17th October 2011, 10:10   #30  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
@*.mp4
Quote:
This seems to be a partial success; overall it's much better, but it seems to need lining up on both sides.
@sven
Oops, also I can capture with two cards at once - but have no doubt, even modern cards show jitter. In fact, some chips have a feature such as Ultralock, which is supposed to be a line TBC (horizontal shift only); however, such features don't work. It will be interesting to test the performance of various capture cards in this way, but also to finally discover why it doesn't work as well as a typical TBC.
Old 17th October 2011, 18:58   #31  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
@jmac698
I just had a second look at your latest screenshots. Now it looks to me as though not only the sync pulse could be used to line up the lines; the transition from deep black to grey would do as well. If the very black borders on the left and right are deeper black than the rest of the image, an algorithm should be possible that finds the very first and very last pixel of the line's video content. Doing so, we get a length and pixel offset for each line, so that we can
- shrink the line to standard length
- shift the beginning of the line to a standard position

Then all of the driver manipulation would not be necessary and the method might work with USB grabbers as well.
If the borders are not a deeper black, this method will fail in some cases.
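A minimal sketch of that border search, assuming 8-bit luma where the blanking borders sit below the darkest picture content (the threshold of 24 is a guess, not a measured value):

```python
import numpy as np

def find_active_span(line, black_thresh=24):
    """Return (first, last) indices of pixels brighter than the
    border black level, or None if the whole line is border black.
    From the span we get the line's pixel offset and length."""
    above = np.nonzero(line > black_thresh)[0]
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

line = np.full(100, 16)        # border black (below threshold)
line[20:80] = 60               # active video content
span = find_active_span(line)
```

As noted above, this falls over whenever picture content touches border black; a small cross-correlation against a reference edge profile would be more robust than a bare threshold.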
Old 19th October 2011, 22:03   #32  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Ok, I have an alpha version of the stretching function. It works:
http://screenshotcomparison.com/comparison/88691
So we've solved software TBC - it was so simple, why didn't anyone do this before? All you need to do is line up both sides. I doubt we even need hsync at all. It's still going to help to see the porch area.
Old 19th October 2011, 22:36   #33  |  Link
ronnylov
Registered User
 
Join Date: Feb 2002
Location: Borås, Sweden
Posts: 492
It seems to introduce aliasing in areas that did not have aliasing in the original sample, like on the strings across the chest of the hunter.
Maybe it needs some more tweaking?
__________________
Ronny
Old 20th October 2011, 00:48   #34  |  Link
Mounir
Registered User
 
Join Date: Nov 2006
Posts: 780
That is a surprisingly good result to me! (with some aliasing, true) If you could give the procedure to tweak the drivers...
Old 20th October 2011, 02:23   #35  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
I don't think you need to tweak the drivers at all, it's just a simple script to line up the edges. The aliasing and top line are just bugs. I'll be tweaking for speed and quality.
Old 20th October 2011, 09:12   #36  |  Link
Ghitulescu
Registered User
 
Ghitulescu's Avatar
 
Join Date: Mar 2009
Location: Germany
Posts: 5,773

I can hardly wait for the script.
__________________
Born in the USB (not USA)
Old 20th October 2011, 10:20   #37  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Thanks, I suppose I should stop playing video games and try to finish it
Maybe I'll get a coffee first...
There's speed and quality issues still.
Old 20th October 2011, 10:41   #38  |  Link
sven_x
Registered User
 
Join Date: Oct 2011
Location: Germany
Posts: 39
@jmac698
I am very impressed! Fantastic result.
Perhaps the algorithm to find borders could become a bit more robust. There is still some ripple. (Using a small cross correlation function for border areas only?)
I am just starting to tweak my VHS grabs with Avisynth. Didée has posted some scripts that use NNEDI3(-2) as a deinterlacer. It extracts both fields of a frame, which gives the odd or even lines of two successive frames, and interpolates the missing lines. Then the two resulting frames are merged back into one. With digital sources this also acts as a good antialiasing algorithm.

Quote:
oo=last

nnedi3(field=-2,nsize=0,nns=3) #DVD content: nns=3
merge(selecteven(),selectodd())

D1=mt_makediff(oo,last)
D2=mt_makediff(last,last.removegrain(11,-1))
last.mt_adddiff(D2.repair(D1,13,-1).mt_lutxy(D2,"x 128 - y 128 - * 0 < 128 x 128 - abs y 128 - abs < x y ? ?"),U=2,V=2)
o=last
But in VHS sources (at least in my own) we have the problem that, after deinterlacing, every second frame has a slightly different position than every first. Just browse through the frames after an NNEDI3(-2); you will see it.
Now, when both frames are merged back into one, the offset between the two frames leads to a blurring effect. The resulting composed frame is less sharp. On the other hand, it has more detail than either single frame, because NNEDI3 cannot invent details that are smaller than three lines.
His method also reduces the noise, because the tape noise of the two frames is not much correlated. (Film grain, of course, is not reduced when both frames originate from a 25 fps movie.)
Nevertheless, his method would work much better if we succeed in lining up both interpolated frames at exactly the same position.
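One hedged way to line the two interpolated frames up would be a small brute-force offset search before the merge; everything here (names, window size) is a guess at the approach, not Didée's actual method:

```python
import numpy as np

def estimate_vertical_offset(frame_a, frame_b, max_shift=4):
    """Find the integer vertical shift that best aligns frame_b with
    frame_a, by minimising mean absolute difference over a small
    search window (a crude stand-in for cross-correlation)."""
    h = frame_a.shape[0]
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = frame_a[max(s, 0):h + min(s, 0)]    # overlapping rows of a
        b = frame_b[max(-s, 0):h + min(-s, 0)]  # matching rows of b
        err = np.mean(np.abs(a.astype(np.float64) - b))
        if err < best_err:
            best, best_err = s, err
    return best

# Two synthetic "frames" whose content is offset by two lines:
base = np.arange(14, dtype=np.float64)[:, None] * np.ones(8)
off = estimate_vertical_offset(base[2:12], base[0:10])
```

The measured offset could then drive a Crop/AddBorders pair (or a subpixel resize) on one of the two interpolated frames before merging them.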

@Mounir
The aliasing might result from the fact that the grabbed frame is still interlaced, not processed.

Last edited by sven_x; 20th October 2011 at 10:51. Reason: inserted code
Old 20th October 2011, 10:59   #39  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
Thanks,
The antialiasing is a bug in my script which is easily fixed. I've seen Didée's method on a German forum; it was very impressive! But for cartoons I already have another method for this, which I call relative jitter. It correlates the same background image over several frames, so even if there is jitter, it's the *same* jitter for several frames, and the frames can be temporally processed. The result is less noise, but it still has a static jitter pattern. It doesn't help your problem.

There is a slight jitter still in my sample because I need a subpixel algorithm. That will take some more work.
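For what it's worth, a fractional-sample shift by linear interpolation is the simplest subpixel scheme (this helper is hypothetical, not the script's actual code); fancier kernels (cubic, sinc) would blur less:

```python
import numpy as np

def subpixel_shift(line, shift):
    """Shift a scanline left by a fractional number of samples using
    linear interpolation; positions past the edge clamp to the border."""
    n = len(line)
    x = np.arange(n, dtype=np.float64) + shift  # sample at shifted positions
    return np.interp(x, np.arange(n), line)

line = np.array([0.0, 10.0, 20.0, 30.0])
out = subpixel_shift(line, 0.5)   # → [5.0, 15.0, 25.0, 30.0]
```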
Old 20th October 2011, 13:25   #40  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,869
New version, faster, bugs fixed.
http://screenshotcomparison.com/comparison/88810