Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. Domains: forum.doom9.org / forum.doom9.net / forum.doom9.se
#22
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,442
Doing it the other way round, calling ScriptClip inside a GScript string, has a problem: GScript constructs cannot be used directly inside the ScriptClip part (without calling GScript again), because ScriptClip creates a new standard parser instance for each frame, bypassing the GScript parser.

What you can do is extract the GScript code into a function defined inside GScript (yes, entire functions can be GScript'ed), and then call the function from within ScriptClip. This reduces the per-frame parsing overhead to a minimum. Using this scheme, the original code could be rewritten as:

Code:
GScript("
    function f(clip src) {
        out = getline(src, 0)
        for (y=0, src.height-1, 1) {
            line = getline(src, y)
            shiftx = 0
            for (x=0, 25, 1) {
                if (getpixel(src, x, y) > 200) {
                    shiftx = x
                } # if
            } # for x
            out = stackvertical(out, line.shift(shiftx-20, 0))
        } # for y
        out.crop(0,1,0,0)
    } # end f
") # GScript
ScriptClip("""
    f()
""") # ScriptClip
#23
Registered User
Join Date: Jan 2006
Posts: 1,869
Sorry, it was close, but pixel manipulation in any kind of loop is too slow, so I don't have a first use for GScript after all. If you could add a pixel reading/writing feature, that would make it extremely useful!

Update: See the first page for a new version of the line shifter script.

Last edited by jmac698; 12th October 2011 at 12:56.
#24
Registered User
Join Date: Sep 2011
Posts: 37
A friend of mine lent me an S-VHS VCR with an integrated TBC. I already captured my tapes with it and the horizontal jitter went away; it really worked miracles on some badly jittered tapes. But I am still interested in this project, sharing jmac698's motivations.

The label on the S-VHS unit says, about the TBC, "Digital 3-D circuit" and "Digital 3R picture system". If the advertising is true, it means the signal is digitized somewhere, then an algorithm is applied to the digital signal to restore stability, etc. That is the same principle as what we are trying to achieve with the so-called "software TBC". So John, I wouldn't be that radical in my opinion.
One of the problems would be the number of pixels we can get per raw line, i.e. the digitizing resolution. If we are limited to 800-odd samples, and we have to acquire the active video from them and translate the result to 720 pixels, then we might get a lot of aliasing, lack of resolution, etc. But we should try it to see whether it is acceptable or not. Also, jmac698 said somewhere that he is able to get about twice that many values per line, so in that case we would probably have more than enough information to get good results.
#25
Registered User
Join Date: Jan 2006
Posts: 1,869
Hi,
I made a new sample; it's quite interesting: http://screenshotcomparison.com/comparison/87833

This seems to be a partial success; overall it's much better, but it seems to need lining up on both sides. That will be tough to script.

I tried to get as much of the video as I could this time: it's brighter, there's no filtering, as much of the left border and as much resolution as possible, and even all of the right-hand side of the picture. The bright green line turns out not to be relevant; I can move it with a register called AGC delay, so it's not part of the video.

I've also made a package for you to analyze: http://www.sendspace.com/file/jdh9j9

I don't think the cause of the jitter is what we think at all! It seems to be related to picture content, and it can vary dramatically between lines. It's also common to both fields. It seems to happen more in bright areas, but it's not consistent. Anyhow, do you have any observations?

Last edited by jmac698; 16th October 2011 at 06:19.
#26
Registered User
Join Date: Oct 2011
Location: Germany
Posts: 39
Great job!

The corrected image shows that there is also jitter in line length (which might be a delay in head rotation, tape stretch, a delay in tape speed, or even rippled tape). This jitter is very fast (i.e. about 1/30 of the duration of a frame, roughly 900 Hz). A VHS player uses several control loops for frame synchronisation, so the source of this effect could also be any ringing or disturbance in the electrical signal path.

I do not have the impression that the jitter is related to picture content. It has an amplitude envelope that looks rather like a sine curve. For me that speaks for electrical or mechanical "ringing".

Using nnedi3(field=-2,nsize=2) one would notice that a) the jitter is different in the even and odd fields, and b) in both fields it is more present in the upper part of the frame, and also in the same regions (line numbers). That is strange.

The AGC of the video signal reacts to the negative sync pulse. The AGC produces an overshoot because it tries to compensate for the negative pulse (the AGC tries to amplify the video amplitude to 16...235). The AGC delay should be set fast enough that sync jitter does not affect the luma of the first video pixels in a line. On the other hand, a very fast AGC delay might lead to less stable synchronisation. Have you noticed any influence of this value on the quality of synchronisation, the jitter frequency, or luma modulation?

Another aspect: the problem that appears in your grabbing can be described in other words as "the video grabber card does not synchronise to a proper line synchronisation signal". Have you ever tried to grab the same tapes with another card?

Last edited by sven_x; 16th October 2011 at 18:15.
#27
Registered User
Join Date: Nov 2006
Posts: 780
I'm not sure you have chosen the right video sample, because anime is supposed to be 24 fps, right? Perhaps a video with live content (a concert), filmed at 29.97 fps and truly interlaced, would be better suited for the test? The challenge would be to find one that has defects (line shifts).
#28
Registered User
Join Date: Jan 2006
Posts: 1,869
@mounir
I don't follow your reasoning. I like this content because it's fairly steady and you can tell if the picture itself is not lined up; I'm really looking for straight edges in the video. With animation, there are a few frames that aren't interlaced, so I can view the frame as a whole (though we could also analyze through SeparateFields; I don't care about picture quality/interlacing at all, just the jitter). The video is still 29.97 fps, by the way; it's just that entire pictures repeat themselves.

@sven
The AGC register seems to set the point in time where the amplitude is measured. I've positioned it in the hsync area. The picture brightness changes if I move it into various zones. The line it causes to be drawn on the video seems to be as you say: ringing as it reacts to the amplitude. The video was far brighter than it had to be too, but that doesn't matter for now.

Notice the first scene change: the jitter area instantly changes. We also have to consider whether Macrovision could be influencing our analysis; I should try another example. Anyhow, I'll try lining up both sides and some other experiments. I am still hopeful this can be done, based on the partial improvement.

By the way, I've seen "bowed" video which bent to the right on lines with high average brightness. I suspect that's an electrical effect, possibly due to aged components in a circuit that is not operating properly.
#29
Registered User
Join Date: Feb 2004
Location: USA
Posts: 1,348
I have not read the entire thread, however, I do not think this has been mentioned: it looks like instead of shifting only, you must shift *and* stretch, so that both ends of the signal space line up. If you think about how the jitter errors are created (stretching of the tape / time signal), it makes sense that the entire "time" (horizontal) dimension would need to be resampled to enforce consistency.

So what must be done is:
1. shift the signal so one edge is aligned
2. calculate the stretch/squash (resample factor) needed to maintain the correct signal length
3. resample the signal accordingly
4. hope that most time-scale variance is between lines rather than within lines
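Those steps can be sketched in a few lines of Python. This is an illustrative sketch, not code from the thread; `resample_line` and its arguments are my own names. Given the detected left and right edges of one scanline, it shifts and linearly resamples the active span to a fixed target width:

```python
# Illustrative sketch of the shift-and-stretch steps above; `resample_line`
# and its arguments are my own names, not code from the thread.

def resample_line(line, left, right, target_width):
    """Shift and stretch: map line[left..right] onto target_width samples
    using linear interpolation (steps 1-3 above)."""
    span = right - left
    out = []
    for i in range(target_width):
        pos = left + i * span / (target_width - 1)  # source position
        j = int(pos)
        frac = pos - j
        a = line[j]
        b = line[min(j + 1, len(line) - 1)]
        out.append(a * (1 - frac) + b * frac)
    return out

# A jittered line whose active content sits at x = 3..8:
line = [0, 0, 0, 10, 20, 30, 40, 50, 60, 0, 0]
print(resample_line(line, 3, 8, 6))  # [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
```

In practice the edge positions would come from a per-line detection pass, and a higher-quality resampler (cubic, sinc) would replace the linear interpolation.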
#30
Registered User
Join Date: Jan 2006
Posts: 1,869
@*.mp4
Oops, also I can capture with two cards at once, but have no doubt: even modern cards show jitter. In fact, some chips have a feature such as Ultralock, which is supposed to be a line TBC (horizontal shift only); however, it doesn't work. It will be interesting to test the performance of various capture cards this way, but also to finally discover why it doesn't work as well as a typical TBC.
#31
Registered User
Join Date: Oct 2011
Location: Germany
Posts: 39
@jmac698
I just had a second look at your latest screenshots. Now it looks to me that not only the sync pulse could be used to line up the lines; the transition from very black to grey would also do. If the very black borders on the left and right are deeper black than the rest of the image, an algorithm should be possible that finds the very first and the very last pixel of the line's video content. Doing so, we get a length and a pixel offset for each line, so that we can:
- shrink the line to a standard length
- shift the beginning of the line to a standard position

Then none of the driver manipulation would be necessary, and the method might work with USB grabbers as well. If the borders are not a deeper black, this method will fail in some cases.
#32
Registered User
Join Date: Jan 2006
Posts: 1,869
OK, I have an alpha version of the stretching function. It works:
http://screenshotcomparison.com/comparison/88691

So we've solved software TBC. It was so simple; why didn't anyone do this before? All you need to do is line up both sides. I doubt we even need hsync at all. It will still help to see the porch area, though.
#33
Registered User
Join Date: Feb 2002
Location: Borås, Sweden
Posts: 492
It seems to introduce aliasing in areas that did not have aliasing in the original sample, like on the strings across the chest of the hunter. Maybe it needs some more tweaking?
__________________
Ronny
#38
Registered User
Join Date: Oct 2011
Location: Germany
Posts: 39
@jmac698
I am very impressed! Fantastic result. Perhaps the algorithm to find the borders could become a bit more robust; there is still some ripple. (Using a small cross-correlation function for the border areas only?)

I am just starting to tweak my VHS grabs with Avisynth. Didée has posted some scripts that use nnedi3(field=-2) as a deinterlacer. It extracts both fields of a frame, which gives the odd or even lines of two successive frames, and interpolates the missing lines. Then the two resulting frames are merged back into one. With digital sources this also acts as a good antialiasing algorithm.

Now, when both frames are merged back into one, the offset between the two frames leads to a blurring effect: the resulting composed frame is less sharp. On the other hand, it has more detail than either single frame, because nnedi3 cannot invent details that are smaller than three lines. His method also reduces noise, because the tape noise of the two frames is not much correlated. (But film grain, of course, is not reduced when both frames originate from a 25 fps movie.)

Nevertheless, his method would work much better if we succeed in lining up both interpolated frames at exactly the same position.

@Mounir
The aliasing might result from the fact that the grabbed frame is still interlaced, not processed.

Last edited by sven_x; 20th October 2011 at 10:51. Reason: inserted code
#39
Registered User
Join Date: Jan 2006
Posts: 1,869
Thanks,
The antialiasing problem is a bug in my script which is easily fixed. I've seen Didée's method on a German forum; it was very impressive! But for cartoons I already have another method for this, which I call relative jitter. It correlates the same background image over several frames, so even if there is jitter, it's the *same* jitter for several frames, so the frames can be temporally processed. The result is less noise, but still a static jitter pattern. It doesn't help your problem.

There is still a slight jitter in my sample because I need a subpixel algorithm. That will take some more work.
#40
Registered User
Join Date: Jan 2006
Posts: 1,869
New version, faster, bugs fixed.
http://screenshotcomparison.com/comparison/88810
Tags: tbc