Old 2nd August 2008, 22:52   #21  |  Link
setarip_old
Registered User
 
 
Join Date: Aug 2005
Posts: 16,267
Quote:
Mr Segura sent me a PM offering excuses about why he could not tell me the claims of his patent, which appears not to exist. He knows them well enough to try to shut down the thread, but not well enough to tell us what they are!
I'd view this as a failed attempt, by Mr. Segura, at "bluff poker" ;>}

If anyone has any further information regarding enhancing/restoring kinescopes, I for one, would be extremely interested...
Old 3rd August 2008, 06:13   #22  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
Because no patent claims have been produced, removing the code might set a bad precedent... thus, I cannot budge on this. The post from 2005 that I referenced is trivial prior art, and my guess is that it's older than his potential claims. I understand where Kevin is coming from, but I do not want to set a precedent where people claim to have something patented yet cannot produce any proof.
Old 3rd August 2008, 09:00   #23  |  Link
avih
Capture, Deinterlace
 
 
Join Date: Feb 2002
Location: Right there
Posts: 1,971
@mikeytown2, agreed.
Old 3rd August 2008, 15:52   #24  |  Link
Leak
ffdshow/AviSynth wrangler
 
 
Join Date: Feb 2003
Location: Austria
Posts: 2,441
Quote:
Originally Posted by setarip_old View Post
I'd view this as a failed attempt, by Mr. Segura, at "bluff poker" ;>}
Or rather a successful attempt at invoking the Streisand effect...

Old 4th August 2008, 11:44   #25  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
OK, so I've taken another look at this... I think the heart of the issue revolves around Telecide, once the motion-compensated FPS conversion is figured out. I've added FFT3DFilter to the prefiltering stages.

Code:
AVISource("Kinescope test.avi")

AssumeBFF()

filldropsI()
function filldropsI (clip c)
{
#even fields processing
even = c.SeparateFields().SelectEven()
evenPF = even.FFT3DFilter(sigma=4,bt=5)

vfE = evenPF.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=3, chroma=false)
vbE = evenPF.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=3, chroma=false)

E = ConditionalFilter(even, even.mvflowinter(vbE,vfE,time=50,idx=3), even, "YDifferenceFromPrevious()", "lessthan", "4.0")


#odd fields processing
odd = c.SeparateFields().SelectOdd()
oddPF = odd.FFT3DFilter(sigma=4,bt=5)

vfO = oddPF.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=4, chroma=false)
vbO = oddPF.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=4, chroma=false)

O = ConditionalFilter(odd, odd.mvflowinter(vbO,vfO,time=50,idx=4), odd, "YDifferenceFromPrevious()", "lessthan", "4.0")


#return interlaced
Interleave(E,O)
AssumeFieldBased()
Weave()
AssumeBFF()
}


Telecide(blend=true,back=0)
Decimate(quality=3)

#degrain for FPS conversion Analyse
source=last.FFT3DFilter(sigma=2,bt=5)

backward_vec3 = source.MVAnalyse(isb = true, delta = 3, pel = 4, overlap=4, sharp=2, idx = 2)
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 4, overlap=4, sharp=2, idx = 2)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 4, overlap=4, sharp=2, idx = 2)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 4, overlap=4, sharp=2, idx = 2)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 4, overlap=4, sharp=2, idx = 2)
forward_vec3 = source.MVAnalyse(isb = false, delta = 3, pel = 4, overlap=4, sharp=2, idx = 2)
source=source.MVDegrain3(backward_vec1,forward_vec1,backward_vec2,forward_vec2,backward_vec3,forward_vec3,thSAD=800,idx=2)

#Do magic to get 60P 60000/1001
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false)
#use undegrained clip for final output
last.MVFlowFps(backward_vec, forward_vec, num=60000, den=1001)

#convert 60P to 30i
AssumeFrameBased()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()
bob()
Using the above code, the interpolated motion on frame 118 is messed up. If I change
Telecide(blend=true,back=0) to
Telecide(blend=true,back=2), then frame 118 is OK, but now frame 85 has backwards motion. Any ideas?

Also, Gavino, I had to use "lessthan" because "<=" did not work. Thanks for the help though! Possible bug with 2.58RC3?

Old 4th August 2008, 12:03   #26  |  Link
Gavino
Avisynth language lover
 
Join Date: Dec 2007
Location: Spain
Posts: 3,431
Quote:
Originally Posted by mikeytown2 View Post
Also Gavino I had to use "lessthan" because "<=" did not work. Thanks for the help though! Possible bug with 2.58RC3?
Aargh! Yet another unnecessary limitation of the built-in run-time filters which I wasn't aware of.

Using GRunT, you can write
Code:
ConditionalFilter(c1, c2, c3, "YDifferenceFromPrevious()<=4")
Let me take this opportunity to say well done for standing your ground on this issue, mikeytown2.
Old 15th December 2008, 13:17   #27  |  Link
Floatingshed
Registered User
 
Join Date: Nov 2008
Posts: 322
This is fascinating stuff.
I'm a bit of an Avisynth novice but learning fast. I'd love to do something like this with 625/25 to 625/50 material. Would one of you chaps who clearly has a greater understanding than I do please post an amended version of the script for me to play with?
Many thanks in advance,
Andy.
Old 15th December 2008, 15:56   #28  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
I got it to work, and it works very well. However, it is a multi-step process because the 24p to 60i conversion is just one step. Also, you need to have very good stuff to start with, preferably the original kinescope film. At the very least, you need something on which you can do an inverse telecine (IVTC) to recover the original 24 fps film.
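For anyone not familiar with that step, the IVTC is roughly this, using the same Decomb calls that appear in the script earlier in the thread (a rough sketch only; the filename and field order are placeholders you'd replace with your own):

Code:
# recover the original ~24p film frames from a 60i capture of the kinescope
AVISource("kine_capture.avi")   # placeholder filename
AssumeBFF()                     # set to your capture's actual field order
Telecide(blend=true)            # Decomb: match fields back into film frames
Decimate()                      # Decomb: drop the duplicate frame in each cycle of 5 -> ~23.976 fps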
Old 15th December 2008, 17:27   #29  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: UK
Posts: 1,673
Quote:
Originally Posted by Floatingshed View Post
I'd love to do something like this with 625/25 - 625/50 material.
That's easier than 24p back to 60i.

Following the above script, you could just try...
Code:
#degrain for FPS conversion Analyse
source=last.FFT3DFilter(sigma=2,bt=5)

backward_vec3 = source.MVAnalyse(isb = true, delta = 3, pel = 4, overlap=4, sharp=2, idx = 2)
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 4, overlap=4, sharp=2, idx = 2)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 4, overlap=4, sharp=2, idx = 2)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 4, overlap=4, sharp=2, idx = 2)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 4, overlap=4, sharp=2, idx = 2)
forward_vec3 = source.MVAnalyse(isb = false, delta = 3, pel = 4, overlap=4, sharp=2, idx = 2)
source=source.MVDegrain3(backward_vec1,forward_vec1,backward_vec2,forward_vec2,backward_vec3,forward_vec3,thSAD=800,idx=2)

#Do magic to get 50P
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false)
#use undegrained clip for final output
last.MVFlowFps(backward_vec, forward_vec, num=50, den=1)

#Re-interlace if needed:
AssumeTFF()
SeparateFields()
SelectEvery(4,0,3)
Weave()
Leave out the initial degraining to save time if you want - replace everything above "#Do magic to get 50P" with just "source=last".
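i.e. something like this (untested sketch):

Code:
source=last

#Do magic to get 50P
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false)
last.MVFlowFps(backward_vec, forward_vec, num=50, den=1)

#Re-interlace if needed:
AssumeTFF()
SeparateFields()
SelectEvery(4,0,3)
Weave()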

Cheers,
David.
Old 15th December 2008, 18:09   #30  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Yes, that looks like my 24p to 60i script, with the appropriate changes made to go from 25p to 50i. I had dct=4 in both of the MVAnalyse lines, but that slows things down a lot and may not improve the quality enough to be worth the extra time. You'll have to test on your clips and see.
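In practice that just means adding dct=4 to the two MVAnalyse calls in the "magic" block, i.e. something like this (untested here, and noticeably slower):

Code:
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false, dct=4)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false, dct=4)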
Old 15th December 2008, 18:44   #31  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: UK
Posts: 1,673
It's slow enough already! Or if not, there's always mvflowfps2.

The "Dad's Army in Colour" shown on BBC Two Saturday evening was vidFIRE'd. Very effective, though there were a couple of places where it didn't look right, and many where the motion looked realistically smooth but more blurred than video.

Cheers,
David.
Old 16th December 2008, 01:03   #32  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
there were a couple of places where it didn't look right, and many where the motion looked realistically smooth but more blurred than video.
They probably have access to the film and also can spend a lot more time on this than a non-commercial person like me. However, I'll bet that even for them there are lots of situations where the motion compensation breaks down and they have to substitute a "safer" way (interpolation) to create the extra fields. That, of course, will look blurry.
Old 16th December 2008, 17:00   #33  |  Link
Floatingshed
Registered User
 
Join Date: Nov 2008
Posts: 322
2Bdecided thanks for the script.
I'm afraid it is not sinking into my brain quite as quickly as I would like. If you can spare the time I'd appreciate a detailed description of what each section does.
Sorry to be a pain, MVtools is making my brain hurt!
Old 16th December 2008, 17:59   #34  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
The first block of code denoises (FFT3DFilter) and then degrains (MVDegrain3) the original footage.

The next two blocks create additional frames between the existing frames in order to take your footage from 25 progressive frames per second to 50 progressive frames per second.

The only "tricky" part is the SeparateFields() line, which takes that 50 frames per second and turns it into 100 fields per second, with the first (0) field being a top field, the second field (1) a bottom field, the third field (2) a top field, and the fourth field (3) a bottom field. The SelectEvery command then takes each group of four fields and uses only the first (0) top field and the fourth (3) bottom field. Since those two commands started with 100 fields per second and then threw away the second and third field of each group, there are only 50 fields per second left. Finally, the Weave command takes these 50 fields, two at a time, and combines them back into frames, so you end up with 25 frames per second; unlike the original frames, though, these are interlaced.
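Spelled out with comments, that tail end of the script is doing this (assuming "last" is the 50p output of MVFlowFps and that top-field-first is what you want):

Code:
AssumeTFF()         # declare the field order for the weave
SeparateFields()    # 50 frames/s -> 100 fields/s: 0=top, 1=bottom, 2=top, 3=bottom, ...
SelectEvery(4,0,3)  # keep field 0 (top of the 1st frame) and field 3 (bottom of the 2nd) from each group of four -> 50 fields/s
Weave()             # pair the remaining fields back into frames -> 25 interlaced frames/s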

So, what did all this accomplish? You now have temporal movement between the top and bottom field, whereas the original 25 fps footage had absolutely no temporal movement between fields. That is the essential difference between interlaced and progressive: the footage now has twice the temporal resolution, which gives it the "television" feel as opposed to the "film" feel.
Old 16th December 2008, 18:18   #35  |  Link
Floatingshed
Registered User
 
Join Date: Nov 2008
Posts: 322
Very nicely explained, thanks. Can't wait to get home and play. Cheers.
Andy.
Old 16th December 2008, 18:33   #36  |  Link
Floatingshed
Registered User
 
Join Date: Nov 2008
Posts: 322
What does the "E = ConditionalFilter(even, even.mvflowinter(vbE,vfE,time=50,idx=3), even, "YDifferenceFromPrevious()", "lessthan", "4.0")" stuff do in the original script?
Thanks.
Old 16th December 2008, 18:53   #37  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
What does the "E = ConditionalFilter(even, even.mvflowinter(vbE,vfE,time=50,idx=3), even, "YDifferenceFromPrevious()", "lessthan", "4.0")" stuff do in the original script?
When I started this thread, I thought that most kinescopes were made in a manner where the film camera would actually capture a full video frame from start to finish, then another, and then skip a frame. However, that was stupid on my part. The person responding tried to come up with an approach that would only add extra frames/fields for the frames/fields missed during 30 -> 24 decimation. The conditional script was a very clever way to add back fields and frames only when almost no movement is detected: when there isn't much difference from the previous frame, an interpolated frame is substituted. Those lines were adapted from another script posted here years ago, where a person was getting dropped frames and his frame capture software merely duplicated the previous frame whenever a frame drop was detected. That script is actually amazingly useful and, since a perfect duplicate results in a YDifference of 0.00, it is easy to detect. The script then replaced the missing frame with an interpolated one, using MVAnalyse's motion estimation.
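From memory, the frame-based version of that dropped-frame script looked roughly like this (a sketch only; the threshold and MVAnalyse settings here are placeholders, not the original values):

Code:
function filldrops(clip c)
{
  vf = c.MVAnalyse(isb=false, pel=4, idx=1)       # forward motion vectors
  vb = c.MVAnalyse(isb=true,  pel=4, idx=1)       # backward motion vectors
  interp = c.MVFlowInter(vb, vf, time=50, idx=1)  # motion-interpolated replacement frame
  # a dropped frame shows up as an exact duplicate of the previous frame (YDifference of 0.00),
  # so the interpolated frame is substituted only when the difference is essentially zero
  return ConditionalFilter(c, interp, c, "YDifferenceFromPrevious()", "lessthan", "0.5")
}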

A little more reading and thinking about the kinescope process reveals that the conversion is much, much simpler, and fortunately this makes it much easier to do this film to video conversion. Remember that older video cameras actually scanned the image so that each line -- and in fact each part of each scan line -- is taken at a later moment in time than the line above. This is different than most of today's cameras where the entire field is exposed and then dumped from the CCD (and now CMOS) sensor, so there is actually no temporal difference between adjacent lines.

So, if you ignore for the moment the brief vertical blanking interval, you can think of the scanning of the raster lines in a normal old-fashioned CRT TV picture as a continuous process that never starts and never stops. Thus, if you are taking a film picture of the TV screen, it actually doesn't matter when you start taking the picture, as long as you stop taking the picture (i.e., close the shutter) exactly 1/29.97 second later. You will always end up with a complete frame of video. It turns out that most kinescopes appear to have been done this way. They take a frame of video, then close the shutter and advance the film. They lose some portion of the video while the film is pulled down. The kinescope film camera shutter is specially modified so that it has exactly a 72 degree angle. There is some other stuff about wobble (to make the scan lines disappear) and persistence (so both fields appear with equal intensity).
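Incidentally, the 72 degrees falls straight out of the arithmetic: the shutter stays open for one full video frame (1/30 of a second, or 1/29.97 to be exact), while a film frame at 24 fps lasts 1/24 of a second, so the shutter has to be closed for the remaining 1 - 24/30 = 1/5 of the cycle, and 1/5 of 360 degrees is 72 degrees.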

Rather than repeat all this, I described it in another forum:

http://www.sonycreativesoftware.com/...ssageID=624219

Old 16th December 2008, 19:03   #38  |  Link
Floatingshed
Registered User
 
Join Date: Nov 2008
Posts: 322
It's beginning to click! Thanks for taking the time to explain.
Old 17th December 2008, 01:32   #39  |  Link
Mug Funky
interlace this!
 
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
oh, this has got me interested.

i think if we have the original film available to us, maybe a telecine could be aligned just right to extract the original fields. this assumes the original filming had sufficiently good lenses and sufficiently good camera operators... and that the camera and telecine have sufficiently stable film transport. if the fields are too blurry or there's too much aberration it wouldn't work, but if it does we can line up the machine. the other option is to scan in a very high res and use some avisynth magic to lock onto the fields.

damn, i wish i had some kine footage lying around here. it doesn't come up too often unfortunately.

@ Joel Cairo:

nothing personal here... you should see what's happening in the decrypting forum. but really, we need to know what we're violating, except for simply trying to do the "gist" of what your product already does. there's already been attempts on this forum and others to accomplish the same thing, usually using "Vidfire" and Doctor Who as references. in addition i develop "in house" techniques to accomplish various things at my workplace, but wouldn't try to stop the very active and useful development in the avisynth forums here (rather i feed off it and contribute where i can ). relax - even in a worst case scenario for you (losing a lot of business through free tools doing the same thing), you could probably get a job at snell and willcox, cintel, filmlight, digital vision, or any of the other companies making digital film manipulation tools with the knowledge you have.
Old 17th December 2008, 02:41   #40  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,691
Quote:
i think if we have the original film available to us, maybe a telecine could be aligned just right to extract the original fields.
Originally I thought that might be possible, but I am now pretty certain that this is totally impossible for two different reasons. Either reason would make it next to impossible, but the two together pretty much guarantee that you cannot retrieve original fields.

1. The kinescope process involved one of several processes designed to eliminate the scan lines. For instance, if you Google:

"kinescope spot wobble"

you'll find various descriptions about how the video was intentionally "vibrated" in order to obscure the scan lines. This means that the fields were never exactly in the same position from moment to moment. Other techniques were used as well.

2. The other problem is the nature of film itself. As you doubtless know when film is exposed in the camera, and then again when it is projected, each frame of film never comes to rest in the film gate exactly in the same way as the previous frame. As a result, if you look at a static title at the beginning of a feature film, you will notice that it bounces and "weaves" on the screen. This "gate weave" is obviously present in a kinescope and, much like the spot wobble, means that the video fields will never be at the same place from one frame of film to the next.

Now, #2 might be possible to cure if you had access to the original film. You could use Deshaker to remove the gate weave if you could see the edge of the film (outside the gate) and stabilize on that. When Deshaker was first released, I corresponded with the author, Gunnar Thalin, and he told me he actually added a few settings for a user who was doing exactly that. His work was posted many years ago in the 8mm film forum (not on Doom9, but elsewhere).

However, despite this negative news, I also have something else to share: I don't think it matters. I had many revelations like this in the course of doing this, but what this exercise is all about is synthesizing not only the missing partial frames, but also synthesizing the missing fields. It really doesn't matter where the original fields may lie, because the kinescope process was designed to obscure them and give us a frame of film that looks exactly like what a film camera would have taken if it had been mounted on top of the video camera at the time the original video was taken. Except for the video contrast, and other video artifacts, the kinescope actually does a pretty good job of doing exactly that. Therefore, all we can do -- and indeed all we really WANT to do -- is create the illusion that what was film is now in fact video. Therefore, this process should work equally well with 24p video, or any regular film source that has been IVTC'd in order to recover the original progressive frames.

The one thing you could do if you had the original film is to get a much better scan, because the contrast of most kinescopes that I have captured off the air or received from collectors is really, really bad compared to captures of film that I've done myself. Also, as mentioned above, with access to the film, you could remove the gate weave, and this would significantly add to the illusion that you are once again watching video. The slight residual motion on stationary shots is still a telltale giveaway that makes you subconsciously realize that you still are watching something that isn't quite video (although in the things I've done so far, it is actually pretty amazing at times).

The pros at VidFIRE clearly do a better job, but for your own work, using the techniques being discussed here, you can get pretty close.
Tags
24p, 60i, ivtc, kinescope, mvtools
