Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
2nd August 2008, 22:52 | #21 | Link | |
Registered User
Join Date: Aug 2005
Posts: 16,267
|
Quote:
If anyone has any further information regarding enhancing/restoring kinescopes, I for one, would be extremely interested... |
|
3rd August 2008, 06:13 | #22 | Link |
Resize Abuser
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
|
Because no patent claims have been produced, if I remove the code it might set a bad precedent... thus, I cannot budge on this. The post from 2005 that I referenced is trivial prior art, and my guess is that it predates his potential claims. I understand where Kevin is coming from, but I do not want to set a precedent where we have people claiming to have something patented, and yet cannot produce any proof.
__________________
Mine: KenBurnsEffect/ZoomBox CutFrames Helped: DissolveAGG ColorBalance LQ Animation Fixer |
3rd August 2008, 15:52 | #24 | Link | |
ffdshow/AviSynth wrangler
Join Date: Feb 2003
Location: Austria
Posts: 2,441
|
Quote:
np: Spooky - Little Bullet (Part One) (Gargantuan)
__________________
now playing: [artist] - [track] ([album]) |
|
4th August 2008, 11:44 | #25 | Link |
Resize Abuser
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
|
OK, so I've taken another look at this... I think the "heart" of the issue revolves around Telecide, once the motion-compensated FPS conversion is figured out. I've added FFT3DFilter to the prefiltering stages.
Code:
AVISource("Kinescope test.avi")
AssumeBFF()
filldropsI()

function filldropsI(clip c) {
  # even fields processing
  even   = c.SeparateFields().SelectEven()
  evenPF = even.FFT3DFilter(sigma=4, bt=5)
  vfE = evenPF.MVAnalyse(blksize=16, overlap=4, isb=false, pel=4, search=3, idx=3, chroma=false)
  vbE = evenPF.MVAnalyse(blksize=16, overlap=4, isb=true,  pel=4, search=3, idx=3, chroma=false)
  E = ConditionalFilter(even, even.MVFlowInter(vbE, vfE, time=50, idx=3), even, "YDifferenceFromPrevious()", "lessthan", "4.0")

  # odd fields processing
  odd   = c.SeparateFields().SelectOdd()
  oddPF = odd.FFT3DFilter(sigma=4, bt=5)   # was even.FFT3DFilter() -- apparent copy/paste slip
  vfO = oddPF.MVAnalyse(blksize=16, overlap=4, isb=false, pel=4, search=3, idx=4, chroma=false)
  vbO = oddPF.MVAnalyse(blksize=16, overlap=4, isb=true,  pel=4, search=3, idx=4, chroma=false)
  O = ConditionalFilter(odd, odd.MVFlowInter(vbO, vfO, time=50, idx=4), odd, "YDifferenceFromPrevious()", "lessthan", "4.0")   # was vbE/vfE -- apparent copy/paste slip

  # return interlaced
  Interleave(E, O)
  AssumeFieldBased()
  Weave()
  AssumeBFF()
}

Telecide(blend=true, back=0)
Decimate(quality=3)

# degrain for FPS conversion
source = last.FFT3DFilter(sigma=2, bt=5)
backward_vec3 = source.MVAnalyse(isb=true,  delta=3, pel=4, overlap=4, sharp=2, idx=2)
backward_vec2 = source.MVAnalyse(isb=true,  delta=2, pel=4, overlap=4, sharp=2, idx=2)
backward_vec1 = source.MVAnalyse(isb=true,  delta=1, pel=4, overlap=4, sharp=2, idx=2)
forward_vec1  = source.MVAnalyse(isb=false, delta=1, pel=4, overlap=4, sharp=2, idx=2)
forward_vec2  = source.MVAnalyse(isb=false, delta=2, pel=4, overlap=4, sharp=2, idx=2)
forward_vec3  = source.MVAnalyse(isb=false, delta=3, pel=4, overlap=4, sharp=2, idx=2)
source = source.MVDegrain3(backward_vec1, forward_vec1, backward_vec2, forward_vec2, backward_vec3, forward_vec3, thSAD=800, idx=2)

# do magic to get 60p (60000/1001)
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb=true,  pel=4, search=3, idx=1, chroma=false)
forward_vec  = source.MVAnalyse(blksize=16, overlap=4, isb=false, pel=4, search=3, idx=1, chroma=false)

# use undegrained clip for final output
last.MVFlowFps(backward_vec, forward_vec, num=60000, den=1001)

# convert 60p to 30i
AssumeFrameBased()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()
bob()
If I change Telecide(blend=true,back=0) to Telecide(blend=true,back=2), then frame 118 is OK, but now frame 85 has backwards motion. Any ideas? Also, Gavino: I had to use "lessthan" because "<=" did not work. Thanks for the help though! Possible bug with 2.58 RC3?
__________________
Mine: KenBurnsEffect/ZoomBox CutFrames Helped: DissolveAGG ColorBalance LQ Animation Fixer Last edited by mikeytown2; 4th August 2008 at 11:51. |
4th August 2008, 12:03 | #26 | Link | |
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431
|
Quote:
Using GRunT, you can write Code:
ConditionalFilter(c1, c2, c3, "YDifferenceFromPrevious()<=4") |
|
15th December 2008, 13:17 | #27 | Link |
Registered User
Join Date: Nov 2008
Posts: 322
|
This is fascinating stuff.
I'm a bit of an Avisynth novice, but I'm learning fast. I'd love to do something like this with 625/25 to 625/50 material. Would one of you chaps who clearly has a greater understanding than I do please post an amended version of the script for me to play with? Many thanks in advance, Andy. |
15th December 2008, 15:56 | #28 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
I got it to work, and it works very well. However, it is a multi-step process because the 24p to 60i conversion is just one step. Also, you need to have very good stuff to start with, preferably the original kinescope film. At the very least, you need something on which you can do an inverse telecine (IVTC) to recover the original 24 fps film.
|
15th December 2008, 17:27 | #29 | Link | |
Registered User
Join Date: Dec 2002
Location: UK
Posts: 1,673
|
Quote:
Following the above script, you could just try... PHP Code:
Cheers, David. |
|
15th December 2008, 18:09 | #30 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Yes, that looks like my 24p to 60i script, with the appropriate changes made to go from 25p to 50i. I had dct=4 in both of the MVAnalyse lines, but that slows things down a lot and may not improve the quality enough to be worth the extra time. You'll have to test on your clips and see.
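For anyone trying this, the dct=4 option mentioned above would sit on the MVAnalyse calls roughly as below; this is a sketch, and the other parameter values are assumptions carried over from the scripts earlier in the thread:

```
# dct=4 enables one of MVAnalyse's DCT-based block-comparison modes,
# which johnmeyer notes is considerably slower than the default search
# and may not be worth the extra time on every source.
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb=true,  pel=4, search=3, dct=4)
forward_vec  = source.MVAnalyse(blksize=16, overlap=4, isb=false, pel=4, search=3, dct=4)
```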
|
15th December 2008, 18:44 | #31 | Link |
Registered User
Join Date: Dec 2002
Location: UK
Posts: 1,673
|
It's slow enough already! Or if not, there's always mvflowfps2.
The "Dad's Army in Colour" shown on BBC Two Saturday evening was vidFIRE'd. Very effective, though there were a couple of places where it didn't look right, and many where the motion looked realistically smooth but more blurred than video. Cheers, David. |
16th December 2008, 01:03 | #32 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Quote:
|
|
16th December 2008, 17:00 | #33 | Link |
Registered User
Join Date: Nov 2008
Posts: 322
|
2Bdecided, thanks for the script.
I'm afraid it is not sinking into my brain quite as quickly as I would like. If you can spare the time, I'd appreciate a detailed description of what each section does. Sorry to be a pain; MVTools is making my brain hurt! |
16th December 2008, 17:59 | #34 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
The first block of code denoises (FFT3d) and then degrains the original footage.
The next two blocks create additional frames between the existing frames in order to take your footage from 25 progressive frames per second to 50 progressive frames per second.

The only "tricky" part is the SeparateFields() line, which takes that 50 frames per second and turns it into 100 fields per second, with the first (0) field being a top field, the second field (1) a bottom field, the third field (2) a top field, and the fourth field (3) a bottom field. The SelectEvery command then takes each group of four fields and only uses the first (0) top field and the fourth (3) bottom field. Since these two commands started with 100 fields per second and then threw away the second and third fields, there are only 50 fields per second left. Then, the Weave command takes these 50 fields, two at a time, and combines them back into frames, so you end up with 25 frames per second, but unlike the original frames, these are interlaced.

So, what did all this accomplish? You now have temporal movement between the top and bottom field, whereas the original 25 fps footage had absolutely no temporal movement between fields. And that is the definition of interlaced vs. progressive, and therefore you now have footage with twice the temporal resolution, giving it the "television" feel as opposed to the "film" feel. |
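In script form, the field-juggling stage described above (just the 50p-to-25i step, assuming 'last' is already the 50 fps progressive clip) is:

```
# assume 'last' is 50 fps progressive at this point
AssumeFrameBased()
SeparateFields()       # 100 fields/sec: 0=top, 1=bottom, 2=top, 3=bottom, ...
SelectEvery(4, 0, 3)   # keep field 0 (top) and field 3 (bottom) of each group of four
Weave()                # pair the remaining 50 fields/sec into 25 interlaced frames/sec
```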
16th December 2008, 18:53 | #37 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Quote:
A little more reading and thinking about the kinescope process reveals that the conversion is much, much simpler, and fortunately this makes this film-to-video conversion much easier to do.

Remember that older video cameras actually scanned the image, so that each line -- and in fact each part of each scan line -- is captured at a later moment in time than the line above. This is different from most of today's cameras, where the entire field is exposed and then dumped from the CCD (and now CMOS) sensor, so there is actually no temporal difference between adjacent lines.

So, if you ignore for the moment the brief vertical blanking interval, you can think of the scanning of the raster lines in a normal old-fashioned CRT TV picture as a continuous process that never starts and never stops. Thus, if you are taking a film picture of the TV screen, it actually doesn't matter when you start taking the picture, as long as you stop taking it (i.e., close the shutter) exactly 1/29.97 second later. You will always end up with a complete frame of video.

It turns out that most kinescopes appear to have been done this way. They take a frame of video, then close the shutter and advance the film, losing some portion of the video while the film is pulled down. The kinescope film camera's shutter is specially modified so that it has exactly a 72-degree angle. There is some other stuff about wobble (to make the scan lines disappear) and persistence (so both fields appear with equal intensity). Rather than repeat all this, I described it in another forum: http://www.sonycreativesoftware.com/...ssageID=624219

Last edited by johnmeyer; 16th December 2008 at 19:34. |
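As a sanity check on the numbers above (assuming the film runs at the usual 24 frames per second, which the post doesn't state explicitly), the 72-degree figure falls out of the timing described, with 72 degrees being the closed, pull-down portion of the shutter cycle:

```latex
% film frame period at 24 fps; shutter open for one full NTSC video frame
t_{\mathrm{frame}} = \tfrac{1}{24}\ \mathrm{s}, \qquad
t_{\mathrm{open}}  = \tfrac{1}{29.97}\ \mathrm{s}
% fraction of each film frame during which the shutter is closed
1 - \frac{t_{\mathrm{open}}}{t_{\mathrm{frame}}} = 1 - \frac{24}{29.97} \approx 0.199
% expressed as a shutter angle (the closed segment)
0.199 \times 360^{\circ} \approx 71.7^{\circ} \approx 72^{\circ}
```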
|
17th December 2008, 01:32 | #39 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
oh, this has got me interested.
i think if we have the original film available to us, maybe a telecine could be aligned just right to extract the original fields. this assumes the original filming had sufficiently good lenses and sufficiently good camera operators... and that the camera and telecine have sufficiently stable film transport. if the fields are too blurry or there's too much aberration it wouldn't work, but if it does we can line up the machine. the other option is to scan at a very high res and use some avisynth magic to lock onto the fields. damn, i wish i had some kine footage lying around here. it doesn't come up too often unfortunately.

@ Joel Cairo: nothing personal here... you should see what's happening in the decrypting forum. but really, we need to know what we're violating, other than simply trying to do the "gist" of what your product already does. there have already been attempts on this forum and others to accomplish the same thing, usually using "Vidfire" and Doctor Who as references. in addition i develop "in house" techniques to accomplish various things at my workplace, but wouldn't try to stop the very active and useful development in the avisynth forums here (rather i feed off it and contribute where i can). relax - even in a worst-case scenario for you (losing a lot of business to free tools doing the same thing), you could probably get a job at snell and wilcox, cintel, filmlight, digital vision, or any of the other companies making digital film manipulation tools, with the knowledge you have.
__________________
sucking the life out of your videos since 2004 |
17th December 2008, 02:41 | #40 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Quote:
1. The kinescope process involved one of several techniques designed to eliminate the scan lines. For instance, if you Google "kinescope spot wobble" you'll find various descriptions of how the video was intentionally "vibrated" in order to obscure the scan lines. This means that the fields were never exactly in the same position from moment to moment. Other techniques were used as well.

2. The other problem is the nature of film itself. As you doubtless know, when film is exposed in the camera, and then again when it is projected, each frame of film never comes to rest in the film gate exactly the same way as the previous frame. As a result, if you look at a static title at the beginning of a feature film, you will notice that it bounces and "weaves" on the screen. This "gate weave" is obviously present in a kinescope and, much like the spot wobble, means that the video fields will never be in the same place from one frame of film to the next.

Now, #2 might be possible to cure if you had access to the original film. You could use Deshaker to remove the gate weave if you could see the edge of the film (outside the gate) and stabilize on that. When Deshaker was first released, I corresponded with the author, Gunnar Thalin, and he told me he actually added a few settings for a user who was doing exactly that. His work was posted many years ago in the 8mm film forum (not on Doom9, but elsewhere).

However, despite this negative news, I also have something else to share: I don't think it matters. I had many revelations like this in the course of doing this work, but what this exercise is all about is synthesizing not only the missing partial frames, but also the missing fields. It really doesn't matter where the original fields may lie, because the kinescope process was designed to obscure them and give us a frame of film that looks exactly like what a film camera would have taken if it had been mounted on top of the video camera at the time the original video was shot. Except for the video contrast and other video artifacts, the kinescope actually does a pretty good job of exactly that. Therefore, all we can do -- and indeed all we really WANT to do -- is create the illusion that what was film is now in fact video. This process should therefore work equally well with 24p video, or any regular film source that has been IVTC'd to recover the original progressive frames.

The one thing you could do if you had the original film is get a much better scan, because the contrast of most kinescopes that I have captured off the air or received from collectors is really, really bad compared to captures of film that I've done myself. Also, as mentioned above, with access to the film you could remove the gate weave, and this would significantly add to the illusion that you are once again watching video. The slight residual motion on stationary shots is still a telltale giveaway that makes you subconsciously realize you are watching something that isn't quite video (although in the things I've done so far, it is actually pretty amazing at times). The pros at VidFire clearly do a better job, but for your own work, using the techniques being discussed here, you can get pretty close. |
|
Tags |
24p, 60i, ivtc, kinescope, mvtools |