Old 30th July 2008, 04:43   #1  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,332
Kinescope restoration back to 60i

I have searched all these forums to see if anyone has attempted Kinescope restoration similar to what this company is doing:

http://www.kinescopes.com/LF_4.html

I have many Kinescope captures that I would like to make look more like real video (which is what the company linked above does). I think I know most of what I have to do, and I have done most of the following steps in other projects, but I need help in two ways.

First, am I missing a step that you think would make the result look more like video?

More important, has anyone come up with better ideas for 24p to 60i than what is described in this thread:

http://forum.doom9.org/showthread.ph...hlight=24p+60i

Here is my proposed restoration on which I would appreciate comments:
1. Do IVTC to recover the film frames from the 29.97 video. This is easy to do, and I have already tested it; it works on a Kinescope just like it does with any telecined film. This gets me the 24p film, just as if I had transferred the Kinescope film on a Rank Cintel.

2. Remove film dust and dirt. I spent hours and hours during work on another project perfecting a DeSpot script, and finally got DeSpot to work really well. There are some really great hints scattered around this forum that had to be combined together.

3. Do gamma adjustment to try to moderate some of the horrendous contrast that is typical of the Kinescope process. Again, I can do this in Vegas or in AVISynth. Not too hard.

4. Restore the sound. I do this for a living and there are lots of tricks to add "life" to muddy sound. Nero Wave Editor actually has a neat plugin for this.

5. OK, this is the tough one. I need to take the 24p, convert that to interlaced, and then synthesize the fields that were dropped during the Kinescope capture. MVTools is my choice for this (actually one of the MVFlowFps functions), but in my quick tests, I am not happy with the results.
So, if anyone has some hints or suggestions on #5 -- or on the overall process -- that would be much appreciated.
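For intuition on why #5 is the tough one, here is a hedged Python sketch (plain arithmetic, not AviSynth) of where the 60i field moments fall relative to the 24p frames: only 1 field in 5 coincides exactly with a film frame, so a motion interpolator like MVFlowFps has to synthesize the other 4 out of 5 at fractional in-between positions.

```python
from fractions import Fraction

def interp_position(n):
    """Film-frame index and fractional offset for output field n.

    Uses the exact 24:60 ratio; the NTSC rates 24000/1001 and 60000/1001
    share the same ratio, so the integer arithmetic is identical.
    """
    pos = Fraction(n, 60) * 24      # field time expressed in film-frame units
    k = pos.numerator // pos.denominator
    return k, pos - k               # (left film frame, fraction toward next)

# Field 0 lands exactly on film frame 0; field 1 lands 2/5 of the way
# from frame 0 to frame 1, so it must be synthesized.
assert interp_position(0) == (0, 0)
assert interp_position(1) == (0, Fraction(2, 5))

# Only 12 of every 60 fields (1 in 5) need no interpolation at all.
exact = sum(1 for n in range(60) if interp_position(n)[1] == 0)
assert exact == 12
```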

BTW, if you look at the results of the LiveFeed technology in the link I provided above, it is darned impressive stuff. I think I should be able to duplicate it almost exactly if I can get #5 working correctly.

Thanks!
Old 30th July 2008, 08:16   #2  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
Looked at the comparison vids from kinescopes... nothing that special, just a denoiser as far as I can tell.

Code:
a=FFmpegSource("Philofarnsworth-WML15thAnniversaryFebruary1965Kinescope676.avi").Subtitle("Original")
b=FFmpegSource("Philofarnsworth-WML15thAnniversaryFebruary1965LiveFeedRestoration136.avi").Subtitle("Restoration").Trim(3,0)
StackHorizontal(a,b)
ShowFrameNumber(x=8,y=50)

The two AVIs are both 29.970 fps, and VirtualDub was used to create them (checked with MediaInfo):
http://mediainfo.sourceforge.net/

Explain what you mean by "real video". Also, what are your source and target resolutions? I've played around with high-motion video (me wakeboarding, shot from a boat), trying to get 120p out of 30i, and it doesn't work at all. It really depends on the video content; if there is not a lot of movement in the frame, your chances are much better. Post a sample and we can give you better advice. As of right now there is no magic button you can press to get nice-looking 60i from 24p for every type of content out there. That doesn't mean we can't try with a sample, though...
mikeytown2 is offline   Reply With Quote
Old 30th July 2008, 10:00   #3  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: Yorkshire, UK
Posts: 1,673
I think the comparison videos on the website are wrong - they compare 24p with 30p, whereas the real output is 60i - so those videos hide the real advantage.

My vague understanding of the kinescope process is that the fields aren't dropped - that's the real problem - they're all merged together in there. I could be wrong - I'm in the UK, and the process here is quite different (50i>25p, which is simpler than 60i>24p).

_if_ the fields were simply missing, you could use mvtools to reconstruct the missing moments in time. The only problems would be the usual problems with mvtools, and probably the implied variable frame rate of the source.

However, given that nothing is missing, but it's all merged together, it's far more complicated.

In the UK, the Vidfire process works well for our archive stuff - we call them telerecordings. You can read about it on the Doctor Who Restoration website.

http://en.wikipedia.org/wiki/VidFIRE

Cheers,
David.
Old 30th July 2008, 10:18   #4  |  Link
scharfis_brain
brainless
 
 
Join Date: Mar 2003
Location: Germany
Posts: 3,636
this might also be interesting:

colour restoration out of a black & white telerecording (kinescope):

http://www.techmind.org/colrec/
Old 30th July 2008, 18:24   #5  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,332
Quote:
nothing that special just a denoiser as far as i can tell.
No, they definitely do quite a bit of temporal magic, although you are correct about the denoiser (and film dust removal). See further below about why the improvements aren't quite as evident if you play these clips on your PC.

Quote:
Explain what you mean by "Real Video". Also what is your source and target resolution?
Good questions, especially given how many video standards we have to deal with. I am trying to create 720x480 SD 29.97 interlaced video, commonly referred to as NTSC video. My source footage is hockey games from the 1950s and 1960s, captured via a pass-through on my DV camcorder from the composite video feed from my satellite receiver. It is captured at 720x480 29.97 interlaced (i.e., standard SD DV). See the next comment for more about what I'm doing so far.

Quote:
they compare 24p with 30p, whereas the real output is 60i
True. You have to put the clips on a DVD and view them on a standard NTSC monitor (29.97 interlaced) to see the rather stunning difference.

Quote:
In the UK, the Vidfire process works well for our archive stuff
Yes, I should have provided a reference or link to that as well. It actually predates the technology I referenced, and is apparently more advanced.

If you research the kinescope process, or if you capture and view the individual fields of a kinescope (just run a SeparateFields() and view the result), you will see that it is just standard film, and you can recover the film using a standard IVTC process. However, it gets difficult after that. Without going into everything that I've found in my research, the short story is that Kinescope film cameras were pointed at a high-quality television monitor and filmed the result. The camera motors/shutters were electronically synced to the video VBI. In the best of these systems, two fields would be recorded on one frame of film and, through some sort of technique, the next field would either be stored (via persistent phosphors or some other means -- no digital storage back then) while the film pulled down, or it would be discarded. The discarding was needed in order to record 30 frames a second on a medium which only holds 24 frames per second.
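A hedged arithmetic sketch of the field budget described above: the video signal delivers 60 fields per second, but 24 fps film stores at most two fields per frame, so some fields have to go.

```python
from fractions import Fraction

video_fields = 60                 # fields/s arriving from the video signal
film_frames = 24                  # frames/s the film can record
stored = film_frames * 2          # 48 fields/s survive on the film
dropped = video_fields - stored   # 12 fields/s must be discarded
assert dropped == 12

# 12 of 60 is 1 field in every 5, the exact inverse of a 2:3 pulldown
# cadence, which is why IVTC recovers a kinescope like any telecined film.
assert Fraction(dropped, video_fields) == Fraction(1, 5)
```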

As I've looked at this problem, it may be way beyond what can be done with AVISynth because there are quite a few really tough problems.

Neglecting the fact that not all Kinescopes were done the same way, and assuming for the moment they were created as described above, I thought at first I could simply do the IVTC, separate the fields, run MVFlowFps to create the fields that were dropped, weave the fields back together, and have a super result. The problem, however, is that film jumps around, both in the gate of the camera and then again when it is projected. Thus, the fields that were recorded on the film don't line up consistently from frame to frame. I can use Deshaker to completely eliminate this gate weave, and while that makes the film look stable, I have no way of knowing what part of the weave/jitter happened in the camera and what part came from the projector, so the fields still aren't going to appear in the same place.

It is this problem alone that I think dooms my project. The two companies doing this have access to the actual film, and therefore control the projection/capture process; with a proper, controlled capture, they can know for certain that any jitter comes from the original camera, and so they have a chance at aligning fields from adjacent frames of film.

So, in the time I've spent thinking about this since my original post, I am rapidly coming to the conclusion that this simply cannot be done without access to the original Kinescope film.

Thanks for all the comments. They helped me work through this relatively quickly and saved me from wasting a lot of time.
Old 30th July 2008, 20:50   #6  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
eh don't give up that easily, post a 30 sec clip. You never know...?
Old 30th July 2008, 22:27   #7  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,332
Quote:
eh don't give up that easily, post a 30 sec clip. You never know...?
Thanks!

Here are a few seconds of the clip, as captured, encoded as a DV NTSC 720x480 AVI file.

If you run it through this simple AviSynth script:

Code:
AVISource("E:\Kinescope test.avi").AssumeBFF()
SeparateFields()

This gives you each field from the capture. You will immediately see a standard 2:3 (two fields, then three fields) telecined film cadence. You will see some ghosting in some of the very fast moving fields. This is clearly due to a limitation in how the material was captured. Ignore that for the moment.

The general idea to get a normal NTSC 29.97 interlaced video "feel" from what was originally a video signal, is to replace that third repeated field with a synthesized field, and then re-weave the result. As I said in my original post, there are other things that need to be done as well, but this is the heart and soul of what would need to be done.
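The replace-and-re-weave idea can be sketched in Python as a toy model (this is not MVTools; the "synth" marker just stands where MVFlowFps would interpolate a new field):

```python
def two_three_fields(frames):
    """2:3 pulldown: even-indexed frames contribute 2 fields, odd ones 3."""
    out = []
    for i, f in enumerate(frames):
        out += [f] * (2 if i % 2 == 0 else 3)
    return out

def replace_third_field(fields):
    """Swap every third consecutive repeat of a frame for a synthesized
    field marker; a real implementation would motion-interpolate here."""
    out, run = [], 1
    for i, f in enumerate(fields):
        run = run + 1 if i > 0 and f == fields[i - 1] else 1
        out.append(("synth", f) if run == 3 else f)
    return out

fields = two_three_fields(["A", "B", "C", "D"])
assert fields == ["A", "A", "B", "B", "B", "C", "C", "D", "D", "D"]

fixed = replace_third_field(fields)
# The two repeated third fields (of B and D) are now synthesis slots.
assert fixed == ["A", "A", "B", "B", ("synth", "B"),
                 "C", "C", "D", "D", ("synth", "D")]
```

After filling the "synth" slots with motion-interpolated fields, re-weaving the stream gives back a 60-fields-per-second interlaced look.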

BTW, in researching this, I found some unbelievable technology that was being used to re-insert color with nothing but B&W kinescope film (of a color TV broadcast) as the starting point. No, it wasn't Ted Turner colorization, but instead, there was enough residual colorburst information at the margins (the technicians forgot to turn off the "color killer," a circuit found in most early color TV monitors so that B&W broadcasts wouldn't have color fringing). Anyway, somewhat like forensic pathologists, they were able to reconstruct the body from just these few traces. Remarkable stuff.

Here's the link (good for seven days):

https://www.yousendit.com/download/Q...dkdrUmswTVE9PQ

BTW, in looking at this particular Kinescope, I think it may have used the cheapest and easiest form of capture, where they only captured half the fields. Therefore, some sort of resolution upscaling may also provide some benefit.

Last edited by johnmeyer; 30th July 2008 at 22:30.
Old 31st July 2008, 00:31   #8  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
Here's my attempt... there appear to be some blended fields still, so it's not perfect...


Code:
AVISource("Kinescope test.avi")
#Get Film Frames
AssumeBFF()
Telecide(guide=1,blend=true,back=2)
Decimate(cycle=5,mode=3,quality=3)

#degrain for FPS conversion Analyse
source=last
backward_vec3 = source.MVAnalyse(isb = true, delta = 3, pel = 2, overlap=4, sharp=1, idx = 2)
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 2)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 2)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 2)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 2)
forward_vec3 = source.MVAnalyse(isb = false, delta = 3, pel = 2, overlap=4, sharp=1, idx = 2)
source=source.MVDegrain3(backward_vec1,forward_vec1,backward_vec2,forward_vec2,backward_vec3,forward_vec3,thSAD=800,idx=2)

#Do magic to get 60P 60000/1001
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false)
#use undegrained clip for final output
last.MVFlowFps(backward_vec, forward_vec, num=60000, den=1001)

#convert 60P to 30i
AssumeFrameBased()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()

Frames 15-25 seem to have blending issues. I would look at these filters for more options:
http://avisynth.org/mediawiki/Extern...ending_removal

Last edited by mikeytown2; 31st July 2008 at 10:07. Reason: Fix TFF to BFF
Old 31st July 2008, 09:09   #9  |  Link
Alex_ander
Registered User
 
 
Join Date: Apr 2008
Location: St. Petersburg, Russia
Posts: 334
In my understanding of the kinescope 60 -> 24 process, some fields are inevitably dropped, and in an irregular way (unlike with 'telerecording' 50 -> 25). It is possible to extract the recorded field sequence, but MVTools (MVFlowFps) will not recreate the dropped fields/frames at their original moments, because it assumes that its input frames belong to equidistant moments in time (and this can't be true after a simple 60 -> 24). It will even rebuild all the INPUT frames for new moments. Some script or special motion analysis is needed to help MVTools recreate only the dropped fields in the case of irregular dropping. I often needed something like this for restoration of bad NTSC->PAL conversions, but hesitated to approach Fizick with a problem of such narrow interest (now it doesn't look so narrow).
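The irregular spacing is easy to see with a hedged Python sketch: if 1 field in 5 is dropped going 60 -> 48 fields/s, the surviving fields sit at uneven time steps, breaking the equidistant-input assumption.

```python
# Drop every 5th field from a 60 fields/s stream (times in 1/60 s units).
kept_times = [n for n in range(20) if n % 5 != 4]

# Gaps between surviving fields: mostly 1 unit, but 2 at each drop point.
gaps = [b - a for a, b in zip(kept_times, kept_times[1:])]
assert gaps == [1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1]

# The spacing cycles 1,1,1,2 rather than being constant, so a naive
# equal-spacing frame-rate converter re-times even the fields that were
# captured correctly, instead of only filling the gaps.
assert len(set(gaps)) > 1
```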
Old 31st July 2008, 10:38   #10  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: Yorkshire, UK
Posts: 1,673
Quote:
Originally Posted by johnmeyer View Post
Thus, the fields that were recorded on the film don't line up consistently from frame to frame.
They're not supposed to. The original fields are intentionally blurred vertically. They would never line up exactly by accident, but having them line up even approximately, for even part of the frame, would have caused terrible problems on a normal display (reversed movement on part of a frame due to field swapping, for example!), so they are blurred to avoid this.

In the UK, on some telerecordings, this is done during recording (using something called "spot wobble"); on others (very early ones) it was done during playback. There are transfers of skip-field telerecording (one field only was stored on the film) where you can clearly see the original lines.

On your example, you can clearly see both fields superimposed without visible lines, as if blend deinterlaced. Recovering the original fields from this is a serious challenge - maybe this is something that the software you linked to achieves?


Even if the lines are discretely visible on the original film print, and even if no attempt is made to blur them when transferring the film back to video, you can't expect those 480 lines to be recoverable when almost randomly cropped, re-sampled and re-scaled to a new, non-aligned 480 lines (as they inevitably have been). If there are original video lines discretely visible on the film print, you'll need an HD transfer to recover them.

Some people are transferring SD telerecordings in HD to recover something else...
http://colour-recovery.wikispaces.com/

see especially
http://www.rtr.myzen.co.uk/pt450pal.bmp
from
http://colour-recovery.wikispaces.co...very+(yet+more)

Some early results have been amazing...
http://www.rtr.myzen.co.uk/totp_full_gamut_601.mov
...but it doesn't work with everything yet.

Cheers,
David.

Last edited by 2Bdecided; 31st July 2008 at 10:44.
Old 31st July 2008, 10:38   #11  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
Attempt 2... Got an idea from here
http://forum.doom9.org/showthread.php?t=104294

Code:
AVISource("Kinescope test.avi")

AssumeBFF()

filldropsI()
function filldropsI (clip c)
{
even = c.SeparateFields().SelectEven()
vfE = even.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=3, chroma=false)
vbE = even.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=3, chroma=false)

odd = c.SeparateFields().SelectOdd()
vfO = odd.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=4, chroma=false)
vbO = odd.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=4, chroma=false)


global Efilldrops_d = even.mvflowinter(vbE,vfE,time=50,idx=3)#.ScriptClip("Subtitle(String(YDifferenceFromPrevious))")
global Efilldrops_c = even
E = even.scriptclip("""YDifferenceFromPrevious()<=4? Efilldrops_d : Efilldrops_c""")

global Ofilldrops_d = odd.mvflowinter(vbO,vfO,time=50,idx=4)#.ScriptClip("Subtitle(String(YDifferenceFromPrevious))")
global Ofilldrops_c = odd
O = odd.scriptclip("""YDifferenceFromPrevious()<=4? Ofilldrops_d : Ofilldrops_c""")

Interleave(E,O)
AssumeFieldBased()
Weave()
AssumeBFF()
}


Telecide(guide=1,blend=true,back=2)
Decimate(cycle=5,mode=3,quality=3)

#degrain for FPS conversion Analyse
source=last
backward_vec3 = source.MVAnalyse(isb = true, delta = 3, pel = 2, overlap=4, sharp=1, idx = 2)
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 2)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 2)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 2)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 2)
forward_vec3 = source.MVAnalyse(isb = false, delta = 3, pel = 2, overlap=4, sharp=1, idx = 2)
source=source.MVDegrain3(backward_vec1,forward_vec1,backward_vec2,forward_vec2,backward_vec3,forward_vec3,thSAD=800,idx=2)

#Do magic to get 60P 60000/1001
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false)
#use undegrained clip for final output
last.MVFlowFps(backward_vec, forward_vec, num=60000, den=1001)

#convert 60P to 30i
AssumeFrameBased()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()

I think it's fairly good, except for the blended fields...

Play around with Telecide and Decimate

Last edited by mikeytown2; 31st July 2008 at 11:30.
Old 31st July 2008, 11:34   #12  |  Link
Alex_ander
Registered User
 
 
Join Date: Apr 2008
Location: St. Petersburg, Russia
Posts: 334
Quote:
Originally Posted by mikeytown2 View Post
Attempt 2... Got an idea from here
http://forum.doom9.org/showthread.php?t=104294
Thanks for the link, very important function. But it only works when the (single) drops are filled with repeated frames/fields. It looks difficult to make the right frames (fields) repeat; something like ChangeFPS() will probably create dupes between the wrong frames.
Old 1st August 2008, 08:17   #13  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,332
I was contacted today by Kevin Segura from LiveFeed Video Imaging, the company behind the technology referenced in the kinescopes.com link I referenced in my original post.
Kevin claims that my post, this thread, and my attempt to "reverse engineer" the process he uses at LiveFeed all constitute patent infringement.
I sent Kevin a long email in which I disagreed with every part of his assertion, especially since patents do not restrict our free-speech rights to discuss technology. However, I have a soft spot in my heart for entrepreneurs. I have seen the results of his work and, as I indicated in my first post, I think those results are fantastic. Therefore, rather than get into a long discussion about patents and whether or not he is properly asserting his rights, I have decided not to make any further posts in this thread: not because any of us are doing anything wrong (we definitely are not), but because this is a guy trying to make it on his own in a tough business, and I don't want to do anything to make it tougher for him.

So, I wish Kevin well and hope that, in the brief discussion we've had here, we have provided a few ideas he can use to make his products and services better.
Old 1st August 2008, 10:38   #14  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: Yorkshire, UK
Posts: 1,673
Some countries have an exemption for R&D, others do not.

IANAL but AFAIK you aren't on very safe ground carrying out R&D on patented matter in the USA.

Again, IANAL but AFAIK you're fairly safe here in the UK.

This is not legal advice!

I agree that people who are trying to earn money from inventing novel and useful video processing techniques deserve encouragement.

Cheers,
David.
Old 1st August 2008, 10:43   #15  |  Link
mikeytown2
Resize Abuser
 
 
Join Date: Apr 2005
Location: Seattle, WA
Posts: 623
Fair enough... but just out of curiosity I did a quick search and came up with nothing on Google Patent Search, assuming the patent is in Kevin's name. Xvid is a classic example of code that might infringe on patents, at least according to Wikipedia; discussion about the MPEG-4 Part 2 code implementation is still going on, though. If we had looked at a patent first and then gone about reverse engineering it, I would feel dirty about doing something like that. The code I made above is somewhat trivial: a simple motion-compensated FPS adjustment that took less than 20 minutes to get running. So with that, to everyone here: good luck! I have no need for the code I made; I just did it because it was an interesting challenge. As for the two vids from kinescopes, they were of no help; as I stated above, all that's there is a denoised clip. Nothing was accomplished until johnmeyer posted a sample vid with the request to get 30i out of it. So this entire misunderstanding could have been avoided if johnmeyer had posted a sample and not referenced Kevin's website; I would have come up with exactly the same code! The code in my first post came from this documentation. The second post states where I got the idea.

With that, if Kevin wishes to have my posted code removed, so be it; I will do it. But what I came up with is trivial, and I had no idea it might be patented.

I'm also a supporter of people making money!

Last edited by mikeytown2; 1st August 2008 at 10:52.
Old 1st August 2008, 17:15   #16  |  Link
Joel Cairo
Registered User
 
Join Date: Oct 2005
Posts: 18
Well, I'll just say "thanks" to John & mikeytown2 for understanding my position. It was indeed the specific LiveFeed references and links to my site that brought about my reaction. As John & I discussed in a PM exchange, we both recognize that I.P. rights have to be asserted in order to preserve them. So, while I have no desire to stifle general scholarly discussion, I hope you'll all understand my position.

The patent application is in process (and believe me, it's a **long** [multi-year] process), so that may be why the search didn't turn up anything.

I'm grateful that people are so interested in the work that I do, and even more grateful that they want to see it succeed. So mikeytown2, it would be my **preference** (and my polite request) that your code be taken down, if for no other reason than that the process John was looking for has already been created; and also because I hate the potential hassle of having to watch for the future legions of people who'd stumble across the code with the aid of Google and decide that they didn't care about intellectual property rights (both mine, and those of the owners of the material that they might attempt to exploit...)

So thanks again to everyone for your support of a fellow forum member, and also for your time and consideration in reading this response.

All the best,

-Kevin Segura (a/k/a "Joel Cairo")
LiveFeed Video Imaging
Old 1st August 2008, 17:19   #17  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Before any code is taken down, please state the claims of your patent so we can decide if the code is violating anything. Thank you.

I assume you are not claiming a patent on the idea of motion-compensated frame/field interpolation.
Old 1st August 2008, 19:32   #18  |  Link
Gavino
Avisynth language lover
 
Join Date: Dec 2007
Location: Spain
Posts: 3,406
While I support the protection of legitimate rights, I find it shocking that anyone could consider mikeytown2's code to be violating anything at all, as he has clearly stated how he came up with it using his own brain.

Out of solidarity, I will add my contribution: there's no need to use global variables, as
Code:
global Efilldrops_d = even.mvflowinter(vbE,vfE,time=50,idx=3)
global Efilldrops_c = even
E = even.scriptclip("""YDifferenceFromPrevious()<=4? Efilldrops_d : Efilldrops_c""")
can be replaced by
Code:
E = even.ConditionalFilter(even.mvflowinter(vbE,vfE,time=50,idx=3), even, \
"YDifferenceFromPrevious()", "<=", "4")
Don't worry, mikeytown2 - I won't be seeking any royalties for this idea.

Last edited by Guest; 1st August 2008 at 21:13. Reason: readability
Old 2nd August 2008, 19:32   #19  |  Link
avih
Capture, Deinterlace
 
 
Join Date: Feb 2002
Location: Right there
Posts: 1,966
I don't think any code should be taken down. The general notion of video restoration is not new, nor are motion-compensated processes, or cleaning filters in a trillion variations, etc. A guy comes up with a simple method, implemented in a framework intended exactly for such tasks, that produces results which are [trying to be] similar to ones produced by other products. There's nothing illegal about that. It's not even reverse engineering, just plain black-box examination. IMHO the discussion should continue if there's still interest in it.
Old 2nd August 2008, 21:39   #20  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Mr Segura sent me a PM offering excuses about why he could not tell me the claims of his patent, which appears not to exist. He knows them well enough to try to shut down the thread, but not well enough to tell us what they are!