Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
Thread Tools | Search this Thread | Display Modes |
2nd November 2005, 02:20 | #61 | Link |
Registered User
Join Date: Oct 2005
Posts: 18
|
castellandw:
With regard to the Doctor Who restorations, there should be no need for the Restoration Team to recover frames from interlaced content... the episodes they treat with motion conversion are derived from 25 fps film telerecordings, so it's my understanding that they simply have special video transfers made, clean up the image, and work directly from those. -Kevin |
2nd November 2005, 02:40 | #62 | Link |
Registered User
Join Date: Sep 2005
Posts: 90
|
Kevin, what are you talking about? I'm talking about the regular broadcast standards conversion process (PAL-to-NTSC or NTSC-to-PAL) in general, as performed by the Snell converter the Restoration Team uses. This has nothing to do with recovering the original interlaced frames from film telerecordings, as the actual VIDFIRE process does.
Last edited by castellandw; 2nd November 2005 at 02:50. |
2nd November 2005, 04:01 | #64 | Link | ||
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
|
Quote:
Quote:
Last edited by mg262; 2nd November 2005 at 04:03. |
||
2nd November 2005, 12:53 | #68 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
it's a chicken and egg thing.
to do a motion-compensated deinterlace (a good bob), we first need a good bob to estimate the motion from.
__________________
sucking the life out of your videos since 2004 |
2nd November 2005, 15:34 | #70 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
|
Mug Funky, scharfis,
Why do you need to start with a normal bob? The only case that comes to mind is vertical motion of 1 (or 3, or 5, etc.) original scan lines per field, i.e. the case where field 0 and field 1 match pixel for pixel. There you either have to look further afield temporally (which may not help, and increases the risk from incorrect motion prediction) or, more safely, fall back to pure spatial upconversion.

Cf. again the paper Fizick pointed us to: http://www.ics.ele.tue.nl/~dehaan/pdf/111_IVCP_ZHAO It describes a motion-compensated deinterlacer that starts ab initio. |
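The pure spatial fallback described above can be sketched in a few lines. This is a hypothetical illustration, not code from any actual plugin: the `spatial_bob` helper and its simple neighbour-averaging scheme are invented for demonstration.

```python
def spatial_bob(field):
    """Expand one field (a list of scan lines, each a list of pixel values)
    to full frame height using only spatial (intra-field) interpolation:
    each missing line is the average of the field lines above and below."""
    frame = []
    for i, line in enumerate(field):
        frame.append(line)                          # original scan line
        if i + 1 < len(field):
            nxt = field[i + 1]
            # missing line: vertical average of the two neighbouring lines
            frame.append([(a + b) / 2 for a, b in zip(line, nxt)])
        else:
            frame.append(line[:])                   # bottom edge: repeat last line
    return frame

field = [[0, 0, 0], [100, 100, 100]]
print(spatial_bob(field))
# the interpolated middle line is [50.0, 50.0, 50.0]
```

Because it never looks at the other field, this fallback cannot be fooled by motion, which is why it is the safe choice when motion prediction is unreliable.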
2nd November 2005, 16:32 | #71 | Link |
Registered User
Join Date: Sep 2005
Posts: 90
|
@mg262, has anyone actually implemented the GST algorithm as an AviSynth plugin, or at least tested it to see whether it actually gives good results? Also, do you think I should ask the Doctor Who Restoration Team whether they know what kind of deinterlacing technique is used before motion compensation starts in the Snell converters they use?
|
2nd November 2005, 17:15 | #72 | Link | |
Registered User
Join Date: Mar 2002
Posts: 1,075
|
Quote:
|
|
2nd November 2005, 17:16 | #73 | Link | |||
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
|
Quote:
It is certainly worthwhile to know what methods are used in hardware -- if the explanation is sufficiently detailed to be replicable. But I think you should not assume that the methods used in hardware, even high-end hardware, are the "best" methods around. (The same caveat should be placed on academic material.) In the end, to choose an approach for a given task, you need to immerse yourself in the material until you understand the strengths and weaknesses of each method well enough to pick -- or construct -- the appropriate one for the task. Quote:
On this, as with phase correlation, things only get implemented when someone has an interest in implementing them -- which typically means an interest in the specific tasks for which they are useful. I'm going to steal some well chosen words: Quote:
|
|||
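For reference, the phase correlation mentioned above can be illustrated with a toy 1-D sketch in plain Python. A real implementation would use an FFT over 2-D image blocks; the O(N²) DFT and the function names here are invented purely for demonstration. The idea: normalise the cross-power spectrum so only phase survives, then the inverse transform peaks at the displacement.

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def phase_correlate(a, b):
    """Estimate the circular shift d such that b[n] ~= a[n - d]."""
    A, B = dft(a), dft(b)
    R = []
    for Ak, Bk in zip(A, B):
        cross = Bk * Ak.conjugate()          # cross-power spectrum term
        mag = abs(cross)
        # normalise away magnitude, keeping only the phase difference
        R.append(cross / mag if mag > 1e-12 else 0j)
    surface = [v.real for v in idft(R)]      # correlation surface
    return surface.index(max(surface))       # peak position = estimated shift

a = [0, 0, 1, 2, 3, 0, 0, 0]
b = [0, 0, 0, 0, 1, 2, 3, 0]                 # a shifted right by 2
print(phase_correlate(a, b))                 # -> 2
```

Because the magnitude is divided out, the peak sharpness does not depend on image contrast, which is one reason phase correlation is popular in broadcast standards converters.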
2nd November 2005, 17:20 | #74 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
|
MfA, inherently aliased fields: do you mean aliased in the sense that they are aliases of each other or aliased in the sense of anti-aliasing? (I think you mean the former, but if it's the latter I'll have to think some more...)
Update on progress: I put in the half-pixel assembly, and then spent a fair while profiling, optimising, and trying a lot of different things to squeeze out more speed. I will probably release a new version after putting in FindReverseMotion and ReverseCompensate functions. After that, real life will soak up most of my time, but I'm thinking of implementing one of these two (in C to start with):

-- simple (bilinear) mesh warping
-- simple (no clever occlusion handling) FPS conversion

Simple because I like to have a baseline to test more advanced features against. Preferences?

Last edited by mg262; 2nd November 2005 at 17:33. |
2nd November 2005, 18:07 | #76 | Link | |
Registered User
Join Date: Sep 2005
Posts: 90
|
Quote:
Last edited by castellandw; 2nd November 2005 at 18:20. |
|
2nd November 2005, 18:10 | #77 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
|
Got it. I would certainly agree that
50p --MC-FPS-> 60p --discard fields-> 30i

beats

50p --discard fields-> 25i --MC-FPS-> 30i

But our case is slightly different. Our underlying material is 25i, and the only way we can get 50p back out of it is by bobbing, i.e. by interpolating values. Now IMO the MC-FPS unit is just as capable of interpolating values as a bobber. In fact, I would argue that it is better placed, because:

a) Suppose we are interpolating midway between fields B and C in this configuration:

A B C D

Now a bobber will use parts of C to fill in bits of B, and then the FPS unit will use those filled-in bits to create the intermediate frame -- but really this gives us no more information than it already gets from considering C directly.

b) The bobber will also use A to fill in B -- but the FPS unit can do this if it chooses to, simply by expanding the temporal radius. You could argue that this introduces extra inaccuracy (pulling information across 1.5 fields' distance rather than 1 field's distance), but:

i) the information is being pulled across 1.5 fields anyway; it's just happening in two hops
ii) the bobber itself is always pulling information across 1 field's distance, whereas the MC-FPS unit can get away with looking across 0.5 fields' distance.

Also note that at least some motion-compensated deinterlacing methods, like the GST, can be applied directly to FPS change -- we obtain motion-compensated samples from previous and next fields, apply the generalised sampling theorem to reconstruct a curve, and read the values we want off the curve. (I think there are better methods available, which allow things like lower weighting of information pulled across a larger temporal distance... but that's another matter.)

Last edited by mg262; 2nd November 2005 at 18:38. |
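The point in (b) above -- that an MC-FPS unit need only pull information across half a field's distance in each direction -- can be illustrated with a toy example. This is a deliberately simplified 1-D sketch with a single global motion vector; the `mc_midpoint` helper and its setup are invented for the illustration and ignore occlusion entirely.

```python
def mc_midpoint(prev, nxt, shift):
    """Interpolate the frame halfway between 1-D frames prev and nxt,
    assuming the whole scene moves `shift` pixels per frame (even shift,
    circular edge handling)."""
    n = len(prev)
    half = shift // 2
    out = []
    for x in range(n):
        # sample prev half a motion vector forward, nxt half a vector back:
        # each sample travels only 0.5 frame intervals along the trajectory
        a = prev[(x - half) % n]
        b = nxt[(x + half) % n]
        out.append((a + b) / 2)   # equal weights: both sources 0.5 frames away
    return out

prev = [0, 0, 9, 0, 0, 0]
nxt  = [0, 0, 0, 0, 9, 0]        # the object has moved +2 pixels
print(mc_midpoint(prev, nxt, 2))
# -> [0.0, 0.0, 0.0, 9.0, 0.0, 0.0]  (object lands at the midpoint position)
```

A bobber feeding a separate FPS unit would instead move data a full field interval per hop, which is exactly the two-hop inefficiency described in (a) and (b).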
2nd November 2005, 18:30 | #78 | Link | |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
|
castellandw,
Quote:
The only way to really tell which method is best is to implement both, refine them with all the tweaks everyone has thought of, and then compare them on real applications like FPS conversion. So, having implemented one method, we need to refine it and build applications on it before any comparison is even possible. |
|
2nd November 2005, 18:34 | #79 | Link | |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,653
|
Quote:
Did you mean:

50p --MC-FPS-> 60p --discard fields-> 30i

or

60p --MC-FPS-> 50p --discard fields-> 50i
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
|