10th August 2008, 23:46   #1
Gavino
Avisynth language lover
Deinterlace should preserve original pixels?

Yes, I'm afraid it's Yadiq - Yet Another Deinterlacing Question

Given that deinterlacing is basically a problem of interpolation, is it considered a) essential, b) desirable or c) unimportant that a deinterlacer/bobber preserves the original pixel values? Please explain your answers.

To clarify, what I am asking is, given the interlaced input, with dots representing pixels,
Code:
y \ time-------------->
|     .   .   .   .
|       .   .   .   .
|     .   .   .   .
v       .   .   .   .
           etc
should the dots be preserved unchanged in the output and the work be limited to 'filling in the blanks'?

Is the answer the same for both 'normal' deinterlacing (50i->25p) and bobbing (50i->50p)? (Clearly, in the first case, you throw some information away as well.)

How do some of the well-known deinterlacers behave in this respect?

Aside - deinterlacing seems to be one of those mysterious topics, like quantum mechanics or relativity, where the more you learn, the more you realise how much further you still have to go...

11th August 2008, 00:07   #2
Guest
Quote:
Originally Posted by Gavino
should the dots be preserved unchanged in the output
What are you thinking they should be changed to and why?

11th August 2008, 00:36   #3
Gavino
Avisynth language lover
Quote:
Originally Posted by neuron2
What are you thinking they should be changed to and why?
Well, as a rule I don't think they should be (and it seems you agree), but I have read suggestions that this is not always the case (e.g. here and here).

11th August 2008, 00:52   #4
IanB
Avisynth Developer
Ideally the input pixels should be preserved. As you say "the work be limited to 'filling in the blanks'".

But this is a perfect-world view. In reality you have to invent new pixel values to fill in the gaps, and this will occasionally lead to mistakes. Sometimes these mistakes lead to visually unpleasant artefacts.

If a given implementation chooses to do some extra processing to limit the visual significance of these mistakes, and that processing involves tweaking the values of the original input pixels, and it truly makes the mistakes less objectionable, then I would say it is okay to munge the input values slightly to improve the total result.

The dumb internal Bob() filter takes this approach. It moves the top field down 0.5 pixel and the bottom field up 0.5 pixel, using the Bicubic resizer to do the work. This is so that all output frames are equally processed. Using some clever values for b, c and height, the input pixel values can be retained. I do not like the look of the result, but for some applications it is relevant.
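For illustration, a minimal script sketch of the two settings being discussed (the source filename is hypothetical; b=c=1/3 are Bob's documented defaults):
Code:
# Hypothetical interlaced source.
src = AviSource("interlaced.avi")
bob_default  = src.Bob()              # default b = c = 1/3 (Mitchell-Netravali)
bob_preserve = src.Bob(b=0.0, c=1.0)  # the combination the docs note as keeping the input lines
return Interleave(bob_default, bob_preserve)  # alternate frames for a quick visual comparison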

11th August 2008, 06:02   #5
Manao
Registered User
I checked, and Bob keeps (alternately) one field at the same position and recreates the other field by moving the first one 0.5 pixel up or down. However, the recreated field looks slightly lowpassed, which makes sense for a dumb spatial bob, since a field taken alone necessarily has some aliasing.

11th August 2008, 10:55   #6
Alex_ander
Registered User
NNEDI probably does keep the original pixels. I don't know about its internal procedures, but the documentation says it 'interpolates the missing pixels using only information from the kept field'. Considering its speed, it doesn't look like it does this in some dumb way. It wouldn't work well as a plain deinterlacer in some critical cases (e.g. one field black, the other white), but for bob -> resize -> interlace the same example should somehow still work.
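A quick usage sketch, with the caveat that the field value shown is an assumption based on the usual convention in tritical's deinterlacers (field=-2 meaning double-rate output); check the filter's own documentation:
Code:
LoadPlugin("nnedi.dll")                        # plugin path/name is an assumption
src = AviSource("interlaced.avi").AssumeTFF()  # hypothetical source, TFF assumed
src.nnedi(field=-2)                            # assumed: double-rate bob, each output frame keeping one original field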

11th August 2008, 11:06   #7
2Bdecided
Registered User
There's a post somewhere where Didee explained that mcbob strictly preserves the original fields, but this constraint could be relaxed - and this might potentially give a nicer looking output in some circumstances.

I care about preserving the original fields if I'm generating a progressive version for processing, and will revert to interlaced for the final output. It at least gives the chance of the whole thing becoming a NOP (at least in theory!) if/where necessary.

I care about the nicest looking output if the progressive version is the final output - and in that case, I'll use whatever looks best - whether it preserves, or not.

Interlacing, and deinterlacing, are compromises. You have to throw ideals away sometimes when working with compromises!

Cheers,
David.

13th August 2008, 00:25   #8
Gavino
Avisynth language lover
Thanks for the responses, everyone. I think the answers show that the question was not a trivial one.

Encouraged by that, I have done a bit more investigation, and have found that the truth about Bob [sounds like the title of a bad movie] is a bit more complex than the documentation says.

Since Bob uses a Bicubic (aka Mitchell-Netravali) reconstruction filter, it will keep the original pixels when that filter acts as a pure interpolator, i.e. if and only if b=0 (regardless of the value of c): with b=0 the kernel has weight 1 at a source sample and 0 at the neighbouring integer offsets, so output samples that land exactly on source positions pass through unchanged. Hence the default values (b=c=1/3) do not preserve the input data. As the documentation states, (b=0, c=1) will do so, but so will b=0 with any other value of c, including c=0.5 (the Catmull-Rom spline), which also keeps the desirable property b+2*c=1.

I have verified this in practice by undoing the Bob (via SeparateFields, SelectEvery and Weave) and doing a Compare with the original.
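Along the lines of the following sketch (the source name and TFF field order are assumptions; SelectEvery(4, 0, 3) picks out the two fields that came from each original frame):
Code:
src    = AviSource("interlaced.avi").AssumeTFF()   # hypothetical source
bobbed = src.Bob(b=0.0, c=0.5)                     # b=0, so the kept lines should be untouched
undone = bobbed.AssumeTFF().SeparateFields().SelectEvery(4, 0, 3).Weave()
return Compare(undone, src, "Y")                   # luma differences should be zero; chroma is another matter (see below)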

What no-one mentioned (perhaps because they thought it self-evident) is that for YV12 it is wrong and makes no sense for any deinterlacer to preserve the original chroma data, because of the different way YV12 chroma is stored in interlaced and progressive frames.

13th August 2008, 03:25   #9
IanB
Avisynth Developer
The input YV12 chroma samples could be preserved as well; it's just that the positioning is a little different.

13th August 2008, 03:49   #10
vcmohan
Registered User
I am interested to know how the YV12 chroma is originally arrived at in the case of interlaced video. Each U and V value corresponds to a two-by-two block of pixels, and here the two vertically adjacent pixels may have completely different chroma values. I intend to do a rotation in my spinner plugin and need to know the chroma scheme.

13th August 2008, 08:13   #11
Gavino
Avisynth language lover
@IanB
You're right that in the case of Bob (or anything that works on only one field at a time), input chroma could be preserved by changing the resampling positions for the chroma (but I'm not sure if this would be theoretically 'correct'). For deinterlacers that combine the two fields, I think it would be wrong. Consider the effect on a stationary scene.

@vcmohan.
See here for all the gory details.

I'm not sure it makes sense to do rotation on interlaced video.
Or do you plan to deinterlace, rotate and re-interlace?
Even then I don't think it could work right.

13th August 2008, 08:26   #12
Manao
Registered User
Quote:
What no-one mentioned (perhaps because they thought it self-evident) is that for YV12 it is wrong and makes no sense for any deinterlacer to preserve the original chroma data, because of the different way YV12 chroma is stored in interlaced and progressive frames.
Chroma samples are spatially in the same place in a progressive and an interlaced frame. However, the relative position of the chroma samples with respect to the luma samples varies when you consider a single field.
Code:
X   X   X   X ----> Top luma

  O       O   ----> Top chroma

X   X   X   X ----> Bottom luma



X   X   X   X ----> Top luma

  O       O   ----> Bottom chroma

X   X   X   X ----> Bottom luma
So the top field taken alone looks like:
Code:
X   X   X   X

  O       O





X   X   X   X
And the bottom field looks like this:
Code:
X   X   X   X





  O       O

X   X   X   X

13th August 2008, 10:10   #13
IanB
Avisynth Developer
Borrowing and correcting (MPEG-1 versus MPEG-2 placement) Manao's nice ASCII art:
Code:
X   X   X   X ----> Top luma

O       O   ----> Top chroma

*   *   *   * ----> Interpolated luma



X   X   X   X ----> Top luma

+       +   ----> Interpolated chroma

*   *   *   * ----> Interpolated luma
What is conceptually so difficult about this? Practicality is another matter.

13th August 2008, 17:01   #14
Gavino
Avisynth language lover
Quote:
Originally Posted by Manao
Chroma samples are spatially at the same place in progressive & interlaced frame. However, the relative position of chroma samples in regard to luma samples varies when you consider a single field.
Ah, that was the missing link in my understanding.
Thanks for that - now what you and IanB are saying makes perfect sense.

I was assuming that each field was equivalent to a half-height progressive YV12 image, whereas, as far as the chroma sampling positions are concerned, it is not.

However, the corollary of this seems to be that Bob is actually operating wrongly on YV12 chroma, since it resizes each field as if it were a standard progressive image. (This is independent of whether or not it retains the original pixels.)
Should it not be shifting the chroma differently from the luma before resizing?

Going further, it also suggests that the approach of resizing an interlaced image by SeparateFields, etc, will also introduce a chroma error with YV12 unless the chroma is treated specially. Can this be right?

14th August 2008, 01:11   #15
IanB
Avisynth Developer
Quote:
Originally Posted by Gavino
However, the corollary of this seems to be that Bob is actually operating wrongly on YV12 chroma, since it resizes each field as if it were a standard progressive image.

Should it not be shifting the chroma differently from the luma before resizing?
No, the resizer maintains the spatial relationship between the luma and chroma centre points.

Internally the chroma planes are offset by +/-0.125 while the luma planes are offset by +/-0.25 (half as much, because YV12 chroma is subsampled 2:1 vertically), so the overall effect is correct.
Quote:
Going further, it also suggests that the approach of resizing an interlaced image by SeparateFields, etc, will also introduce a chroma error with YV12 unless the chroma is treated specially. Can this be right?
Again no, because the resizer is "maintain centre" based, the offset adjustments come out in the wash.

The input crop value is the result of 3 components:
  1. Input offset.
  2. Output un-offset.
  3. Resizer centre correction.
The Input and Output offsets are the same numeric pixel value, but of opposite sign and in different spatial units (the pixel sizes are different).
Code:
Top field Luma      = 0.0
Top field chroma    = 0.125
Bottom field Luma   = 0.5
Bottom field chroma = 0.625
The centre correction for the chroma planes has all the values halved, notably the subrange_start value.
Code:
  // the following translates such that the image center remains fixed
  double pos = subrange_start + ((subrange_width - target_width) / (target_width*2));
  // which is equivalent to:
  //     pos = subrange_start - 0.5*(1 - subrange_width/target_width);

14th August 2008, 03:51   #16
vcmohan
Registered User
Is there a flag to find out if an image is bobbed or just plain field-separated?

In the case of RGB, what happens to a bobbed clip if we trim and join at places that do not match?

@Gavino: I have a provision to linearly move the coordinates of the rotation centre along the clip. In that case I need to know exactly where these coordinates are in each field.

14th August 2008, 06:16   #17
IanB
Avisynth Developer
A clip after SeparateFields() has vi.IsFieldBased() true.
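The same flag is also visible from the script side; a tiny sketch (the source filename is hypothetical):
Code:
src    = AviSource("interlaced.avi")   # hypothetical source
fields = src.SeparateFields()
Assert(IsFieldBased(fields) == true,  "expected a field-based clip after SeparateFields()")
Assert(IsFieldBased(src)    == false, "the original clip is frame-based")
return fields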

14th August 2008, 09:13   #18
Gavino
Avisynth language lover
IanB, thanks for the explanation.

Can you please clarify one detail for me - do the resizers treat field-based and frame-based clips differently, or are you saying specifically that there is no need for them to do so (because it 'comes out in the wash')?

14th August 2008, 09:40   #19
IanB
Avisynth Developer
No, the resizer always treats the frame as frame-based. This is why you need to manually calculate the offsets for the top and bottom fields when using my interlaced resizing function.

To answer the question I think you were really trying to ask: because the resizer is referenced to the centre of the image, the relationship between luma and chroma is maintained during a resize. If the chroma is 0.125 old pixels up in the input, it will still be 0.125 new pixels up in the output.

14th August 2008, 09:58   #20
vcmohan
Registered User
Quote:
Originally Posted by IanB
A clip after SeparateFields() has vi.IsFieldBased() true.
How does one know whether an image is bobbed or not?