Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 6th January 2010, 14:45   #41  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Quote:
Originally Posted by 2Bdecided View Post
It's rare for interlaced signals to be filtered to half the vertical resolution, so suggesting you'll get 540 vs 720 isn't realistic.
As I said, the bandwidth is usually reduced to around 70% ... so the effective resolution is about 70%.

That's just when things aren't moving, though. When things move vertically at an odd number of pels per field, even the best deinterlacers in use at the moment are not going to be able to make something sharp out of the result: the object becomes identical (ignoring the shift) in both fields, all the image data for half the lines in every frame is simply gone, and interpolation will have to do. So in that case the effective resolution becomes 50% ... but it's aliased, so it's actually a little worse than 50%.

Which is why you really don't want to do high movement video with interlacing (sports mostly, since action movies are of course shot with flicker cam, making interlacing moot).
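A minimal numpy sketch of that degenerate case, using a hypothetical 8-line detail pattern moving down exactly 1 pel per field (the line values here are stand-ins, not real pixel data):

```python
import numpy as np

# Hypothetical 8 lines of vertical detail; field 0 samples the even
# lines at t=0, field 1 samples the odd lines at t=1, but by then the
# object has shifted down by 1 line.
obj = np.arange(8)               # stand-in for 8 lines of image detail

field0 = obj[0::2]               # even lines at t=0
shifted = np.roll(obj, 1)        # object one line lower at t=1
field1 = shifted[1::2]           # odd lines at t=1

# Both fields carry identical samples: lines 1, 3, 5, 7 of the object
# are never captured anywhere and must be interpolated.
assert np.array_equal(field0, field1)
```

With an even shift per field the two fields would sample complementary object lines instead, which is why only the odd-shift case loses half the data outright.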

Last edited by MfA; 6th January 2010 at 21:56.
Old 8th January 2010, 00:39   #42  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 41
Quote:
Originally Posted by MfA View Post
As I said, the bandwidth is usually reduced to around 70% ... so the effective resolution is about 70%.

That's just when things aren't moving, though. When things move vertically at an odd number of pels per field, even the best deinterlacers in use at the moment are not going to be able to make something sharp out of the result: the object becomes identical (ignoring the shift) in both fields, all the image data for half the lines in every frame is simply gone, and interpolation will have to do. So in that case the effective resolution becomes 50% ... but it's aliased, so it's actually a little worse than 50%.

Which is why you really don't want to do high movement video with interlacing (sports mostly, since action movies are of course shot with flicker cam, making interlacing moot).
I just remembered a review of the Canon HF20, a consumer 1080@60i AVCHD camera:
http://www.camcorderinfo.com/content...erformance.htm

Quote:
The Canon HF20 has the best video resolution we've ever recorded on a consumer camcorder and its scores are comparable to some of the professional models we've tested (like the Sony HDR-FX1000 and Canon XL H1A). The camcorder measured an approximate video resolution of 800 line widths per picture height (lw/ph) horizontal and 900 lw/ph vertical.
That would seem to indicate that little to no vertical prefiltering is used in this particular camera, wouldn't it?
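Back-of-envelope only, and the equivalence is my assumption rather than anything the review states: if one line width per picture height corresponded to one pixel row of the 1080-line container, the measured vertical figure would work out to:

```python
# Hypothetical arithmetic: IF 1 lw/ph corresponded to 1 pixel row of the
# 1080-line container (an assumed equivalence, not a published one), the
# measured vertical sharpness would be this fraction of nominal:
measured_lwph = 900
container_lines = 1080
fraction = measured_lwph / container_lines
assert round(fraction, 2) == 0.83   # roughly 83% of nominal vertical resolution
```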

-k
Old 8th January 2010, 06:04   #43  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Not exactly, because we generally don't use brickwall filters in video processing ... for instance, a simple [1/4, 1/2, 1/4] filter will let such high-frequency line patterns through, attenuated. You can't exactly say it leaves resolution intact, though.
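A quick sketch of that claim: the magnitude response of the [1/4, 1/2, 1/4] vertical filter is H(f) = 0.5 + 0.5·cos(2πf), with f in cycles per line, which is nowhere near a brickwall:

```python
import numpy as np

# Magnitude response of the [1/4, 1/2, 1/4] tent filter, applied
# vertically per column; f is in cycles/line (0.5 = Nyquist).
def tent_response(f):
    return 0.5 + 0.5 * np.cos(2 * np.pi * f)

assert abs(tent_response(0.0) - 1.0) < 1e-12    # DC passes untouched
assert abs(tent_response(0.25) - 0.5) < 1e-12   # half-Nyquist: attenuated to 0.5, still visible
assert abs(tent_response(0.5)) < 1e-12          # Nyquist (alternating lines) is fully nulled
```

So a resolution chart's line patterns remain visible well past any nominal cut-off, just progressively dimmer.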

PS. if you are going to ignore my argument of vertical resolution in the presence of vertical motion don't quote it ...
Old 8th January 2010, 07:54   #44  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
Quote:
That would seem to indicate that little to no vertical prefiltering is used for this particular camera, wouldn't it?
I wasn't able to find any equivalence between lw/ph and effective resolution in pixels. What I was able to find, however, was that the iPhone has a lw/ph of 866 horizontally and 897 vertically. Which makes me doubt the quality of this camera. Care to explain?
Old 8th January 2010, 08:11   #45  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 41
Quote:
Originally Posted by MfA View Post
Not exactly, because generally we don't use brickwall filters in video filtering ... just for instance a simple 1/4, 1/2, 1/4 filter will let such high frequency line patterns through attenuated. You can't exactly say it leaves resolution intact though.
I was under the impression that such filtering was not possible in the sensor itself. If my sources are right that this sensor is native 60i, then it seems that electronic filtering will either be [1, 1]/2, or none. As for the OLPF and optical flaws, my knowledge is weaker.

A tent filter will still have considerable attenuation above its cutoff frequency. If the measurement technique is good, then the quoted line pairs (or whatever) should be representative of the real frequency response, should they not?

Quote:
PS. if you are going to ignore my argument of vertical resolution in the presence of vertical motion don't quote it ...
I am not ignoring it. But for resolution in static scenes it is irrelevant. The question is to what degree cameras employ static filtering (to combat issues with movement) that also degrades the resolution in non-moving scenes. The answer seems to be that both are possible, as I said a few posts ago:
Quote:
Originally Posted by knutinh
To complicate matters further, different cameras construct the two video fields in different manners. In some cameras the even field corresponds to the even lines of pixels in the CCD chip, and the odd field to the odd lines of pixels in the CCD chip.
...
Slightly better are cameras which produce an average of the even lines and the preceding odd lines for the even field, and the odd lines and the preceding even lines for the odd field.
If you look up that post you will see that I supplied URLs.
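The two field-construction schemes quoted above can be sketched directly; `frame` here is a hypothetical full-resolution sensor readout with an even number of lines (edge handling at the top line is glossed over):

```python
import numpy as np

def fields_plain(frame):
    """Even field = even sensor lines, odd field = odd sensor lines (no prefilter)."""
    return frame[0::2], frame[1::2]

def fields_averaged(frame):
    """Each field line averaged with the preceding sensor line: a [1,1]/2
    vertical prefilter that trades resolution for less interline flicker."""
    f = frame.astype(float)
    prev = np.roll(f, 1, axis=0)   # line i-1 (line 0 wraps around; edge case ignored)
    avg = (f + prev) / 2.0
    return avg[0::2], avg[1::2]
```

The first scheme keeps full static resolution but maximum flicker; the second is the [1, 1]/2 electronic filtering case mentioned earlier in the thread.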

-k

Last edited by knutinh; 8th January 2010 at 08:15.
Old 8th January 2010, 09:09   #46  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
No, it's not representative of the frequency response, because you can't determine how fast the fall-off is (i.e. how blurry vs. ringy the result is) from a single data point.

I never said you didn't supply URLs ... hell, the more the merrier, let me do one too. It's from the EBU, which has been doubted in this thread, but even if you doubt them there is the zinger from Faroudja (yes, that one) ... “I am amazed that anybody would consider launching new services based on interlace. I have spent all of my life working on conversion from interlace to progressive. Now that I have sold my successful company, I can tell you the truth: interlace to progressive does not work!”.
Old 8th January 2010, 09:24   #47  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 41
Quote:
Originally Posted by MfA View Post
No, it's not representative of the frequency response because in and of themselves you can't determine how fast the fall off (ie. how blurry vs ringy it is) is from a single data point.
If a decimation/interpolation pass has a passband that is within +0 dB/-3 dB from DC to frequency Fc, then I would claim that that single data point says a lot about the system.

Ringing is first and foremost an issue in high-order filtering with negative coefficients, I think. Do you see much ringing in 2-3 tap decimation/interpolation filters?

A system with a flat and wide passband should be relatively free of blur. The degree of blurriness should be predicted relatively well from the "single data point" of passband width?

Quote:
there is the zinger from Faroudja (yes, that one) ... “I am amazed that anybody would consider launching new services based on interlace. I have spent all of my life working on conversion from interlace to progressive. Now that I have sold my successful company, I can tell you the truth: interlace to progressive does not work!”.
Hilarious. Like I said in my first post in this thread (before venturing off into nit-picking):
Quote:
Originally Posted by knutinh View Post
I tend to see interlacing as an analog 2:1 perceptually motivated compression method. I don't really see its purpose in this digital era, except for legacy purposes. If you want to trade motion/resolution/bandwidth, then use a lossy digital codec that does it intelligently.

-k
Old 8th January 2010, 09:49   #48  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 41
Quote:
Originally Posted by Manao View Post
I wasn't able to find any equivalence between lw/ph and effective resolution in pixels. What I was able to find, however, was that the IPhone has a lw/hp of 866 horizontally and 897 vertically. Which makes me doubt the quality of this camera. Care to explain ?
That camera was in fact one of the highest effective resolution consumer 1080 cams (it fails in other areas such as low light performance though).

It would seem that their "video sharpness" test does in fact factor in temporal aspects such as interlacing/deinterlacing, temporal lossy compression, etc. (my assumption was wrong; always delightful to improve understanding):
Quote:
Video Sharpness
The sharpness that a camcorder actually produces is rarely the same number that the manufacturer advertises. For instance, camcorders that output a 1920 x 1080 picture are not actually capturing one thousand nine-hundred and twenty horizontal lines of information. That's simply the size of the "container" that the camcorder outputs (also known as the resolution). In fact, there are lots of ways that manufacturers can play with the numbers, emphasizing capabilities of the lens, or the sensor, or something else. The simple fact is, you don't buy a sensor, and in most cases, you don't buy a lens. You buy a camcorder – a complete, pre-assembled camcorder, so that's how we test them.

We light a DSC Labs Multiburst chart at an even 3000 lux. The camcorder is stationed in a fixed position on a tripod. We aim the camcorder, aligning with the chart's 16:9 guideframes. We then pan the camcorder slowly left and right for about 30 seconds. Then we re-align the camcorder and tilt slowly up and down for about 30 seconds.

After the shooting is complete, we connect the camcorder to our HDTV using the camcorder's highest quality connection, typically either composite-out, S-video, component-out, or HDMI. We examine the playback footage, looking for the point at which the lines on the chart become indistinguishable.

The reason we test sharpness with the camcorder in motion, rather than a static shot, is simple. When was the last time you shot a video with nothing moving? The inherent nature of video is movement. This may not be the method manufacturers would prefer, but we think it makes the most sense.

The final score for this section is based on the horizontal and vertical sharpness as recorded in auto mode in the 60i frame rate. We may also examine the sharpness in other frame rates, but it does not factor into the score.
http://www.imatest.com/docs/sharpness.html
Quote:

...
The use of Picture Height gives a slight advantage to compact digital cameras, which have an aspect ratio (width:height) of 4:3, compared to 3:2 for digital SLRs. Compact digital cameras have slightly more vertical pixels for a given number of total pixels.
Old 8th January 2010, 10:02   #49  |  Link
2Bdecided
Registered User
 
Join Date: Dec 2002
Location: Yorkshire, UK
Posts: 1,673
Quote:
Originally Posted by MfA View Post
...That's just when things aren't moving though. When things move at a multiple of 1 pel per field in the vertical direction even the best deinterlacers being used at the moment are not going to be able to make something sharp out of the result ...
"even the best deinterlacers" ever can't+won't overcome the fundamental high spatial frequency / temporal confusion inherent in an interlaced signal.

Lots of movement and/or lots of fine detail isn't a fundamental problem - the fundamental unrecoverable problem is the specific combination of fine detail and specific movement, as you illustrated.

Where there's movement + fine detail that is recoverable, it's often argued that it doesn't matter so much if the display can't recover it, because we're less sensitive to detail when the image is moving. That's not true for eye-tracked motion, however - but then most modern displays are already a disaster for eye-tracked motion anyway, so it matters less.

Cheers,
David.
Old 8th January 2010, 10:19   #50  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Quote:
Originally Posted by knutinh View Post
If a decimation/interpolation pass has a passband that is within +0dB/-3dB from DC to frequency Fc
How could it be any different? The -3 dB point is the definition of the cut-off frequency. We don't know the cut-off frequency, though; that's not what they are measuring (the lines will still be visible long after that). Higher-order flicker filters do exist, and even the tent and averaging filters differ by an order, so you really need two data points.
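The order difference between the two filters discussed can be made concrete: the tent response is exactly the square of the averaging response, so a single measured attenuation value cannot tell you which fall-off shape produced it.

```python
import numpy as np

# Magnitude responses (f in cycles/line, 0.5 = Nyquist):
def averaging_response(f):          # [1, 1] / 2
    return np.abs(np.cos(np.pi * f))

def tent_response(f):               # [1/4, 1/2, 1/4]
    return np.cos(np.pi * f) ** 2

# The tent filter is one order higher: its response is the averaging
# response squared, so the fall-off beyond any single measured point
# differs even when one attenuation value happens to match.
for f in (0.1, 0.2, 0.3, 0.4):
    assert abs(tent_response(f) - averaging_response(f) ** 2) < 1e-12
```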

Sorry, I got you and 2B mixed up ... thought you were arguing in favour of making interlacing a standard in a time period when progressive displays already ruled the roost (the US had a decent excuse since they started so much earlier).
Old 8th January 2010, 10:32   #51  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Quote:
Originally Posted by 2Bdecided View Post
It's not true for eye-tracked motion however - but then most modern displays are already a disaster for eye-tracked motion anyway, so it matters less.
Getting better all the time though. As long as we are going to put ever more computing power and algorithms into something I'd rather have it go into better motion compensated framerate conversion than better deinterlacing. With 240 Hz displays with 2 msec gray-gray I'd guess the display technology is pretty close to where no further improvements in smoothness can be perceived ... assuming good content to actually drive it at 240 Hz (where the computing power and algorithms come in).
Old 8th January 2010, 10:39   #52  |  Link
knutinh
Registered User
 
Join Date: Sep 2006
Posts: 41
Quote:
Originally Posted by MfA View Post
Getting better all the time though. As long as we are going to put ever more computing power and algorithms into something I'd rather have it go into better motion compensated framerate conversion than better deinterlacing. With 240 Hz displays with 2 msec gray-gray I'd guess the display technology is pretty close to where no further improvements in smoothness can be perceived ... assuming good content to actually drive it at 240 Hz (where the computing power and algorithms come in).
Wouldn't mocomp FRC also be an example of a technology that has a worse benefit/cost ratio if done blindly in the display, as opposed to integrated in the lossy codec? Using B-frames (or similar), in-between frames can be described with the added benefit of explicit encoder control. Given that 240 fps content is available to the encoder (generally not true), or that the encoder can afford more processing cost than the decoder (generally true for broadcast), even a few bits spent on controlling behaviour could improve things compared to spending no bits.

-k
Old 8th January 2010, 11:01   #53  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Bitwise yes, moneywise no ... the cost added to display devices for the extra gates necessary to decode high-fps streams is problematic (progressive vs. interlaced, when the EBU was deciding things, was already not a big deal in any part of the chain). When H.265 is being hammered out and almost all displays are 144 Hz+ to begin with, HD slow-mo cameras ubiquitous and transistors cheaper than ever, I would definitely want to see high-fps modes being a fundamental part.
Old 31st March 2010, 15:40   #54  |  Link
lovelove
Registered User
 
Join Date: Mar 2010
Posts: 106
Quote:
Originally Posted by 2Bdecided View Post
... interlacing does (at least partly) achieve the gains it's supposed to. That's why it's used. It's not a conspiracy, and it's not a mistake - it actually works (i.e. gives better quality / lower bitrates). Even with H.264 (if the encoder handles interlacing well enough). [...]

It does make logical sense that packaging the (adaptive) interlacing and (adaptive) deinterlacing into the encoder should make it work better than externally - but it's more complexity: more tuning in the encoder; more work in the decoder. Has anyone ever done it?
I found this:

Quote:
Originally Posted by http://mewiki.project357.com/index.php?title=X264_Settings&oldid=3918
x264's interlaced encoding is inherently less efficient than its progressive encoding, so it is probably better to deinterlace an interlaced source before encoding rather than use interlaced mode.
2Bdecided, is this the bottom line, or do you still maintain that it gives "better quality/lower bitrates" in H.264? If yes, what encoder other than x264 are you referring to?
Old 31st March 2010, 17:53   #55  |  Link
GodofaGap
Registered User
 
Join Date: Feb 2006
Posts: 823
I doubt that x264's interlaced mode is so bad that bobbing first and encoding afterwards will result in a better quality video than encoding interlaced first and bobbing afterwards at an equal bitrate.
Old 31st March 2010, 18:13   #56  |  Link
lovelove
Registered User
 
Join Date: Mar 2010
Posts: 106
You doubt, and that's okay. But that's guessing only.
Also note that it's not necessarily x264's interlaced mode being "so bad", but potentially its non-interlaced mode being "much more efficient".

Last edited by lovelove; 31st March 2010 at 18:21.
Old 31st March 2010, 20:16   #57  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
lovelove: Deinterlacing != bobbing. The quote from mewiki is misleading. If you deinterlace an interlaced source, you lose information (since you halve the motion). So saying it's more efficient is misleading, because the efficiency doesn't consider the lost information. What the quote wants to say is that if you take an interlaced video, encode it at some bitrate X, and measure the PSNR, you'll get a (far lower) PSNR than if you deinterlace, encode at the same bitrate X, and measure PSNR (against the deinterlaced version, of course). You're actually comparing apples and oranges when you do that, but it's an easy mistake to conclude that it's more efficient.
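A sketch of the apples-and-oranges point with synthetic data (everything here is hypothetical; the "encode" is just added noise standing in for compression error):

```python
import numpy as np

# PSNR is only meaningful relative to its reference.
def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (16, 16)).astype(float)

deinterlaced = original.copy()
deinterlaced[1::2] = deinterlaced[0::2]        # crude line-doubling: detail discarded

# Stand-in for an encode of the deinterlaced version (small noise).
encoded = np.clip(deinterlaced + rng.normal(0, 2.0, (16, 16)), 0, 255)

# Measured against the already-degraded reference the encode looks great;
# measured against the original, the discarded detail shows up:
assert psnr(deinterlaced, encoded) > psnr(original, encoded)
```

The high score in the first measurement says nothing about the information deinterlacing itself threw away, which is exactly why the two numbers can't be compared.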

Furthermore, x264 is a very efficient encoder, but when encoding interlaced content it lacks some major tools (either field-picture support, or adaptive MBAFF, or both) that seriously hamper it compared to the competition. That doesn't mean it will be worse, but it does mean there is a lot of room (from 0.5 to 2 dB, depending on the sequence) for improvement in x264 when it comes to interlacing.
Old 31st March 2010, 20:45   #58  |  Link
lovelove
Registered User
 
Join Date: Mar 2010
Posts: 106
  • First of all: PSNR != quality (and that's a direct quote from the x264 devs).
  • Secondly, note that neither my posting nor the quotes in my posting mention bobbing.
  • Thirdly, bob(bing) IS a deinterlacing method according to www.100fps.com
  • Fourthly:
    Quote:
    measure PSNR (against the deinterlaced version, of course).
    that is your supposition only.

    Quote:
    You're actually comparing apples and oranges when you do that
    exactly, and that would be a good reason for the author not to have used this comparison and for your supposition to be wrong as a result.

----

Quote:
x264 ... when encoding interlaced content, it lacks some major tools ... that seriously hamper it compared to the competition.
So which other encoders can you recommend for interlaced content, then?
Old 31st March 2010, 21:36   #59  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,394
Quote:
Originally Posted by Manao View Post
lovelove : Deinterlacing != bobbing.
bobbing == Deinterlace(parity=alternating)
__________________
- We´re at the beginning of the end of mankind´s childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Old 1st April 2010, 07:07   #60  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
I think we can agree that on this forum, when somebody says he "deinterlaced a video", nobody will assume he bob-deinterlaced it, just that he took a 50i/60i video and made it 25p/30p. Bobbing has become more than a deinterlacing method; it has become the process of taking a 50i/60i video and making it 50p/60p (see the number of AviSynth filters with 'bob' in their name, and look at what they do).
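A minimal sketch of bobbing in that sense: every field becomes a full frame (50i to 50p), with the sampled parity alternating field by field. Real bobbers interpolate the missing lines (and smarter ones motion-adapt); this one just duplicates lines to keep the idea visible.

```python
import numpy as np

def bob(frame, parity):
    """Keep the lines of the given parity (0 = even, 1 = odd) and fill the
    missing lines by duplication. Frame height must be even."""
    out = frame.copy()
    if parity == 0:
        out[1::2] = frame[0::2]      # missing odd lines <- line above
    else:
        out[0::2] = frame[1::2]      # missing even lines <- line below
    return out

# One full-rate frame per field, parity alternating:
# frames = [bob(f, i % 2) for i, f in enumerate(fields)]
```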

If I talk of PSNR, it's because as of now we still don't have a way to measure quality. If you prefer, I could have said the quantizer would be lower on the deinterlaced encoding.

I still stand by my interpretation of mewiki's quote (it's easy for someone writing documentation to get carried away and make a mistake).

Finally, all the broadcast encoders support one or both of the tools x264 lacks. I'm pretty sure MainConcept's does too.