17th January 2013, 22:54   #16993
madshi
Registered Developer

I agree with 6233638 that the automatic film vs video detection for PAL content doesn't work that well in many of today's devices. At least that's my own experience.

However, I agree with DragonQ that "proper" video deinterlacing is much better than simple Bob. I don't know of any consumer hardware solution which uses motion compensation for video mode deinterlacing (I'm not sure what ATI and NVidia do, though; maybe they do some sort of motion compensation). The deinterlacer chips in TVs, receivers, Blu-ray players etc. all use motion-adaptive deinterlacing with some sort of diagonal filtering. The key feature of motion-adaptive deinterlacing is that it detects static vs. moving image areas. Static image areas can safely be woven together without artifacts, so they keep full progressive resolution. For moving areas most implementations fall back to a Bob algorithm with diagonal filtering, which produces better results than a Bob based on simple bilinear scaling.
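
To make the weave-vs-bob decision concrete, here's a rough sketch in C of how a per-pixel motion-adaptive deinterlacer works. To be clear, this is not madVR's code or any specific chip's logic: the function name, the field layout (8-bit luma, top-field case only) and the motion threshold are all illustrative assumptions, and the "bob" path here is plain bilinear averaging without the diagonal filtering real hardware adds.

Code:
#include <stdlib.h>
#include <string.h>

/* Builds one progressive frame from the top field of the current frame.
   "top" is the field being shown, "bot" is the bottom field of the same
   frame, and "bot_prev" is the bottom field one frame earlier; the last
   one is used only for the per-pixel motion test. All fields are w x h/2
   8-bit luma planes, "out" is w x h. Names and threshold are illustrative. */
void deinterlace_top_field(const unsigned char *top,
                           const unsigned char *bot,
                           const unsigned char *bot_prev,
                           unsigned char *out, int w, int h)
{
    const int thresh = 10;                 /* illustrative motion threshold */

    for (int y = 0; y < h; y += 2)         /* even lines: copy the top field */
        memcpy(out + y * w, top + (y / 2) * w, w);

    for (int y = 1; y < h; y += 2) {       /* odd lines: weave or bob */
        int f = y / 2;                     /* bottom-field line index */
        for (int x = 0; x < w; x++) {
            int now  = bot[f * w + x];
            int then = bot_prev[f * w + x];
            if (abs(now - then) < thresh) {
                /* static pixel: weave the real sample -> full resolution */
                out[y * w + x] = (unsigned char)now;
            } else {
                /* moving pixel: bob by averaging the top-field lines above
                   and below (plain bilinear; real chips add diagonal
                   filtering here to avoid jaggies on slanted edges) */
                int above = top[f * w + x];
                int fb    = (f + 1 < h / 2) ? f + 1 : f;  /* clamp at bottom */
                int below = top[fb * w + x];
                out[y * w + x] = (unsigned char)((above + below + 1) / 2);
            }
        }
    }
}

The whole quality difference comes down to that one abs(now - then) < thresh test: wherever it passes you get a full-resolution weave, wherever it fails you get interpolated half resolution.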

I would guess that if you used something like Jinc on every separate field (as 6233638 seems to suggest), you'd get better image quality for *moving* parts than what the typical CE deinterlacer chip does. However, if you don't weave static image areas, you're going to lose a lot of vertical resolution during "quiet" scenes.

Quote:
Originally Posted by ajp_anton
Out of curiosity, why does this even exist?
Because some studios/encoding houses don't know what they're doing.