17th January 2013, 23:36   #16997
DragonQ
Registered User
Join Date: Mar 2007
Posts: 934
Quote:
Originally Posted by madshi View Post
(not sure about what ATI and NVidia do, though, maybe they do some sort of motion compensation, I don't know)
They all do better than this.

Quote:
Originally Posted by madshi View Post
The deinterlacer hardware chips in TVs, receivers, Blu-ray players etc. all use motion-adaptive deinterlacing with some sort of diagonal filtering. The key feature of motion-adaptive deinterlacing is that it detects static vs. moving image areas. For static image areas you can safely weave the fields together without getting artifacts, so static image areas get full progressive resolution. For moving parts, most implementations use a bob scaling algorithm with diagonal filtering, which produces better results than a bob algorithm based on simple bilinear scaling.
That sounds about right for motion-adaptive deinterlacing, but modern GPUs use vector-adaptive deinterlacing, which is far better at reproducing the "original" full-resolution progressive video. Both the GTS 250 and GT 430 do some form of vector-adaptive deinterlacing. Rough sketches of both approaches are below.
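To make the difference concrete, here's a rough Python/NumPy sketch of the motion-adaptive approach madshi describes: weave where the motion detector says the picture is static, bob where it isn't. Everything here (field layout, the threshold, the hard weave/bob switch) is illustrative, not any vendor's actual implementation; real chips filter the motion map over several fields and blend between the two paths instead of hard-switching.

Code:

import numpy as np

def motion_adaptive_deinterlace(prev_field, curr_field, next_field,
                                curr_is_top=True, threshold=10):
    """Build one progressive frame around curr_field (8-bit grayscale).

    prev_field and next_field have the opposite parity, so they sit on
    exactly the lines curr_field is missing.
    """
    fh, w = curr_field.shape
    frame = np.empty((2 * fh, w), dtype=np.uint8)
    have = slice(0, 2 * fh, 2) if curr_is_top else slice(1, 2 * fh, 2)
    miss = slice(1, 2 * fh, 2) if curr_is_top else slice(0, 2 * fh, 2)

    # The lines we actually have are copied through untouched.
    frame[have] = curr_field

    # Motion detector: the missing lines exist in the previous and next
    # fields; where those two agree, the area is static.
    p = prev_field.astype(np.int16)
    n = next_field.astype(np.int16)
    static = np.abs(p - n) < threshold

    # Static pixels: weave (average the two temporal neighbours),
    # keeping full vertical resolution.
    weave = (p + n) // 2

    # Moving pixels: bob (average of the current field's neighbouring
    # lines, clamped at the frame edge). Plain bilinear here; hardware
    # adds diagonal filtering on top of this.
    c = curr_field.astype(np.int16)
    if curr_is_top:
        neighbour = np.vstack([c[1:], c[-1:]])
    else:
        neighbour = np.vstack([c[:1], c[:-1]])
    bob = (c + neighbour) // 2

    frame[miss] = np.where(static, weave, bob).astype(np.uint8)
    return frame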

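And the core idea behind vector-adaptive (motion-compensated) deinterlacing, in the same spirit: instead of falling back to interpolation for moving areas, estimate where each block came from and fetch the missing lines from the previous field at the motion-compensated position. Again purely a hypothetical sketch: the block size, search range and SAD matching are made up, and I'm ignoring the half-line parity offset between fields that real implementations have to handle.

Code:

import numpy as np

def best_motion_vector(prev_field, curr_field, by, bx, bs=8, search=4):
    """Exhaustive block-matching search: find the (dy, dx) shift that
    best maps the block at (by, bx) in curr_field onto prev_field."""
    block = curr_field[by:by + bs, bx:bx + bs].astype(np.int16)
    h, w = prev_field.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > h or x + bs > w:
                continue  # candidate block falls outside the field
            cand = prev_field[y:y + bs, x:x + bs].astype(np.int16)
            sad = int(np.abs(block - cand).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def mc_fill_block(prev_field, curr_field, by, bx, bs=8):
    """Fill one block of missing lines from the motion-compensated
    position in the previous field.

    prev_field holds the lines curr_field is missing; shifting it by
    the estimated motion aligns real detail onto those lines, which is
    how moving areas keep full vertical resolution instead of being
    interpolated."""
    dy, dx = best_motion_vector(prev_field, curr_field, by, bx, bs)
    return prev_field[by + dy:by + dy + bs, bx + dx:bx + dx + bs]

When the best match is still poor, a real implementation would presumably fall back to the motion-adaptive path above rather than trust a bad vector.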
These examples are from the Cheese Slice test: [comparison screenshots shown in the original post]

__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7