Quote:
Originally Posted by madshi
However, I agree with DragonQ that "proper" video deinterlacing is much better than simple Bob. I don't know of any consumer hardware solution which uses motion compensation for video mode deinterlacing (not sure about what ATI and NVidia do, though, maybe they do some sort of motion compensation, I don't know). The deinterlacer hardware chips in TVs, Receivers, Blu-Ray players etc all use motion-adaptive deinterlacing with some sort of diagonal filtering. The key feature of motion-adaptive deinterlacing is that it detects static vs. moving image areas. For static image areas you can safely weave the fields together without getting artifacts and as a result for static image areas you get full progressive resolution. For moving parts most implementations use a Bob scaling algorithm with diagonal filtering, which produces results which are better than a Bob algorithm based on simple Bilinear scaling.
I would guess that if you used something like Jinc on every separate field (like 6233638 seems to suggest), you'd get better image quality for *moving* parts than what the typical CE deinterlacer chip does. However, if you don't weave static image areas then you're going to lose a lot of resolution during "quiet" scenes.
Interesting. I always wondered why some interlaced material looks overly soft after it has been deinterlaced. I originally thought that deinterlacing at worst halves the frame rate but otherwise puts the fields back together as close to the original as possible. I didn't know that there's also filtering applied, which explains some of the results I'm getting.
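To get my head around what madshi describes, I tried to write the motion-adaptive idea down as a rough per-pixel sketch in Python/NumPy. Everything in it is guesswork on my part: the simple frame-difference motion detector, the threshold value, and the plain line averaging standing in for the diagonal filtering that real chips use.

Code:

import numpy as np

def motion_adaptive_deinterlace(cur, other, other_prev, cur_is_top, thresh=8.0):
    """Toy motion-adaptive deinterlacer (illustration only, not any real chip).

    cur        -- field being displayed, shape (H/2, W)
    other      -- opposite-parity field of the same frame (the weave source)
    other_prev -- previous field with the same parity as `other`,
                  used only for per-pixel motion detection
    cur_is_top -- True if `cur` holds the even (top) scanlines
    thresh     -- motion threshold in 8-bit code values (made up, tune per source)
    """
    h2, w = cur.shape
    frame = np.empty((h2 * 2, w), dtype=np.float32)

    # Scanlines we actually have: copy the current field through unchanged.
    own = 0 if cur_is_top else 1
    frame[own::2] = cur

    # Per-pixel motion measure: how much the weave candidate differs from
    # the same scanlines one frame earlier. Small difference = static.
    motion = np.abs(other.astype(np.float32) - other_prev.astype(np.float32))
    static = motion < thresh

    # Fallback for moving pixels: average the lines above/below from the
    # current field (a crude stand-in for the diagonal filtering madshi
    # mentions; border handling at the top/bottom edge is ignored here).
    bob = (cur.astype(np.float32) + np.roll(cur, -1, axis=0).astype(np.float32)) / 2

    # Static areas get the real scanline (weave, full vertical resolution);
    # moving areas get the interpolated one (bob, half resolution).
    frame[1 - own::2] = np.where(static, other.astype(np.float32), bob)
    return frame

So a static pixel keeps full vertical resolution (weave), while a moving pixel falls back to half resolution plus interpolation, which would explain why motion looks so much softer than quiet scenes.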
That diagonal filtering also seems to happen a lot on NVIDIA GPUs. With many of my DVDs (I have about 500-600 NTSC DVDs and roughly 100 PAL ones), I usually get a WAY sharper image, with more fine detail, from the interlaced picture than from active deinterlacing. Also, for some reason, some DVDs don't need to be deinterlaced at all and give a perfect, pretty sharp picture.
After years of madVR use, I also noticed that sharp algorithms like Lanczos/Bicubic on luma amplify some of the interlacing artifacts a lot, to the point where they get really distracting, while SoftCubic makes some of these artifacts go away entirely. That is probably one of the reasons you included it in the first place: it can hide artifacts.

I really hate its desaturation, though, because the picture not only looks very soft afterwards but also a bit dull. With a movie that has a lot of bright lights against dark areas (traffic lights in a big city, or the wide night-time city shots that US-produced TV series are full of), it reduces the dynamics a lot and the picture looks almost "dead" afterwards. I think that once we get to OLEDs, SoftCubic's desaturation is going to be even more pronounced, since OLEDs have much higher contrast. SoftCubic also looked horrible on my old CRT, because a CRT already produces quite a soft picture (compared to an LCD) due to the way it draws the image.
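Coming back to why the sharp kernels make the combing so much worse: the mechanism seems plausible to me because Lanczos and sharp Bicubic variants have negative lobes, so they ring (over/undershoot) at hard edges, and leftover combing is basically a stack of hard horizontal edges. Here is a quick 1D sketch of my own; I'm assuming SoftCubic behaves roughly like a cubic B-spline, which is a guess on my part, not madVR's actual code.

Code:

import numpy as np

def lanczos3(x):
    """Lanczos a=3: windowed sinc with negative lobes -> ringing."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def bspline3(x):
    """Cubic B-spline: all weights >= 0, so no ringing, but soft.
    Treating this as a stand-in for SoftCubic is my assumption."""
    x = np.abs(np.asarray(x, dtype=np.float64))
    return np.where(x < 1, (3*x**3 - 6*x**2 + 4) / 6,
           np.where(x < 2, (2 - x)**3 / 6, 0.0))

def upscale2x(src, kernel, support):
    """Naive 1D polyphase resampler, just for this demonstration."""
    pos = np.arange(len(src) * 2) / 2.0
    out = np.empty_like(pos)
    for i, p in enumerate(pos):
        taps = np.arange(int(np.floor(p)) - support + 1,
                         int(np.floor(p)) + support + 1)
        taps = taps[(taps >= 0) & (taps < len(src))]
        w = kernel(p - taps)
        out[i] = np.dot(w, src[taps]) / w.sum()
    return out

# A hard edge, i.e. what one "tooth" of residual combing looks like in 1D.
edge = np.array([0.2] * 8 + [0.8] * 8)
print("Lanczos3:", upscale2x(edge, lanczos3, 3).min(), upscale2x(edge, lanczos3, 3).max())
print("B-spline:", upscale2x(edge, bspline3, 2).min(), upscale2x(edge, bspline3, 2).max())
# Lanczos3 under/overshoots past 0.2..0.8 (halos around every comb line);
# the B-spline stays inside the range and smears the edge instead.

On a real comb those halos stack up line after line, which would explain why the sharp kernels make the artifacts so much more visible.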
If there were some method that acts like SoftCubic but without the desaturation effect, SoftCubic would work a lot better. Is there any way to counter that desaturation in the SoftCubic algorithm itself (it's probably a result of the interpolation it does), so that edges which get overly softened and desaturated at least get a boost in saturation again? And would it be possible to calculate the amount of desaturation from the SoftCubic percentage (50%-100%) we choose in the madVR scaling options?
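Just to make the idea concrete, here is the kind of post-step I have in mind, as a purely hypothetical sketch. Nothing in it is madVR functionality; the linear mapping from the softness percentage to a chroma gain is pure guesswork, and the right curve would have to be measured against the actual chroma loss per setting.

Code:

import numpy as np

def compensate_saturation(ycbcr, softness, max_boost=0.15):
    """Hypothetical chroma boost tied to the SoftCubic softness setting.

    ycbcr     -- float image, shape (H, W, 3); Y in [0, 1], Cb/Cr centered on 0
    softness  -- SoftCubic softness as chosen in madVR's options, 0.5 .. 1.0
    max_boost -- extra chroma gain at 100% softness (an invented number)
    """
    # Map softness 50%..100% linearly onto a gain of 1.0 .. 1.0 + max_boost.
    gain = 1.0 + max_boost * (softness - 0.5) / 0.5
    out = ycbcr.copy()
    out[..., 1:] *= gain  # push Cb/Cr away from neutral grey
    return np.clip(out, [0.0, -0.5, -0.5], [1.0, 0.5, 0.5])

At softness 0.7 that would be roughly a 6% chroma boost under this made-up mapping. A flat global boost probably can't fully undo the effect, though: the softening averages saturated detail with its surroundings, so the chroma loss varies per pixel, which is why I suspect a real fix would have to live inside the scaler itself.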