Old 18th January 2013, 11:33   #17009
DragonQ
Quote:
Originally Posted by leeperry
It's only recently that I've gained interest in interlaced material, but CUVID does a hell of a job deinterlacing 29.97fps video material to 59.94fps. Quite frankly, it looks fantastic to me. James Cameron keeps saying that frame rate is more important than resolution (trying to push for 1080/48p instead of 4K/24p), and it sure doesn't look 288p to me.

Now, how much resolution do we lose when watching an NTSC (video mode) DVD deinterlaced to 59.94fps? GPUs use fairly advanced algorithms, IIRC.
It's less "resolution loss" and more "deinterlacing artefacts". But they're usually difficult to spot at normal viewing distances with most material, assuming your deinterlacer is good.
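To make that concrete, the two most basic strategies look like this in numpy (purely illustrative, nothing like what a real GPU deinterlacer does): weave is perfect on static or truly progressive content but combs on motion, while naive bob trades the combing for half the vertical detail.

```python
import numpy as np

def weave(top_field, bottom_field):
    """Interleave two fields into one full frame. Perfect for
    progressive content; produces combing ("mouse teeth") wherever
    there is motion between the two fields."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field      # even lines from the top field
    frame[1::2] = bottom_field   # odd lines from the bottom field
    return frame

def bob(field):
    """Line-double one field to full height: no combing, but only
    half the vertical detail -- the "288p" look on PAL material.
    (Real bob deinterlacers interpolate rather than repeat lines.)"""
    return np.repeat(field, 2, axis=0)
```

Everything smarter (motion adaptive, vector adaptive, etc.) essentially boils down to choosing between those two per pixel, which is why the failures show up as localised artefacts rather than a blanket resolution loss.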

Quote:
Originally Posted by Dodgexander
Okay, that's clear, but how is that related to my problem?

If a channel switches from 1080/25p to 1080/25i when using DXVA deinterlacing, shouldn't it detect that it needs to go from "no deinterlacing" (or "weave", as you describe) for the progressive source to "vector" for the interlaced source?

Because in my case it doesn't. It always thinks the video is progressive: even when I change the deinterlacing in the AMD control panel to vector or motion, there is no deinterlacing, or just "weave" deinterlacing, for the 1080/25i video.

So since this only happens with madVR, is this a communication problem between the renderer and the AMD driver?

Why doesn't setting the deinterlacing to vector manually work when this occurs?

Why does forcing deinterlacing on in madVR work?

If it can detect the change, shouldn't it be sending the information to change the deinterlacing type to the driver?
I believe this is a bug in madVR that madshi mentioned a few pages back: it only checks whether the video is interlaced at the start of playback. You can get around this by setting LAV Video Decoder to "aggressive" deinterlacing.

With this enabled, my MBAFF videos recorded from DVB-T2 correctly switch between progressive and interlaced.
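To illustrate the bug (in made-up pseudo-Python, nothing to do with madVR's or LAV's actual code): with MBAFF, the H.264 stream can flag every frame independently as progressive or interlaced, so deciding once at the start is not enough.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    data: object
    is_interlaced: bool   # MBAFF: this can change from frame to frame

def deinterlace(f: Frame) -> Frame:
    return Frame(f.data, False)   # stand-in for a real deinterlacer

def render_static(frames: list[Frame]) -> list[Frame]:
    """The buggy behaviour: sample the flag once at the start."""
    mode = frames[0].is_interlaced
    return [deinterlace(f) if mode else f for f in frames]

def render_per_frame(frames: list[Frame]) -> list[Frame]:
    """What per-frame handling gets you: honour each frame's own
    flag (or, like "aggressive" mode, just deinterlace everything)."""
    return [deinterlace(f) if f.is_interlaced else f for f in frames]
```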

Quote:
Originally Posted by madshi
I'm not sure about that. First of all: see my reply after the next quote. Furthermore: your screenshot of the Cheese Slice test pattern does not show what proper motion adaptive deinterlacing with diagonal filtering looks like. Proper motion adaptive deinterlacing looks *much* better than that! You're putting way too much stock in the Cheese Slice tests. Furthermore, AMD's and NVidia's implementations of motion adaptive deinterlacing seem to be far from optimal, as far as I can see...
I've mentioned them once and haven't used them in years, so I disagree with that comment.

I know what crappy deinterlacing ("288p") looks like - for example, the UK Coupling DVDs all exhibit it (576p/25 made from 576i/25, but they've clearly just taken every other field and line-doubled it). I do not see this with "progressive video in an interlaced stream" material on any of my PCs or TVs.

Quote:
Originally Posted by madshi
Those Cheese Slice test patterns were the reason I said that NVidia and ATI *might* do some sort of motion compensation. But I've just checked the Cheese Slice test patterns again, and I have to *totally* change my opinion. Actually, it seems everybody has been misinterpreting the Cheese Slice tests! Because I've just found out that they're actually encoded as *FILM* with a 2:2 cadence. Just try it out: play those Cheese Slice tests with madVR film mode. madVR will detect a 2:2 cadence for both the 50i and 60i Cheese Slice tests. The moment motion starts, madVR will take a moment to switch to the correct cadence, but once it has switched, playback is perfect. After that I looked at the separate fields in the video stream, and really, every 2 fields belong together and form a progressive frame. So the Cheese Slice tests must be totally disregarded as *video* mode deinterlacing tests. Motion compensation wouldn't work at all for this test pattern, because motion compensation expects movement between every field, which this test pattern does not have.

Edit: Btw, you can now use the 1080i50 Cheese Slice test patterns to check whether your GPU's IVTC algorithm handles PAL film content well... Compare it to madVR film mode. Use frame stepping and compare pixel by pixel. Or use the 1080i60 pattern to check how well your GPU's IVTC handles 2:2 cadences in 60i content. FWIW, madVR takes too long to switch to the correct cadence here; that's something I'll work on in the future.

Edit2: Disable the option "only look at pixels in the frame center" in the madVR settings, and madVR film mode will play all 1080i Cheese Slice test patterns with quality identical to the original progressive source - much better than anything AMD and NVidia seem to be able to do.
This is really interesting, I'll have a look when I get home later. It doesn't change the fact that my GPUs seem to handle 2:2 cadences fine, but maybe there is a better way of seeing the effects of different deinterlacing algorithms on native interlaced content.
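For my own reference, the gist of 2:2 cadence (film mode) detection as I understand it - a crude numpy sketch, certainly not madVR's actual algorithm: of the two possible field pairings, the one matching the real film frames weaves together without combing, so you pick the phase with the lower combing score.

```python
import numpy as np

def combing_metric(top, bottom):
    """How badly two fields "comb" when woven into one frame.
    Low = the fields likely belong to the same film frame."""
    h, w = top.shape
    frame = np.empty((2 * h, w), dtype=np.float64)
    frame[0::2], frame[1::2] = top, bottom
    # energy of adjacent-line differences rises sharply with combing
    return np.abs(np.diff(frame, axis=0)).mean()

def detect_22_phase(fields):
    """fields: list of 2D arrays in display order. Returns 0 if
    pairs (0,1),(2,3),... weave cleanly, 1 if (1,2),(3,4),... do.
    (Glosses over field parity bookkeeping for the off phase.)"""
    scores = []
    for phase in (0, 1):
        pairs = zip(fields[phase::2], fields[phase + 1::2])
        scores.append(np.mean([combing_metric(a, b) for a, b in pairs]))
    return int(np.argmin(scores))
```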

Presumably the best way to do this would be to create a 1080p/50 video, then throw away half the lines of each frame (alternating between the two field parities) to get a 1080i/25 video, and then compare the two with different algorithms.
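Something like this, say in numpy (a rough sketch, assuming frames come in as (1080, 1920) luma arrays, top field first; the function name is just made up):

```python
import numpy as np

def interlace_50p_to_25i(frames_50p):
    """Turn a 1080p/50 clip into 1080i/25 (top field first).
    Each output frame weaves the top field of frame 2n with the
    bottom field of frame 2n+1, so the two fields sample different
    instants in time, exactly as true 1080i/25 does."""
    frames = list(frames_50p)
    out = []
    for a, b in zip(frames[0::2], frames[1::2]):
        f = np.empty_like(a)
        f[0::2] = a[0::2]   # top field from the earlier frame
        f[1::2] = b[1::2]   # bottom field from the later frame
        out.append(f)
    return out
```

Deinterlacing the result and comparing it frame by frame against the original 1080p/50 (PSNR, or just A/B screenshots) would show exactly what each algorithm throws away.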


EDIT: Hmm, for some reason forcing film or video mode doesn't do anything on my laptop (HD4000).
EDIT 2: Oh OK, forcing film mode only works for software decoding.
__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7

Last edited by DragonQ; 18th January 2013 at 14:57.