FAQ:

A) Which output format (RGB vs YCbCr, 0-255 vs 16-235) should I activate in my GPU control panel?

Windows internally always "thinks" in RGB 0-255. Windows considers black to be 0 and white to be 255. That applies to the desktop, applications, games and videos. Windows itself never really thinks in terms of YCbCr or 16-235. Windows does know that videos might be YCbCr or 16-235, but still, all rendering is always done at RGB 0-255. (The exception proves the rule.)

So if you switch your GPU control panel to RGB 0-255, the GPU receives RGB 0-255 from Windows, and sends RGB 0-255 to the TV. Consequently, the GPU doesn't have to do any colorspace (RGB -> YCbCr) or range (0-255 -> 16-235) conversions. This is the best setup, because the GPU won't damage our precious pixels.

If you switch your GPU control panel to RGB 16-235, the GPU receives RGB 0-255 from Windows, but you ask the GPU to send 16-235 to the TV. Consequently, the GPU has to rescale the pixel data behind Windows' back in such a way that a black pixel is no longer 0, but now 16, and a white pixel is no longer 255, but now 235. So the pixel data is condensed from 0-255 to 16-235, and all the values between 0-15 and 236-255 are basically unused. Some GPU drivers might do this in high bitdepth with dithering, which may produce acceptable results. But some GPU drivers definitely do this in 8 bit without any dithering, which will introduce lots of nasty banding artifacts into the image. As a result, I cannot recommend this configuration.
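
To make the banding problem more concrete, here's a minimal sketch (plain Python, purely illustrative, not actual driver code) of what an 8 bit range squeeze without dithering does to the pixel values:

Code:

def full_to_limited_8bit(v):
    # 8 bit integer squeeze without dithering: 256 input codes get crammed
    # into only 220 output codes, so neighbouring input values collapse onto
    # the same output value - that collapse is what you see as banding
    return 16 + round(v * 219 / 255)

distinct = {full_to_limited_8bit(v) for v in range(256)}
print(len(distinct))  # 220 -> 36 formerly distinct shades got merged away

A high bitdepth path with dithering avoids this, because the rounding error is spread out as fine noise instead of collapsing into visible steps.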

If you switch your GPU control panel to YCbCr, the GPU receives RGB from Windows, but you ask the GPU to send YCbCr to the TV. Consequently, the GPU has to convert the RGB pixels behind Windows' back to YCbCr. Some GPU drivers might do this in high bitdepth with dithering, which may produce acceptable results. But some GPU drivers definitely do this in 8 bit without any dithering, which will introduce lots of nasty banding artifacts into the image. Furthermore, there are several different RGB <-> YCbCr matrices available, e.g. one each for BT.601, BT.709 and BT.2020. Now which of these will the GPU use for the conversion? And which will the TV use to convert back to RGB? If the GPU and the TV use different matrices, color errors will be introduced. As a result, I cannot recommend this configuration.
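
To give you a feeling for how big the error can be, here's another small illustrative sketch (not madVR or driver code) which encodes a pure red pixel with the BT.709 matrix and decodes it with the BT.601 matrix:

Code:

KR_709, KB_709 = 0.2126, 0.0722   # BT.709 luma coefficients
KR_601, KB_601 = 0.2990, 0.1140   # BT.601 luma coefficients

def rgb_to_ycbcr(r, g, b, kr, kb):
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2.0 * (1.0 - kb))
    cr = (r - y) / (2.0 * (1.0 - kr))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

# pure red, encoded as BT.709 but decoded as BT.601:
y, cb, cr = rgb_to_ycbcr(1.0, 0.0, 0.0, KR_709, KB_709)
print(ycbcr_to_rgb(y, cb, cr, KR_601, KB_601))
# -> roughly (0.91, -0.11, 0.01) instead of (1.0, 0.0, 0.0)

Red loses almost 10% of its intensity and green even goes slightly negative (i.e. gets clipped), so a matrix mismatch is clearly visible and not just a theoretical rounding issue.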

Summed up: In order to get the best possible image quality, I strongly recommend setting your GPU control panel to RGB Full (0-255).

There's one problem with this approach: If your TV doesn't have an option to switch between 0-255 and 16-235, it may always expect black to be 16 (TVs usually default to 16-235, while computer monitors usually default to 0-255). But we've just told the GPU to output black at 0! That can't work, can it? Surprisingly, it can - but only for video content. You can tell madVR to render to 16-235 instead of 0-255. This way madVR makes sure that black pixels get a pixel value of 16, but the GPU doesn't know about it, so the GPU can't screw up image quality for us. So if your TV absolutely requires black to arrive as 16, still set your GPU control panel to RGB 0-255 and set madVR to 16-235. If your TV supports 0-255, then set everything (GPU control panel, TV and madVR) to 0-255.
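
Conceptually, rendering to 16-235 inside the video renderer looks roughly like this (a simplified sketch, not madVR's actual code; the simple random dither here just stands in for madVR's real dithering algorithms):

Code:

import random

def render_to_limited_range(value_0_to_1):
    # do the squeeze to 16-235 in high precision (floating point) ...
    limited = 16.0 + value_0_to_1 * 219.0
    # ... and dither when quantizing down to 8 bit, so no banding is introduced
    dithered = limited + (random.random() - 0.5)
    return min(235, max(16, round(dithered)))

The important point is that both the squeeze and the dithering happen inside the video renderer, where they can be done properly, while the GPU just passes the final 8 bit values through untouched.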

Unfortunately, if you want applications and games to have correct black & white levels, too, all the above advice might not work out for you. If your TV doesn't support RGB 0-255, then somebody somewhere has to convert applications and games from 0-255 to 16-235, so that your TV displays black & white correctly. madVR can only do this for videos; it can't magically convert applications and games for you. So in this situation you may have no other choice than to set your GPU control panel to RGB 16-235 or to YCbCr. But please be aware that you might get lower image quality this way, because the GPU will have to convert the pixels behind the back of both Windows and madVR, and GPU drivers often do this in inferior quality.

B) FreeSync / G-SYNC

Games create a virtual world in which the player moves around, and for the best playing experience we want to achieve a very high frame rate and the lowest possible latency, without any tearing. As a result, with FreeSync/G-SYNC the game simply renders as fast as it can and then throws each rendered frame to the display immediately. This results in very smooth motion, low latency and very good playability.

Video rendering has completely different requirements. Video was recorded at a very specific frame interval, e.g. 23.976 frames per second. When doing video playback, unlike games, we don't actually render a virtual 3D world. Instead we just send the recorded video frames to the display. Because we cannot actually re-render the video frames from a different 3D world view position, it doesn't make sense to send frames to the display as fast as we can render them. The movie would play like fast forward if we did that! For perfect motion smoothness, we want the display to show each video frame for *EXACTLY* the right amount of time, which is usually 1000 / 24 * 1.001 ≈ 41.7083 milliseconds (i.e. 1001/24000 of a second).
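
For reference, here's the same calculation for the most common video frame rates (a small Python sketch):

Code:

from fractions import Fraction

# exact frame durations; "23.976 fps" is really 24000/1001 fps,
# "29.97 fps" is 30000/1001 fps, and so on
for num, den in [(24000, 1001), (24, 1), (25, 1), (30000, 1001), (60000, 1001)]:
    fps = Fraction(num, den)
    duration_ms = 1000 / fps
    print(f"{float(fps):7.3f} fps -> {float(duration_ms):.6f} ms per frame")

# 23.976 fps -> 41.708333 ms, 24 fps -> 41.666667 ms, 25 fps -> 40.000000 ms, ...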

FreeSync/G-SYNC would help with video rendering only if they had an API which allowed madVR to specify which video frame should be displayed for how long. But this is not what FreeSync/G-SYNC were made for, so such an API probably doesn't exist (I'm not 100% sure about that, though). Video renderers do not want a rendered frame to be displayed immediately. Instead they want the frames to be displayed at a specific point in time in the future, which is the opposite of what FreeSync/G-SYNC were made for.
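
In pseudocode terms, the difference looks roughly like this (a conceptual Python sketch; render() and present() are hypothetical placeholders, not a real FreeSync/G-SYNC, Direct3D or madVR API):

Code:

import time

FRAME_DURATION = 1001 / 24000              # 23.976 fps -> ~41.708 ms per frame

def game_loop(render, present):
    while True:
        frame = render()                   # render the 3D world as fast as possible
        present(frame)                     # FreeSync/G-SYNC: show it immediately

def video_loop(frames, present):
    start = time.perf_counter()
    for i, frame in enumerate(frames):
        target = start + i * FRAME_DURATION    # each frame has a fixed due time
        delay = target - time.perf_counter()
        if delay > 0:
            time.sleep(delay)              # hold the frame back until it is due
        present(frame)                     # what's missing is a way to tell the
                                           # display: show this at exactly 'target'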

If you believe that using FreeSync/G-SYNC would be beneficial for video playback, you might be able to convince me to implement support for that by fulfilling the following 2 requirements:

1) Show me an API which allows me to define at which time in the future a specific video frame gets displayed, and for how long exactly.
2) Donate a FreeSync/G-SYNC monitor to me, so that I can actually test a possible implementation. Developing blindly without test hardware doesn't make sense.
