5th June 2011, 20:53   #7919
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by janos666
Why can I select 10-bit as the native display bit depth if it doesn't really output 10-bit?
Is it there for later versions with 10-bit output capability, or did you think it changes anything with 8-bit output if the display is theoretically capable of more?
It was meant for future versions; I should probably remove it for now and only allow 6bit, 7bit and "8bit or higher". The net effect of switching this option to 9bit right now is that the dithering noise is cut in half, which re-introduces a tiny amount of banding, because the output is still limited to 8bit. So setting it to more than 8bit is currently not a good idea.
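
To illustrate what I mean, here's a rough sketch (Python, not madVR's actual code, function names made up) of how the dither noise amplitude is tied to the assumed display bit depth. If the assumed depth is 9bit while the real output is still quantized to 8bit, the noise ends up only about half as strong as it should be, and some banding comes back:

Code:
import numpy as np

def dither_to_output(image, assumed_display_bits, actual_output_bits=8,
                     rng=None):
    # 'image' is a float array in [0, 1].  This is NOT madVR's code, just
    # plain rectangular random dithering to show the scaling issue.
    rng = rng or np.random.default_rng(0)

    # The ditherer sizes its noise for the bit depth it *thinks* the
    # display has: +/- half a quantization step of that depth.
    assumed_levels = (1 << assumed_display_bits) - 1
    noise = (rng.random(image.shape) - 0.5) / assumed_levels

    # But the real output is still rounded to 8bit, so with a 9bit
    # assumption the noise is only about half as strong as needed.
    actual_levels = (1 << actual_output_bits) - 1
    return np.round((image + noise) * actual_levels) / actual_levels

# Example: a smooth dark gradient dithered with a correct vs. a
# too-high assumed display bit depth.
gradient = np.linspace(0.0, 0.05, 1920)
good = dither_to_output(gradient, assumed_display_bits=8)
banded = dither_to_output(gradient, assumed_display_bits=9)

Run it on a smooth gradient with assumed_display_bits=8 vs 9 and you can see the difference: with 9 the quantization steps are no longer fully masked.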

Quote:
Originally Posted by nevcairiel
Video decoders often use fixed-point optimized iDCT approximations to increase speed at a minimal quality loss.
I'm sure some video decoder pro can explain it properly, but video decoders just don't work like that. They output the same pixel format that was fed in. If it were beneficial to increase the bit depth on output, I'm sure someone would've come up with the idea before.
I've been told that the H.264 spec is so strict that decoders are expected to produce bit-identical results. Not bit-identical to the original source, of course, but bit-identical compared to other decoders. I don't have the knowledge to confirm or deny this, but if it's true, then that pretty much means floating-point output would not be an improvement, for H.264 at least. I've also heard that MPEG-2 is different, and there decoders do seem to produce slightly different output. So maybe for MPEG-2 floating-point output would make sense. I've no clue, though.
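
For what it's worth, my understanding is that H.264 can be bit-exact because its inverse transform is defined purely in integer arithmetic (adds, subtracts and shifts), so there's no rounding left up to the implementation. Here's a rough sketch of that kind of 4x4 integer butterfly, written from memory and with the de-quantization/scaling left out, so treat it as an illustration only:

Code:
import numpy as np

def inverse_transform_4x4(coeffs):
    # Sketch of an H.264-style 4x4 inverse core transform: every step is
    # an integer add/subtract/shift, so any two correct implementations
    # produce bit-identical output.  'coeffs' is assumed already scaled.
    d = np.asarray(coeffs, dtype=np.int64).reshape(4, 4)

    def butterfly(v):
        e0 = v[0] + v[2]
        e1 = v[0] - v[2]
        e2 = (v[1] >> 1) - v[3]
        e3 = v[1] + (v[3] >> 1)
        return np.array([e0 + e3, e1 + e2, e1 - e2, e0 - e3])

    rows = np.stack([butterfly(row) for row in d])                 # horizontal pass
    cols = np.stack([butterfly(rows[:, j]) for j in range(4)], axis=1)  # vertical pass
    return (cols + 32) >> 6                                        # final rounding shift

MPEG-2, as far as I know, only specifies accuracy tolerances for the iDCT rather than an exact integer transform, which would explain why different decoders can legitimately produce slightly different results there.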