Old 14th July 2011, 23:55   #10  |  Link
LoRd_MuldeR
Software Developer
Quote:
Originally Posted by JanWillem32 View Post
If it's about internal codec precision, please make the decoder 32-bit floating point or better on output, so I don't have to static cast every element of the mixer input to 32-bit floating-point anymore. I can handle pretty much any input quantization, only the double precision for the color management section can be a bit intense to process.
As far as I know, H.264 (and probably most video formats) internally uses integer math only, so floating-point output wouldn't make much sense. You can just as well do the conversion yourself if you need FP math for your post-processing. Also: even if the original input source only used 8-bit precision (per color channel) and the final output is going to be 8-bit again, using 10-bit (or 12-bit) internal codec precision improves compression efficiency. Whether the decoder outputs 8-bit or 10-bit (12-bit) is the decoder's choice, i.e. we don't necessarily need "true" 10-bit (12-bit) output to benefit from "high bit-depth" H.264 video...
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 15th July 2011 at 00:00.