Quote:
Originally Posted by JanWillem32
If it's about internal codec precision, please make the decoder 32-bit floating point or better on output, so I don't have to static cast every element of the mixer input to 32-bit floating-point anymore. I can handle pretty much any input quantization, only the double precision for the color management section can be a bit intense to process.
As far as I know, H.264 (and probably most video formats) internally uses integer math only, so floating-point output doesn't make much sense. You may as well do the conversion yourself if you need FP math for your post-processing.

Also: even if the original input source only used 8-bit precision (per color channel) and the final output is going to be 8-bit again, using 10-bit (or 12-bit) internal codec precision improves compression efficiency, because intermediate rounding errors stay smaller. Whether the decoder then outputs 8-bit or 10-bit (12-bit) is up to the decoder, i.e. we don't necessarily need "true" 10-bit (12-bit) output to benefit from "high bit-depth" H.264 video...