Quote:
Originally Posted by tebasuna51
The average human ear can distinguish up to about 20 bits of precision, so 24 bits is always better than 16 bits when the source is lossless.
But with lossy sources those extra 8 bits are always inexact; the precision is always less than 16 bits, that is what lossy encoders do.
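For reference, the rule of thumb I know is roughly 6.02 dB of dynamic range per bit, which is where a figure like 20 bits comes from (Python, just back-of-the-envelope):

Code:
# Rule of thumb: ~6.02 dB of dynamic range per bit
for bits in (16, 20, 24):
    print(bits, "bits ~", round(6.02 * bits), "dB dynamic range")
# 16 bits ~ 96 dB, 20 bits ~ 120 dB, 24 bits ~ 144 dB
# ~120 dB is roughly the span from the threshold of hearing to the pain threshold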
But if "lossy decoders output at least 32 bits float", why is the precision "always less than 16 bits"?
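My guess at what that means, as a toy Python sketch (made-up numbers; the point is only that a 32-bit float container does not add real precision back):

Code:
import struct

coarse = round(0.123456789 * 32767) / 32767                 # value with only ~16-bit precision
as_f32 = struct.unpack('f', struct.pack('f', coarse))[0]    # round-trip through a 32-bit float
print(coarse, as_f32)   # essentially the same value: bigger container, same information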
So my understanding:
16 bit lossless --> lossy --> decoded and written as 24 bit --> the last 8 bits are guessed/interpolated (?)
24 bit lossless --> lossy --> decoded and written as 24 bit --> the last 8 bits are an approximation of the lossless original (see the sketch below)
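To make that last step concrete, here is how I picture writing the decoder's float output at 16 vs 24 bit (Python, made-up sample value, no dither; the extra 8 bits come from the decoder's float reconstruction, not from the original file):

Code:
decoded = 0.123456789                   # hypothetical float sample from a lossy decoder
as16 = round(decoded * (2**15 - 1))     # written as 16-bit PCM -> 4045
as24 = round(decoded * (2**23 - 1))     # written as 24-bit PCM -> 1035630
print(as24 - (as16 << 8))               # 110, not 0: the low 8 bits are not just padding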
conclusion:
(1) 24 bit lossless --> 24 bit lossy
(2) 24 bit lossless --> 16 bit lossless --> 24 bit lossy
(1) is closer to the original source, so a 24 bit file decoded from the lossy is closer to the original source (if the source was 24 bit).
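Numerically I picture the difference like this (only the bit-depth steps, assuming the lossy codec itself adds roughly the same error in both chains):

Code:
original = 0.123456789                        # pretend 24-bit master sample

# chain (1): 24 bit lossless --> lossy --> decode --> write 24 bit
chain1 = round(original * (2**23 - 1)) / (2**23 - 1)

# chain (2): 24 bit lossless --> 16 bit lossless --> lossy --> decode --> write 24 bit
as16   = round(original * (2**15 - 1)) / (2**15 - 1)
chain2 = round(as16 * (2**23 - 1)) / (2**23 - 1)

print(abs(original - chain1))   # ~6e-8: only the 24-bit rounding
print(abs(original - chain2))   # ~9e-6: the 16-bit intermediate step dominates the error

Of course this ignores that the lossy coding error is usually bigger than the 16-bit step anyway, which I think is tebasuna51's point.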
Is that right?