26th September 2012, 16:37 | #1 | Link |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,771
10-bit input to 8-bit output. What happens?
If I'm using a 10-bit capable version of x264, but encoding to 8-bit, how does x264 handle the 10-bit to 8-bit conversion?
I'd expect it to just truncate the two least significant bits. But it would be lovely if it did some dithering at least. My pipe dream for years would be for encoders to take the >8-bit information into the frequency transform stage, so the codec can know what's banding and what's an edge, and make the final psychovisual decisions appropriately. Is this anything anyone is doing, or looking at, or would like to have me pitch in more detail?
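Just to make the starting point concrete, here's a tiny standalone C program (purely illustrative, nothing to do with x264's internals) showing what plain truncation versus rounding does to a shallow 10-bit ramp; a dithering step would instead spread the leftover error between neighbouring pixels rather than throwing it away:

Code:
/* Toy comparison of truncation vs rounding for 10-bit -> 8-bit.
 * Purely illustrative; this is not how x264 itself does it. */
#include <stdint.h>
#include <stdio.h>

static uint8_t trunc10to8(uint16_t v)
{
    return (uint8_t)(v >> 2);               /* drop the two LSBs */
}

static uint8_t round10to8(uint16_t v)
{
    uint16_t r = (uint16_t)((v + 2) >> 2);  /* round to nearest */
    return (uint8_t)(r > 255 ? 255 : r);    /* clamp 1023 -> 255 */
}

int main(void)
{
    /* A shallow 10-bit ramp: both methods collapse four input codes
     * into one output code, they just place the step differently. */
    for (uint16_t v = 508; v <= 516; v++)
        printf("10-bit %4u -> trunc %3u, round %3u\n",
               v, trunc10to8(v), round10to8(v));
    return 0;
}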
26th September 2012, 16:45 | #2 | Link |
Registered User
Join Date: Dec 2002
Posts: 5,565
Quote:
Any current version will always convert to the requested bit depth before passing it on to the actual encoder, without making use of any pre-existing higher precision, AFAIK.
26th September 2012, 16:54 | #3 | Link |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,771
Right. Well, a feature request, then. An encoder should be able to do a better job of 10-bit to 8-bit conversion than any preprocessing technique.
A 10-bit build can't encode 8-bit output at all? Weird; it's not like we need different builds for 0-255 range et cetera. I imagine the intent is to merge them at some point.
26th September 2012, 17:35 | #4 | Link |
Registered User
Join Date: Jul 2007
Posts: 552
Yes. If you want to encode to 8-bit output, then you should use an 8-bit build, which also supports high-bit-depth input but will dither it down to 8 bits before sending it to libx264. As for encoder-implemented dithering (dithering done as part of encoding, so that it is better preserved), I doubt it can be done effectively (at least I have no idea how to implement it).
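For anyone curious what that pre-encode step amounts to, here is a minimal sketch of one common approach, 1-D error-diffusion dithering from 10 bits down to 8. To be clear, this is only an illustration of the idea, not x264's actual depth/dither filter:

Code:
/* Minimal 1-D error-diffusion dither from 10-bit to 8-bit samples.
 * Illustration only -- not the filter x264 actually uses. */
#include <stdint.h>
#include <stdio.h>

static void dither_row_10to8(const uint16_t *src, uint8_t *dst, int width)
{
    int err = 0;                        /* error carried to the next pixel */
    for (int x = 0; x < width; x++) {
        int v = src[x] + err;           /* add the error left over so far */
        int q = (v + 2) >> 2;           /* round to 8 bits */
        if (q < 0)   q = 0;
        if (q > 255) q = 255;
        dst[x] = (uint8_t)q;
        err = v - (q << 2);             /* remember what the rounding lost */
    }
}

int main(void)
{
    uint16_t src[16];
    uint8_t  dst[16];
    for (int i = 0; i < 16; i++)
        src[i] = (uint16_t)(510 + i);   /* shallow 10-bit ramp */
    dither_row_10to8(src, dst, 16);
    for (int i = 0; i < 16; i++)
        printf("%u ", dst[i]);          /* values alternate instead of forming hard bands */
    printf("\n");
    return 0;
}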
26th September 2012, 18:09 | #5 | Link |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,771
Quote:
Quote:
Alternatively, imagine doing a normal 10-bit encode, and then modifying the bitstream at the end to 8-bit precision. At that point, the higher frequency data can be used to determine what sort of dithering is appropriate for different parts of the image. Also, the dithering itself could use some sort of trellis-like approach to find the right dither for optimum compressibility. There's a huge number of ways to dither "well," and being able to pick the one that compresses best could be a big help. As it is, any 10-bit to 8-bit conversion that happens before the codec is either adding noise or banding, blind as to its impact on compression.
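As a very rough illustration of the "pick the dither that compresses best" idea, here is a toy sketch: the candidates are just different rounding offsets, and the sum of neighbouring-pixel differences stands in for a real rate estimate. None of this comes from x264:

Code:
/* Toy "compression-aware" 10-bit -> 8-bit reduction for one block:
 * try a few candidate roundings and keep the one whose neighbouring-
 * pixel differences are smallest, a crude stand-in for coding cost.
 * Purely illustrative -- not from x264. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 8

static int cost(const uint8_t *p)       /* smaller = "cheaper to code" */
{
    int c = 0;
    for (int i = 1; i < BLOCK; i++)
        c += abs(p[i] - p[i - 1]);
    return c;
}

int main(void)
{
    const uint16_t src[BLOCK] = { 510, 511, 512, 513, 514, 513, 512, 511 };
    uint8_t best[BLOCK];
    int best_cost = -1;

    for (int offset = 0; offset <= 3; offset++) {   /* candidate roundings */
        uint8_t cand[BLOCK];
        for (int i = 0; i < BLOCK; i++) {
            int q = (src[i] + offset) >> 2;
            cand[i] = (uint8_t)(q > 255 ? 255 : q);
        }
        int c = cost(cand);
        if (best_cost < 0 || c < best_cost) {
            best_cost = c;
            for (int i = 0; i < BLOCK; i++)
                best[i] = cand[i];
        }
    }

    for (int i = 0; i < BLOCK; i++)
        printf("%u ", best[i]);
    printf("(cost %d)\n", best_cost);
    return 0;
}

A real version would of course measure cost after the transform and quantization, inside the encoder's rate-distortion loop, which is exactly why it would belong in the codec rather than in a preprocessing filter.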
26th September 2012, 18:28 | #6 | Link |
Registered User
Join Date: Jul 2007
Posts: 552
Quote:
27th September 2012, 09:27 | #8 | Link |
x264 developer
Join Date: Sep 2004
Posts: 2,392
Quote:
Quote:
8-bit vs 10-bit means a different datatype: uint8_t vs uint16_t. To compile both (rather than typedef) you'd have to template every function that touches pixels or coefs. The corresponding change in ffh264 modified 2700 LOC (excluding asm). I have no plans to merge them; and if I did, the obvious strategy would be to compile two whole copies of libx264 and add a wrapper to select one.

Last edited by akupenguin; 27th September 2012 at 11:17.
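For what it's worth, the "two copies plus a wrapper" idea would look roughly like the sketch below. The per-depth function names are made up (the real library exposes x264_encoder_open(), not per-depth entry points); this just shows the dispatch:

Code:
/* Toy sketch of "compile two whole copies of libx264 and add a wrapper
 * to select one".  The per-depth functions here are made-up stand-ins
 * for two separately compiled builds (uint8_t vs uint16_t pixels). */
#include <stdio.h>

static void encode_with_8bit_build(void)
{
    puts("using the 8-bit build (uint8_t pixels)");
}

static void encode_with_10bit_build(void)
{
    puts("using the 10-bit build (uint16_t pixels)");
}

/* The wrapper: pick whichever build matches the requested output depth. */
static void encode(int output_bit_depth)
{
    if (output_bit_depth > 8)
        encode_with_10bit_build();
    else
        encode_with_8bit_build();
}

int main(void)
{
    encode(8);
    encode(10);
    return 0;
}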