Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
4th May 2021, 00:39 | #21 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
Quote:
It sounds like you're talking about being able to encode an 8-bit source as 10-bit Main10 output, or something? The way encoders are structured, they start with input frames of the same depth and subsampling that they will be encoding. Although you can use a 10-bit input with Main (8-bit) x265 encoding if you want; that's what the --dither parameter is for. |
|
4th May 2021, 02:29 | #22 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,925
|
I personally want nothing; it's just that I know there is an option to compile different versions of x265 with an internal bit depth of 8 or 16, that this was fully supported, and that it was a common thing back in the day, called 8bpp or 16bpp.
This has nothing to do with the encode settings (8-bit encoding, Main10, Main12); those don't matter to the compile option. It could be limited to 10/12, which it is by default, but there was also an option for 8-bit. If it is this unknown, maybe it's a good idea to compile one so some users can play around with it. And just to be absolutely clear: this is a compile option, not an encode option. Here is the CMake again: Code:
if(X64)
    # NOTE: We only officially support high-bit-depth compiles of x265
    # on 64bit architectures. Main10 plus large resolution plus slow
    # preset plus 32bit address space usually means malloc failure. You
    # can disable this if(X64) check if you desparately need a 32bit
    # build with 10bit/12bit support, but this violates the "shrink wrap
    # license" so to speak. If it breaks you get to keep both halves.
    # You will need to disable assembly manually.
    option(HIGH_BIT_DEPTH "Store pixel samples as 16bit values (Main10/Main12)" OFF)
endif(X64)

if(HIGH_BIT_DEPTH)
    option(MAIN12 "Support Main12 instead of Main10" OFF)
    if(MAIN12)
        add_definitions(-DHIGH_BIT_DEPTH=1 -DX265_DEPTH=12)
    else()
        add_definitions(-DHIGH_BIT_DEPTH=1 -DX265_DEPTH=10)
    endif()
else(HIGH_BIT_DEPTH)
    add_definitions(-DHIGH_BIT_DEPTH=0 -DX265_DEPTH=8)
endif(HIGH_BIT_DEPTH)
|
4th May 2021, 05:56 | #23 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
Quote:
A High Bit depth Build of x265 allows for Main10 and Main12 encoding, but encodes good old 8-bit Main the same way as a build without it. Either I don't understand the problem you're having, or you don't understand the problem you aren't actually having. |
|
4th May 2021, 19:33 | #26 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,925
|
It just changes how the encoder works internally; the user will not notice any difference in how it is used. The name 16bpp is not my idea.
x265 has code for both stored in the source; you just build a different code path, not the input part and not the output part. That's as simple as I can make it. As HolyWu just said, the executable no longer works when it is forced to compile as 8-bit 16bpp, unlike in the past, so it's time to use Main10/Main12 instead and move on. |
5th May 2021, 04:43 | #28 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,925
|
It's just more efficient (and should be slower), but I have not seen tests in many years.
It uses intrapred16 (just like Main10/12) instead of intrapred8, and there are many more examples like this; there is extra code in x265 for the 8-bit 16bpp build. It's just a better 8-bit x265. The simple solution is to just use 10-bit x265 and move on; I can't even come up with a good reason to use 8-bit over 10-bit x265 in general, so I can't really make the case for 16bpp. Maybe this description helps (it's not 100% correct, but whatever): an 8-bit x265 16bpp build works internally just like a 10-bit x265 build while remaining 100% spec-conformant 8-bit H.265. The x265 exe in MeGUI from around 2015 is supposedly 16bpp, if you really want to have a go with it. |
6th May 2021, 01:49 | #29 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
I checked with one of the primary x265 developers who is now a colleague of mine. Her response:
Quote:
|
|
6th May 2021, 05:09 | #30 | Link |
Pig on the wing
Join Date: Mar 2002
Location: Finland
Posts: 5,733
|
Good information, thank you. That has also been my reasoning for feeding the processed 16-bit data into x265 instead of dithering down to 10 bits beforehand.
__________________
And if the band you're in starts playing different tunes I'll see you on the dark side of the Moon... |
6th May 2021, 05:45 | #31 | Link | |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
|
This is an odd thread: very interesting insights into how x265 works, in response to a bizarre premise.
Quote:
Are you really claiming a build of x265 from 2015 offers better quality per bit than a current build? That does not match my experience at all. There have been very significant improvements in x265 since 2015. x265 wasn't that great yet in 2015, was it?
__________________
madVR options explained Last edited by Asmodian; 6th May 2021 at 05:48. |
|
6th May 2021, 09:52 | #32 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,925
|
OK, finally found it:
https://forum.doom9.org/showthread.p...05#post1684205

It clearly uses 16-bit precision in some encoding parts, and you can clearly compile without it. Why the 8-bit 16bpp x265 is no longer compilable is nothing I can speak to, but just use 10-bit x265 and get the same benefits. Why does 16bpp matter for 8-bit? If I remember correctly, it's the same reason why 10-bit x264 was so massively better than 8-bit. Whatever the case, the 16bpp and 8bpp builds existed for a reason. |
6th May 2021, 20:01 | #34 | Link | |
Registered User
Join Date: Apr 2021
Posts: 10
|
Quote:
Regarding compression efficiency, they made a big step back and have never reached that 2015 level since. I really tried every setting combination I could imagine, but didn't find a way to reach the same compression with the newer versions. I would suggest you test it yourself to get an impression of what I'm talking about.

As promised, here's a very short sample to show typical artefacts: https://drive.google.com/drive/folde...yN?usp=sharing

These are 2 clips of the same video, rendered once with 8-bit calculation and once with 16-bit calculation. I used 2-pass encoding at 700 kbit/s so that you can see the difference pretty well. As you can see in the media info, the encoder allocated less bitrate in the 8-bit calculation clip than in the 16-bit one. I got the impression that 16-bit calculation is simply better at recognizing in which scenes more bitrate is necessary and where bits can be saved. With 16-bit calculation, all scenes look proper at 700 kbit/s; with 8-bit calculation there are 4 scenes that look similar to the one I uploaded.

Again: for proper testing, I would suggest you do your own tests. |
|
7th May 2021, 12:45 | #36 | Link |
Registered User
Join Date: Apr 2021
Posts: 10
|
No, it was 2-pass ABR for both.
As I said, that's just to show you the different look; you have to do your own tests if you really want proper comparisons. I tested every possible setting I could find, in every tool I could find, to reach the same compression. If you can find a way to get the same or even better results than the 2015 versions with 16-bit calculation precision, please let me know how exactly. |
7th May 2021, 13:34 | #37 | Link | |
Registered User
Join Date: Apr 2021
Posts: 10
|
Quote:
If you check the binaries, it's not listed there either: https://bitbucket.org/multicoreware/...CMakeLists.txt https://bitbucket.org/multicoreware/...ux/multilib.sh https://bitbucket.org/multicoreware/...4/multilib.bat Same for the documentation (https://x265.readthedocs.io/en/maste...n-output-depth).

Again: I don't really mind what it's called, or whether there are other improvements. I just would like the same quality/filesize ratio I had with the 2015 x265 version with 16-bit calculation precision; with the versions after 2015 I always needed more bitrate to get the same quality (and as I said before, I really tried every setting and every tool I could find).

▬▬▬▬My encoding settings from the 2015 x265 version▬▬▬▬
Encoding Mode: specific filesize/bitrate (2-pass) - fast 1st pass
Target bitrate: 600 to 1500 kbit/s for 720p (depending on material)
Level/profile/tier: unrestricted/Main10/High
Calculation precision: 16 bit
In-/Output Bit-Depth: 10 Bit
Color Space: i420
Coding QT: max CU size, min CU size: 64 / 8
Residual QT: max TU size, max depth: 32 / 1 inter / 1 intra
ME / range / subpel / merge: dia / 57 / 2 / 1
Keyframe min / max / scenecut: 25 / 250 / 40
Lookahead / bframes / badapt: 20 / 4 / 2
b-pyramid / weightp / weightb / refs: 1 / 1 / 0 / 1
AQ: mode / str / qg-size / cu-tree: 1 / 1.0 / 64 / 1

So for further discussion, I would suggest you do your own tests and let me know if you experience the same, or if you can find something I wasn't able to find. If you could tell me an encoding setting for the latest x265 releases that provides a better, or at least not worse, quality/filesize ratio, that would be awesome.

Last edited by Augur89; 7th May 2021 at 14:41. |
|
7th May 2021, 18:42 | #38 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
Quote:
The actual encoder will always convert to the final color space before starting quantization or anything else. |
|
7th May 2021, 18:48 | #39 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
|
Quote:
The bpp number you're talking about is just how many bits are used to store each pixel value. 8-bit is stored as 8-bit; 10 and 12 use 16 because there are no 10- or 12-bit register sizes. This is a primary reason 8-bit encoding is faster than 10/12-bit. It is all internal performance-optimization stuff and has no impact on the final encode. Forcing 16-bit storage for 8-bit content would make encoding slower without changing the output. The actual encoder module itself will always start with 8-bit 4:2:0 input for Main Profile. |
|
8th May 2021, 15:11 | #40 | Link |
Registered User
Join Date: Sep 2002
Location: France
Posts: 432
|
During the course of developing HEVC, Main10 was shown to be only 2-5% better (BD-rate-wise) than Main, because indeed all intermediate computations during prediction are raised to 16 bits internally. 16 bits is just a convenience, because a lot of CPU SIMD works on this data format. It's no longer the 10+% mentioned, e.g., by that old Ateme paper for H.264.
Main10 is still better, beyond HDR, because of banding, for example, and because the input to prediction still has more precision. RExt (4:2:2/4:4:4 and 4:2:0 above 10 bits) does use more precision (32 bits) on some intermediates, e.g. transforms, but that basically halves the throughput of the prediction. In any case, the goal was never to have a better 8-bit output, but to have ("contribution") workflows working on more than 10 bits. |