Old 4th May 2021, 00:39   #21  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by huhn View Post
you can clearly compile it with 8-bit internal precision.

in the past these builds were called x265 8-bit 8bpp.

what a time: https://forum.doom9.org/showthread.p...06#post1684406

only three ways to settle this "issue":
1. find a comparison between two modern 8bpp and 16bpp x265 builds
2. compile them and do the comparison yourself
3. move on, the people who develop x265 are not idiots

the rest is just talking around it.
Can you specify what internal value you are referring to when you say "8-bit internal?" Frequency or spatial domain?

It sounds like you're talking about being able to encode an 8-bit source as 10-bit Main10 output or something? Encoders are generally structured to start with input frames at the same bit depth and subsampling they will be encoding. That said, you can feed a 10-bit input to a Main (8-bit) x265 encode if you want; that's what the --dither parameter is for.
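For illustration, a hypothetical command along those lines (file names, resolution, and rate control are placeholders, not a recommendation):
Code:
# feed a 10-bit YUV source to an 8-bit Main encode and let x265
# reduce the depth itself; --dither enables its higher-quality rounding
x265 --input in_10bit.yuv --input-depth 10 --input-res 1920x1080 --fps 24 \
     --output-depth 8 --dither --crf 20 -o out_main.hevc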
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 4th May 2021, 02:29   #22  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,925
i personally don't want anything; i just know there is an option to compile different versions of x265 with an internal bit depth of 8 or 16. it is fully supported and was a common thing back in the day, called 8bpp or 16bpp.

this has nothing to do with the encode settings (8-bit, main10, main12); those don't matter to this compile option. it could be limited to 10/12, which it is by default, but it is also an option for 8-bit.

if this is really so unknown, maybe it's a good idea to compile one so some users can play around with it.

and just to be absolutely clear: this is a compile option, not an encode option.

here is the cmake again:
Code:
if(X64)
    # NOTE: We only officially support high-bit-depth compiles of x265
    # on 64bit architectures. Main10 plus large resolution plus slow
    # preset plus 32bit address space usually means malloc failure.  You
    # can disable this if(X64) check if you desparately need a 32bit
    # build with 10bit/12bit support, but this violates the "shrink wrap
    # license" so to speak.  If it breaks you get to keep both halves.
    # You will need to disable assembly manually.
    option(HIGH_BIT_DEPTH "Store pixel samples as 16bit values (Main10/Main12)" OFF)
endif(X64)
if(HIGH_BIT_DEPTH)
    option(MAIN12 "Support Main12 instead of Main10" OFF)
    if(MAIN12)
        add_definitions(-DHIGH_BIT_DEPTH=1 -DX265_DEPTH=12)
    else()
        add_definitions(-DHIGH_BIT_DEPTH=1 -DX265_DEPTH=10)
    endif()
else(HIGH_BIT_DEPTH)
    add_definitions(-DHIGH_BIT_DEPTH=0 -DX265_DEPTH=8)
endif(HIGH_BIT_DEPTH)
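for reference, configuring the different flavours looks roughly like this (hypothetical out-of-tree build commands, assuming the usual x265 layout with the top-level CMakeLists.txt under source/; the multilib scripts shipped in the repo are the official way):
Code:
# default build: 8-bit sample storage (X265_DEPTH=8)
cmake -S source -B build-8bit
cmake --build build-8bit

# high-bit-depth build: 16-bit sample storage, Main10 output
cmake -S source -B build-10bit -DHIGH_BIT_DEPTH=ON
cmake --build build-10bit

# high-bit-depth build with MAIN12: 16-bit sample storage, Main12 output
cmake -S source -B build-12bit -DHIGH_BIT_DEPTH=ON -DMAIN12=ON
cmake --build build-12bit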
Old 4th May 2021, 05:56   #23  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by huhn View Post
i personally don't want anything; i just know there is an option to compile different versions of x265 with an internal bit depth of 8 or 16. it is fully supported and was a common thing back in the day, called 8bpp or 16bpp.

this has nothing to do with the encode settings (8-bit, main10, main12); those don't matter to this compile option. it could be limited to 10/12, which it is by default, but it is also an option for 8-bit.

if this is really so unknown, maybe it's a good idea to compile one so some users can play around with it.

and just to be absolutely clear: this is a compile option, not an encode option.

here is the cmake again: [CMake snippet quoted in post #22 above]
Ah. High Bit Depth, aka support for 10-bit and 12-bit, is only officially supported on x86-64 because the wider internal precision Main10 and Main12 require can roughly double the memory footprint, which is why the CMake comment warns about malloc failures in a 32-bit address space.

A High Bit Depth build of x265 allows Main10 and Main12 encoding, but encodes good old 8-bit Main the same way as a build without it.

Either I don't understand the problem you're having, or you don't understand the problem you aren't actually having.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 4th May 2021, 09:19   #24  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,925
there is no issue (and i'm pretty sure it's int16, but that doesn't matter).

there was a request for a 16bpp build like in the old days, which can be compiled; and if you go out of your way, even for main8, i.e. an x265 8-bit 16bpp build.
Old 4th May 2021, 18:14   #25  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by huhn View Post
there is no issue (and i'm pretty sure it's int16, but that doesn't matter).

there was a request for a 16bpp build like in the old days, which can be compiled; and if you go out of your way, even for main8, i.e. an x265 8-bit 16bpp build.
Do you mean input support for >8-bit sources to use x265's dithering instead of an external tool?
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 4th May 2021, 19:33   #26  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,925
it just changes how the encoder works internally; the user will not notice any difference in how it is used. the name 16bpp is not my idea.
x265 has code paths for both in the source; you just take a different code path, not for the input part and not for the output part. that's as simple as i can make it.

as HolyWu just said, the executable doesn't work when it is forced to compile as 8-bit 16bpp, unlike in the past, so it's time to use main10/main12 instead and move on.
Old 5th May 2021, 00:57   #27  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
I guess I am lost on what a High Bit Depth exe would do for 8-bit that a non High Bit Depth exe can't.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 5th May 2021, 04:43   #28  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,925
it's just more efficient (and should be slower), but i have not seen tests in many years.
it uses intrapred16 (just like main10/12) instead of intrapred8, and there are many more examples like this. there is extra code in x265 for the 8-bit 8bpp case.
it's just a better 8-bit x265. the simple solution is to just use 10-bit x265 and move on; i can't even come up with a good reason to use 8-bit over 10-bit x265 in general, so i can't really make a case for the 8-bit 16bpp build.

maybe this description helps; it's not 100% correct, but whatever:
an 8-bit x265 16bpp version works internally just like a 10-bit x265 version while still being 100% spec-conformant 8-bit H.265.

the x265 exe in MeGUI from around 2015 is supposed to be 16bpp, if you really want to have a go with it.
Old 6th May 2021, 01:49   #29  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
I checked with one of the primary x265 developers who is now a colleague of mine. Her response:
Quote:
Internal precision in most modules has always been 16-bit, in fact, it wouldn’t even work without it. The MC modules in intrapred and interpred probably use 8b/16b, just because you can’t pack 2 pixels into a 16-bit register. But there would be no increase in precision/anything by using intrapred16 for 8-bit inputs.

Every module from residual block onwards uses 16-bit.
Which also matches my understanding and experience.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 6th May 2021, 05:09   #30  |  Link
Boulder
Pig on the wing
 
Boulder's Avatar
 
Join Date: Mar 2002
Location: Finland
Posts: 5,733
Good information, thank you. That has also been my reasoning for feeding the processed 16-bit data into x265 instead of dithering down to 10 bits beforehand.
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Old 6th May 2021, 05:45   #31  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
This is an odd thread; very interesting insights into how x265 works in response to a bizarre premise.

Quote:
Originally Posted by Augur89 View Post
i'm still using the old 2015 version since it was way more efficient and would love to see 16 bit calculation precision reimplemented in the new versions
I still cannot wrap my head around this statement.

Are you really claiming a build of x265 from 2015 offers better quality per bit than a current build? That does not match my experience at all. There have been very significant improvements in x265 since 2015. x265 wasn't that great yet in 2015, was it?
__________________
madVR options explained

Last edited by Asmodian; 6th May 2021 at 05:48.
Old 6th May 2021, 09:52   #32  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,925
ok finally found it:
https://forum.doom9.org/showthread.p...05#post1684205

it clearly works without 16-bit precision in some encoding parts, and you can clearly compile it without it.
why x265 8-bit 16bpp is not compilable anymore is nothing i can speak to, but just use x265 10-bit and get the same benefits.

why does 16bpp matter for 8-bit? if i remember correctly, it's the same reason why x264 10-bit was so massively better than 8-bit.

whatever the case, 16bpp and 8bpp builds were made for a reason.
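a quick way to check what any given binary was built with, assuming a reasonably recent x265 (the version banner reports the compiled-in output bit depths, and a multilib exe lets you pick the depth per encode):
Code:
# print version/build info, including which output depths are compiled in
x265 --version

# with a multilib binary, select the internal/output depth per encode
x265 --input in.y4m --output-depth 10 --crf 20 -o out_10bit.hevc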
Old 6th May 2021, 13:01   #33  |  Link
rwill
Registered User
 
Join Date: Dec 2013
Posts: 349
ITT: confused people
Old 6th May 2021, 20:01   #34  |  Link
Augur89
Registered User
 
Join Date: Apr 2021
Posts: 10
Quote:
Originally Posted by Asmodian View Post
This is an odd thread; very interesting insights into how x265 works in response to a bizarre premise.



I still cannot wrap my head around this statement.

Are you really claiming a build of x265 from 2015 offers better quality per bit than a current build? That does not match my experience at all. There have been very significant improvements in x265 since 2015. x265 wasn't that great yet in 2015, was it?
unfortunately, you understood me correctly.
regarding compression efficiency, they took a big step back and have never reached that point again since 2015.

i really tried every setting combination i could imagine, but didn't find a way to reach the same compression with the newer versions.
i would suggest you test it yourself to get an impression of what i'm talking about.

as promised, here's a very short sample to show typical artefacts: https://drive.google.com/drive/folde...yN?usp=sharing

these are 2 clips of the same video, rendered once with 8-bit calculation and once with 16-bit calculation. i used 2-pass encoding and 700 kbit/s so that you can see the difference pretty well. as you can see in the MediaInfo, the encoder allocated less bitrate to the 8-bit calculation clip than to the 16-bit one. i got the impression that 16-bit calculation is simply better at recognizing which scenes need more bitrate and where bits can be saved. with 16-bit calculation all scenes look proper at 700 kbit/s; with 8-bit calculation there are 4 scenes that look similar to the one i uploaded.

again: for proper testing, i would suggest you do your own tests.
Old 6th May 2021, 23:16   #35  |  Link
MeteorRain
結城有紀
 
Join Date: Dec 2003
Location: NJ; OR; Shanghai
Posts: 894
Am I reading this correctly that you are comparing a 2-pass 700k encode vs. an ABR 700k encode?
__________________
Projects
x265 - Yuuki-Asuna-mod Download / GitHub
TS - ADTS AAC Splitter | LATM AAC Splitter | BS4K-ASS
Neo AviSynth+ filters - F3KDB | FFT3D | DFTTest | MiniDeen | Temporal Median
Old 7th May 2021, 12:45   #36  |  Link
Augur89
Registered User
 
Join Date: Apr 2021
Posts: 10
no, it was 2-pass ABR for both.
as i said, that's just to show you the different look.
you have to do your own tests if you really want proper comparisons.

i tested every possible setting i could find, in every tool i could find, to reach the same compression. if you can find a way to get the same or even better results than the 2015 versions with 16-bit calculation precision, please let me know exactly how.
Old 7th May 2021, 13:34   #37  |  Link
Augur89
Registered User
 
Join Date: Apr 2021
Posts: 10
Quote:
Originally Posted by HolyWu View Post
As MeteorRain said, that's a flawed test, comparing single-pass rate control vs. multi-pass rate control. Seeing that you are encoding in the main10 profile, which means HIGH_BIT_DEPTH (store pixel samples as 16-bit values) must have been enabled when building x265, I wonder how you concluded that 8-bit calculation precision was used instead of 16-bit calculation precision.
i can't find a way to choose 16-bit calculation precision in the versions of x265 after 2015, so i assume that it's not included.

if you check the build files, it's not listed there either:
https://bitbucket.org/multicoreware/...CMakeLists.txt
https://bitbucket.org/multicoreware/...ux/multilib.sh
https://bitbucket.org/multicoreware/...4/multilib.bat

Same for the documentation (https://x265.readthedocs.io/en/maste...n-output-depth)

again: i don't really mind what it's called or whether there are other improvements. i just would like the same quality/filesize ratio i had with the 2015 x265 version and 16-bit calculation precision; with the versions after 2015 i always needed more bitrate to get the same quality (and as i said before, i really tried every setting and every tool i could find).

▬▬▬▬My encoding settings from the 2015 x265 version▬▬▬▬
Encoding Mode: specific filesize/bitrate (2-pass) - fast 1st pass
target bitrate: 600 to 1500 kbit/s for 720p (depending on material)
Level/profile/tier: unrestricted/Main10/High
Calculation precision: 16 bit
In-/Output Bit-Depth: 10 Bit
Color Space: i420
Coding QT: max CU size, min CU size : 64 / 8
Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
ME / range / subpel / merge: dia / 57 / 2 / 1
Keyframe min / max / scenecut: 25 / 250 / 40
Lookahead / bframes / badapt: 20 / 4 / 2
b-pyramid / weightp / weightb / refs: 1 / 1 / 0 / 1
AQ: mode / str / qg-size / cu-tree: 1 / 1.0 / 64 / 1

so for further discussion, i would suggest you do your own tests and let me know if you experience the same, or if you can find something i wasn't able to find.
if you could tell me an encoding setting for the latest x265 releases that provides a better, or at least not worse, quality/filesize ratio, that would be awesome.
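my best guess at how those GUI fields map onto the current x265 command line (hypothetical; the source file, resolution and bitrate are placeholders, and the 2015 GUI may have passed slightly different flags):
Code:
# pass 1 ("fast 1st pass" -> --no-slow-firstpass)
x265 --input in_10bit.yuv --input-depth 10 --input-csp i420 --input-res 1280x720 --fps 24 \
     --output-depth 10 --profile main10 --high-tier \
     --pass 1 --no-slow-firstpass --bitrate 1000 \
     --ctu 64 --min-cu-size 8 --max-tu-size 32 --tu-inter-depth 1 --tu-intra-depth 1 \
     --me dia --merange 57 --subme 2 --max-merge 1 \
     --min-keyint 25 --keyint 250 --scenecut 40 \
     --rc-lookahead 20 --bframes 4 --b-adapt 2 \
     --b-pyramid --weightp --no-weightb --ref 1 \
     --aq-mode 1 --aq-strength 1.0 --qg-size 64 --cutree \
     -o out.hevc

# pass 2: same command line with --pass 2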

Last edited by Augur89; 7th May 2021 at 14:41.
Old 7th May 2021, 18:42   #38  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by Boulder View Post
Good information, thank you. That has also been my reasoning for feeding the processed 16-bit data into x265 instead of dithering down to 10 bits beforehand.
That definitely won't make a difference unless the dithering algorithm you're using is superior to x265's default conversion, which is very basic. The --dither version is somewhat better, but not cutting edge or anything.

The actual encoder will always convert to the final color space before starting quantization or anything else.
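For what it's worth, feeding high-bit-depth intermediates straight to x265 looks something like this (a hypothetical raw-YUV example; with Y4M or piped input the depth comes from the header instead):
Code:
# hand x265 the 16-bit intermediate and let it reduce to 10-bit itself;
# --dither enables its better (but still simple) depth reduction
x265 --input processed_16bit.yuv --input-depth 16 --input-res 1920x1080 --fps 24 \
     --output-depth 10 --dither --crf 18 -o out_main10.hevc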
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 7th May 2021, 18:48   #39  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by huhn View Post
ok finally found it:
https://forum.doom9.org/showthread.p...05#post1684205

it clearly works without 16-bit precision in some encoding parts, and you can clearly compile it without it.
why x265 8-bit 16bpp is not compilable anymore is nothing i can speak to, but just use x265 10-bit and get the same benefits.

why does 16bpp matter for 8-bit? if i remember correctly, it's the same reason why x264 10-bit was so massively better than 8-bit.

whatever the case, 16bpp and 8bpp builds were made for a reason.
You are talking about spatial bits per pixel, not frequency bits. IIRC, 8-bit always uses 16-bit precision for all iDCT processing, while 10/12-bit use 32-bit. But that's different from what you are talking about here. Higher internal precision IS a big reason why 10-bit H.264 is better than 8-bit; the gap is much smaller in HEVC.

The bpp number you're talking about is just how many bits are used to store each pixel value. 8-bit is stored as 8 bits. 10- and 12-bit use 16 because there are no 10- or 12-bit register sizes. This is a primary reason 8-bit encoding is faster than 10/12-bit.

This is all internal perf optimization stuff, and has no impact on the final encode. Forcing 16-bit for 8-bit content would make encoding slower without changing output. The actual encoder module itself will always start with 8-bit 4:2:0 input with Main Profile.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 8th May 2021, 15:11   #40  |  Link
Kurosu
Registered User
 
Join Date: Sep 2002
Location: France
Posts: 432
During the course of developing HEVC, Main10 was shown to be only 2-5% better (BD-rate-wise) than Main, because all intermediate computations during prediction are indeed raised to 16 bits internally. 16 bits is just a convenience, because a lot of CPU SIMD uses that data format. It's no longer the 10+% gain mentioned, for example, by that old ATEME paper for H.264.

Main10 is still better, HDR aside, because of banding for example, and because the input to prediction still has more precision.

RExt (4:2:2/4:4:4, and 4:2:0 above 10 bits) does use more precision (32 bits) on some intermediates, e.g. transforms, but that basically halves the throughput of the prediction. In any case, the goal was never to get better 8-bit output, but to have ("contribution") workflows working at more than 10 bits.