Old 29th May 2016, 06:02   #3801  |  Link
littlepox
Registered User
 
Join Date: Nov 2012
Posts: 218
Quote:
Originally Posted by x265_Project View Post
It frustrates us when we see someone say that x265 is no better than x264 for typical video resolutions and bit rates. That's not our experience, or the experience of any of the dozens of video experts at the companies and organizations we work with. We take it as a challenge. And so, we challenge you or anyone else to show us what you're seeing.

Please point us to a good test sequence that x264 encodes better than x265, and let us know the bit rate you prefer. Not something that was already compressed to consumer streaming video bit rates (which already has H.264 compression artifacts) - a real uncompressed or very lightly compressed (very high bit rate) video test sequence, like one of the videos posted on media.xiph.org (or CableLabs, Elemental, Harmonic, etc.). Any test video, at any bit rate. Tell us your preferred x264 settings also, and we'll encode to the same bit rate with x265 and let everyone judge which encode is better.

Before you run your own tests, be sure you're using the latest development build of x265. Littlepox - I know you have your own favorite x265 command-line recipe, but after May 12th, things changed, and I think that recipe won't deliver the best visual quality. I suggest that you start with default settings for --preset veryslow, and if you have time, also try --preset placebo (which is now noticeably better than veryslow). If you want to run faster than veryslow, also try adding --no-recursion-skip to your command line.
I see I've offended you; my apologies. I didn't mean to sneer at x265 or its developers.

Our test cases are carefully taken from commercial Blu-ray Disc sources, so none of them are heavily compressed. I'm asking my teammate to prepare a 720p clip (properly downscaled) that is short enough that posting it shouldn't violate forum policy. An x264 benchmark will also be available. We'll get back to you in a few days.

The three massive tuning tests we've done before are very expensive to run (>500 encodes each), so we only do them for stable builds; the latest was with v1.9, and the next will be with v2.0. So we don't have a good picture of the most recent builds, but we'll be the first to cheer for you guys if a major breakthrough shows up next time.

Last but not least, don't expect many users to tolerate --preset placebo; it's not practical for daily use. The slowest preset we can accept is veryslow, unless we decide to use x264 when speed is a concern.
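To be concrete, the comparison will be along these lines, with both encoders targeting the same bit rate in two-pass mode (the clip name and bit rate are placeholders, not our final test settings):

Code:
x264 --preset veryslow --bitrate 3000 --pass 1 -o pass1.264  clip720p.y4m
x264 --preset veryslow --bitrate 3000 --pass 2 -o x264.264   clip720p.y4m
x265 --preset veryslow --bitrate 3000 --pass 1 -o pass1.hevc clip720p.y4m
x265 --preset veryslow --bitrate 3000 --pass 2 -o x265.hevc  clip720p.y4m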

Last edited by littlepox; 29th May 2016 at 06:04.
littlepox is offline   Reply With Quote
Old 29th May 2016, 10:07   #3802  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
Just as a suggestion for a commonly available, uncompressed, Creative Commons licensed source: "Tears of Steel" should provide a good variety of scenes, from stills to heavy action. The cartoonish credits may pose their own challenge for an encoder that is possibly optimized for real-world footage.
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 29th May 2016, 17:55   #3803  |  Link
x265_Project
Guest
 
Posts: n/a
Quote:
Originally Posted by littlepox View Post
I see I've offended you; my apologies. I didn't mean to sneer at x265 or its developers.
No apologies necessary - I'm not offended. We want your feedback, good or bad. We want to address any concerns head-on. x265 has all of the coding tools of x264, and many, many more. In other words, there is never any reason why x265 should be inferior to x264. In the worst case, x265 could encode the video with the exact same frame types, block structures, motion vectors and modes. But thanks to the HEVC standard, x265 has many other options to choose from, including larger block sizes. If there are any areas where x265 is not equal to or better than x264, we need to understand and fix them. If this is a myth, we want to bust it.
  Reply With Quote
Old 29th May 2016, 17:56   #3804  |  Link
Jamaika
Registered User
 
Join Date: Jul 2015
Posts: 708
Quote:
Originally Posted by eclipse98 View Post
It gets worse, though: in addition to bare trees/bushes, I have observed these artifacts on pine trees, rocky mountains, and trees with leaves too (to a lesser degree). I can provide more samples if needed.
I think you will have to wait. Patches to the codec are still being tested, so improvements may come sometime after the holidays. I'm presenting only two pictures at 6000 kbps: the 'best' option of the latest (paid) MainConcept codec and 'veryslow' x265. There is a difference, and that's all. CyberLink and Corel have even poorer HEVC encoders, offering only a medium preset. It gets worse with the x265 builds you find via Google.
Mainconcept 6000kbps, 1920x1080, bframes=3, best, pass=1

X265 6000kbps, 1920x1080, bframes=5, veryslow, pass=2


What more can I expect?
Jamaika is offline   Reply With Quote
Old 29th May 2016, 17:58   #3805  |  Link
pingfr
Registered User
 
Join Date: May 2015
Posts: 185
Quote:
Originally Posted by littlepox View Post
The three massive tuning tests we've done before are very expensive to run (>500 encodes each), so we only do them for stable builds; the latest was with v1.9, and the next will be with v2.0. So we don't have a good picture of the most recent builds, but we'll be the first to cheer for you guys if a major breakthrough shows up next time.
Which is why I believe you should wait for an official 2.0 release before running any further tests, since they are expensive to run, as you've stated.

By then you should be able to compare the original vs. 1.9+1 encodes vs. 2.0 encodes, and I believe that is where we'll see most of the improvement between 1.9 and 2.0. From there you should also be able to compare your best tuning results (from 2.0, presumably) against any x264 encodes of the same segment.

My two cents.

Last edited by pingfr; 29th May 2016 at 18:01.
pingfr is offline   Reply With Quote
Old 29th May 2016, 18:00   #3806  |  Link
pingfr
Registered User
 
Join Date: May 2015
Posts: 185
Quote:
Originally Posted by x265_Project View Post
If there are any areas where x265 is not equal to or better than x264, we need to understand and fix them. If this is a myth, we want to bust it.
On point. Couldn't have said it better myself.
pingfr is offline   Reply With Quote
Old 29th May 2016, 18:03   #3807  |  Link
x265_Project
Guest
 
Posts: n/a
Quote:
Originally Posted by LigH View Post
Just as a suggestion for a commonly available, uncompressed, Creative Commons licensed source: "Tears of Steel" should provide a good variety of scenes, from stills to heavy action. The cartoonish credits may pose their own challenge for an encoder that is possibly optimized for real-world footage.
Yes, it's definitely one of the better test sequences, although it doesn't have any particularly challenging scenes. The higher the original quality, the better (although we need to show that we can handle grainy/noisy content also). There are a bunch of new 4K test sequences on https://media.xiph.org/video/derf/ contributed by Netflix that are excellent 10 bit content. Amazon also contributed high quality gaming captures from Twitch.
  Reply With Quote
Old 29th May 2016, 19:35   #3808  |  Link
eclipse98
1.16 MileHi
 
Join Date: Feb 2008
Location: Denver, CO
Posts: 26
Quote:
Originally Posted by Jamaika View Post
I think you will have to wait. Patches to the codec are still being tested, so improvements may come sometime after the holidays. I'm presenting only two pictures at 6000 kbps: the 'best' option of the latest (paid) MainConcept codec and 'veryslow' x265. There is a difference, and that's all. CyberLink and Corel have even poorer HEVC encoders, offering only a medium preset. It gets worse with the x265 builds you find via Google.
I can wait, no problem; I just wanted to make sure the developers are aware of the issue so it can be addressed whenever it becomes a higher priority. As I said before, I am very happy with x265's video quality, and your MainConcept screenshots confirm it.

I also tested Nvidia's hardware HEVC encoder and it produces even worse color ghosting. I also tested similar footage shot in 1080p rather than 4K and there appears to be no issue, so it seems to be 4K-related.

Cheers !
eclipse98 is offline   Reply With Quote
Old 29th May 2016, 20:54   #3809  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
Quote:
Originally Posted by x265_Project View Post
There are a bunch of new 4K test sequences on https://media.xiph.org/video/derf/ contributed by Netflix that are excellent 10 bit content.
Wonderful. Roller coaster videos have always been challenging, and now we even get them in UHD.
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 29th May 2016, 21:12   #3810  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,277
@x265_Project: are there any plans to further improve x265 performance on multi-socket systems when encoding SD content? (For SD, CPU usage and speed became better with https://patches.videolan.org/patch/13436/, but they still don't seem good...)
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 29th May 2016, 23:48   #3811  |  Link
mandarinka
Registered User
 
mandarinka's Avatar
 
Join Date: Jan 2007
Posts: 729
Quote:
Originally Posted by Selur View Post
@x265_Project: are there any plans to further improve x265 performance on multi-socket systems when encoding SD content? (For SD, CPU usage and speed became better with https://patches.videolan.org/patch/13436/, but they still don't seem good...)
For such usage, I think you should probably consider doing multiple encodes at once.
mandarinka is offline   Reply With Quote
Old 30th May 2016, 01:38   #3812  |  Link
x265_Project
Guest
 
Posts: n/a
Quote:
Originally Posted by Selur View Post
@x265_Project: are there any plans to further improve x265 performance on multi-socket systems when encoding SD content? (For SD, CPU usage and speed became better with https://patches.videolan.org/patch/13436/, but they still don't seem good...)
We're always working to improve performance, but with a single instance of x265 we run into Amdahl's law. We can only parallelize so much. With SD encodes, there isn't enough parallel work to keep many cores/threads busy, even with frame parallelism. So, that's one of the reasons we developed UHDkit, which can break a single encode into many chunks, and encode the chunks in parallel. If you're encoding many different videos, you can do them all in parallel. Just be sure to pin each encode to a different thread pool using our pools feature.
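For example, on a two-socket machine you could run two instances side by side and give each one its own NUMA node via --pools (a rough sketch; filenames are placeholders and the node list should match your topology):

Code:
x265 --pools "+,-" --preset slow clipA.y4m -o clipA.hevc &
x265 --pools "-,+" --preset slow clipB.y4m -o clipB.hevc &

In the --pools string, "+" means use all cores on that node and "-" means use none, so each instance stays on its own socket.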
  Reply With Quote
Old 30th May 2016, 03:06   #3813  |  Link
pingfr
Registered User
 
Join Date: May 2015
Posts: 185
Quote:
Originally Posted by x265_Project View Post
We're always working to improve performance, but with a single instance of x265 we run into Amdahl's law. We can only parallelize so much. With SD encodes, there isn't enough parallel work to keep many cores/threads busy, even with frame parallelism. So, that's one of the reasons we developed UHDkit, which can break a single encode into many chunks, and encode the chunks in parallel. If you're encoding many different videos, you can do them all in parallel. Just be sure to pin each encode to a different thread pool using our pools feature.
May I ask what OS UHDkit runs on natively? I would assume Linux?

Any public pricing models available?

Cheers.
pingfr is offline   Reply With Quote
Old 30th May 2016, 07:33   #3814  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
Quote:
Originally Posted by x265_Project View Post
but with a single instance of x265 we run into Amdahl's law.
Thank you, now I know the "RTFM term" to smash against the foreheads of people who keep complaining about "low CPU utilization".

I already suspected that the more complex an algorithm gets, the more restricted its parallelizability becomes as well (because many intermediate results have to be collected into a final result). You've confirmed this assumption here.
__

P.S.:

Matheusz just mentioned on the mailing list that there are some quirks regarding DLLs, compilers and speed: not all compilers can handle optimization of multilib builds correctly, so in general a build with separate encoder library DLLs per bit depth should be the fastest solution. On top of that, GCC 6.1 seems to be faster for Win32 8-bit, but GCC 5.3 for 10-bit and 12-bit code.

I won't be able to use several compiler versions easily, so I will keep building with GCC 5.3.

To be able to place both 32-bit and 64-bit builds in the same directory, the scripts will soon produce a new naming pattern, libx265-32_main[10|12].dll for separate Win32 DLLs.
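For reference, the separate bit-depth libraries are just different CMake configurations of the same source, roughly like this (a sketch from memory; the generator and any extra options depend on the build environment):

Code:
cmake -G "MSYS Makefiles" ../source && make                                   # 8-bit libx265
cmake -G "MSYS Makefiles" -DHIGH_BIT_DEPTH=ON ../source && make               # 10-bit libx265
cmake -G "MSYS Makefiles" -DHIGH_BIT_DEPTH=ON -DMAIN12=ON ../source && make   # 12-bit libx265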
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid

Last edited by LigH; 30th May 2016 at 08:09.
LigH is offline   Reply With Quote
Old 30th May 2016, 16:51   #3815  |  Link
Motenai Yoda
Registered User
 
Motenai Yoda's Avatar
 
Join Date: Jan 2010
Posts: 709
Quote:
Thank you, now I know the "RTFM term" to smash against the foreheads of people who keep complaining about "low CPU utilization".
Nope. Amdahl's law says that optimizing a part x of a program y will make y faster only in proportion to how much time y spends in x.

I.e., for a program with functions a() and b(), where a() takes 80% of the time and b() takes 20%, optimizing b() to run twice as fast will make the program run in 80% + (20%/2) = 90% of the original time, not 50%.

In this case, x265's unparallelizable parts dominate so heavily that any further optimization of the parallelized parts (where even possible), or additional parallelization, will give negligible speedup with SD content.
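Written out as the usual formula, with p the fraction of run time that gets sped up and s the speedup factor of that fraction:

Code:
new time = (1 - p) + p / s
         = 0.80 + 0.20 / 2 = 0.90 of the original run time   (about a 1.11x overall speedup)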
__________________
powered by Google Translator
Motenai Yoda is offline   Reply With Quote
Old 31st May 2016, 06:30   #3816  |  Link
x265_Project
Guest
 
Posts: n/a
Quote:
Originally Posted by Motenai Yoda View Post
Nope. Amdahl's law says that optimizing a part x of a program y will make y faster only in proportion to how much time y spends in x.

I.e., for a program with functions a() and b(), where a() takes 80% of the time and b() takes 20%, optimizing b() to run twice as fast will make the program run in 80% + (20%/2) = 90% of the original time, not 50%.

In this case, x265's unparallelizable parts dominate so heavily that any further optimization of the parallelized parts (where even possible), or additional parallelization, will give negligible speedup with SD content.
That's right. Amdahl's law tells you that if you have an algorithm that is 100% parallelizable, you can speed this up proportionally by adding more processor cores. But when you have both serial and parallel operations, some threads will end up waiting for the information they need from another thread that isn't finished, and adding more threads ends up having diminishing returns.

There are many routines involved in HEVC encoding that are serial in nature. For example, to find the true "cost" (# of bits) of a candidate encoding mode for a block, we have to encode and decode the predicted block, calculate the residual error (the difference between the source block and the predicted block), calculate the discrete cosine transform of the residual error, quantize the transformed residual error, and compress the encoded result with CABAC entropy coding. All in series. This can't be further parallelized. CABAC encoding itself is an inherently serial process.

Thanks to Wavefront Parallel Processing, we can encode multiple rows of blocks in parallel, and thanks to x265's frame parallelism, we encode multiple frames in parallel. But the number of rows per frame is limited by the frame size. x265 can operate on more rows per frame with larger frames.
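To put rough numbers on that last point (assuming the default 64x64 CTU size):

Code:
WPP rows per frame = ceil(frame height / 64)
   576p (SD):  ceil( 576 / 64) =  9 rows
  2160p (UHD): ceil(2160 / 64) = 34 rows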
  Reply With Quote
Old 2nd June 2016, 12:50   #3817  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
x265 1.9+200-6098ba3e0cf16b11 (oh, revision hashes are longer now): some thread pool changes, git version ID and other fixes
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 4th June 2016, 11:21   #3818  |  Link
Ma
Registered User
 
Join Date: Feb 2015
Posts: 326
Quote:
Originally Posted by LigH View Post
oh, revision hashes are longer now
x264 has a 7-character revision hash; x265's used to be 12 characters, and now it is 16. On https://bitbucket.org/multicoreware/x265/commits/all the revision hashes are 7 characters long (and that is enough).

I think we should switch to 7-character revision hashes in x265.
Ma is offline   Reply With Quote
Old 4th June 2016, 13:46   #3819  |  Link
RiCON
Registered User
 
RiCON's Avatar
 
Join Date: Jan 2004
Posts: 69
I don't know why I thought {node|short} gave a 16-character hash, but yeah, it should be 12 characters instead.
It shouldn't be 7 characters, because that could be mistaken for the Git short form, and Git/Hg commits have no relation whatsoever.
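For reference, the short forms the two tools print (a quick sketch; the example hash is just the 12-character prefix of the revision LigH mentioned above):

Code:
hg log -r . --template "{node|short}\n"   # Mercurial: 12 hex digits, e.g. 6098ba3e0cf1
git rev-parse --short HEAD                # Git: 7 hex digits by default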
RiCON is offline   Reply With Quote
Old 4th June 2016, 22:42   #3820  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
A patch was offered today which reduces the hash length to 12 again.
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote