Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Doom9's Forum > Video Encoding > High Efficiency Video Coding (HEVC)

Old 26th June 2019, 19:03   #21  |  Link
excellentswordfight
Lost my old account :(
 
Join Date: Jul 2017
Posts: 125
Quote:
Originally Posted by benwaggoner View Post
I'm not sure. Perhaps because AVX-512 needs to be explicitly turned on?

I've got my dual Xeon 6140 workstation coming next week (basically a physical EC2 c5.16xlarge), and I can do some benchmarks when it arrives.

And I've got some 8K test sources too, so I can benchmark those with/without. I would expect the value to go up with resolution, as AVX-512 is only supposed to become useful at 2160p.
I've done some tests at 2160p on a few Skylake-SP platforms. Even at 2160p I had a hard time reaching the same utilization as without AVX-512, and even apart from that it's hard to see any big gains because of the ~500 MHz clock-speed penalty.
Old 27th June 2019, 03:31   #22  |  Link
RanmaCanada
Registered User
 
Join Date: May 2009
Posts: 116
Quote:
Originally Posted by Atak_Snajpera View Post
Oh boy! You and your questions...
Dude! Those results come from my benchmark
http://forum.pclab.pl/topic/1184884-x265-FHD-Benchmark/
It's obvious some people don't do their "research" before they post

I did something similar to The Stilt when he was wrong, and just called him a random, haha.

Thanks for posting everything in one place! It will make things far easier to compare once we get some runs from Ryzen 2!
Old 27th June 2019, 11:52   #23  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,858
Quote:
Originally Posted by RanmaCanada View Post
some runs from Ryzen 2!
Since we like to be accurate:

It's either Zen 2 or Ryzen 3000. "Ryzen 2" is misleading and not a term used by AMD, since Ryzen 3/5/7/9 are model classes in their lineup, so it could easily be mistaken for a lower-end model, and next generation there would be even more real confusion.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 27th June 2019 at 11:58.
Old 27th June 2019, 15:52   #24  |  Link
Asilurr
Registered User
 
Join Date: Jan 2019
Posts: 9
Quote:
Originally Posted by Atak_Snajpera View Post
Dude! Those results come from my benchmark
http://forum.pclab.pl/topic/1184884-x265-FHD-Benchmark/
The link you've provided redirects to Mediafire, where one can download the presumably latest version (?) of the benchmark, id est the package uploaded on July 26th, 2018. Inside the package, one can find:
1. An old version of x265 [2.2+15-a18ab7656c30].
2. An old version of FFmpeg [N-82889-g54931fd].
3. The five test sources, which are all 8-bit, all 4:2:0, all 16:9 1080p.

As for the benchmark itself:
1. As one can't test a given CPU in vacuo, one actually tests an ensemble of CPU+MB+RAM+IO. Rephrasing: one adds degrees of freedom to the system, and together they propagate uncertainty into the result.
2. As one doesn't test x265 on its own, due to the way the benchmark is designed, one actually tests the pair FFmpeg+x265. Another degree of freedom, another path for the propagation of uncertainty.
3. As the benchmark uses an old version of x265, all the obtained results are "watermarked" by the common reference frame of a 2.2 x265 encoder. One can't extrapolate the results to a 3.1 x265 encoder and assume them to hold true by default.
4. As the benchmark tests the default parameter configuration for x265 (i.e. --preset medium), all the obtained results are "watermarked" by the common reference frame of that particular preset. One can't extrapolate the results to another preset (or another parameter configuration) and assume them to hold true by default.
5. The benchmark doesn't test a single low-complexity source, or a single 10-bit source, or a single non-4:2:0 source, or a single non-FHD source. All the results are again "watermarked" by a highly specific encoding scenario. One can't extrapolate them to another encoding scenario and assume them to hold true by default.

Do you understand what the actual result of the benchmark you are providing is? It enables one to say: I am encoding this particular source of this particular complexity (at this particular resolution, bit depth, and chroma subsampling), within this particular software environment (OS, x265, FFmpeg), on this particular machine (CPU, MB, RAM, IO). It certainly does not enable one to say: CPU MMM achieves an average of abc% better FPS than CPU NNN across all possible/conceivable encoding scenarios.

At this point, I'd simply reiterate my initial statement: I would advise a healthy dose of skepticism whenever you see results without extensive coverage of the testing methodology.
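For what it's worth, most of the metadata being argued about here could be captured mechanically when publishing a result. A minimal sketch (a hypothetical helper, not part of Atak_Snajpera's benchmark) that records the encoder build, FFmpeg build, OS/CPU, preset, and source description alongside a run:

```python
# Sketch: collect the environment metadata a benchmark result needs to be
# interpretable. Tool probing is best-effort; None means "tool not found".
import json
import platform
import shutil
import subprocess

def tool_version(name, flag="--version"):
    """Return the first line of `name flag` output, or None if absent."""
    path = shutil.which(name)
    if path is None:
        return None
    out = subprocess.run([path, flag], capture_output=True, text=True)
    lines = (out.stdout or out.stderr).splitlines()
    return lines[0] if lines else None

def benchmark_metadata(preset, source_desc):
    """Bundle everything a reader needs to reproduce or compare the run."""
    return {
        "os": platform.platform(),
        "cpu": platform.processor() or platform.machine(),
        "x265": tool_version("x265", "--version"),    # 2.2 vs 3.1 matters
        "ffmpeg": tool_version("ffmpeg", "-version"),
        "preset": preset,                             # e.g. --preset medium
        "source": source_desc,                        # resolution, bit depth, chroma
    }

meta = benchmark_metadata("medium", "1080p 8-bit 4:2:0, high complexity")
print(json.dumps(meta, indent=2))
```

Printing (or archiving) this blob next to the FPS numbers addresses most of the "watermarking" objections above at essentially zero cost.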
Old 27th June 2019, 17:03   #25  |  Link
Atak_Snajpera
RipBot264 author
 
Atak_Snajpera's Avatar
 
Join Date: May 2006
Location: Poland
Posts: 7,317
Quote:
Originally Posted by Asilurr View Post
The link you've provided redirects to Mediafire, where one can download the presumably latest version (?) of the benchmark. [...] At this point, I'd simply reiterate my initial statement: I would advise a healthy dose of skepticism whenever you see results without extensive coverage of the testing methodology.
Are you trying to convince me that AMD's AVX128 in Zen 1 is able to compete with Intel's full-fat AVX256? x265 is highly optimized for AVX2 and that's why Zen 1 sucks in this encoder. Period! Luckily Zen 2 finally has AVX256, so I'm expecting the 3600 to have similar performance to a stock 8700K.

BTW, your requirements for a "proper" benchmark are just insane and totally unrealistic! I advise you to stop using any software, because there are countless variables affecting the performance of your machine (including the phases of the moon).

Last edited by Atak_Snajpera; 27th June 2019 at 17:13.
Old 28th June 2019, 07:01   #26  |  Link
Asilurr
Registered User
 
Join Date: Jan 2019
Posts: 9
You're being deliberately obtuse; I suppose I can attempt a simplistic analogy by referring to a system with only two degrees of freedom.

I want to investigate what happens to water when I heat it up. To be rigorous, that's the so-called "pure water" which is known today as UPW. At a pressure of 1 atm, I observe that water is gaseous at 400 K. Conducting a second experiment, I observe that water is gaseous at 500 K too. At this point I ask myself: can I consider the previous results a good predictor of what would happen to water heated to 600 K instead? Can I simply assume that it would be gaseous, thus not having to actually perform a test? The answer is no, not without also fixing the pressure: a proper phase diagram shows that at 600 K water is liquid at sufficiently high pressure, which highlights the critical importance of the other parameter of even this simplistic system.

To return to the discussion of benchmarking encoding times, we're now looking at a system with dozens of degrees of freedom: the parametric space of the hardware, of the software, of the source to be encoded, and of the encoding process itself. This entire thread was initiated by explicitly mentioning UHD/4K encoding scenarios; it's right there in the title and the leading post. You replied (here) by implicitly assuming that the results of highly specific FHD benchmarking scenarios, as already pointed out, will simply hold true even when one particular parameter varies drastically, claiming that quadrupling the resolution doesn't alter the expected result. I replied that such an implicit assumption is not sound: additional testing is explicitly needed to account for the variable parameter(s).
Old 28th June 2019, 12:24   #27  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,711
Quote:
Originally Posted by Asilurr View Post
To return to the discussion of benchmarking encoding times, we're now looking at a system with dozens of degrees of freedom [...]
Hello.
Could you propose an existing benchmark that covers your requirements for a proper x265 encoding benchmarking procedure on multithreaded CPUs?

Or do you have a suggestion how to build one ?
__________________
Win 10 x64 (18363.476) - Core i3-9100F - nVidia 1660 (441.41)
HEVC decoding benchmarks
H.264 DXVA Benchmarks for all
Old 28th June 2019, 12:57   #28  |  Link
Atak_Snajpera
RipBot264 author
 
Atak_Snajpera's Avatar
 
Join Date: May 2006
Location: Poland
Posts: 7,317
Quote:
Originally Posted by Asilurr View Post
You're being deliberately obtuse, I suppose I can attempt a simplistic analogy by referring to a system with only two degrees of freedom. [...]
Yes! You can easily extrapolate results from 1920x1080 to 3840x2160. Encoding speed will drop 4 times: 4x more pixels to examine means 4x more CPU cycles have to be used. End of story. I'm done with you!
Old 29th June 2019, 05:31   #29  |  Link
RanmaCanada
Registered User
 
Join Date: May 2009
Posts: 116
Quote:
Originally Posted by nevcairiel View Post
Since we like to be accurate:

It's either Zen 2 or Ryzen 3000. "Ryzen 2" is misleading and not a term used by AMD, since Ryzen 3/5/7/9 are model classes in their lineup, so it could easily be mistaken for a lower-end model, and next generation there would be even more real confusion.
Ya got me there! And thank you.

Now hopefully someone will leak something other than stupid Geekbench results. I seriously want to see how well this generation does in encoding. I don't care about games... I want encoding benchmarks.

Sadly, I think we're going to have to wait till one of the forum members gets one, as we know review sites don't exactly know how to benchmark when it's not games.
Old 1st July 2019, 23:00   #30  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,034
Quote:
Originally Posted by Atak_Snajpera View Post
Yes! You can easily extrapolate results from 1920x1080 to 3840x2160. Encoding speed will drop 4 times. 4x more pixels to examine means 4 times more cpu cycles has to be used. End of story.
Not entirely. CABAC can take up a decent chunk of CPU, and since bitrate isn't linear with pixel count, in the real world you get less than a 4x increase in work for 4x the pixels. Plus, the more pixels there are, the less each one matters, so somewhat different encoding techniques get used.

Changing resolution can also really impact threading; with more pixels, fewer frame threads are needed on smaller core counts, reducing overhead.

And of course, how well does encoding scale across multiple cores? Across the same or different NUMA nodes?
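The disagreement above is easy to quantify once two real measurements exist. A toy sketch (the fps numbers are invented purely for illustration) that computes the effective scaling exponent k in fps ∝ pixels^(−k): the strict "4x pixels means 1/4 speed" claim corresponds to k = 1 exactly, while the effects listed above would push k below 1.

```python
# Fit fps ~ pixels**(-k) through two (fps, pixel-count) measurements.
import math

def scaling_exponent(fps_lo, pixels_lo, fps_hi, pixels_hi):
    """Solve for k given runs at a low and a high resolution."""
    return math.log(fps_lo / fps_hi) / math.log(pixels_hi / pixels_lo)

p1080 = 1920 * 1080
p2160 = 3840 * 2160

# Hypothetical runs: a strict 4x slowdown would predict 10 fps at 2160p
# from 40 fps at 1080p; measuring anything above 10 fps means sub-linear
# per-pixel cost (threading, bitrate/CABAC effects, etc.).
k = scaling_exponent(fps_lo=40.0, pixels_lo=p1080, fps_hi=12.0, pixels_hi=p2160)
print(f"effective exponent k = {k:.2f}")  # k < 1.0 means sub-linear scaling
```

With real numbers from a 1080p and a 2160p run of the same encoder and preset, this one-liner settles whether extrapolation across resolutions is safe for a given setup.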
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 4th July 2019, 05:06   #31  |  Link
mandarinka
Registered User
 
mandarinka's Avatar
 
Join Date: Jan 2007
Posts: 737
https://www.ptt.cc/bbs/PC_Shopping/M...742.A.61C.html

This has (unconfirmed) results for the x265 benchmark used on HWBot. I think it uses an older binary, so gains from AVX256 might be a bit subdued, but the same would be true for the Intel chip.

The benchmark tests and reports the max single-thread turbo clock at initialization, so I think the screenshot means these are scores for a stock Ryzen 5 3600 and Core i7-8700K.

Basically this looks nice for Ryzen 3000; I would wait for it and not get anything else, if this is confirmed.
Old 4th July 2019, 05:28   #32  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 3,753
Very nice results, I hope they are true! It is looking like they probably are, with more leaks from other areas.
__________________
madVR options explained
Old 4th July 2019, 09:22   #33  |  Link
Forteen88
Herr
 
Join Date: Apr 2009
Location: North Europe
Posts: 408
Quote:
Originally Posted by mandarinka View Post
Unfortunately, they don't give information about the speed of the RAM used.
Old 5th July 2019, 23:29   #34  |  Link
mandarinka
Registered User
 
mandarinka's Avatar
 
Join Date: Jan 2007
Posts: 737
https://imgur.com/a/YkoOCgM

Check that Handbrake result. Pretty nice if correct.
Old 6th July 2019, 12:19   #35  |  Link
Nico8583
Registered User
 
Join Date: Jan 2010
Location: France
Posts: 744
Interesting!
What is the difference between the i9-9900K 95W and the i9-9900K? 95W is the stock TDP and both are 3.6 GHz. Perhaps the second is overclocked?
Old 6th July 2019, 12:25   #36  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,500
3.6 GHz is only the advertised base clock. The i9-9900K goes up to 5 GHz dynamically. If it's not limited to the default "95W" setting, it can keep turbo clocks for longer.

https://www.anandtech.com/show/13591...-power-for-sff
Old 6th July 2019, 21:59   #37  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,034
Quote:
Originally Posted by Asmodian View Post
Very nice results, I hope they are true! It is looking like they probably are, with more leaks from other areas.
It'd be helpful if they documented which parameters were actually being tested. "1080p" is obviously not an x265 --preset.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 6th July 2019, 22:37   #38  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 911
Quote:
Originally Posted by benwaggoner View Post
It'd be helpful if they documented which parameters were actually being tested. "1080p" is obviously not an x265 --preset.
Zen 2 chips and reviews are out tomorrow, so it won't be long until we have some good x265 and x264 benchmarks for Zen 2 to compare with the latest Intel chips.
Old 7th July 2019, 15:19   #39  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 911
x265 and x264 benchmarks of the new Ryzen 3000 series chips:
handbrake v1.2.2:
staxrip x264 and x265 benchmarks: https://nl.hardware.info/reviews/939...4-x265-en-flac

Last edited by hajj_3; 7th July 2019 at 16:02.
Old 7th July 2019, 16:11   #40  |  Link
birdie
.
 
birdie's Avatar
 
Join Date: Dec 2006
Posts: 146
The most relevant graph for encoding/rendering:
Last edited by birdie; 8th July 2019 at 00:05. Reason: Fixed the URL