Old 22nd June 2021, 00:39   #1  |  Link
jriker1
Registered User
 
Join Date: Dec 2003
Posts: 485
Does the CPU affect encoding features?

When encoding HEVC HDR10 content, does the CPU factor in beyond speed? I have a fairly powerful server that I use for encoding, but it has become dated. Even though I'm moving away from Premiere Pro, the server doesn't let me do some things because of its CPU instruction set (or GPU), and it even crashes when encoding content that contains HDR10 metadata, or something in that process. So I'm using a computer with a 7th-gen Intel CPU instead, but it only has 8 threads where my server has 24. For the call below, is there anything that is CPU-specific, or with ffmpeg is nothing tied to a particular CPU?

ffmpeg.exe -i <input_file> -c:v libx265 -x265-params level=51:hdr10=1:hdr-opt=1:high-tier=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(40000000,50):max-cll=349,86:crf=16:chromaloc=2:no-sao=1:info=0:range=limited -preset slower -pix_fmt yuv420p10le -sn -an <output_name>.hevc

Last edited by jriker1; 22nd June 2021 at 00:42.
Old 22nd June 2021, 02:45   #2  |  Link
RanmaCanada
Registered User
 
Join Date: May 2009
Posts: 331
First off, what are the processors in your server? Newer chips have more IPC than previous generations, as well as newer instruction sets. For example, if your server doesn't have AVX2, or even basic AVX, it may be best to replace it with something more modern. I would suggest you check the CPU benchmark thread that Sagittaire created and see where your server sits relative to more modern processors.
https://forum.doom9.org/showthread.php?t=174393&page=4
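
If it helps, here's a minimal sketch of listing which SIMD instruction sets a machine exposes, using the third-party py-cpuinfo package (the package and its "flags" field are an assumption on my part, not something from the benchmark thread):

Code:
# Minimal sketch: list which SIMD extensions this machine reports.
# Assumes the third-party py-cpuinfo package: pip install py-cpuinfo
from cpuinfo import get_cpu_info

info = get_cpu_info()
flags = set(info.get("flags", []))

# Instruction sets x265 can take advantage of, newest first.
for isa in ("avx512f", "avx2", "avx", "sse4_2", "ssse3", "sse2"):
    print(f"{isa:8s} {'yes' if isa in flags else 'no'}")

print("Brand:", info.get("brand_raw", "unknown"))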

If you replaced the server with something newer, you would not only increase your encode speed, possibly by 2-3x, but also use significantly less power. AFAIK the processor only affects encode speed; it should have nothing to do with quality.
Old 22nd June 2021, 14:24   #3  |  Link
jriker1
Registered User
 
Join Date: Dec 2003
Posts: 485
Thanks. The processors in my server are dual Xeon X5680s at 3.33GHz (24 threads). It has 192GB of RAM, but I don't think ffmpeg uses that much to encode. The workstation I'm using at the moment is an i7-7700K at 4.2GHz, running at 4.6GHz (8 threads), with 32GB of RAM. I was looking to build a custom Threadripper system but keep waiting for the new model to come out. I know it's going to happen; AMD just keeps holding off on releasing one.

Note: I posted my server's stats in that other thread you linked. It looks like it's getting similar x265 timings to some of the higher i7 results. So I'm not sure if that basically means the server running 24 threads will output the video in about the same time as my 7700K with 8 threads, assuming the instruction sets the server doesn't support don't break the conversion. It also looks like Intel does a faster job encoding x265 than AMD, for all of AMD's cores.

Last edited by jriker1; 22nd June 2021 at 20:31.
Old 22nd June 2021, 23:07   #4  |  Link
RanmaCanada
Registered User
 
Join Date: May 2009
Posts: 331
At 2.29 for x265, your server is about the speed of an i7-6700, which is handily beaten by every Ryzen system. My 5800x at 5.67 is more than double your system, at less than half the power.
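
For a rough sense of wall-clock time, here's a back-of-the-envelope sketch (treating those benchmark figures as average encode fps, which is an assumption on my part; real 4K HDR encodes at preset slower will run far slower than the benchmark clip):

Code:
# Back-of-the-envelope: wall-clock encode time from an average encode speed.
# The fps figures are illustrative, taken as the benchmark scores above.
def encode_hours(source_minutes, source_fps, encode_fps):
    frames = source_minutes * 60 * source_fps
    return frames / encode_fps / 3600

for label, fps in (("dual X5680", 2.29), ("5800X", 5.67)):
    print(f"{label}: {encode_hours(120, 24, fps):.1f} h for a 2-hour 24 fps source")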

It appears it is time to replace your old hardware if you want to keep using it for encoding. I know you want to wait for Threadripper, but given the cost, and especially with Zen 4 coming, it is more than possible that the next Threadripper will be a dead end, as current-gen Ryzen is. I would say wait for the refresh with the new 3D stacked cache and replace your system with a 5900X or 5950X. Get a board that supports ECC and go to town. You will be limited to 128GB of RAM, but that should be enough. Or wait till next year when Zen 4 with DDR5(?) comes out.

As it is, your current server is just too old to compete! You should also take a look at the hardware encoding thread, see where Quick Sync is these days, and decide whether you want to go that route instead. As for Intel being ahead, Zen 3 put AMD far ahead of Intel with regard to x265 encoding.
Old 23rd June 2021, 16:17   #5  |  Link
jriker1
Registered User
 
Join Date: Dec 2003
Posts: 485
Quote:
Originally Posted by RanmaCanada View Post
My 5800x at 5.67 is more than double your system, at less than half the power.
I wonder, when you're looking at 6 days to encode a 2-hour video, whether time matters anymore? Or whether getting it down to 3 days does. Funny, I started back in the day with lower-quality video taking weeks, eventually got it down to 1080p content taking 15 hours, and now with 4K HDR10 content I'm back up to almost a week. I could bring my NVIDIA Quadro card into the mix, but I want to ensure quality, and the general read on the net is that HDR metadata is lost when NVENC is put into the mix.

Last edited by jriker1; 23rd June 2021 at 16:20.
Old 23rd June 2021, 18:46   #6  |  Link
RanmaCanada
Registered User
 
Join Date: May 2009
Posts: 331
Quote:
Originally Posted by jriker1 View Post
I wonder, when you're looking at 6 days to encode a 2-hour video, whether time matters anymore? Or whether getting it down to 3 days does. Funny, I started back in the day with lower-quality video taking weeks, eventually got it down to 1080p content taking 15 hours, and now with 4K HDR10 content I'm back up to almost a week. I could bring my NVIDIA Quadro card into the mix, but I want to ensure quality, and the general read on the net is that HDR metadata is lost when NVENC is put into the mix.
Time matters: the faster you can encode something, the more you can encode, and the cheaper it ends up being on newer hardware. Also, on power usage: your processors alone peak at around 350 watts. At 15 hours for 1080p content, what speed preset are you using? Even at slow, my 5800X averages about 14 fps encoding 1080p content. For 4K HDR, I average about 4-6 fps. Unless your power is free, I would strongly suggest you replace the system.

On power usage: just your CPUs, at 350 watts, would cost about $0.90 a day at an average rate of $0.106 per kWh, or about $0.04 an hour. In contrast, while encoding, my whole system draws 186 watts, which would cost me about $0.48 a day, or about $0.02 an hour. So the 4-6 hours it would take me to encode something would cost $0.08-$0.12, while the 15 hours it takes you would cost about $0.60.

Yes, we are talking pennies for these short encodes, but if we go by a week-long encode for you, that comes out to $6.30 for you to encode one 4K item, while it would only cost me anywhere from $0.24-$0.30. I could also encode 7-10 movies in the time it takes you to do one 4K encode, for less than half the price of your single encode. To do the same, it would take you 7-10 weeks and anywhere from $40-$63 in electricity.
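
Just to make the arithmetic reproducible, here's the same estimate as a quick sketch (the wattages, hours, and $/kWh rate are the figures above; the post rounds the hourly and daily rates first, so the totals differ by a few cents):

Code:
# Electricity cost of an encode: watts * hours / 1000 = kWh, times price per kWh.
RATE = 0.106  # USD per kWh, the rate assumed above

def encode_cost(watts, hours, rate=RATE):
    return watts / 1000 * hours * rate

print(f"Old server, 1080p (350 W, 15 h):    ${encode_cost(350, 15):.2f}")
print(f"Old server, 4K HDR (350 W, 1 week): ${encode_cost(350, 7 * 24):.2f}")
print(f"5800X system, 1080p (186 W, 5 h):   ${encode_cost(186, 5):.2f}")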

So time to encode matters when it is costing you not only money, but progress!
Old 24th June 2021, 17:44   #7  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by jriker1 View Post
I wonder, when you're looking at 6 days to encode a 2-hour video, whether time matters anymore? Or whether getting it down to 3 days does. Funny, I started back in the day with lower-quality video taking weeks, eventually got it down to 1080p content taking 15 hours, and now with 4K HDR10 content I'm back up to almost a week. I could bring my NVIDIA Quadro card into the mix, but I want to ensure quality, and the general read on the net is that HDR metadata is lost when NVENC is put into the mix.
Yeah, sometimes I feel like worst-case encoding time is the same and I just get more and better pixels.

My first professional encoding workstation in 1995 was an Apple PowerMac 8100/80, which encoded 320x240p24 Cinepak at about 80:1 real time.

My first HD-DVD encoding in 2006 was about 80:1 real time.

My first 4K HDR in 2015 was about 80:1 real time.

My first 8K HDR in 2020 was about 80:1 real time.

Of course, that 80:1 in 1995 was production speed, and the others were more prototype speed. I don't think I've gone past 20:1 speed in production in the last decade.
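
For context, "N:1 real time" just means the encode takes N times the source's running time; a tiny sketch of what that works out to:

Code:
# "N:1 real time": encode time = source duration * N.
def encode_days(source_hours, ratio):
    return source_hours * ratio / 24

print(f"2-hour feature at 80:1 -> {encode_days(2, 80):.1f} days")
print(f"2-hour feature at 20:1 -> {encode_days(2, 20):.1f} days")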
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 26th June 2021, 01:14   #8  |  Link
asarian
Registered User
 
Join Date: May 2005
Posts: 1,462
On that note, I wish they would bring back OpenCL support for x265. Every bit helps.
__________________
Gorgeous, delicious, deculture!
Old 26th June 2021, 11:52   #9  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,277
Quote:
I wish they would bring back OpenCL support for x265. Every bit helps.
Why 'bring back'? AFAIK, there never was OpenCL usage in x265,... ?
__________________
Hybrid here in the forum, homepage
Old 26th June 2021, 13:14   #10  |  Link
asarian
Registered User
 
Join Date: May 2005
Posts: 1,462
Quote:
Originally Posted by Selur View Post
Why 'bring back'? AFAIK, there never was OpenCL usage in x265,... ?
'Bring back' as in: add support for it the way they did for x264.
__________________
Gorgeous, delicious, deculture!
Old 26th June 2021, 14:31   #11  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,277
Quote:
'Bring back' as in: add support for it the way they did for x264.
Ah okay,... the problem is that in x264 it didn't really help,..
most of the time
a. not using OpenCL was faster
or
b. not using OpenCL produced better results
which is probably the reason why nobody really uses it in x264,... (but to be fair, I haven't used it for years, and maybe with current GPUs it does help,..)
__________________
Hybrid here in the forum, homepage
Old 26th June 2021, 15:38   #12  |  Link
asarian
Registered User
 
Join Date: May 2005
Posts: 1,462
Quote:
Originally Posted by Selur View Post
Ah okay,... the problem is that in x264 it didn't really help,..
most of the time
a. not using OpenCL was faster
or
b. not using OpenCL produced better results
which is probably the reason why nobody really uses it in x264,... (but to be fair, I haven't used it for years, and maybe with current GPUs it does help,..)

Thx. I remember OpenCL theoretically made quality drop a tiny bit. Didn't know it actually made encoding slower too.
__________________
Gorgeous, delicious, deculture!
Old 28th June 2021, 16:23   #13  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,770
Quote:
Originally Posted by asarian View Post
Thx. I remember OpenCL theoretically made quality drop a tiny bit. Didn't know it actually made encoding slower too.
My recollection is that doing an OpenCL lookahead in x265 could reduce total encoding time by ~20% by offloading stuff like frame-type decisions, coarse motion estimation, etc. But it would have taken a lot of work to get that speed without material quality loss, and there was much better low-hanging fruit to tackle in optimization.
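
To put that ~20% in perspective, here's a quick Amdahl's-law sketch (the fractions and the 10x stage speedup are illustrative numbers, not x265 profiling data): offloading a stage can never save more than the share of total time that stage occupies.

Code:
# Amdahl's law: overall speedup when one pipeline stage is made s times faster.
# f = fraction of total encode time spent in that stage. Illustrative numbers only.
def overall_speedup(f, s):
    return 1 / ((1 - f) + f / s)

for f in (0.10, 0.25, 0.40):
    saved = 1 - 1 / overall_speedup(f, 10)  # stage made 10x faster on the GPU
    print(f"lookahead = {f:.0%} of time -> {saved:.0%} of total time saved")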

Also, if a HW-accelerated lookahead pass is desired, a CPU's built-in hardware encoder can be used to generate analysis-pass data even faster and with less overhead than OpenCL, and even from an H.264/AVC encode.

That stuff requires API use. Check out --refine-mv-type and similar.

The great promise of "GPU encoding" was really from an era with few cores and weak SIMD, where a GPU's large number of vector processing units was very appealing. When we can fit 64 cores with a fast AVX2 implementation into a single socket, there's huge vector processing power right next to the L3 cache. Sure, a GPU has more, but GPUs are also a lot less flexible when it comes to the branching logic needed for all the mode decisions and early exits in complex modern codecs with So Many Tools to choose between. The latency between CPU and GPU becomes a much bigger bottleneck than the GPU's potential benefits.

GPU is great for waterfall processes with one correct result, like source decode and preprocessing. But the only tech that shows potential to offer better speed at similar quality to a CPU encode is perhaps FPGA.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book