Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 24th November 2024, 17:50   #20961  |  Link
rlev11
Registered User
 
Join Date: Aug 2020
Location: Pennsylvania
Posts: 133
Quote:
Originally Posted by Ryushin View Post
When I got my 7950X, I ended up underclocking it. I found that the extra 5% of encoding speed to reach 100% required twice the power. So my encoding speed is 95% of what the CPU is capable of, while using half the power and staying under 70 °C when encoding 4K. Not bad, to extend the life of the processor and use less power.

I have 40 kWh of battery storage for my solar system and I can keep encoding through the night just fine this way.
Back when I first got my 5950X, before I put an NH-D15 cooler on it, all I did was change the maximum processor state in the Windows power plan from 100% to 99%. This stops the CPU from entering its boost clocks, so it just runs at base speed. I don't remember exactly how much, but it dropped the package wattage by quite a bit and, like you, I only noticed a slight drop in performance. It might be a quick way to test, short of switching the BIOS to Eco mode. Back then I made desktop shortcuts to power plans that let me switch between 100% and 99% on the fly for when I was encoding. It might be something I look into again, just to see how much of a performance drop I really see with the whole farm encoding.

In fact, yesterday I tried setting Eco mode with Ryzen Master as a test, and ended up having to pull the CMOS battery and reset the BIOS after Ryzen Master rebooted and nothing came up. Screwed it all up, so I gave up on that idea.
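The power-plan toggle described above can also be scripted instead of clicking through plans. A rough sketch, Windows-only, run from an elevated prompt; the scheme aliases below are standard powercfg aliases, but verify them against `powercfg /query` on your own system:

```shell
:: Cap the maximum processor state at 99%, which keeps many CPUs out of
:: their boost states, then re-apply the active scheme so it takes effect.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 99
powercfg /setactive SCHEME_CURRENT

:: Restore the full 100% state when the encode is finished.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100
powercfg /setactive SCHEME_CURRENT
```

Saved as two .bat files, these give the same on-the-fly switch as the power-plan desktop shortcuts.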
Old 25th November 2024, 09:15   #20962  |  Link
Wishbringer
Silent Reader
 
Join Date: Dec 2003
Location: Germany
Posts: 306
For example:
5950X with 99% power plan = 3.4 GHz on all cores (stock clock)
5950X with 100% power plan = 4.25 GHz on all cores (the value differs per processor)
So you get around 25% higher clock.
If power consumption is only 25% higher too, there is no impact on performance per watt (IPW); otherwise, lower clocks at much lower wattage give a higher IPW.
Encoding is slower but uses less power (cheaper) overall.

In the BIOS there are settings for THM, TDC, EDC and PPT to play around with.
And with Curve Optimizer you can reduce wattage too (or increase clock at stock wattage).
My per-core values range from -17 to -24 with stable encoding.
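The IPW point above can be put into numbers. A minimal sketch: the clocks are the ones quoted, but the package wattages are illustrative assumptions (substitute your own HWMonitor readings), and throughput is assumed to scale linearly with clock:

```python
# Rough performance-per-watt ("IPW") comparison. Clocks are the ones
# quoted above; the package wattages are ASSUMED illustrative figures,
# not measurements -- substitute your own HWMonitor readings.
def perf_per_watt(clock_ghz, watts):
    # Crude proxy: assume encoder throughput scales linearly with clock.
    return clock_ghz / watts

base  = perf_per_watt(3.40, 110.0)   # 99% plan: stock clock, lower package power
boost = perf_per_watt(4.25, 220.0)   # 100% plan: boost clock, ~double the power

print(f"base : {base:.4f} GHz/W")
print(f"boost: {boost:.4f} GHz/W")
print(f"running at base clock is {base / boost:.2f}x the efficiency")
```

With these assumed wattages, the base clock comes out 1.6x more efficient per watt, which is the shape of the trade-off being described, even if the exact numbers vary per chip.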
Old 26th November 2024, 13:59   #20963  |  Link
Ryushin
Registered User
 
Join Date: Mar 2011
Posts: 440
Quote:
Originally Posted by rlev11 View Post
Back when I first got my 5950X, before I put an NH-D15 cooler on it, all I did was change the maximum processor state in the Windows power plan from 100% to 99%. This stops the CPU from entering its boost clocks, so it just runs at base speed. I don't remember exactly how much, but it dropped the package wattage by quite a bit and, like you, I only noticed a slight drop in performance. It might be a quick way to test, short of switching the BIOS to Eco mode. Back then I made desktop shortcuts to power plans that let me switch between 100% and 99% on the fly for when I was encoding. It might be something I look into again, just to see how much of a performance drop I really see with the whole farm encoding.
I was using Ryzen Master to re-apply the change manually after every reboot for a few weeks. Once I was sure it was stable, I changed it permanently in the EFI/BIOS. I did not know Windows itself had an option to drop the processor speed; I'll need to look into that.

Quote:
Originally Posted by Wishbringer View Post
For example:
5950X with 99% power plan = 3.4 GHz on all cores (stock clock)
5950X with 100% power plan = 4.25 GHz on all cores (the value differs per processor)
So you get around 25% higher clock.
If power consumption is only 25% higher too, there is no impact on performance per watt (IPW); otherwise, lower clocks at much lower wattage give a higher IPW.
Encoding is slower but uses less power (cheaper) overall.

In the BIOS there are settings for THM, TDC, EDC and PPT to play around with.
And with Curve Optimizer you can reduce wattage too (or increase clock at stock wattage).
My per-core values range from -17 to -24 with stable encoding.
At least for me, that last 5% performance boost literally consumed 100% more power, so it was worth it to me to underclock the processor.
Old 26th November 2024, 18:20   #20964  |  Link
chainring
Registered User
 
Join Date: Sep 2006
Posts: 179
Before I set the Windows Balanced power plan to a 99% CPU maximum, my girlfriend would remark that the four machines participating in distributed encoding sounded like a vortex. They also pumped out quite a bit of heat. Now they're silent, cooler, and still get plenty of FPS.
Old 27th November 2024, 20:34   #20965  |  Link
stryker412
Registered User
 
Join Date: Feb 2002
Posts: 162
Quote:
Originally Posted by rlev11 View Post
This is the total package power as reported by CPUID HWMonitor while each server is in the middle of a chunk of a 4K encode. The only thing I do on the Ryzens is set a thermal limit in the PBO section of the BIOS: 85 °C for the 9000 and 7000 series, 75 °C for the 5000 and 3000 series.

First number is FPS, then total package watts and core/IA core watts:

CPU         FPS     Package W   Core W
9950X       20.02   200         146
7950X       18.30   190         144
7900X       17.37   170         125
7700X       11.26   128          95
5950X       11.15   147         104
5900X        9.07   148          95
3950X        9.43   112          85
3900X        9.78   130         101
i7-14700    13.98   175         170
i5-14500    12.44   133         130
i5-13500    12.64   119         117
i5-12600K    9.59   115         112
How do you think the 5700X or 5700X3D might do? I've had a few people suggest I just upgrade my 3700X to the 5700X and call it a day without having to do a full build.

Last edited by stryker412; 27th November 2024 at 20:36.
Old 27th November 2024, 21:08   #20966  |  Link
rlev11
Registered User
 
Join Date: Aug 2020
Location: Pennsylvania
Posts: 133
Quote:
Originally Posted by stryker412 View Post
How do you think the 5700X or 5700X3D might do? I've had a few people suggest I just upgrade my 3700X to the 5700X and call it a day without having to do a full build.
Seeing that there is very little difference between the 3900X and the 5900X, I doubt there would be much difference between the 3700X and 5700X in encoding performance. Remember that encoding depends a lot on core and thread count, so maybe go from the 3700X to a 5900X or 5950X to get more cores. You will have to account for additional cooling though, either an AIO or a Noctua NH-D15, which also brings the case into play: whether it can fit an AIO or the Noctua's 165 mm cooler height.

Last edited by rlev11; 27th November 2024 at 22:30.
Old 27th November 2024, 21:22   #20967  |  Link
rlev11
Registered User
 
Join Date: Aug 2020
Location: Pennsylvania
Posts: 133
Since we have been talking about power usage lately, I found something interesting. Instead of changing the maximum CPU state in the power plan to 99%, leave it at 100% and instead open regedit, go to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00\be337238-0d82-4146-a960-4f3749d470c7 and set the "Attributes" DWORD to "2".

Now there is a new setting under advanced power settings (in the legacy Control Panel power options): "Processor performance boost mode". Set it to "Disabled" and CPU power drops considerably.
This works for both AMD and Intel CPUs and forces everything to run at the base clock speed (setting the maximum state to 99% didn't do anything for me on my Intel systems).
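For what it's worth, the same boost-mode setting can also be flipped from an elevated command prompt without the registry edit, using powercfg's alias for it. A sketch; confirm the alias with `powercfg /query` on your own system:

```shell
:: "Processor performance boost mode": 0 = Disabled, 2 = Aggressive (the default).
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PERFBOOSTMODE 0
powercfg /setactive SCHEME_CURRENT
```

Set the value back to 2 to re-enable boost.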

I have found on my AMD boxes there is little performance difference with a considerable power decrease. On the Intel systems, which have a much lower base clock (a larger spread from base to boost speed), it hampers performance a bit more. On my i7-14700, disabling boost dropped the P-cores to 2 GHz and the E-cores to 1.5 GHz, but CPU power dropped from 175 watts to 35 watts, losing about 30-40% performance.

Just something else to consider if looking at lowering power usage quickly without messing with BIOS settings.

If I get some time this weekend, I'll run some numbers comparing the same video/chunk for each machine both boost enabled/disabled and come up with a fps versus watts used and see where things shake out.

Last edited by rlev11; 27th November 2024 at 21:59.
Old 28th November 2024, 02:05   #20968  |  Link
Retired@55
Not new to this...
 
Join Date: Nov 2024
Posts: 12
Why do people buy multi-core CPUs, then hobble them just to save some energy/money??

Why not buy a lesser CPU and run it at 100%? What's the point in having a 16-core CPU that is hobbled down to the equivalent of a 12-core??
Old 29th November 2024, 16:24   #20969  |  Link
Ryushin
Registered User
 
Join Date: Mar 2011
Posts: 440
Quote:
Originally Posted by Retired@55 View Post
Why do people buy multi-core CPUs, then hobble them just to save some energy/money??

Why not buy a lesser CPU and run it at 100%? What's the point in having a 16-core CPU that is hobbled down to the equivalent of a 12-core??
The upgraded processor still processes more FPS. Lowering the max speed by 5% results in a processor that is more efficient per clock cycle and, more importantly, it extends the life of the processor by not pushing it to its thermal limits for days/weeks on end.

And as someone else said, their machines won't sound like a jet turbine just to gain another 5% performance at double the power usage, most likely killing the processor sooner rather than later.

Last edited by Ryushin; 30th November 2024 at 14:09.
Old 30th November 2024, 02:01   #20970  |  Link
Retired@55
Not new to this...
 
Join Date: Nov 2024
Posts: 12
Quote:
Originally Posted by Ryushin View Post
The upgraded processor still processes more FPS. Lowering the max speed by 5% results in a processor that is more efficient per clock cycle and, more importantly, it extends the life of the processor by not pushing it to its thermal limits for days/weeks on end.

And as someone else said, their machines won't sound like a jet turbine just to gain another 5% performance at double the power usage, most likely killing the processor sooner rather than later.
Well, clearly everyone has a different opinion on this subject, as expected.

You run your system as you see fit, and I will too.

AFAIK, current-gen CPUs are designed to run hot and fast, and really good cooling is almost mandatory, but by the time they have degraded to an unusable level it's probably time to upgrade anyway!!

But I doubt there are too many users that run their system(s) for days, let alone weeks, at a time, and those that do should probably be running server-grade systems, NOT desktops.
Old 2nd December 2024, 13:22   #20971  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 465
Quote:
Originally Posted by rlev11 View Post
This is the total package power as reported by CPUID HWMonitor while each server is in the middle of a chunk of a 4K encode. The only thing I do on the Ryzens is set a thermal limit in the PBO section of the BIOS: 85 °C for the 9000 and 7000 series, 75 °C for the 5000 and 3000 series.

First number is FPS, then total package watts and core/IA core watts:

CPU         FPS     Package W   Core W
9950X       20.02   200         146
7950X       18.30   190         144
7900X       17.37   170         125
7700X       11.26   128          95
5950X       11.15   147         104
5900X        9.07   148          95
3950X        9.43   112          85
3900X        9.78   130         101
i7-14700    13.98   175         170
i5-14500    12.44   133         130
i5-13500    12.64   119         117
i5-12600K    9.59   115         112
Thanks, this is really helpful. Without doing the math, at first glance the 7900X looks like the performance-per-watt winner.
Old 5th December 2024, 12:59   #20972  |  Link
hardkhora
Registered User
 
Join Date: Mar 2014
Posts: 8
Is there a way to force files shorter than 60 seconds to still be encoded distributively?
The problem I'm having is that my main RipBot controlling computer, which I don't usually encode with but like to use as the central server, has an iGPU (UHD 770).
Any video where KNLMeansCL is used gets extra blurry, with halos around the edges of things, when encoding on the UHD 770; with KNLMeansCL off, or when I use any of my other computers with discrete GPUs, I don't have that issue.
Example of the issue:
https://imgur.com/a/X09dQrh

Last edited by hardkhora; 5th December 2024 at 13:14.
Old 5th December 2024, 14:39   #20973  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 465
Quote:
Originally Posted by hardkhora View Post
Is there a way to force files shorter than 60 seconds to still be encoded distributively?
The problem I'm having is that my main RipBot controlling computer, which I don't usually encode with but like to use as the central server, has an iGPU (UHD 770).
Any video where KNLMeansCL is used gets extra blurry, with halos around the edges of things, when encoding on the UHD 770; with KNLMeansCL off, or when I use any of my other computers with discrete GPUs, I don't have that issue.
Example of the issue:
https://imgur.com/a/X09dQrh

Do you really need NLMeans noise reduction on this footage? It looks fine to me... I usually use a CPU degrainer on anime; NLMeans rarely does a better job.
Old 5th December 2024, 18:18   #20974  |  Link
hardkhora
Registered User
 
Join Date: Mar 2014
Posts: 8
Quote:
Originally Posted by ReinerSchweinlin View Post
Do you really need NLMeans noise reduction on this footage? It looks fine to me... I usually use a CPU degrainer on anime; NLMeans rarely does a better job.
I agree, I don't need it for that footage; that was just to illustrate the problem.
I was trying to find out whether there is a way to force files shorter than 60 seconds to still be encoded distributively.

I also tried updating the iGPU driver and that didn't make a difference.
Old 5th December 2024, 19:28   #20975  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 465
Quote:
Originally Posted by hardkhora View Post
I agree, I don't need it for that footage; that was just to illustrate the problem.
I was trying to find out whether there is a way to force files shorter than 60 seconds to still be encoded distributively.

I also tried updating the iGPU driver and that didn't make a difference.
Hm, I just looked up the chunk size (guess you tried that too) and found 1 minute as the smallest chunk...
If you don't run an encoding server on the master PC, shouldn't it pass the one chunk to one of the network encoders? Have you tried that?
Old 5th December 2024, 20:49   #20976  |  Link
hardkhora
Registered User
 
Join Date: Mar 2014
Posts: 8
Quote:
Originally Posted by ReinerSchweinlin View Post
Hm, I just looked up the chunk size (guess you tried that too) and found 1 minute as the smallest chunk...
If you don't run an encoding server on the master PC, shouldn't it pass the one chunk to one of the network encoders? Have you tried that?
When the file can't be chunked (it is below 60 seconds), the process bypasses distributed encoding and encodes locally via the main RipBot window (the same as if you turned distributed encoding off altogether). So even if there is no local encoding server, it still encodes locally.
I tried setting the chunk size to 0, but it still won't pass the job to the encoding servers as I'd expect.
Old 5th December 2024, 22:32   #20977  |  Link
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 465
Ah, OK... sorry, I can't help then.
Old Yesterday, 05:53   #20978  |  Link
Retired@55
Not new to this...
 
Join Date: Nov 2024
Posts: 12
Quote:
Originally Posted by ReinerSchweinlin View Post
Do you really need NLMeans noise reduction on this footage? It looks fine to me... I usually use a CPU degrainer on anime; NLMeans rarely does a better job.
RB hasn't got NLMeans per se, and KNLMeansCL is only good on certain video types and requires a good AMD or Nvidia GPU, AFAIK.

And by CPU degraining, you must be referring to good old MDegrain?

You should be using SMDegrain; it's just better and faster in most cases.
Old Today, 08:31   #20979  |  Link
bar72
Registered User
 
Join Date: Oct 2024
Location: Scotland
Posts: 3
Quote:
Originally Posted by Retired@55 View Post
Why do ppl buy multi core CPU's, then hobble them just to save some energy / money ??

Why not buy a lesser CPU, and run it at 100%, what's the point in having a 16C CPU, that is hobbled down to be equivalent to a 12C ??
It's a bit like having a 2.0-litre instead of a 1.6-litre diesel engine in my car.

It doesn't have to work as hard to sit at the speed limit, which puts less strain on the engine overall.
Old Today, 19:58   #20980  |  Link
rlev11
Registered User
 
Join Date: Aug 2020
Location: Pennsylvania
Posts: 133
So I got a chance to do a full test of FPS and wattage across my entire encoding farm with CPU boost enabled and disabled. I did one run with a 4K source (3840x1600, light degraining) and one with a 1920x1080 source with heavy degraining, just to see how they compare. The link below is the spreadsheet with the results. For each run I took the FPS from the distributed-encoding window, and for each CPU it was the same chunk number, so the comparisons match up. I also timed each run and saw about a 10-12% increase in total encoding time without boost, which is close to the FPS difference, along with about a 48% drop in combined power draw.
My math may be fuzzy on the end result, but it looks like it is much more cost-efficient to run at least the AMD processors with boost disabled, since their spread is minimal. Using my raw numbers: if an encode runs for 1 hour at full power, I use approximately 106,380 watt-minutes (about 1.77 kWh). With boost off, taking 15% longer (69 minutes), the same encode uses about 63,135 watt-minutes (roughly 1.05 kWh).

I am not advocating anything with this; I was just curious about the difference and how it all fits together. And, as others have mentioned, your computers don't sound like a turboprop taking off.
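The arithmetic above can be sketched out. The 1,773 W and 915 W figures below are the average farm draws implied by the post's totals (106,380 and 63,135 divided by the run lengths), not separate measurements:

```python
# Energy used by the farm for one encode, in watt-minutes
# (average watts x minutes; divide by 60 for watt-hours).
def energy_wmin(avg_watts, minutes):
    return avg_watts * minutes

boost_on  = energy_wmin(1773, 60)   # full boost, 1-hour encode
boost_off = energy_wmin(915, 69)    # boost off, ~15% longer encode

print(f"boost on : {boost_on} W-min ({boost_on / 60:.0f} Wh)")
print(f"boost off: {boost_off} W-min ({boost_off / 60:.0f} Wh)")
print(f"energy saved: {1 - boost_off / boost_on:.0%}")
```

Note the distinction this makes visible: instantaneous draw falls ~48%, but because the encode runs longer, the total energy per encode falls by about 41%.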

https://docs.google.com/spreadsheets...#gid=983798900

Last edited by rlev11; Today at 20:03.