30th August 2022, 18:37
benwaggoner
Quote:
Originally Posted by rwill
Are we looking at the same graphs?

If I am looking at the ones at 11:50, I think: well, it looks like Intel's GPU is getting beaten by a large margin compared to the Intel software encoder set to realtime mode. It also looks like H.264 cannot keep up at 3500k due to higher bitstream overhead, but this normalizes out somewhat at 6000k. Only 3 sample points for a graph is suboptimal, too; I'd have used 5+.
Yeah, the graph looks impressive, but its utility goes down the more one tries to extract applicable information from it.

Quote:
Also keep in mind that H.264 is 20 years old, and 1080p@60 Hz at 3500k comes out to ~1600k at 24 Hz for non-motion-blurred content.
Bitrate increases less than linearly with frame rate, just as it does with frame size. At a higher frame rate, less happens between frames, so individual frame predictions are more accurate. Any visual defect is also on screen for less time and thus less noticeable. And because IDR placement is generally specified in seconds rather than frames, higher frame rates mean IDR frames make up a smaller share of the stream, which further improves efficiency.
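As a rough worked example (a sketch with assumed numbers, not tied to any specific encoder settings), here's how the IDR share falls as frame rate rises when the keyframe interval is fixed in seconds:

Code:
# Hypothetical sketch: fraction of frames that are IDR frames when the
# keyframe interval is specified in seconds (the 2 s default is an assumption).
def idr_fraction(fps, idr_interval_seconds=2.0):
    frames_per_idr = fps * idr_interval_seconds
    return 1.0 / frames_per_idr

for fps in (24, 30, 60):
    print(f"{fps} fps: {idr_fraction(fps):.2%} of frames are IDR")
# 24 fps: 2.08%, 30 fps: 1.67%, 60 fps: 0.83%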

Motion blur is a whole other matter. 60p material tends to have a 1/60th-of-a-second shutter at the slowest, and it can be much, much faster for daylight shoots. 24p, except for cel animation, has almost always used a 1/48th-of-a-second shutter. Generally, more motion blur is helpful for encoding, although there are complexities that sometimes confound that.
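For reference, those shutter times fall out of the standard shutter-angle relationship; a minimal sketch (the 180-degree and 360-degree angles below are illustrative assumptions):

Code:
# Sketch: exposure (shutter) time from frame rate and shutter angle.
def shutter_time(fps, shutter_angle_degrees):
    return (shutter_angle_degrees / 360.0) / fps

print(1 / shutter_time(24, 180))  # 48.0  -> the classic 1/48 s film shutter
print(1 / shutter_time(60, 360))  # 60.0  -> 1/60 s, the slowest typical 60p case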

My rule of thumb is that doubling the frame rate requires a 20-40% increase in bitrate, depending on the content. If it's the same content (like encoding a 60p source at both 60p and 30p), it's at the lower end of that range.
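Expressed as a quick calculation (a hypothetical helper, nothing standardized; the 30% default is just the midpoint of that 20-40% range):

Code:
import math

# Sketch of the rule of thumb above: each doubling of frame rate adds
# roughly 20-40% bitrate, applied per octave of frame-rate change.
def scaled_bitrate_kbps(base_kbps, base_fps, target_fps, per_doubling=0.30):
    octaves = math.log2(target_fps / base_fps)
    return base_kbps * (1.0 + per_doubling) ** octaves

# e.g. a 3500 kbps 30p encode taken to 60p with a 30% per-doubling bump:
print(round(scaled_bitrate_kbps(3500, 30, 60)))  # 4550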
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book