1st September 2021, 08:35   #21
rwill
Registered User
 
Join Date: Dec 2013
Posts: 343
Quote:
Originally Posted by Blue_MiSfit View Post
Any suggestions on tuning ffmpeg's mpeg-2 encoder or x262 for quality? I'm happy to burn as much CPU time as possible!

I'm targeting 6 Mbps for 720p59.94. Aggressive? ... Yes
I could pitch y262 here but I'd rather not.
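
For reference, ffmpeg's built-in mpeg2video encoder does expose a handful of quality knobs. A rough, untested sketch of the sort of thing to start from (file names are placeholders, and the bitrate/VBV numbers are just guesses around the 6 Mbps target in the question, not a validated recipe):
Code:
# Rough, untested sketch of tuning ffmpeg's built-in mpeg2video encoder.
# "input.mov"/"output.m2v" are placeholders; bitrate/VBV values just follow
# the 6 Mbps question above and are not a validated recipe.
import subprocess

cmd = [
    "ffmpeg", "-i", "input.mov",
    "-c:v", "mpeg2video",
    "-b:v", "6M", "-maxrate", "9M", "-bufsize", "4M",   # VBV numbers are guesses
    "-dc", "10",                        # higher intra DC precision
    "-intra_vlc", "1",                  # alternative intra VLC table
    "-mbd", "rd",                       # rate-distortion macroblock decision
    "-trellis", "2",                    # trellis quantization
    "-cmp", "satd", "-subcmp", "satd",  # SATD comparison for motion estimation
    "-g", "60", "-bf", "2",             # ~1 s GOP at 59.94 fps, 2 B-frames
    "output.m2v",
]
subprocess.run(cmd, check=True)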
2nd September 2021, 19:25   #22
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,750
Quote:
Originally Posted by pandy View Post
Don't get me wrong, but dither like FS is nothing but stress for the encoder (and it is usually filtered out if possible).

Also, FS's temporal characteristics make this kind of dither impossible to compress.
From my perspective FS is like noise, i.e. suboptimal from both a spatial and a temporal perspective.
Perhaps I'm a bit naive, but from my perspective residue is noise, and noise is extremely difficult to compress, especially by a lossy encoder...
You are totally correct. It's generally a desired property of dithering to look like noise rather than have an obvious repeated pattern to it, like older techniques yielded.

A key thing is the strength of the dither. With xscaler, about 0.5 worked pretty well to prevent banding without adding too much high-entropy noise that is harder to compress. Different kinds of content benefit from different strengths. Lots of film grain can make dither unneeded, while clean, noise-free CGI may need stronger dithering, since banding can become extra obvious. Anime can benefit from more adaptive techniques: dithering flat areas is undesired, but gradients need it, and thresholding with a binary "dither/don't dither" mask instead of variable strength would be a big problem.
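
To make the variable-strength idea concrete, here is a minimal sketch (nothing to do with xscaler's actual implementation, and the gradient knees are arbitrary): dither strength becomes a per-pixel map that stays near zero in flat areas, rises on gentle gradients, and fades back out on hard edges, instead of a binary mask.
Code:
# Minimal sketch (not from xscaler): quantize 16-bit to 8-bit with plain
# white-noise dither whose amplitude is a per-pixel strength map rather than
# a binary dither/don't-dither mask. All knee values are arbitrary.
import numpy as np

def dither_to_8bit(plane16, strength):
    """plane16: uint16 plane; strength: scalar or per-pixel array, in units of
    one 8-bit quantization step (0.5 ~= half an LSB of noise)."""
    noise = (np.random.rand(*plane16.shape) - 0.5) * strength
    out = np.round(plane16 / 257.0 + noise)          # 65535 / 255 = 257
    return np.clip(out, 0, 255).astype(np.uint8)

def adaptive_strength(plane16, max_strength=0.5):
    """Toy anime-style strength map: no dither on flat areas, full strength on
    gentle gradients, fading out again on hard edges (thresholds in 16-bit units)."""
    gy, gx = np.gradient(plane16.astype(np.float32))
    grad = np.abs(gx) + np.abs(gy)
    ramp_up = np.clip(grad / 64.0, 0.0, 1.0)                # flat -> gentle ramp
    ramp_down = np.clip((1024.0 - grad) / 512.0, 0.0, 1.0)  # fade out past hard edges
    return max_strength * ramp_up * ramp_down
dither_to_8bit(y16, 0.5) would be the plain global-strength case, and dither_to_8bit(y16, adaptive_strength(y16)) the anime-style adaptive one.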

Quote:
Also, instead of FS, some refined ordered dither such as Ulichney's could be better, at least from a temporal perspective.
Because you'd get better temporal matches? Definitely true for an animated GIF, but I've not seen it proven out for block-based motion-compensated codecs. It's tricky to separate the similar high-frequency AC coefficients while having quite different lower-frequency AC coefficients.
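
For reference, this is the property being claimed: with an ordered dither the threshold is a pure function of pixel position, so a static region dithers bit-identically every frame, whereas error diffusion reshuffles its pattern whenever the input changes at all. A small sketch (16-bit in, 8-bit out, classic 4x4 Bayer rather than Ulichney's refinements):
Code:
# Ordered (Bayer) dither sketch: the threshold depends only on (x, y), so
# static areas dither identically frame after frame (zero temporal residual).
import numpy as np

# Classic 4x4 Bayer matrix, turned into threshold offsets in (-0.5, 0.5).
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]], dtype=np.float32) + 0.5) / 16.0 - 0.5

def ordered_dither_to_8bit(plane16):
    h, w = plane16.shape
    thresholds = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    out = np.floor(plane16 / 257.0 + 0.5 + thresholds)   # position-dependent rounding
    return np.clip(out, 0, 255).astype(np.uint8)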

Quote:
It depends on what your goal is - if "objective", then definitely yes, apples shall always be compared with apples - if subjective, then as long as you define the area properly, apples can be compared with, for example, pineapples.
And always - you need to clearly define the goal and methodology.
A good test would be to do a matrix of different dithering techniques with different encoders. Some encoder products have built-in dithering that could be compared as well (x265 has a basic mode, and a more advanced mode triggered by --dither).
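
As a starting point for the dithering axis of that matrix, the zimg resizer exposed through VapourSynth already covers four techniques in one place. A sketch, assuming a 16-bit YUV clip is already loaded (the encoder loop and the metric are left out, and x265's own --dither would be one more column):
Code:
# Sketch of the dithering axis of such a test matrix, via VapourSynth's
# zimg-backed resizer. Assumes "clip16" (16-bit YUV) is already loaded;
# encoders and metrics would then be looped over these variants.
import vapoursynth as vs
core = vs.core

DITHER_TYPES = ["none", "ordered", "random", "error_diffusion"]

def dither_variants(clip16):
    # One 8-bit variant per zimg dither type, each to feed to every encoder under test.
    return {d: core.resize.Point(clip16, format=vs.YUV420P8, dither_type=d)
            for d in DITHER_TYPES}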

Quote:
Dithering is unavoidable - whenever quantization is involved, dithering (ideally with psychovisually matched noise shaping/error shaping) is mandatory. 10 bit solves some problems - of course modern display technology is quickly reaching a level where, for the average consumer, this will be enough, but human eyes are capable of far more than 10 bits (depending on conditions and context, somewhere between 12 and 14 bits).
This is the same as ultra-high frame rates - 300...600 frames per second will be key to full immersion...
Yes, dithering is always needed when converting between color spaces. But the visible and compression impact of dithering is far greater in SDR 8-bit than in SDR 10-bit, and especially PQ 10-bit.
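
Just to put rough numbers on that, here is a quick check comparing the relative luminance jump between adjacent code values for 10-bit PQ (SMPTE ST 2084 EOTF) against 8-bit with a simplified pure 2.4 power-law gamma. The chosen code values are arbitrary operating points, so swap in others to probe different parts of the range:
Code:
# Relative luminance step per code value: 10-bit PQ (ST 2084) vs 8-bit with a
# simplified pure power-law gamma 2.4 (black level ignored). Smaller relative
# steps mean banding is less visible and dither matters less.
import numpy as np

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(e):                      # normalized code (0..1) -> cd/m2
    p = np.power(e, 1 / M2)
    return 10000.0 * np.power(np.maximum(p - C1, 0) / (C2 - C3 * p), 1 / M1)

def step_ratio(codes, eotf):
    v = eotf(np.array(codes, dtype=np.float64))
    return (v[1] - v[0]) / v[0]      # Weber fraction of one code step

print(step_ratio([400 / 1023, 401 / 1023], pq_eotf))          # 10-bit PQ
print(step_ratio([100 / 255, 101 / 255], lambda e: e ** 2.4))  # 8-bit gamma 2.4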

Quote:
But my question was triggered by FS dithering - from my experience, FS dither raises QP dramatically and literally steals bits from video details.
It is a tradeoff, reducing banding at the cost of somewhat higher QPs. For lots of content and scenarios, it's a good tradeoff.

Truncation can also increase QP and ringing, since it can leave sharper edges that aren't DCT-friendly.

An optimal ditherer would probably have different strengths at different luma levels, and even be in-loop with the encoder so it could dynamically adjust how dithering is done relative to QPs.
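
Just to illustrate the shape of that idea (the curve is made up, it's not any shipping ditherer): strength as a function of luma and the encoder's QP, stronger in the shadows where banding is most visible and tapering off at high QPs where the dither would just get crushed anyway.
Code:
# Toy illustration only: dither strength as a function of luma level and QP.
# The curve is invented; the point is that strength need not be one global constant.
import numpy as np

def dither_strength(luma8, qp):
    """luma8: uint8 luma plane; qp: encoder QP for the frame or region.
    Returns a per-pixel strength map in units of one quantization step."""
    y = luma8.astype(np.float32) / 255.0
    base = 0.5 * np.sqrt(1.0 - y)                          # stronger in the shadows
    qp_fade = np.clip(1.0 - (qp - 28.0) / 20.0, 0.0, 1.0)  # taper off as QP climbs
    return base * qp_fade
A map like this could plug straight into a variable-strength quantizer like the one sketched above.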
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
18th September 2021, 14:38   #23
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
FS plus a bit of noise was my choice for high-bitrate Blu-rays. Anything below 30 Mbit would almost destroy your dithering efforts anyway. With today's 10-bit pipelines it's not much needed though.
18th September 2021, 21:32   #24
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Quote:
Originally Posted by kolak View Post
With today's 10-bit pipelines it's not much needed though.
Yeah, well, generally post-processing like LUT conversion and other kinds of filtering is done in 16 bit anyway, so we still kinda need something to go back to 10 bit, as that's how we go out to our viewers for UHD material, and Floyd-Steinberg is still very much needed eheheheh
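
For anyone curious what that 16-bit to 10-bit Floyd-Steinberg step actually boils down to, here is a bare-bones sketch of the error diffusion itself (pure Python loops, far too slow for real use; in practice the error_diffusion dither in zimg/VapourSynth or the grading tool does this for you):
Code:
# Bare-bones Floyd-Steinberg 16-bit -> 10-bit, just to show the error diffusion.
# Pure Python loops: illustrative only, far too slow for production use.
import numpy as np

def fs_16_to_10(plane16):
    buf = plane16.astype(np.float32) * (1023.0 / 65535.0)
    h, w = buf.shape
    out = np.empty((h, w), dtype=np.uint16)
    for y in range(h):
        for x in range(w):
            old = buf[y, x]
            new = min(1023, max(0, int(old + 0.5)))   # nearest 10-bit code
            out[y, x] = new
            err = old - new                           # push the error to neighbours
            if x + 1 < w:
                buf[y, x + 1] += err * (7.0 / 16.0)
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * (3.0 / 16.0)
                buf[y + 1, x] += err * (5.0 / 16.0)
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * (1.0 / 16.0)
    return out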

About FULL HD... well... In Italy we're still stuck with XDCAM-50 MPEG-2 8-bit, so...

(that's my face every time I have to encode in MPEG-2 in 2021)
18th September 2021, 21:35   #25
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
This is not just an Italy problem. Probably way more than half of the broadcast industry is still XDCAM based.
I would never even bother doing any dithering for broadcast anyway.
19th September 2021, 09:27   #26
Blue_MiSfit
Derek Prestegard IRL
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
LOL yep, who cares? They're going to just uplink with some craptastic live encoder that's probably not even set up in a reasonable way (either due to ignorance or negligence). Broadcast is brutal when it comes to video quality.
__________________
These are all my personal statements, not those of my employer :)
20th September 2021, 13:58   #27
FranceBB
Broadcast Encoder
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Quote:
Originally Posted by Blue_MiSfit View Post
LOL yep, who cares? They're going to just uplink with some craptastic live encoder that's probably not even set up in a reasonable way (either due to ignorance or negligence). Broadcast is brutal when it comes to video quality.
Well, even if the mezzanine is MPEG-2 at 50 Mbit/s, when it goes to the live H.264 hardware encoder it has a 4 s delay to encode the stream live (which can be raised to a maximum of 20 s for added complexity if it's absolutely necessary). The final H.264 25i yv12 FULL HD .ts stream is 12 Mbit/s over here.

The UHD one instead is XAVC Intra Class 300 10-bit 50p at 500 Mbit/s, which is encoded live by a hardware encoder to H.265 10-bit at 25 Mbit/s.

I mean, it's not such terrible quality, to be fair, especially considering that all of this is live. (Of course VOD has more bitrate and, most importantly, better, more complex encoding, but satellite is still very much alive and kicking eheheheh).
20th September 2021, 23:20   #28
Blue_MiSfit
Derek Prestegard IRL
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
How nice that your facility has reasonable bit budgets and probably sensibly configured hardware!
__________________
These are all my personal statements, not those of my employer :)
16th August 2023, 06:17   #29
Lyris
Registered User
 
Join Date: Sep 2007
Location: Europe
Posts: 602
This thread makes me want to benchmark all the available MPEG-2 options for the rare times I have to make a DVD. I'm still using CCE SP3 for this, even now, although the last time I compared encoders was in the late 2000s.
22nd August 2023, 19:20   #30
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,750
Quote:
Originally Posted by Lyris View Post
This thread makes me want to benchmark all the available MPEG-2 options for the rare times I have to make a DVD. I'm still using CCE SP3 for this, even now, although the last time I compared encoders was in the late 2000s.
Canopus ProCoder/Carbon Coder was really impressive back then for automated encoding without scene-by-scene optimization. I used it to good effect on some weird content types, like the first Criterion Collection Stan Brakhage DVDs.

I know that Elemental was still making good money on big MPEG-2 compression efficiency improvements within the last decade. Cable companies would pay a lot to get more channels out of fixed bandwidth.

The MIPS/pixel we can apply to MPEG-2 encoding today is >>100x what it was back in the Minerva days. If there were a market reason for a new MPEG-2 encoder, I bet we could squeeze out another 25% using ever more advanced preprocessing and psychovisual optimization.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book