#81 | Link | |
Registered User
Join Date: Aug 2009
Posts: 218
|
Quote:
If you want to use something newer than H.264, you have to encode in both VP9 and HEVC (and deal with the patent mess of HEVC) because Apple still plays hardball and doesn't support VP9 in Safari. Or, more realistically, encode in VP9 and give everyone else H.264 until they decide to use a browser that doesn't suck as much as Safari or whatever. I just hope everyone agrees to implement 10-bit H.264 with HDR support, so at least FullHD HDR is possible everywhere. But even if not, expect H.264 to be implemented in browsers for the rest of time, like GIF. Last edited by kurkosdr; 16th February 2023 at 23:35. |
|
#82 | Link |
Registered User
Join Date: Jul 2018
Posts: 870
|
"I consider H.264 to be the video equivalent of GIF: it doesn't have the bit depth we want it to have, or the compression efficiency we want it to have, but everyone has implemented its "high" profile so it's the lowest common denominator."
8-bit SDR really was the high visual-art standard for decades in the great civilization of the previous century. Its 1000:1 dynamic range was chosen to sit very close to the nominal output dynamic range of professional reversal film for artists. That is not a limitation of the reversal film itself: the old civilization capable of flying to the Moon could easily have reached a 1,000,000:1 display dynamic range, simply by stacking two 1000:1 layers for example, but 1000:1 evidently looked good enough.
The input dynamic range for good art images is also very narrow, about 5 'stops' (see the Fuji Velvia datasheet, or any Kodak reversal film for natural scenes). That is because making a scene pleasant to look at requires fairly complex expansion in the midtones and compression at the edges, so an incoming ~5-stop (~50:1) scene light range is expanded to a ~1000:1 display light range by the film processing. The real artistic transfer curve is not linear either (a genuine curve, not a straight line), and that too is no limitation of the old century's chemistry: if a linear 3.0D transfer were required, one could shoot linear duplication film for reversal stock (3.0D input range, 3.0D output range, linear transfer). But good visual artists already found in the previous century that linear scene capture over a 'large range' (3.0D is about 10 'stops') does nothing good for viewers and does not make good money in business.
This applies to images correctly designed by visual artists, not to real-time 'live' broadcasts of random, unprepared scenes shot by very cheap personnel without any visual-arts education. At least some share of 8-bit Xvid encodes are of really well-mastered content (good DVD titles, for example), and for those it is perfectly sufficient. If we use Xvid to compress poorly shot, cheap TV shows, that is not an issue of Xvid or of 8 bits, only of the badly shot input. So for content correctly mastered by a well-educated visual artist, we can freely use 8-bit SDR and Xvid and get perfectly good encodes.
"10-bit H.264 with HDR support"
We could assign an HDR decompression profile to standard 8-bit encodes; the only cost would be more banding in the highlights. HDR itself is not limited to any bit depth. The only limitation is more or less visible banding (if no dithering is applied) at a given target physical display brightness. It is close to a 'legal expansion' of the over-whites (the 236..254 code values of the Y channel in 8 bits, or even up to 255, because we are not limited by system-reserved words as in SDI) into a standardized HDR curve like HLG. Older displays may already do something similar in their vendors' customized 'image enhancement' modes.
HDR is not the main part of good image design. It is only a small extra way to impress the viewer a bit more once the whole underlying structure of tone and colour in the image is perfectly mastered and the most important range already fits in SDR (and the standard colour gamut). If you cannot view in HDR mode, you can simply skip the HDR addition to perfectly mastered SDR content and you will not lose a significant part.
Last edited by DTL; 16th February 2023 at 20:21. |
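To put a number on that 'legal expansion' idea: the HLG curve from ARIB STD-B67 / BT.2100 already defines how much relative scene light each signal level carries, and the top slice of the code range holds most of the highlight headroom. A minimal Python sketch, under the simplifying (not broadcast-legal) assumption that full-range 8-bit codes map straight onto the HLG signal axis:

Code:
import math

# HLG inverse OETF (ARIB STD-B67 / BT.2100): signal E' in [0,1] -> relative scene light E.
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_signal_to_scene_light(e_prime):
    if e_prime <= 0.5:
        return (e_prime ** 2) / 3.0
    return (math.exp((e_prime - C) / A) + B) / 12.0

# Illustrative assumption: full-range 8-bit code values mapped straight onto
# the HLG signal axis (code/255), just to see where the highlights live.
for code in (128, 191, 235, 255):
    light = hlg_signal_to_scene_light(code / 255.0)
    print(f"code {code:3d} -> {light:5.3f} of peak scene light")

Code 191 (the 75% nominal HLG white) comes out at only about 0.26 of peak linear light, so the codes above it, roughly the over-white range described above, carry the remaining three quarters of the linear highlight range.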
#83 | Link | |
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,565
|
Quote:
Sony has been doing 8-bit H.264 BT.2020 HLG HDR in their old Sony A7 III cameras, using the expanded values of Full PC Range to take advantage of the extra headroom and make it fit better. I've seen plenty of footage shot like that. In most cases it was "good enough" (i.e. it wasn't terrible, but it wasn't great either). Sony is gonna be your best buddy here, then, 'cause they actively support 10-bit H.264 BT.2020 HLG HDR in their cameras, both Intra and Long GOP (i.e. I-P-B) in their XAVC flavor, and of course x264 can encode those and add the right metadata. Unfortunately for you, though, no consumer hardware will play those, only professional hardware playout ports, so it looks like it's not gonna be a thing at consumer level, as such a stream is almost always re-encoded to H.265... but you know what? Never say never, and most importantly, if all you care about is software playback on computers, you can totally do it even today, as support for HDR metadata was introduced in x264 in 2017. |
|
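For the software-playback case, a sketch of what such an encode could look like with a recent 8/10-bit unified x264 build (file names are placeholders, rate-control settings illustrative):

Code:
x264 --output-depth 10 --profile high10 --preset slow --crf 18 \
     --colorprim bt2020 --transfer arib-std-b67 --colormatrix bt2020nc \
     -o hlg_10bit.mkv graded_hlg_source.y4m

The --colorprim/--transfer/--colormatrix trio writes the VUI flags that tell a player to treat the stream as BT.2020 HLG; plain HLG needs no separate dynamic metadata.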
#84 | Link |
Registered User
Join Date: Jul 2018
Posts: 870
|
" 8bit H.264 BT2020 HLG HDR "
Expanding frame size 2x linearly +dithering (2x2 samples dithered) may be about same as 10bit (8bit +2bit from 2x2 dithering) per sample with original frame size (in terms of banding). Only 2x frame size +dithering may cause more or less higher MPEG output bitrate. So when wider compatibility with 8bit MPEG-4ASP/AVC decoders +HDR built-in is required it may be achieved with increasing frame size +dithering (or simply not downsize 8K..4K or FullHD too much). Also WCG from BT.2020 is not very required in general public content so it is no need to shrink colour gamut of 'normal scenes' to 8bit system and risk to have additional colour banding. The main point of 'artistic' image creation is not shrinking all-world-around data (luma range, chroma range and angle of view) to the single image frame but finding pleasant to view limited size scenes and expanding part of (luma and chroma) range for even better visibility of something pleasant to see (because viewer have limited ability to see small deviations in luma and chroma in natural scenes). So using of WCG in the bit-limited 8bit system is additional source of banding errors and lost of valuable low contrast colour details when operating with natural scenes. So that attempt to use 8bit BT.2020 (+HDR) system was simple temporal marketing buff with attempt to sell the colour-details-shrinking system to artists. The sales possibly was not any good as well as shooting result. So 8bit-dithered MPEG-4ASP/AVC bt.709 chroma +HLG range compression encoding (mapping) may be wide-used and backward compatible 'HDR for general public' format. It possibly will run a bit dimmer at SDR decoders if use 75% as nominal white point as in HLG. Sort of SDR+ 8bit widely compatible format. At the good enough users playback devices with 'HDR light power output' and good enough 'AI SDR to HDR' processing in display it will run closer to 'real 10-bit HDR'. Though HLG HDR range mapping was already designed to very easily downconverting to SDR (for example with applying AUTO KNEE camera processing to decoded to linear scene light content). The SDR digital motion pictures system was designed by the great designers of the nice great technical civilization of past so enough self-balanced. The only good addition (as well as defining finally 4:2:0 display UV filtering transfer curve at decoding to 4:4:4) is putting standard to usage of upper part of Y-code values for HDR-range expansion. In old times CRT displays where very poor to run over 100..200 nit at home-sized TVs so the great old designers of the great old civilization are gone until the usage of upper codevalues of 230..254..255 of 8bit system for highlights compression up to 600..1000 nits become really usable at endusers displays. Last edited by DTL; 16th February 2023 at 22:23. |
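The '2x2 dithering buys about two extra bits' claim can be checked with a toy numpy sketch (illustrative only): quantise a smooth ramp to 8 bits with and without a 2x2 ordered dither, then average 2x2 blocks as a crude stand-in for the eye or a downscaler:

Code:
import numpy as np

h, w = 64, 4096
ramp = np.tile(np.linspace(0.0, 1.0, w), (h, 1))      # smooth, finer-than-8-bit test signal

plain8 = np.round(ramp * 255.0) / 255.0               # straight 8-bit rounding

# 2x2 ordered (Bayer) dither: zero-mean offsets of -3/8..+3/8 of one 8-bit step.
bayer = (np.array([[0.0, 2.0], [3.0, 1.0]]) + 0.5) / 4.0 - 0.5
noise = np.tile(bayer, (h // 2, w // 2))
dith8 = np.clip(np.round(ramp * 255.0 + noise), 0, 255) / 255.0

def avg2x2(img):
    """Average 2x2 blocks: a crude stand-in for the eye or a downscaler."""
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

for name, img in (("plain 8-bit", plain8), ("2x2 dithered", dith8)):
    err = np.abs(avg2x2(img) - avg2x2(ramp)).max()
    print(f"{name:12s}: max error {err * 1023:.2f} / 1023 ten-bit codes")

After the 2x2 averaging, the dithered version lands about four times closer to the original ramp than plain 8-bit rounding, i.e. roughly the two extra bits claimed, paid for in bitrate because the dither pattern fights the DCT.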
#85 | Link | |
Registered User
Join Date: Aug 2009
Posts: 218
|
Quote:
Last edited by kurkosdr; 16th February 2023 at 22:01. |
|
#86 | Link |
Registered User
Join Date: Feb 2020
Posts: 495
|
No UHD TV is 100 nits out of the box, so in fact you have to be extra rich and clever and go buy Calman and an X-Rite probe and calibrate (though the LG C2 has a Filmmaker Mode that is supposed to be 100 nits, and the LG C9 has a 100-nit mode in the Technicolor Expert preset, right?).
|
#87 | Link | |
Registered User
Join Date: Jul 2018
Posts: 870
|
Quote:
I'm really impressed at how a business-level, sub-$10,000 LG display (simple 8-bit SDR, fine for Xvid playback) eats only 400 W peak while running at up to 4000 nits full screen all the time, on a roughly 50-inch screen: https://www.lg.com/uk/business/digit...ge/lg-49XS4F-B
Really nice LEDs in that backlight; maybe well above 100 lm/W. A properly built 8-bit SDR display running dark blacks at about 4 nits and peak whites at 4000 nits is close to 'full visible colours down to the very deep darks'.
Last edited by DTL; 16th February 2023 at 22:52. |
|
#88 | Link |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,488
|
Apple's HEVC HLG with Dolby Vision dynamic metadata is really a very clever solution as well. Certainly not the headroom possible with native Slog3 or PQ, but about as much as you're going to get while still having decent playback without tone mapping on existing SDR devices.
Getting back to Xvid: yes, there are all kinds of ways to make stuff forward compatible. But any device that can handle the above will support at least H.264 High Profile. I wonder if there are new devices shipping that have dropped MPEG-4 Part 2 decode. I saw one of last year's GPUs dropped HW VC-1 decode, for example. And VC-1 was somewhat more advanced than MPEG-4 ASP, with Overlap Transform and (lightweight) in-loop deblocking. |
#89 | Link |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,488
|
Well, it's inaccurate to say that there isn't any tone mapping. All flat-panel technologies respond very differently from the way a cathode ray tube would. Lots of tone mapping takes place in the panel controller to get the panel to respond to digital values the way a CRT would to voltage. It's not as if an LCD has an intrinsic 2.2 or 2.4 gamma!
|
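The reference curve those panel controllers chase is standardized as the BT.1886 EOTF, which models exactly that CRT response. A small Python sketch (the white and black levels are illustrative parameters, not a claim about any particular panel):

Code:
# BT.1886 reference EOTF: the CRT-like response a flat panel's controller emulates.
GAMMA = 2.4

def bt1886(v, lw=100.0, lb=0.1):
    """Normalised signal v in [0,1] -> luminance in nits, per ITU-R BT.1886."""
    a = (lw ** (1 / GAMMA) - lb ** (1 / GAMMA)) ** GAMMA
    b = lb ** (1 / GAMMA) / (lw ** (1 / GAMMA) - lb ** (1 / GAMMA))
    return a * max(v + b, 0.0) ** GAMMA

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"signal {v:.2f} -> {bt1886(v):7.3f} nits")

An LCD's native voltage-to-transmittance curve looks nothing like this, which is why the mapping has to live in the controller's LUT.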
#90 | Link | |
Registered User
Join Date: Aug 2009
Posts: 218
|
Quote:
|
|
#91 | Link |
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,565
|
Uhmmm, I'll tell you what: I think it could work even right now. H.264 can transport HDR info in its SEI and VUI, so it's very easy to create an 8-bit BT.2020 HLG file. Chrome is also able to recognize the desktop colorspace and the usual video stream info (see chrome://gpu), so in theory it should recognize an H.264 stream flagged as arib-std-b67 and bt2020-10 just like it does when the stream is AV1. I mean, it should actually work out of the box with current technology, without any further update, in theory, but no one is doing it.
|
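A sketch of both routes with stock ffmpeg (file names are placeholders): re-encode with the HLG flags set, or re-flag an existing H.264 stream losslessly via the h264_metadata bitstream filter (in the H.273 code tables, 9 = BT.2020 primaries/matrix and 18 = ARIB STD-B67 transfer):

Code:
# Encode 8-bit H.264 with BT.2020 HLG signalling in the VUI:
ffmpeg -i graded_hlg.mov -c:v libx264 -pix_fmt yuv420p \
       -color_primaries bt2020 -color_trc arib-std-b67 -colorspace bt2020nc \
       hlg_8bit.mp4

# Or re-flag an already-encoded stream without touching the pictures:
ffmpeg -i already_encoded.mp4 -c copy \
       -bsf:v h264_metadata=colour_primaries=9:transfer_characteristics=18:matrix_coefficients=9 \
       reflagged.mp4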
#92 | Link |
Registered User
Join Date: Jul 2018
Posts: 870
|
Here is an example of an article on why 8-bit SDR Xvid is always enough for correctly shot content: https://www.kenrockwell.com/tech/highlight-shadow.htm . Extra HDR is mostly a way to push 'RAW' shooting content to the viewer, who then has to do the colour and tone grading himself or suffer non-graded cheap content. For live broadcasting without controllable scene light it may be some kind of solution.
A short extract:
I was watching some old movies from the 1930s, and noticed how even back in those days they had perfect shadow and highlight detail in every scene, be it daylight, back light, side light, indoors, moonlight, low light, candle light, or whatever. Check out Mohawk Valley, shot in Technicolor in 1939, or The Plainsman, shot in black-and-white in 1937. These aren't even technical masterpieces; they just happened to be old Indian movies we were watching as I noticed this.
An indoor scene with a door open to the outside world? Perfect detail everywhere. The same thing, but moonlight outside reflecting off a river seen through a window while the actors are indoors at night? Again, perfect! No matter how tough the light, they always had perfect shadows and perfect highlights, be it in color or in black-and-white. How can this be? There must be at least 15 stops of dynamic range needed, and film was primitive in those days.
Of course the movies have always had perfect highlights and shadows. How? Why? Because they are shot by real photographers, typically ASC members, who know how to light a scene. When shooting a movie, you spend a couple of days lighting each set and each scene. You bring four generator trucks and eight trucks of lighting and grip equipment, and have at it. You'll scrim, gel, gobo and reflector everything until you go blind. In the end, you get perfect results, even if it was the 1930s.
Photographers know how to get perfect highlight and shadow detail in every shot, regardless of the light, or lack thereof, while hobbyists freak out and start buying more equipment, like pro digital backs claiming "18 stops dynamic range," which has nothing to do with getting good highlight and shadow detail. It is always the photographer who is responsible for highlights and shadows, not the Great Spirit inside a new camera.
Last edited by DTL; 17th February 2023 at 16:09. |
#93 | Link |
Registered User
Join Date: Apr 2011
Posts: 3
|
I still convert all movies with Xvid. I have the most experience with this codec (13 years).
I really dislike H.264 deblocking; the Xvid results look better to me than x264's. But there is one issue that bothers me: Xvid changes the naturally vivid colors. I think it converts YV12 to RGB. Is there a way to prevent this problem? YV12 input --> YV12 output
Last edited by 102030; 8th March 2023 at 10:22. |
#94 | Link |
Big Bit Savings Now !
Join Date: Feb 2007
Location: close to the wall
Posts: 1,353
|
YUV to RGB OOTB?
I don't think so: it implements MPEG-4 ASP, after all, so why would it sacrifice the efficiency of chroma subsampling? MediaInfo will tell.
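For example, with a command-line MediaInfo build (the file name is a placeholder; the fields follow MediaInfo's Inform template syntax):

Code:
mediainfo --Inform="Video;%Format% / %ColorSpace% / %ChromaSubsampling% / %BitDepth%" movie.avi

An Xvid AVI should report something like "MPEG-4 Visual / YUV / 4:2:0 / 8", i.e. no RGB anywhere in the stream itself.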
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain) "Data reduction ? Yep, Sir. We're working on that issue. Synce invntoin uf lingöage..." |
#97 | Link | |
SuperVirus
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,346
|
Quote:
You can confirm that both DivX and Xvid accept YV12 input. But if you want to, or have to, use DirectShow anyway, then 1) replace GraphEdit with GraphStudioNext, and 2) don't let the ""smartness"" of DirectShow add unnecessary colorspace conversions to the graph.
__________________
«Your software patents have expired.» |
|
#98 | Link | |
Registered User
Join Date: Apr 2011
Posts: 3
|
Quote:
I always deactivated the colorspace filter in the Windows registry. Last edited by 102030; 12th March 2023 at 06:07. |
|
#99 | Link |
Registered User
Join Date: Jul 2018
Posts: 870
|
MPEG-4 is still not dead!
There is a new idea in progress to make ASP/AVC releases look better: https://forum.doom9.org/showthread.p...72#post1984472 . It is a sort of backporting of some features from H.265/HEVC to the lower-numbered MPEGs, so moving-picture files with wider compatibility may be created. |
#100 | Link | |
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,488
|
Quote:
HEVC is the current standard for HDR because essentially everything with an HDR tone mapper and display supports 10-bit HEVC decode, and its HDR signalling and metadata carriage are already standardized and implemented end to end.
Sure, wacky things can be done with full-range mapping and dithering and all that. Heck, I could make HDR MPEG-2 with sufficient motivation and time. But in the end it's a lot of work and a lot more bits for worse results, and no real reason to bother. We'll upgrade to better things in the future, like VVC and/or AV1. But I can't imagine why anyone would practically benefit from using a sub-HEVC codec for HDR. |
|