#21 | Link
Registered User
Join Date: Oct 2014
Posts: 268
The first thing to learn about HDR (to SDR) conversion is that it's not as simple as converting bit depths and colorspaces. There is a different gamma encoding at play too, and more importantly, HDR talks about light levels, not 'pixel values'.
HDR files can contain so much more light and color gamut than a regular SDR encode that you can't 'save it all'. If you just convert the bit depth, gamma and color primaries, you still end up with _way_ too much signal to display in a regular 0-255 range, with the result that most of the movie will be _very_ dark and undetailed, and only in certain scenarios will it use the full brightness towards the '255' values. Think of a few explosions, or a single shot straight into the sun, or something like that. So the simplest thing to do is to clip a bunch of the (high)lights and adjust the resulting video with a gamma change until you think _most_ of the movie looks OK. If some scenes are too dark or clip too much, well, that's too bad. If you don't just use a gamma change but also some kind of highlight compression you can maybe save more highlights, but still.

This is where the magic of madVR's HDR-to-SDR stuff comes in. It is basically constantly re-evaluating the boost, the cut-off, the compression in highlights, the compression in shadows, and the compression in gamut, to produce something 'enjoyable' to watch. Unless you can capture the output of madVR, or you have similar filters / algorithms in Avisynth or Vapoursynth, there is no way to get 'the exact same' effect as madVR. Using a LUT with transform functions sits somewhere in between: it's a more advanced gamma change, but still a static mapping, not a dynamic one as I believe madVR does. The best method is to load it into a video editor and decide on a scene-by-scene basis where you want the black points, white points, shadow compression and highlight compression to be... and the saturation, to control the gamut. But this is tedious work.

High-bit-depth stuff I mostly do in Vapoursynth (a minimal sketch follows at the end of this post). Having loaded the source, I convert it to RGB taking the BT.2020 transform into account (there are two, or even more; there is no single 'bt2020', watch out). That RGB I convert to linear RGB (make sure to be at least in 16 bits, maybe even 32-bit float), specifying the source gamma (once again, there are multiple gamma encodings for the different HDR standards). Once in linear RGB you can convert the color primaries to your target primaries (from BT.2020 to BT.709 I'm guessing, but once again, there are different files and standards out there). Note that color primaries are not the same thing as the YUV<->RGB color matrix, although you will see names like bt2020 and bt709 in both. Then, while in linear RGB but with the correct color primaries, I use Vapoursynth 'levels' to clip off some highlights and do a gamma adjustment. Then I convert from linear RGB back to gamma-encoded sRGB, and from there it's 'easy' to go back to YUV 4:4:4 in BT.709, subsample to YUV 4:2:0, dither, and encode as a regular YV12 8-bit SDR encode. The 'levels' and the gamma change are set 'to taste'; there is no magic number here (and, as said before, they will not be perfect for a whole movie, but you do the best you can).

Remember, as I see it in a dumbed-down kind of way: HDR files don't talk about pixel values anymore; there is _no_ simple conversion. They describe the amount of light (and in what color) that should be coming out of your monitor / TV. Then your monitor / TV tries its best to actually realize that, or to come up with an interpretation that best fits it. If a file talks about light levels of 1500 nits or more, and most TVs have a maximum brightness of (well) under 1000 nits, there have to be some algorithms to come up with 'the best interpretation of...'.
The HDR light data is 'mapped' to whatever your monitor / TV / projector is capable of. And yes, this differs per monitor / TV / projector, since they all have different capabilities. So, judge an 'HDR monitor' not only by how bright it can go and how much of the color gamut it can match, but also by the quality of the algorithms that try to map the HDR data onto its screen, since there is no 'single answer' or '_THE_ algorithm' for this. This is the same reason the madVR HDR conversion asks for the maximum number of nits of your screen, and also why HDR files need metadata like the minimum and maximum light levels encountered in the file, so the algorithms know what to expect (although without these you can still make a good effort).

Basically, if you just do the colorspace / gamma / primaries / bit depth conversion, you still end up with 'raw' HDR data displayed on an SDR screen. What is missing then is the 'tone mapping' step in between. Everybody who shoots with a (good) digital camera and has played with the Bayer RAW files (or X-Trans for the Fuji guys :P) from these cameras knows what tone mapping is... HDR photographers who merge multiple exposures to get an HDR shot and play around with HDR software know what 'tone mapping' is.
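To make the Vapoursynth chain above concrete, here is a minimal sketch of it. Loud assumptions: the file name is a placeholder, the source is taken to be 10-bit PQ / BT.2020 (HDR10-style), the 1000-nit nominal luminance and the levels/gamma numbers are illustrative 'to taste' guesses (exactly the kind there is no magic number for), and it needs a reasonably recent VapourSynth/zimg plus the L-SMASH Works source plugin:

Code:
import vapoursynth as vs
core = vs.core

# 10-bit HEVC source; PQ (ST 2084) transfer and BT.2020 matrix/primaries assumed
src = core.lsmas.LWLibavSource("hdr_source.mkv")

# One zimg call: undo the BT.2020 non-constant-luminance matrix and the PQ
# curve, convert the primaries to BT.709, and land in 32-bit float linear RGB
lin = core.resize.Spline36(src, format=vs.RGBS,
                           matrix_in_s="2020ncl",
                           transfer_in_s="st2084", transfer_s="linear",
                           primaries_in_s="2020", primaries_s="709",
                           nominal_luminance=1000)

# Static tone mapping "to taste": clip the top of the highlights and lift the
# result with a gamma tweak -- both numbers are pure guesswork, tune per title
tm = core.std.Levels(lin, min_in=0.0, max_in=0.6, min_out=0.0, max_out=1.0,
                     gamma=1.2)

# Re-apply a BT.709 transfer curve, go back to YUV 4:2:0 and dither to 8 bit
sdr = core.resize.Spline36(tm, format=vs.YUV420P8,
                           transfer_in_s="linear", transfer_s="709",
                           matrix_s="709", dither_type="error_diffusion")
sdr.set_output()

This is the static equivalent of what madVR does dynamically: the 'levels' pass uses one clip point and one gamma for the whole movie, which is exactly the compromise described above.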
#22 | Link
Registered User
Join Date: Oct 2002
Location: France
Posts: 2,610
So, you mean that a player supposed to be able to play HDR and send it to "any" display screen (so outputting standard RGB values over HDMI, for example) has a hell of a lot of things to do?
What happens if I get an... HDR mkv file, for example, and start it on my PC? Will it finally be displayed properly or not? Because in all of this, you finally have to have something which outputs standard RGB in order to be displayable? Or am I still missing something? (That's really the feeling I have.) Or, if I play a UHD Blu-ray (but maybe those are not HDR...), to be displayed properly on any screen, the player has to output standard RGB... Again, I'm a little confused by all this.

Last edited by jpsdr; 23rd March 2017 at 14:23.
#23 | Link
Registered User
Join Date: Oct 2016
Posts: 56
HDMI 2.1 adds so-called dynamic HDR (HDR dynamic metadata). Maybe this will help somehow...

Last edited by zub35; 23rd March 2017 at 14:35.
#24 | Link
Registered User
Join Date: Dec 2002
Posts: 5,565
UltraHD Blu-ray supports HDR, and indeed the player converts to SDR unless the display supports HDR input via HDMI.
#25 | Link
Formerly davidh*****
Join Date: Jan 2004
Posts: 2,765
Using Avisynth+, what's the best way to open the HDR rip in the first post to get back 10-bit frames? I've tried installing LSmashSource and using LWLibavVideoSource, but that just gives me what Avisynth claims is a YV12 clip, which is green and stripey.

Alternatively, could someone make a 16-bit PNG, or a stacked 8-bit PNG, out of this frame that I could play with?
#29 | Link
Registered User
Join Date: Oct 2002
Location: France
Posts: 2,610
So, if these data are present in an HDR stream, it means that it's up to the codec to use them to properly decode the HDR stream, but for now the codecs are not updated or not able to use them.

The thing I have trouble with is that when I read all of this, the first feeling is that "nothing can properly read an HDR stream". Which is totally stupid and illogical, so I must be missing something. A UltraHD Blu-ray player can convert HDR to SDR, so... what prevents a PC player/codec from doing the same? Is it because, on PC, no codec/player is able to get the added metadata in the video stream? If I get an HDR stream, these data have to be in it, otherwise nothing would be able to play it. Or is it because the information needed to do that is "closed"? This is what I have trouble with. I mean, if you said, for example: "Indeed, in your H.265 HDR stream there is the YCbCr compressed video information, and there is also metadata available to properly decode the HDR, but for now the codecs are only able to decode the video and are not able to use the metadata for... whatever reason (license, closed, etc...)", OK, fine. But I would like, if possible and if anyone knows why, to have an answer: why can't a PC do what a hardware player can?
#30 | Link
Registered User
Join Date: Dec 2002
Posts: 5,565
It's always that way with new technology. How long did it take until MPC-HC/madVR supported 3D? Part of it is lack of interest from developers. Part is missing APIs. Part is the "closed" nature of some specs, or simply missing samples to test with (e.g. in the case of Dolby Vision). "Missing samples" also implies "lack of interest", because it means no one needs the feature.

Last edited by sneaker_ger; 24th March 2017 at 10:46.
#32 | Link
Registered User
Join Date: Oct 2014
Posts: 268
Basically, understanding the video signal is not closed, and not 'that' special really.
But the video information that is in an HDR file (after decoding) then needs to be 'mapped to the screen'. Since this is different for every screen, this 'mapping stage' is done _in the screen_, so to say. This tone-mapping stage was not (really) present in SDR, and it is an aspect of the HDR movement that a lot of people (even here on Doom9) don't know of or seem to forget.

If you're on an older monitor (or maybe even an HDR screen), the best method I know of right now is to play HDR files through a (very) recent MPC-HC with a somewhat recent madVR, as has been stated here. You then need to tell madVR how bright your screen can go (since madVR has no way of knowing this) and it will do its best to display it. Yes, that means that if you have an 8-bit (or even 6-bit, which is more common than you might think) SDR panel and you tell madVR your screen is around 300 nits, it will do a very decent HDR-to-SDR conversion. If you have a fancy new screen with good contrast and very high brightness (SDR or HDR, doesn't matter now), you can tell madVR you have a brighter screen and it will take that into account while doing the conversion / tone mapping.

Mapping the light-and-colour-intensity information in an HDR file to pixel values (like RGB or YUV) needs information about the screen you're displaying it on... and since a screen knows itself best, the tone-mapping algorithms are in the screen, not the media player. (Things like the lighting situation _around_ the screen, the ambient lighting, also matter, and good HDR TVs will have different presets or something for this.) Also, there is no 'standard' or 'one true correct way' to do this tone mapping, so it also isn't part of the HDR standard or anything. The HDR media-file standard just presents you with light and colour intensities. You have a screen that wants RGB values for its pixels. How you go from A to B is not part of the standard.

This is why a lot of people now play HDR content by placing the HDR file on a USB stick or something, plugging it into the TV, and letting the smart-TV portion of the TV play the file. This is the simplest method. Otherwise, madVR on the desktop. Maybe recent things like WinDVD or something have some sort of HDR support? I'm just guessing there.
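To make 'tone mapping' a bit more concrete, here is a toy static curve (Reinhard's classic one). This is an illustration only, not madVR's actual algorithm (which is dynamic and unpublished), and the 300-nit peak is just an example value:

Code:
def tone_map_reinhard(light_nits, display_peak_nits=300.0):
    """Map scene light in cd/m2 to a 0..1 display value.

    Plain scaling (light / peak) would clip everything brighter than the
    display's peak; Reinhard's x / (1 + x) compresses highlights smoothly
    instead, at the cost of dimming the overall picture.
    """
    x = light_nits / display_peak_nits
    return x / (1.0 + x)

# A 1500-nit highlight still lands below 1.0 instead of clipping:
print(tone_map_reinhard(1500.0))   # ~0.83
print(tone_map_reinhard(150.0))    # mid-tones end up around 0.33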
#33 | Link
Registered User
Join Date: Nov 2004
Location: Poland
Posts: 2,869
That's why Blu-ray stores this info in the H.265 headers.

Dolby Vision goes a step further, as it standardises the maths behind these needed conversions in order to have consistency. It can also use dynamic metadata, stored per scene, to optimise the end experience depending on the user's screen. First:

Code:
The Dolby Vision workflow is very similar to existing color-grading workflows. The goal is to preserve more of what the camera captured and limit creative trade-offs in the color-grading and mastering process. The Dolby Vision HDR reference monitor (capable of up to 4,000 nits luminance) is used to make the color and brightness decisions. The goal of the Dolby Vision grading process is to capture the artistic intent in the reference grade. Directors, editors, and colorists should use the grading system and the monitors to make the best, most engaging imagery they can, taking full advantage of the dynamic range of the display. After the reference grade is finished, the Dolby Vision color-grading system will analyze and save the dynamic metadata that describes the creative decisions made on the display. The content mapping unit (CMU) maps the content with the metadata to a reference display at a standard brightness (100 nits). After the Standard Dynamic Range version has been approved, the colorist exports the images with metadata. The dynamic metadata generated to create the SDR grade can be used to render the Dolby Vision master on displays, which may offer a wide performance range. A 600-nit TV will look great; a 1,200-nit TV will look even better, both referencing the same metadata and Dolby Vision reference images. The same algorithms used in the Content Mapping Unit for off-line grading can be used to create a traditional compatible grade for live broadcasts in Dolby Vision.

Code:
A major difference between the Dolby Vision approach and other HDR solutions is the metadata that accompanies each frame of the video all the way to the display manager in the consumer-playback device. Systems with generic HDR carry only static metadata that describes the properties of the color-grading monitor that was used to create the content and some very basic information about the brightness properties (maximum and average light levels) for the entire piece of content. Dolby Vision adds dynamic metadata that is produced during content creation; the dynamic properties of each scene are captured. With this information, the Dolby Vision display manager is able to adapt the content to the properties of the display much more accurately. It allows hues to be preserved properly, which is critical for display of skin tones. Even with mass-market edge-lit TVs, the overall impression of colors is preserved much more accurately. Guided by the Dolby Vision metadata, the Dolby Vision display manager enables great visual experiences on a wide range of display devices ranging from higher-end OLED TVs with stunning black levels to LCD TVs with advanced technologies like quantum dot, all the way down to mass-market edge-lit TVs.

Last edited by kolak; 27th March 2017 at 00:16.
#34 | Link
Registered User
Join Date: Oct 2002
Location: France
Posts: 2,610
videoh has updated the DGIndexNV tools to HEVC 10/12 bit, and they now also have the ability to extract the HDR metadata (when present).
For now, it is put in the .dgi index file. It may not be the perfect solution, but at least there is something, and it's a big first step. Now, to try to improve this and make a second step: does anyone have any idea how these data could be sent directly to Avisynth's frames, without the need to read them from an external file? The only (stupid?) thing I thought of is to use the alpha channel plane...
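For what it's worth, a VapourSynth-side sketch of the idea: VapourSynth clips can carry arbitrary per-frame properties, which avoids abusing a colour plane (Avisynth 2.6 has no equivalent). The property names and values below are invented for illustration; the real ones would be parsed out of the .dgi file:

Code:
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource("hdr_source.mkv")

# Attach (hypothetical) HDR mastering metadata to every frame;
# downstream filters can read it back from frame.props
clip = core.std.SetFrameProp(clip, prop="MasteringDisplayMaxLuminance",
                             floatval=1000.0)
clip = core.std.SetFrameProp(clip, prop="MaxContentLightLevel", intval=1500)
clip.set_output()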
Last edited by jpsdr; 6th June 2017 at 13:33.
#35 | Link
Registered User
Join Date: Jan 2014
Posts: 2,482
Quote:
#38 | Link
Registered User
Join Date: Oct 2002
Location: France
Posts: 2,610
I've looked at R-REC-BT.2100-0-201607-I!!PDF-E, and want to check if I've understood properly.

In the case of the PQ configuration, if I want to do an HDR to SDR conversion, with input parameters Y, Cb, Cr coming from an HDR H.265 stream, I have to do the following: compute R, G, B from Y, Cb, Cr using the standard BT.2020 linear matrix coefficients. Then linearise R, G, B to R0, G0, B0 using, for R for example, R0 = f(R), where f is the EOTF, i.e. the L(V) of eq. 1, page 15 (Annex 3). Then, finally, apply a linear combination (the BT.709 matrix, for example) to R0, G0, B0. Is that right?

What is the relation between the parameters in the formula and the SEI mastering information in, for example, an HDR H.265 data stream?

mastering_display_colour_volume =>
display_primaries_x[0] display_primaries_y[0]
display_primaries_x[1] display_primaries_y[1]
display_primaries_x[2] display_primaries_y[2]
white_point_x white_point_y
max_display_mastering_luminance
min_display_mastering_luminance

content_light_level_info =>
max_content_light_level
max_pic_average_light_level
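For reference, the way I read it: the parameters in the PQ EOTF are fixed constants defined by the standard itself, and the SEI mastering values never enter the EOTF at all. A sketch under that reading, with the constants copied from SMPTE ST 2084 / BT.2100 and the commonly used linear-light BT.2020-to-BT.709 primaries matrix (per BT.2087):

Code:
# BT.2100 / SMPTE ST 2084 PQ EOTF constants -- fixed by the standard,
# not derived from any SEI metadata
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(V):
    """Non-linear PQ signal V in 0..1 -> display luminance in cd/m2."""
    p = V ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# Linear-light BT.2020 -> BT.709 primaries conversion, applied per pixel
# to the (R, G, B) triple that comes out of the EOTF
M_2020_TO_709 = (
    ( 1.6605, -0.5876, -0.0728),
    (-0.1246,  1.1329, -0.0083),
    (-0.0182, -0.1006,  1.1187),
)

def bt2020_to_bt709(r, g, b):
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in M_2020_TO_709)

The SEI values (max_display_mastering_luminance and friends) appear nowhere in these formulas; they only inform the tone-mapping step that has to happen somewhere between pq_eotf() and the BT.709 re-encode.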
#39 | Link
Excessively jovial fellow
Join Date: Jun 2004
Location: rude
Posts: 1,100
I'm not sure if I understand you correctly, but you might be interested in this discussion: https://github.com/sekrit-twc/zimg/i...ment-269100013
#40 | Link
Registered User
Join Date: Oct 2002
Location: France
Posts: 2,610
I'm not sure I've understood properly, and I can't correlate the SEI HDR information with the parameters in the formula.
For example, L(V) = ((c - (V - m)*s*t) / (V - m - s))^(1/n). So how do I get the c, m, s, t, n values from the following mastering information values:

mastering_display_colour_volume =>
display_primaries_x[0] display_primaries_y[0]
display_primaries_x[1] display_primaries_y[1]
display_primaries_x[2] display_primaries_y[2]
white_point_x white_point_y
max_display_mastering_luminance
min_display_mastering_luminance

content_light_level_info =>
max_content_light_level
max_pic_average_light_level

Edit: After reading, my question would rather be: how do you use this mastering information in the equation? It seems that c, m, s, t, n are statically defined values. Despite reading the docs, I still can't figure out the equation of the last step, non-linear R, G, B to linear R, G, B. The EOTF is clearly involved, but how is the mastering information used, and how exactly is the EOTF equation used?

Last edited by jpsdr; 26th June 2017 at 19:13.