Old 23rd March 2017, 14:00   #21  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
First thing to learn about HDR (to SDR) conversion is that it's not as simple as converting bit depths and colorspaces. There is a different gamma encoding at play too, and more importantly, HDR talks about light levels, not 'pixel values'.
HDR files can contain so much more light and color gamut than a regular SDR encode that you can't 'save it all'. If you just convert the bit depth, gamma and color primaries, you still end up with _way_ too much signal to display in a regular 0-255 range; the result is that most of the movie will be _very_ dark and undetailed, and only in certain scenarios will it use the full brightness towards the '255' values. Think of a few explosions, or a single shot straight into the sun.

So the simplest thing to do is to clip a bunch of the (high)lights and adjust the resulting video with a gamma change until you think _most_ of the movie looks OK. If some scenes are too dark or clip too much, well, that's too bad.
If you don't just use a gamma change but also some kind of highlight compression, you can maybe save more highlights, but still.
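In code terms that whole approach is only this (a toy sketch; the knee and gamma values are pure taste, not anything from a standard):

Code:
# v is linear light, normalized so 1.0 = SDR peak white; HDR sources go far above 1.0.
def crude_hdr_to_sdr(v, knee=4.0, gamma=1.5):
    v = min(v, knee) / knee     # hard-clip everything above the knee, rescale to 0..1
    return v ** (1.0 / gamma)   # lift the midtones so _most_ of the picture looks OK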

This is where the magic of MadVR's HDR-to-SDR stuff comes in. It's basically constantly re-evaluating the boost, the cut-off, the compression in highlights, the compression in shadows, the compression in gamut to have something 'enjoyable' to watch.
Unless you can capture the output of MadVR, or you have similar filters / algorithms in Avisynth or Vapoursynth, there is no way to get 'the exact same' effect as MadVR.

Using a LUT with transform functions sits somewhere in between. It's a more advanced gamma change, but still a static mapping, not a dynamic one as I believe MadVR uses.
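(In Vapoursynth such a static mapping is literally a std.Lut call over a fixed function; a sketch, assuming a 16-bit RGB clip named rgb16 and reusing the toy curve from the sketch above:)

Code:
def static_map(x):
    v = x / 65535.0
    return round(crude_hdr_to_sdr(v * 4.0) * 65535)  # 4.0 = assumed headroom above SDR white

mapped = core.std.Lut(rgb16, planes=[0, 1, 2], function=static_map)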

The best method is to load it into a video editor and go at it on a scene-by-scene basis, deciding where you want the black points, white points, shadow compression and highlight compression to be.. and the saturation, to control the gamut. But this is tedious work.

High bit depth stuff I mostly do in Vapoursynth.
Having loaded the source, I convert it to RGB taking the bt2020 transform into account (there are two, or even more; there is no single 'bt2020', watch out). That RGB I convert to linear RGB (make sure to be in at least 16 bits, maybe even 32-bit float), specifying the source gamma (once again, there are multiple gamma encodings for the different HDR standards). Once in linear RGB you can convert the color primaries to your target primaries (from bt2020 to bt709, I'm guessing, but once again, there are different files and standards out there). Note that color primaries are not the same thing as the yuv<->rgb color matrix, although you will see names like bt2020 and bt709 in both.
Then, while in linear RGB but with the correct color primaries, I use Vapoursynth's 'levels' to clip off some highlights and do a gamma adjustment. Then I convert from linear RGB back to gamma-encoded sRGB, and from there it's 'easy' to go back to yuv444 in bt709, subsample to yuv420, dither, and encode as a regular yv12 8-bit SDR encode. The 'levels' and 'gamma change' are 'to taste'; there is no magic number here (and, as said before, it will not be perfect for the whole movie, but you do the best you can).
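A minimal Vapoursynth sketch of the above (illustrative only: the file name, the clip point and the gamma value are made-up placeholders, and where 1.0 lands in linear light depends on zimg's luminance scaling, so treat every number as 'to taste'):

Code:
import vapoursynth as vs
core = vs.core

src = core.lsmas.LWLibavSource("hdr_source.mkv")  # assumed 10-bit HDR10 (PQ, bt2020) source

# YCbCr (bt2020nc matrix, ST 2084 gamma) -> 32-bit float linear RGB,
# converting the primaries from bt2020 to bt709 in the same step.
lin = core.resize.Bicubic(src, format=vs.RGBS,
                          matrix_in_s="2020ncl", range_in_s="limited",
                          transfer_in_s="st2084", transfer_s="linear",
                          primaries_in_s="2020", primaries_s="709")

# Clip off some highlights and adjust gamma 'to taste' while still linear.
lin = core.std.Expr(lin, "x 0.1 / 1 min")  # treat 0.1 of peak as the new white, clip above
lin = core.std.Levels(lin, gamma=1.2)      # brightness tweak, also a guess

# Linear RGB -> gamma-encoded bt709 YCbCr 4:2:0, dithered down to 8 bit.
sdr = core.resize.Bicubic(lin, format=vs.YUV420P8,
                          transfer_in_s="linear", transfer_s="709",
                          matrix_s="709", dither_type="error_diffusion")
sdr.set_output()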


Remember, as I see it in a dumbed-down kind of way: HDR files don't talk about pixel values anymore; there is _no_ simple conversion. They describe the amount of light (and in what color) that should be coming out of your monitor / TV. Your monitor / TV then tries its best to actually realize that, or to come up with an interpretation that best fits it. If a file talks about light levels of 1500 nits or more, and most TVs have a maximum brightness of (well) under 1000 nits, there have to be algorithms that come up with 'the best interpretation of...'. The HDR light data is 'mapped' to whatever your monitor / TV / projector is capable of. And yes, this differs per monitor / TV / projector, since they all have different capabilities.
So judge an 'HDR monitor' not only by how bright it can go and how much of the color gamut it can match, but also by the quality of the algorithms that map the HDR data onto its screen, since there is no 'single answer' or '_THE_ algorithm' for this. This is the same reason the HDR-to-SDR conversion in MadVR asks for the maximum number of nits of your screen, and also why HDR files carry metadata like the minimum and maximum light levels encountered in the file, so the algorithms know what to expect (although without these you can still make a good effort).
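To make 'the file describes light' concrete: HDR10 uses the SMPTE ST 2084 (PQ) curve, whose published constants turn a normalized signal value directly into absolute luminance. A small Python sketch:

Code:
# ST 2084 (PQ) EOTF: normalized signal in [0,1] -> absolute luminance in cd/m2 (nits).
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(e):
    p = e ** (1 / m2)
    y = max(p - c1, 0) / (c2 - c3 * p)
    return 10000 * y ** (1 / m1)

print(pq_to_nits(0.50))  # ~92 nits
print(pq_to_nits(0.75))  # ~1000 nits, already beyond most TVs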


Basically, if you just do the colorspace / gamma / primaries / bit depth conversion, you still end up with 'raw' HDR data displayed on an SDR screen. What is missing is the 'tone mapping' step in between. Everybody who shoots with a (good) digital camera and has played with the Bayer RAW files from it (or X-Trans for the Fuji guys :P) knows what tone mapping is... HDR photographers who merge multiple exposures to get an HDR shot and play around with HDR software know what 'tone mapping' is.
Old 23rd March 2017, 14:20   #22  |  Link
jpsdr
Registered User
 
Join Date: Oct 2002
Location: France
Posts: 2,610
So, you mean that a player supposed to be able to play HDR and send it to "any" display (so outputting standard RGB values over HDMI, for example) has a hell of a lot of things to do?
What happens if I get an... HDR mkv file, for example, and start it on my PC? Will it finally be displayed properly or not?
Because in all of this, in the end you have to have something which outputs standard RGB in order to be displayable? Or am I still missing something (that's really the feeling I have)?
Or, if I play an UHD Blu-ray (but maybe those are not HDR...), to be displayed properly on any screen the player has to output standard RGB... Again, I'm a little confused by all this.

Old 23rd March 2017, 14:25   #23  |  Link
zub35
Registered User
 
Join Date: Oct 2016
Posts: 56
HDMI 2.1 introduces so-called dynamic HDR (HDR dynamic metadata).
Maybe this will help somehow...

Quote:
Dynamic HDR ensures every moment of a video is displayed at its ideal values for depth, detail, brightness, contrast, and wider color gamuts—on a scene-by-scene or even a frame-by-frame basis.
upd: I added a link to more samples in the first post.

Old 23rd March 2017, 16:15   #24  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
Originally Posted by jpsdr View Post
What happens if I get an... HDR mkv file, for example, and start it on my PC? Will it finally be displayed properly or not?
In most players it will look washed out. You can use MPC-HC + madVR and set up the display in madVR to get some kind of proper output.


Quote:
Originally Posted by jpsdr View Post
Or, if I play an UHD Blu-ray (but maybe those are not HDR...), to be displayed properly on any screen the player has to output standard RGB... Again, I'm a little confused by all this.
UltraHD Blu-ray supports HDR and indeed the player converts to SDR unless the display has support for HDR input via HDMI.
Old 23rd March 2017, 23:03   #25  |  Link
wonkey_monkey
Formerly davidh*****
 
 
Join Date: Jan 2004
Posts: 2,765
Using Avisynth+, what's the best way to open the HDR rip in the first post to get back 10-bit frames?

I've tried installing LSMASHSource and using LWLibavVideoSource, but that just gives me what Avisynth claims is a YV12 clip, which is green and stripey.

Alternatively, could someone make a 16-bit PNG, or a stacked 8-bit PNG, out of this frame that I could play with?
Old 23rd March 2017, 23:10   #26  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Try ffms2. L-SMASH doesn't output native 10 bit as supported by AviSynth+.
But if you don't know about >8 bit handling in AviSynth you will have problems...

Old 23rd March 2017, 23:24   #27  |  Link
wonkey_monkey
Formerly davidh*****
 
 
Join Date: Jan 2004
Posts: 2,765
Not mainstream ffms2, though? I need one of the forks, right?

(Gah, this is confusing)
Old 23rd March 2017, 23:48   #28  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
If you want native AviSynth+ formats you can use the latest mainstream ffms2. If you want the "hacked" formats, use L-SMASH.
Old 24th March 2017, 09:16   #29  |  Link
jpsdr
Registered User
 
Join Date: Oct 2002
Location: France
Posts: 2,610
Quote:
Originally Posted by sneaker_ger View Post
UltraHD Blu-ray supports HDR and indeed the player converts to SDR unless the display has support for HDR input via HDMI.
So it means, staying with the example of UltraHD Blu-ray, that in the... H265 stream, for example, along with the YCbCr data there is also additional information allowing proper HDR decoding?
So, if this data is present in an HDR stream, it means it's up to the codec to use it to properly decode the HDR stream, but for now the codecs are not updated or able to use it.
The thing I have trouble with is that when I read all of this, the first feeling is that "nothing can properly read an HDR stream". Which is totally stupid and illogical, so I must be missing something.
An UltraHD Blu-ray player can convert HDR to SDR, so... what prevents a PC player codec from doing that?
Is it because on PC no codec/player is able to get the added metadata in the video stream? If I get an HDR stream, this data has to be in it; otherwise nothing would be able to play it.
Is it because the information needed to do that is "closed"?
This is what I have trouble with.
I mean, if you said, for example, "Indeed, in your H265 HDR stream there is the YCbCr compressed video information, and there is also metadata available to properly decode the HDR, but for now the codecs are only able to decode the video and are not able to use the metadata for... whatever reason (license, closed, etc...)", OK, fine.
But I would like, if possible and if anyone knows why, to have an answer. Why can't a PC do what a hardware player can...
Old 24th March 2017, 10:44   #30  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
Originally Posted by jpsdr View Post
An UltraHD Blu-ray player can convert HDR to SDR, so... what prevents a PC player codec from doing that?
Is it because on PC no codec/player is able to get the added metadata in the video stream? If I get an HDR stream, this data has to be in it; otherwise nothing would be able to play it.
Like I said earlier: MPC-HC/LAV + madVR can play it - well, at least the common HDR10/VP9 PQ kind. They can read the HDR metadata contained in an HEVC stream or MKV/WebM container and use it. (What madVR does not yet support is using the "native" HDR mode of HDMI; the Microsoft APIs for that are fairly recent.)

Quote:
Originally Posted by jpsdr View Post
Is it because the information needed to do that is "closed"?
It's always that way with new technology. How long did it take until MPC-HC/madVR supported 3D? Part of it is lack of interest from developers. Part is missing APIs. Part is the "closed" nature of some specs, or simply missing samples to test with (e.g. in the case of Dolby Vision). "Missing samples" also implies "lack of interest", because it means no one needs the feature.

Old 24th March 2017, 11:57   #31  |  Link
jpsdr
Registered User
 
Join Date: Oct 2002
Location: France
Posts: 2,610
OK, thanks, I have my answer now.
Old 25th March 2017, 01:04   #32  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
Basically, understanding the video signal is not closed, and not 'that' special really.

But the video information that is in an HDR file (after decoding) then needs to be 'mapped to the screen'. Since this is different for every screen, this 'mapping stage' is done _in the screen_, so to speak. This tone-mapping stage was not (really) present in SDR, and it is an aspect of the HDR movement that a lot of people (even here on doom9) don't know of or seem to forget.

If you're on an older monitor (or maybe even an HDR screen), the best method I know of right now is to play HDR files through a (very) recent MPC-HC with a somewhat recent MadVR, as has been stated here. You then need to tell MadVR how bright your screen can go (since MadVR has no way of knowing this) and it will do its best to display it.
Yes, that means that if you have an 8-bit (or even 6-bit, which is more common than you might think) SDR panel and you tell MadVR your screen is around 300 nits, it will do a very decent HDR-to-SDR conversion.

If you have a fancy new screen with good contrast and very high brightness (SDR or HDR, doesn't matter now), you can tell MadVR you have a brighter screen and it will take that into account while doing the conversion / tone mapping.

Mapping the light-and-colour-intensity information in an HDR file to pixel values (like RGB or YUV) needs information about the screen you're displaying it on.. and since a screen knows itself best, the tone-mapping algorithms are in the screen, not the media player. (Things like the lighting situation _around_ the screen, the ambient lighting, also matter, and good HDR TVs will have different presets or something for this.) Also, there is no 'standard' or 'one true correct way' to do this tone mapping, so it isn't part of the HDR standard either. The HDR media-file standard just presents you with light and colour intensities. You have a screen that wants RGB values for its pixels. How you go from A to B is not part of the standard.
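(To illustrate that 'how you go from A to B' is a free choice, here is one of many possible curves, a Reinhard-style operator; nothing in any HDR spec mandates it:)

Code:
# One arbitrary tone-mapping choice among many: extended Reinhard.
# Maps scene luminance so that 'peak' nits lands exactly at the display maximum (1.0).
def reinhard_extended(nits, peak=1000.0, sdr_white=100.0):
    l = nits / sdr_white                 # luminance relative to SDR reference white
    lw = peak / sdr_white
    return min(l * (1 + l / (lw * lw)) / (1 + l), 1.0)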

This is why a lot of people now play HDR content by putting the HDR file on a USB stick or something, plugging it into the TV, and letting the smart-TV portion of the TV play the file. This is the simplest method.
Otherwise, madVR on the desktop. Maybe recent things like WinDVD or something have some sort of HDR support? I'm just guessing there.
Old 26th March 2017, 23:52   #33  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,869
Quote:
Originally Posted by jpsdr View Post
So, you mean that a player supposed to be able to play HDR and send it to "any" display (so outputting standard RGB values over HDMI, for example) has a hell of a lot of things to do?
What happens if I get an... HDR mkv file, for example, and start it on my PC? Will it finally be displayed properly or not?
Because in all of this, in the end you have to have something which outputs standard RGB in order to be displayable? Or am I still missing something (that's really the feeling I have)?
Or, if I play an UHD Blu-ray (but maybe those are not HDR...), to be displayed properly on any screen the player has to output standard RGB... Again, I'm a little confused by all this.

That's why Blu-ray stores this info in the h265 headers:

Quote:
SMPTE ST 2086 mastering display color volume SEI info, specified as a string which is parsed when the stream header SEI are emitted.
The string format is “G(%hu,%hu)B(%hu,%hu)R(%hu,%hu)WP(%hu,%hu)L(%u,%u)” where %hu are unsigned 16bit integers and
%u are unsigned 32bit integers. The SEI includes X,Y display primaries for RGB channels and white point (WP) in units of 0.00002
and max,min luminance (L) values in units of 0.0001 candela per meter square. Applicable for HDR content.

Example for a P3D65 1000-nits monitor, where
G(x=0.265, y=0.690), B(x=0.150, y=0.060), R(x=0.680, y=0.320), WP(x=0.3127, y=0.3290), L(max=1000, min=0.0001):

G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)

--max-cll <string>
Maximum content light level (MaxCLL) and maximum frame average light level (MaxFALL) as required by the
Consumer Electronics Association 861.3 specification.


Example for MaxCLL=1000 candela per square meter, MaxFALL=400 candela per square meter:
--max-cll 1000,400
Ideally you want a TV which can meet these parameters, but if it doesn't, it may have some processing built in to compensate for the differences.
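(Just to show the units are nothing magic, here is a little Python sketch that reproduces the example string above from the xy chromaticities:)

Code:
def master_display(g, b, r, wp, max_nits, min_nits):
    xy = lambda c: "(%d,%d)" % (round(c[0] / 0.00002), round(c[1] / 0.00002))
    lum = "L(%d,%d)" % (round(max_nits / 0.0001), round(min_nits / 0.0001))
    return "G%sB%sR%sWP%s%s" % (xy(g), xy(b), xy(r), xy(wp), lum)

# P3D65 mastering display at 1000 nits, exactly as in the example:
print(master_display((0.265, 0.690), (0.150, 0.060), (0.680, 0.320),
                     (0.3127, 0.3290), 1000, 0.0001))
# G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)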

DolbyVision goes a step further, as it standardises the math behind these needed conversions in order to have consistency. It can also use dynamic metadata, stored per scene, to optimise the end experience per scene depending on the user's screen.


First:

Code:
The Dolby Vision workflow is very similar to existing color-grading workflows.
The goal is to preserve more of what the camera captured and limit creative trade-offs in the color-grading and mastering process.
The Dolby Vision HDR reference monitor (capable of up to 4,000 nits luminance) is used to make the color and brightness decisions.
The goal of the Dolby Vision grading process is to capture the artistic intent in the reference grade. Directors, editors, and colorists
should use the grading system and the monitors to make the best, most engaging imagery they can, taking full advantage of
the dynamic range of the display.
After the reference grade is finished, the Dolby Vision color-grading system will analyze and save the dynamic metadata that
describes the creative decisions made on the display. The content mapping unit (CMU) maps the content with the metadata
to a reference display at a standard brightness (100 nits).
After the Standard Dynamic Range version has been approved, the colorist exports the images with metadata.
The dynamic metadata generated to create the SDR grade can be used to render the Dolby Vision master on displays, which may
offer a wide performance range.
A 600-nit TV will look great; a 1,200-nit TV will look even better, both referencing the same metadata and Dolby Vision
reference images. The same algorithms used in the Content Mapping Unit for off-line grading can be used to create a traditional
compatible grade for live broadcasts in Dolby Vision.
Then:

Code:
A major difference between the Dolby Vision approach and other HDR solutions is the metadata that accompanies each frame
of the video all the way to the display manager in the consumer-playback device. Systems with generic HDR carry only static 
metadata that describes the properties of the color-grading monitor that was used to create the content and some very basic
information about the brightness properties (maximum and average light levels) for the entire piece of content.
Dolby Vision adds dynamic metadata that is produced during content creation; the dynamic properties of each scene are captured. With this
information, the Dolby Vision display manager is able to adapt the content to the properties of the display much more accurately. It allows
hues to be preserved properly, which is critical for display of skin tones. Even with mass-market edge-lit TVs,
the overall impression of colors is preserved much more accurately.
Guided by the Dolby Vision metadata, the Dolby Vision display manager enables great visual experiences on a wide range of display devices
ranging from higher-end OLED TVs with stunning black levels to LCD TVs with advanced technologies like quantum dot, all the way down to
mass-market edge-lit TVs.
More TVs are getting DolbyVision, so with a correctly mastered HDR Blu-ray your end result should be very decent regardless of whether you have a high-end TV or a rather basic one (assuming it has DolbyVision support).

Old 6th June 2017, 09:16   #34  |  Link
jpsdr
Registered User
 
Join Date: Oct 2002
Location: France
Posts: 2,610
videoh has updated the DGIndexNV tools to HEVC 10/12 bits, and also added the possibility to extract the HDR metadata (when present).
For now, it is put in the index dgi file.
It may not be the perfect thing, but at least there is something, and it's a big first step.
Now, to try to improve this and take a second step, does anyone have any idea how this data could be sent directly to Avisynth's frames, without the need to get it from an external file?

The only stupid thing I thought of is to use the alpha channel plane...

Old 6th June 2017, 13:50   #35  |  Link
pinterf
Registered User
 
Join Date: Jan 2014
Posts: 2,482
Quote:
Originally Posted by jpsdr View Post
videoh has updated the DGIndexNV tools to HEVC-10/12 bits, and also has the possibility to extract the HDR metadatas (when present).
For now, they are put in the index dgi file.
It may be not the perfect thing, but at least there is something, and it's a big first step.
Now, to try to improve this and make a second step, does anyone has any idea how these data could be send directly to avisynth's frames, without the need to get them on an external file ?

The only stupid thing i tought is to use the alpha channel plan...
Implementing frame properties, like VS does.
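(For comparison, in Vapoursynth the HDR metadata travels as reserved frame properties; a sketch, and whether a given source filter actually fills these in depends on the filter and version:)

Code:
import vapoursynth as vs
core = vs.core

clip = core.ffms2.Source("hdr_sample.mkv")
props = clip.get_frame(0).props
print(props.get("_Transfer"))                     # 16 = SMPTE ST 2084 (PQ)
print(props.get("MasteringDisplayMaxLuminance"))  # from the ST 2086 SEI, if present
print(props.get("ContentLightLevelMax"))          # MaxCLL, if present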
Old 6th June 2017, 15:22   #36  |  Link
videoh
Useful n00b
 
Join Date: Jul 2014
Posts: 1,666
Quote:
Originally Posted by pinterf View Post
Implementing frame properties, like VS
Does Avisynth+ support frame properties? If so, where is it documented? Thank you.
Old 6th June 2017, 15:43   #37  |  Link
pinterf
Registered User
 
Join Date: Jan 2014
Posts: 2,482
Quote:
Originally Posted by videoh View Post
Does Avisynth+ support frame properties? If so, where is it documented? Thank you.
No, it does not support them; that's why it needs to be implemented :-)
Old 26th June 2017, 13:35   #38  |  Link
jpsdr
Registered User
 
Join Date: Oct 2002
Location: France
Posts: 2,610
I've looked at R-REC-BT.2100-0-201607-I!!PDF-E, and want to check that I've understood properly.
In the case of the PQ configuration, if I want to do an HDR to SDR conversion, with input parameters Y, Cb, Cr coming from an HDR h265 stream, I have to do the following:
Compute R, G, B from Y, Cb, Cr using the standard linear matrix with BT.2020 coefficients.
Then 'linearize' R, G, B to R0, G0, B0 using the function (for R, for example) R0=f(R), where f(x) is the EOTF, the EOTF being L(V) in eq. 1, page 15 (Annex 3).
Then, finally, do a linear combination of R0, G0, B0 (with the BT.709 matrix, for example).
Is that right?
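(For concreteness, here is the chain just described as an illustrative numpy sketch; the YCbCr coefficients are the standard BT.2020 non-constant-luminance ones, the 3x3 matrix is the BT.2087 2020-to-709 conversion, and note that afterwards you would still tone-map and re-apply BT.709 gamma:)

Code:
import numpy as np

def pq_eotf(e):  # ST 2084; output normalized so 1.0 = 10000 nits; constants are fixed
    m1, m2, c1, c2, c3 = 2610/16384, 2523/32, 3424/4096, 2413/128, 2392/128
    p = np.power(np.maximum(e, 0), 1 / m2)
    return np.power(np.maximum(p - c1, 0) / (c2 - c3 * p), 1 / m1)

M_2020_TO_709 = np.array([[ 1.6605, -0.5876, -0.0728],
                          [-0.1246,  1.1329, -0.0083],
                          [-0.0182, -0.1006,  1.1187]])

def hdr_pixel_to_linear_709(y, cb, cr):  # y in [0,1], cb/cr in [-0.5,0.5]
    rgb = np.array([y + 1.4746 * cr,
                    y - 0.16455 * cb - 0.57135 * cr,
                    y + 1.8814 * cb])          # non-linear R'G'B', bt2020
    return M_2020_TO_709 @ pq_eotf(rgb)        # linearize, then change primaries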

What is the relation between the parameters in the formula and the SEI mastering information in, for example, an HDR h265 data stream?

mastering_display_colour_volume =>
display_primaries_x[0]
display_primaries_y[0]
display_primaries_x[1]
display_primaries_y[1]
display_primaries_x[2]
display_primaries_y[2]
white_point_x
white_point_y
max_display_mastering_luminance
min_display_mastering_luminance

content_light_level_info =>
max_content_light_level
max_pic_average_light_level
Old 26th June 2017, 14:28   #39  |  Link
TheFluff
Excessively jovial fellow
 
Join Date: Jun 2004
Location: rude
Posts: 1,100
I'm not sure if I understand you correctly but you might be interested in this discussion: https://github.com/sekrit-twc/zimg/i...ment-269100013
Old 26th June 2017, 18:31   #40  |  Link
jpsdr
Registered User
 
Join Date: Oct 2002
Location: France
Posts: 2,610
I'm not sure I've understood properly, and I can't correlate the SEI HDR information with the parameters in the formula.
For example, L(V)=((c-(V-m)*s*t)/(V-m-s))^(1/n). So, how do I get the c, m, s, t, n values from the following mastering information values:
mastering_display_colour_volume =>
display_primaries_x[0]
display_primaries_y[0]
display_primaries_x[1]
display_primaries_y[1]
display_primaries_x[2]
display_primaries_y[2]
white_point_x
white_point_y
max_display_mastering_luminance
min_display_mastering_luminance

content_light_level_info =>
max_content_light_level
max_pic_average_light_level

Edit:
After reading more, my question is rather: how do you use this mastering information in the equation?
It seems that c, m, s, t, n are statically defined values.
Despite reading the docs, I still can't figure out the equation for the last step: non-linear R,G,B to linear R,G,B.
The EOTF is clearly involved, but how is the mastering information used, and how exactly is the EOTF equation applied?
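(For reference, the PQ EOTF as published in BT.2100 is, in plain text:

F_D = 10000 * ( max(E'^(1/m2) - c1, 0) / (c2 - c3 * E'^(1/m2)) )^(1/m1)

with the fixed constants m1 = 2610/16384, m2 = 2523/4096 * 128, c1 = 3424/4096, c2 = 2413/4096 * 32, c3 = 2392/4096 * 32. These constants come from the standard itself, not from the mastering SEI; the SEI only describes the mastering display, so that a later tone-mapping step knows what to expect.)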
