Old 11th March 2014, 12:55   #24801  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by madshi
Seriously, this is getting messy. And I think I'll cheap out with keeping things as they are.
It's not cheaping out, it's conforming to a standard.
Stick with the standard 2.2 power curve for LL Dithering and Smooth Motion, and be done with it.
You can always change it if the ITU suddenly comes out with a new standard (or OLEDs get extremely cheap).

One curve cannot fit ALL displays, but all displays can be calibrated to ONE curve (if the display is capable enough).
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.

Last edited by James Freeman; 11th March 2014 at 13:33.
Old 11th March 2014, 13:20   #24802  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by madshi View Post
Could we please stop these arguments? They don't go anywhere. I've decided to not add settings controls for dither transfer functions. This is set in stone. No further discussion needed. You may not like my decision. But you'll have to live with it.
Yes, this is getting really tedious for what should amount to an almost imperceptible change.

It's not what I would prefer, but I assume it's going to be fixed at 1/0.45, which is fine.
Old 11th March 2014, 13:20   #24803  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Personally I think I'll vote for BT.1886 hardcoded to a contrast ratio of 1200:1 (the best match for sRGB). I don't think the fact that the slope is a little steeper near zero is going to hurt anyone (even monitors with ridiculous contrast ratios), and it should be somewhat better in the reduced bitdepth modes near zero (so for people with 6-bit LCDs, say) while still being continuous (where sRGB is not). IMO these are the configurations we should be focusing on here, since the lower bitdepth modes benefit more from this (at 8-bit the difference from gamma light is minuscule).

But if you end up sticking with 1.0/0.45, I won't complain (like I said I can't tell the difference in 8-bit mode using 8-bit source material, which is all I regularly use).
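
For anyone who wants to compare the curves, here is a minimal sketch (not madVR code) of the BT.1886 EOTF with the black level derived from a 1200:1 contrast ratio, next to the piecewise sRGB EOTF and a plain 2.2 power curve. The constants are the published ones from BT.1886 and the sRGB spec; only the 1200:1 figure comes from the post above.

Code:
# Illustrative sketch only, not madVR code: BT.1886 at a 1200:1 contrast
# ratio versus the piecewise sRGB EOTF and a plain 2.2 power curve.

def bt1886(v, contrast=1200.0, gamma=2.4):
    """BT.1886 EOTF, normalised so white = 1.0 and black = 1/contrast."""
    lw, lb = 1.0, 1.0 / contrast
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(v + b, 0.0) ** gamma

def srgb(v):
    """Piecewise sRGB EOTF (linear segment below 0.04045)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

for v in (0.0, 0.01, 0.05, 0.1, 0.5, 1.0):
    print(f"{v:4.2f}  bt1886={bt1886(v):.6f}  srgb={srgb(v):.6f}  pow2.2={v ** 2.2:.6f}")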

Last edited by Ver Greeneyes; 11th March 2014 at 13:24.
Old 11th March 2014, 13:25   #24804  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 496
Quote:
Originally Posted by madshi View Post
Look: If I had simply ignored the whole linear light topic, we would have stopped at v0.87.6, and every one of you guys would still have been happy, isn't that right? Now the next build will give you even more quality than v0.87.6, but suddenly you're unhappy?
I would not interpret that as unhappiness; it's just that sometimes it is possible to compromise and satisfy everyone, while in this case there are just too many variables.

I can only speak for myself here, madshi, but I am already very happy with the hardcoded 1/0.45 right now, because the improvements I see at lower bit depths and up were easily worth the time it took to include it.

If there had been a "simple" fix for the dark areas, I guess the majority would never even have thought about anything else.

Power users always want everything accurate in every case, which is not possible, because sRGB and BT.1886 were never meant for the same tasks.

Quote:
Originally Posted by 6233638 View Post
Yes, this is getting really tedious for what should amount to an almost imperceptible change.

It's not what I would prefer, but I assume it's going to be fixed at 1/0.45, which is fine.
I agree.
Old 11th March 2014, 13:31   #24805  |  Link
mindz
Registered User
 
Join Date: Apr 2011
Posts: 57
Do the "colored noise" and "change dither every frame" options work for all dither algorithms, or only for ED?
Old 11th March 2014, 13:51   #24806  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by James Freeman View Post
It's not cheaping out, it's conforming to a standard.
Stick with the standard 2.2 power curve for LL Dithering and Smooth Motion, and be done with it.
Quote:
Originally Posted by 6233638 View Post
Yes, this is getting really tedious for what should amount to an almost imperceptible change.

It's not what I would prefer, but I assume it's going to be fixed at 1/0.45, which is fine.
Quote:
Originally Posted by Ver Greeneyes View Post
Personally I think I'll vote for BT.1886 hardcoded to a contrast ratio of 1200:1 (the best match for sRGB). I don't think the fact that the slope is a little steeper near zero is going to hurt anyone (even monitors with ridiculous contrast ratios), and it should be somewhat better in the reduced bitdepth modes near zero (so for people with 6-bit LCDs, say) while still being continuous (where sRGB is not). IMO these are the configurations we should be focusing on here, since the lower bitdepth modes benefit more from this (at 8-bit the difference from gamma light is minuscule).

But if you end up sticking with 1.0/0.45, I won't complain (like I said I can't tell the difference in 8-bit mode using 8-bit source material, which is all I regularly use).
Quote:
Originally Posted by iSunrise View Post
I would not interpret that as unhappiness; it's just that sometimes it is possible to compromise and satisfy everyone, while in this case there are just too many variables.

I can only speak for myself here, madshi, but I am already very happy with the hardcoded 1/0.45 right now, because the improvements I see at lower bit depths and up were easily worth the time it took to include it.
Thanks, guys. FWIW, I'm currently considering using sRGB for bitdepths lower than 8bit, because displays which don't natively support 8bit (mainly cheap LCD panels) probably also have rather bad contrast. For 8bit dithering I haven't fully decided yet. Could be a 2.2 power curve, or could be a BT.1886 curve with black & white levels set to match a good TV.

Quote:
Originally Posted by mindz View Post
Do the "colored noise" and "change dither every frame" options work for all dither algorithms, or only for ED?
Should work for all algorithms, except "none", of course.
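
For readers wondering what the two options actually do, here is a generic sketch of random dithering (an assumed illustration, not madVR's implementation): "colored" noise means independent noise per RGB channel, mono noise shares one noise plane across the channels, and changing the dither every frame just means drawing fresh noise per frame index.

Code:
# Generic illustration of the two options, not madVR's implementation.
import numpy as np

def random_dither(img, bits=8, colored=True, frame=0):
    """img: float array of shape (H, W, 3) with values in [0, 1]."""
    rng = np.random.default_rng(frame)                 # fresh noise each frame
    scale = (1 << bits) - 1
    if colored:
        noise = rng.random(img.shape)                  # independent noise per channel
    else:
        noise = rng.random(img.shape[:2])[..., None]   # one plane shared by R, G, B
    return np.floor(img * scale + noise) / scale       # unbiased rounding dither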
Old 11th March 2014, 13:56   #24807  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
Quote:
Originally Posted by madshi View Post
I've decided to not add settings controls for dither transfer functions.
Probably a pretty lame question, but why not make the dither transfer functions dependent on the display calibration settings?
Old 11th March 2014, 14:14   #24808  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by madshi View Post
For 8bit dithering I haven't fully decided yet. Could be a 2.2 power curve, or could be a BT.1886 curve with black & white levels set to match a good TV.
Well, what do you consider a "good TV"? Because I don't like anything that can't do black, and once your black level gets below 0.01 (which many displays manage) you're effectively using a 2.40 power curve anyway.

I'd rather see 1/0.45 than some arbitrary BT.1886 curve.
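
As a quick numeric check of the black level point (illustrative only; black levels here are relative to white = 1.0), the BT.1886 coefficients collapse towards a pure 2.40 power curve as the black level approaches zero:

Code:
# BT.1886 EOTF: L = a * (V + b)^2.4. As Lb -> 0, a -> 1 and b -> 0,
# i.e. a pure 2.40 power curve.
gamma = 2.4
for lb in (0.05, 0.01, 0.001, 0.0001):      # black level relative to white = 1.0
    a = (1.0 - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (1.0 - lb ** (1 / gamma))
    print(f"Lb={lb:<7} a={a:.4f}  b={b:.4f}")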
Old 11th March 2014, 14:32   #24809  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by 6233638 View Post
I'd rather see 1/0.45 than some arbitrary BT.1886 curve.
...
+1
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
Old 11th March 2014, 14:53   #24810  |  Link
Audionut
Registered User
 
Join Date: Nov 2003
Posts: 1,281
Quote:
Originally Posted by madshi View Post
Look: If I had simply ignored the whole linear light topic, we would have stopped at v0.87.6, and every one of you guys would still have been happy, isn't that right? Now the next build will give you even more quality than v0.87.6, but suddenly you're unhappy?
Not unhappy. Just making sure we're getting every last minuscule percent.

Quote:
Originally Posted by 6233638 View Post
I'd rather see 1/0.45 than some arbitrary BT.1886 curve.
+1
__________________
http://www.7-zip.org/

Last edited by Audionut; 11th March 2014 at 15:00.
Old 11th March 2014, 14:55   #24811  |  Link
*Touche*
Registered User
 
Join Date: May 2008
Posts: 84
Quote:
Originally Posted by 6233638 View Post
Well, what do you consider a "good TV"? Because I don't like anything that can't do black, and once your black level gets below 0.01 (which many displays manage) you're effectively using a 2.40 power curve anyway.

I'd rather see 1/0.45 than some arbitrary BT.1886 curve.
+1

Either 1/0.45 or pure power 2.4.
Old 11th March 2014, 15:00   #24812  |  Link
QBhd
QB the Slayer
 
Join Date: Feb 2011
Location: Toronto
Posts: 697
Quote:
Originally Posted by 6233638 View Post
I'd rather see 1/0.45 than some arbitrary BT.1886 curve.
...
+1
Last edited by QBhd; 11th March 2014 at 15:03.
Old 11th March 2014, 15:04   #24813  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
Ouch. It seems Shiandow was right.

Unfortunately this gets all very complicated very fast because again we don't necessarily know the transfer function of the display, and there might be calibration on the HTPC involved, which is performed *after* smooth motion processing, and then we have dithering involved, too, because we can't really send floating point pixels to the display.

Seriously, this is getting messy. And I think I'll cheap out with keeping things as they are. Unless you guys have great new ideas. But they would need to cover all the countless different situations, with internal and external calibrations, overlay mode, windowed mode, FSE mode. Unknown native display curves etc etc.
Well, FWIW I think that it doesn't really matter whether you use the monitor's gamma curve after smooth motion or not; it's just somewhat convenient, since you've got everything in linear light anyway. What does matter is how you eventually use the monitor's gamma curve. I'm reasonably sure that this should be done somewhere, and preferably before dithering and after smooth motion (you can also do it before smooth motion, but then you've got to convert to linear light again, so that's just a waste of time).

Now the problem is that I don't really know exactly how the different settings are applied. The easiest case seems to be "this display is already calibrated", where you directly get the gamma curve; then it's just a matter of applying it correctly. It gets more complicated when you've got a LUT; I fear that currently those don't have exactly the information you need. What you need is a table that tells you which pixel value has which colour, and I fear the LUT does this the other way around. The linear interpolation I proposed also gets quite a lot harder in 3D, and depends on whether you use monoColor or oppositeColor noise. I currently know of no easy way to solve this. Also, if the 3DLUT is created by linear interpolation, then it doesn't make sense to use the shader I proposed either (you could technically run the shader using the 3DLUT as the gamma curve, but that is probably infeasible).

Anyway, I don't think it makes sense to use that shader with a LUT, and I think it will be too hard to apply the LUT correctly (unless you can actually change the way the LUT is created, but that's a different matter altogether). So I think the best approach for now is to at least try to apply the "this display is already calibrated" setting 'correctly'; otherwise, if someone either has no calibration or displays at a low bitdepth, I'd use the shader with some well-chosen gamma curve (BT.1886 sounds reasonable). It might actually be possible to also apply the "calibrate this display by using yCMS" setting 'correctly', since you've got to input a gamma curve there, but that would probably require changing the code for that option entirely, so I don't think that's a good idea.
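
As a rough illustration of the "other way around" problem, assuming for simplicity a monotonic 1D curve rather than an actual 3DLUT: if the table maps input value to output, recovering which input produces a given output means numerically inverting it; with a full 3DLUT there is no equally simple inverse, which is exactly the difficulty described above.

Code:
# Sketch only: inverting a monotonic 1D LUT by linear interpolation.
import bisect

def invert_lut(lut, y):
    """lut: increasing list of outputs for evenly spaced inputs in [0, 1]."""
    n = len(lut) - 1
    i = bisect.bisect_left(lut, y)
    if i <= 0:
        return 0.0
    if i > n:
        return 1.0
    x0, x1 = (i - 1) / n, i / n
    y0, y1 = lut[i - 1], lut[i]
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)

lut = [(i / 255) ** 2.2 for i in range(256)]   # e.g. a pure 2.2 gamma table
print(invert_lut(lut, 0.5))                    # ~0.5 ** (1 / 2.2)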
Old 11th March 2014, 15:14   #24814  |  Link
DarkSpace
Registered User
 
Join Date: Oct 2011
Posts: 204
Quote:
Originally Posted by madshi View Post
Ok, let's think this through. Let's say we want to smooth motion blend a white and a black frame, 50% each. Let's say the source gamma is a pure power 2.2 curve and the display is externally calibrated to a pure power 2.4 curve, to make things simple. So currently smooth motion calculates the mixed gray in linear light by using "(pow(0.0, 2.2) + pow(1.0, 2.2)) / 2 = 0.5". Now if we convert this back to gamma light by using the source gamma we get "pow(0.5, 1 / 2.2) = 0.72974". If we convert it back to gamma light by using the display gamma we get "0.74915353843834075" instead. We send either value to the display, let's suppose we can do this in floating point. Now the display converts this back to linear light by using the display's gamma. So it does "pow(value, 2.4)". But we do want to end up with 0.5 in linear light.

Ouch. It seems Shiandow was right.
I see. Then, however, you will also have to gamma-correct the non-SM frames so that the colors match when identical images are sent from the decoder.
And I thought madVR's Gamma Correction (and 3D LUT Correction) features were supposed to take care of that, not Smooth Motion itself?

Anyway, what I was confident about was merely that you should use the same gamma curve to convert to and from LL in SM. And your example gives the exact same result when using a 2.4 Gamma Curve to convert to Linear Light, by the way (that's why I tried blending identical colors: I know what result to expect, while with your example, I'm not sure).

EDIT: I think I know why Shiandow and I have different opinions: I am under the impression that madVR mainly works in Gamma Light, and whenever it uses Linear Light, it converts the end result back to Gamma Light for the next step. I believe Shiandow thinks that madVR works in Linear Light and converts to and from Gamma Light whenever necessary (basically the exact opposite of what I think).
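
For reference, here is the arithmetic from the quoted example, plus the "2.4 both ways" variant, as a runnable check (illustrative only, not madVR code):

Code:
# Blend a black and a white frame 50/50 in linear light, re-encode, and see
# what a pure 2.4 power display makes of it.

def blend_black_white(decode_gamma, encode_gamma, display_gamma=2.4):
    linear = (0.0 ** decode_gamma + 1.0 ** decode_gamma) / 2    # 0.5 in linear light
    encoded = linear ** (1 / encode_gamma)                      # value sent to the display
    return encoded, encoded ** display_gamma                    # light the display emits

print(blend_black_white(2.2, 2.2))   # sends ~0.72974, display emits ~0.469
print(blend_black_white(2.2, 2.4))   # sends ~0.74915, display emits 0.5
print(blend_black_white(2.4, 2.4))   # "2.4 both ways" also lands on 0.5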

Last edited by DarkSpace; 11th March 2014 at 15:18.
Old 11th March 2014, 15:23   #24815  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by 6233638 View Post
I'd rather see 1/0.45 than some arbitrary BT.1886 curve.
Remember though that the constant offset introduced by BT.1886 with a specific contrast ratio doesn't matter for our purposes, since we only care about the (nonlinear) differences between colors. So the only effect of a different contrast ratio is that the slope changes. Since the drawback of using a pure power function is that black gets crushed, I think using a limited contrast is actually safer. It still gets most of the effect of using a nonlinear transfer function without the drawbacks of a pure power law or the discontinuity of sRGB.

That's my reasoning, anyway. I don't think the difference is big enough to make a big deal out of it though.
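
A rough way to see the crushed-blacks point (a sketch only, not madVR code): compare how much light the first few 8-bit code values produce under a pure 2.4 power curve versus BT.1886 at 1200:1, which keeps a non-zero slope at zero.

Code:
# First few 8-bit code values: pure power 2.4 versus BT.1886 at 1200:1.

def bt1886(v, contrast=1200.0, gamma=2.4):
    lw, lb = 1.0, 1.0 / contrast
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(v + b, 0.0) ** gamma

for code in range(5):
    v = code / 255
    print(f"code {code}: pow2.4={v ** 2.4:.2e}  bt1886={bt1886(v):.2e}")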

Last edited by Ver Greeneyes; 11th March 2014 at 15:27.
Old 11th March 2014, 15:36   #24816  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Qaq View Post
Probably a pretty lame question, but why not make the dither transfer functions dependent on the display calibration settings?
This has been discussed on the last 10 pages up and down, back and forth.

Quote:
Originally Posted by Shiandow View Post
Well, FWIW I think that it doesn't really matter whether you use the monitor's gamma curve after smooth motion or not; it's just somewhat convenient, since you've got everything in linear light anyway. What does matter is how you eventually use the monitor's gamma curve. I'm reasonably sure that this should be done somewhere
Why? The normal CE playback path is this: Source device decodes video and passes it to the display untouched, in its original gamma encoding. That's it. No processing at all in the source device (except chroma upsampling from 4:2:0 to 4:2:2 because HDMI doesn't support 4:2:0). So why would madVR have to use the monitor's gamma curve at all?

Of course the situation changes if we're talking about doing calibration on the HTPC, or complicated processing (like smooth motion FRC) in linear light, or dithering in linear light. But all of those are strictly optional.

Quote:
Originally Posted by DarkSpace View Post
I see. Then, however, you will also have to gamma-correct the non-SM frames so that the colors match when identical images are sent from the decoder.
And I thought madVR's Gamma Correction (and 3D LUT Correction) features were supposed to take care of that, not Smooth Motion itself?

Anyway, what I was confident about was merely that you should use the same gamma curve to convert to and from LL in SM. And your example gives the exact same result when using a 2.4 Gamma Curve to convert to Linear Light, by the way (that's why I tried blending identical colors: I know what result to expect, while with your example, I'm not sure).

EDIT: I think I know why Shiandow and I have different opinions: I am under the impression that madVR mainly works in Gamma Light, and whenever it uses Linear Light, it converts the end result back to Gamma Light for the next step. I believe Shiandow thinks that madVR works in Linear Light and converts to and from Gamma Light whenever necessary (basically the exact opposite of what I think).
To be honest, I'm still halfway confused about how it would have to be done for mathematically perfect results, in all the complicated combinations, taking possible internal and/or external calibration into account, and linear light dithering, and some processing steps being applied before calibration and others after calibration, etc. Probably we'd need a rocket scientist to figure all that out, and even the rocket scientist would have to ask the user all kinds of questions which most users wouldn't know the answer to, in order to be able to calculate the mathematically perfect algorithm.

Last edited by madshi; 11th March 2014 at 15:38.
Old 11th March 2014, 16:03   #24817  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
Why? The normal CE playback path is this: Source device decodes video and passes it to the display untouched, in its original gamma encoding. That's it. No processing at all in the source device (except chroma upsampling from 4:2:0 to 4:2:2 because HDMI doesn't support 4:2:0). So why would madVR have to use the monitor's gamma curve at all?
For consistency? I think that ideally a video should look the same regardless of the monitor that's used to play it, up to some difference in brightness/contrast. And you've taken the trouble of asking people what their monitor's characteristics are; are you suggesting we completely ignore those settings? What is it exactly that you are doing with those settings now?
Old 11th March 2014, 16:14   #24818  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Shiandow View Post
For consistency? I think that ideally a video should look the same regardless of the monitor that's used to play it, up to some difference in brightness/contrast. And you've taken the trouble of asking people what their monitor's characteristics are; are you suggesting we completely ignore those settings? What is it exactly that you are doing with those settings now?
Calibration is supposed to take ambient light levels into account. E.g. in a batcave with total light control you will calibrate your display differently than you would if there's a lot of ambient light. So with default settings it doesn't make sense for madVR to try and "undo" the display calibration by forcing the video to look the same on every display. Only if the HTPC is used to calibrate the display (e.g. through a 3dlut), only then it's up to the HTPC to define with which exact gamma curve the video should ideally be shown to the user. Or alternatively if the user tells madVR to modify the display's gamma curve. Otherwise it's the job of the display calibration to decide that.

Currently the settings in the "this display is already calibrated" section are only used for 2 things:

(1) If the source gamut differs from the display gamut, madVR converts the source gamut. This is necessary because the display doesn't know which gamut madVR is sending.

(2) If you enable gamma processing, madVR looks at the desired gamma curve and the display's native gamma curve to calculate a correction which converts the display to the desired gamma curve. But this is only done if you manually activate gamma processing. It's turned off by default. And it should be, IMHO.
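
For what it's worth, one way to picture correction (2), assuming for simplicity that both the desired curve and the display's native curve are plain power functions (madVR's actual processing may well differ):

Code:
# Sketch only: re-encode a value so a display with the given native gamma
# produces the light that the desired gamma curve would have produced.

def gamma_correction(v, desired=2.2, native=2.4):
    target_light = v ** desired            # light the desired curve asks for
    return target_light ** (1 / native)    # value the native display needs

print(gamma_correction(0.5))               # 0.5 ** (2.2 / 2.4)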
Old 11th March 2014, 16:56   #24819  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
Calibration is supposed to take ambient light levels into account. E.g. in a batcave with total light control you will calibrate your display differently than you would if there's a lot of ambient light. So with default settings it doesn't make sense for madVR to try and "undo" the display calibration by forcing the video to look the same on every display. Only if the HTPC is used to calibrate the display (e.g. through a 3dlut), only then it's up to the HTPC to define with which exact gamma curve the video should ideally be shown to the user. Or alternatively if the user tells madVR to modify the display's gamma curve. Otherwise it's the job of the display calibration to decide that.
Well, I think I've isolated the confusing part. According to Wikipedia:

Quote:
"Rec. 709 is written as if it specifies the capture and transfer characteristics of HDTV encoding - that is, as if it were scene-referred. However, in practice it is output (display) referred with the convention of a 2.4-power function display."
This seems to say that the luminance that was recorded is different from the one that should be displayed, which is incredibly confusing... Anyway, this seems to suggest doing the following:
  1. Convert to linear light using a 2.4-power function (or whatever has been specified by the "use gamma processing option").
  2. Blend the frames
  3. Encode using the monitor's gamma curve.
This would probably require a complete overhaul of the entire way gamma is currently processed. Anyway, the above method would ensure that the output of the monitor looks like whatever monitor has been specified by the "use gamma processing" option. I'll leave it to you to decide whether it's worth the trouble to have technically correct output. I can't even judge whether the above is compatible with 3DLUTs (I fear not).
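
Taken literally, the three steps above would look something like this (a sketch only, assuming plain power curves and ignoring 3DLUTs entirely):

Code:
# 1. decode with a fixed 2.4 power curve, 2. blend in linear light,
# 3. encode with the monitor's gamma curve.

def blend_frames(frame_a, frame_b, weight, source_gamma=2.4, monitor_gamma=2.4):
    lin_a = [v ** source_gamma for v in frame_a]                          # step 1
    lin_b = [v ** source_gamma for v in frame_b]
    lin = [a * (1 - weight) + b * weight for a, b in zip(lin_a, lin_b)]   # step 2
    return [v ** (1 / monitor_gamma) for v in lin]                        # step 3

print(blend_frames([0.0], [1.0], 0.5))   # the 50/50 black/white case again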
Old 11th March 2014, 16:56   #24820  |  Link
Vyral
Registered User
 
Join Date: Oct 2012
Posts: 70
Quote:
Originally Posted by madshi View Post
Calibration is supposed to take ambient light levels into account. E.g. in a batcave with total light control you will calibrate your display differently than you would if there's a lot of ambient light. So with default settings it doesn't make sense for madVR to try and "undo" the display calibration by forcing the video to look the same on every display. Only if the HTPC is used to calibrate the display (e.g. through a 3dlut), only then it's up to the HTPC to define with which exact gamma curve the video should ideally be shown to the user. Or alternatively if the user tells madVR to modify the display's gamma curve. Otherwise it's the job of the display calibration to decide that.

Currently the settings in the "this display is already calibrated" section are only used for 2 things:

(1) If the source gamut differs from the display gamut, madVR converts the source gamut. This is necessary because the display doesn't know which gamut madVR is sending.

(2) If you enable gamma processing, madVR looks at the desired gamma curve and the display's native gamma curve to calculate a correction which converts the display to the desired gamma curve. But this is only done if you manually activate gamma processing. It's turned off by default. And it should be, IMHO.
Where does madVR retrieve the display's native gamma curve from?

And if we use "disable calibration controls for this display" and don't enable gamma processing, is the source gamut used without correction?
__________________
iiyama prolite xb2483hsu 1080p60 Gamma=2.25 - Intel Core i3-2100 3.10GHz - AMD Radeon HD 6850, RGB 4:4:4 Full range - MPC-HC + XYSubFilter + madVR