Old 15th February 2014, 12:49   #23181  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by cyberbeing View Post
leeperry was strongly in favor of ED5 (floyd-stein, weight sum 1.00, old random generator), while at the end my top two favorites of NL5 (floyd-stein, 0.97, old random generator, bugfixes) & NL3 (floyd-stein, weight sum 1.00, old random generator, bugfixes) were probably closest to his initial preference. We both switched algorithms starting with the second build set.
There's no irony whatsoever tbh, I preferred ED5 to ED7 because the latter looked fuzzy. I was well aware that ED7 looked more "popping" and subjectively more impressive, but it was only after madshi made the changes in N3 that things changed. I rest my case that N3 looks much clearer than ED7 and that NL6 looks even better.

Quote:
Originally Posted by Ver Greeneyes View Post
I think Gamma Light looks smoother.
Quote:
Originally Posted by mithra66 View Post
With a Pioneer LX6090 display, gamma at 2.45, I vote for Gamma Light, though I expected the opposite.

I don't know for sure if it's the most accurate, but I prefer its 3D effect, without the inconvenience of sharpening.

If LL wins the vote, I'd like GL to remain as some kind of "3D cool setting".

Anyway, many thanks to madshi (and all testers) for this great work.

PS: DVDs have never looked so nice thanks to NNEDI3
Major +1 here, I would really hate being stuck with the current LL build.

And your DVD comment is interesting, because I read a long time ago that ReClock could apply inverse PAL speed-up in real time with DVB-T tuners, and this would be a killer combo with NNEDI3 at 64 neurons, OMG

The only remaining problem is that I would need CUVID deinterlacing, as its PQ appears to be quite a bit better than YADIF's on 576i/1080i.

And yesterday I ran more tests with NNEDI3 on 720p upscaled to 1080p; it seems rather clear that 64 neurons for luma looks amazingly good... hard to believe it's not genuine 1080p.

I ran a quick search and couldn't find an explanation of what NNEDI3 really does; even tritical doesn't explain it... I wonder why and how this thing looks so good.

Last edited by leeperry; 15th February 2014 at 12:53.
leeperry is offline   Reply With Quote
Old 15th February 2014, 12:50   #23182  |  Link
bacondither
Registered User
 
Join Date: Oct 2013
Location: Sweden
Posts: 128
Quote:
Originally Posted by mithra66 View Post
Sorry if I misinterpret the code, but shouldn't the LL value be calculated from the gamma settings (i.e. the fields "the display is calibrated to the following transfer function / gamma" and/or "desired display gamma / transfer function")?

Unless this does not make much difference with 2.2 gamma?
Well no, to linearise the image you should take the inverse of the gamma function that the image was recorded/made with.

//added

A better explanation than I could give can be found here:
http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
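
For illustration, a minimal HLSL-style sketch of that idea, assuming the content was encoded with a simple power-law gamma of 1/2.2 (real content may use the BT.709 or sRGB transfer functions instead, so treat the exponent as a placeholder, not as what madVR actually uses):

Code:
// Hypothetical helper: undo a pure power-law encoding gamma.
// The encoding gamma of 2.2 is an assumption for this sketch.
float3 linearise(float3 encodedValue)
{
  // the encoder applied pow(x, 1.0 / 2.2), so the inverse is pow(x, 2.2)
  return pow(saturate(encodedValue), 2.2);
}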

Last edited by bacondither; 15th February 2014 at 13:04.
bacondither is offline   Reply With Quote
Old 15th February 2014, 13:05   #23183  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by leeperry View Post
There's no irony whatsoever tbh
Well that's a matter of perspective. If madshi had never introduced the new RNG, we could have ended up preferring the same build.
cyberbeing is offline   Reply With Quote
Old 15th February 2014, 13:32   #23184  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by cyberbeing View Post
Well that's a matter of perspective. If madshi had never introduced the new RNG, we could have ended up preferring the same build.
Hah, good point! But it was really too bad that ED7 suffered from that fuzzy look because everything else looked very impressive.

madshi's got more tricks than a clown's pocket so it's only a matter of time before we might very well agree again

See you when he gets there
leeperry is offline   Reply With Quote
Old 15th February 2014, 14:16   #23185  |  Link
The 8472
Registered User
 
Join Date: Jan 2014
Posts: 51
Quote:
Originally Posted by madshi View Post
BT.2020 supports 10bit+, but no longer 8bit, so yes, it appears we're going to get higher bit depth. Hopefully the same will be true for 4K Blu-ray, but at this point we can only guess, because there's no information available about that yet.
UHD standards still seem to be in flux; I've even seen some slides mentioning 300Hz 4:4:4 12bit video as a potential future goal.

This image comes from a European Broadcasting Union/DVB meeting (May 2013):
The 8472 is offline   Reply With Quote
Old 15th February 2014, 14:19   #23186  |  Link
mithra66
Registered User
 
Join Date: Jan 2013
Location: Savoie FRANCE
Posts: 13
Quote:
Originally Posted by bacondither View Post
Well no, to linearise the image you should take the inverse of the gamma function that the image was recorded/made with.

A better explanation than I could give can be found here:
http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
I was confused because I thought LL was some kind of display gamma reversal, whereas it is about the file gamma. So thank you for correcting my mistake.
Very interesting link, by the way.

Quote:
Originally Posted by leeperry View Post
And your DVD comment is interesting, because I read a long time ago that ReClock could apply inverse PAL speed-up in real time with DVB-T tuners, and this would be a killer combo with NNEDI3 at 64 neurons, OMG
Having a 650 Ti, I already have to do the first NNEDI3 upscale in an Avisynth filter for 30fps files. I think I'll wait for a 20nm Maxwell GPU (supposedly a GTX 870 for my budget) for 64 neurons and 50/60fps deinterlaced content.

Quote:
Originally Posted by Aikibana View Post
Actually, I just updated to the 14.1 beta because 13.12 didn't enable smooth NNEDI3 playback either. For me, 14.1 works 'as good/bad' as 13.12.

I also tried overclocking my 7770 GHz Edition, but it doesn't change much and makes my system unstable.

From what I understood, a recent AMD GPU shouldn't have too much trouble with OpenCL and NNEDI3, right?
@Aikibana: NNEDI3 is very GPU-hungry. You may want to prioritize GPU power for luma upscaling and stick to Jinc or lower for chroma upscaling. Using profiles, you can still do NNEDI3 chroma upscaling when no luma upscaling is needed.
mithra66 is offline   Reply With Quote
Old 15th February 2014, 14:21   #23187  |  Link
DragonQ
Registered User
 
Join Date: Mar 2007
Posts: 934
Hmm, why all the phases? It just means loads of equipment will be made that supports 2160p/50 and then no broadcaster will be able to move to 2160p/100 or higher because none of the equipment around will support it. They made the same mistake when not including 1080p/50 (and 1080p/60) in the HD standards.
__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7
DragonQ is offline   Reply With Quote
Old 15th February 2014, 14:27   #23188  |  Link
tschi
Registered User
 
Join Date: Apr 2006
Posts: 71
Quote:
Originally Posted by Aikibana View Post
Actually, I just updated to the 14.1 beta because 13.12 didn't enable smooth NNEDI3 playback either. For me, 14.1 works 'as good/bad' as 13.12.

I also tried overclocking my 7770 GHz Edition, but it doesn't change much and makes my system unstable.

From what I understood, a recent AMD GPU shouldn't have too much trouble with OpenCL and NNEDI3, right?
I had the same problem with a Gigabyte P55-UD4 (i7 860) and an R9 280X GPU.
I tried a clean Win7 install and it was still broken with 13.12.
I tried an nvidia GTX 560 Ti and had no problem.
I tried the R9 280X on another motherboard (Asus Z77, i7 3770k) and had no problem.
I guess there are some issues with the combination of AMD, P55, OpenCL and the 7xxx series.
tschi is offline   Reply With Quote
Old 15th February 2014, 15:26   #23189  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
I've never had a single problem with OpenCL performance with a 1GHz 7850 on W7 SP1 using the latest WHQL drivers; maybe you guys could troubleshoot using that OpenCL benchmark?

You can see what kind of figures to expect at http://www.extremetech.com/computing...tcoin-mining/2

You did update DX9.0c, right? Not sure if it's needed on W8 but W7SP1 definitely needs it, it's available at http://www.microsoft.com/en-us/downl...s.aspx?id=8109

Quote:
Originally Posted by mithra66 View Post
Having a 650 Ti, I already have to do the first NNEDI3 upscale in an Avisynth filter for 30fps files. I think I'll wait for a 20nm Maxwell GPU (supposedly a GTX 870 for my budget) for 64 neurons and 50/60fps deinterlaced content.
Yeah, hopefully someday nvidia will catch up with AMD in raw GPGPU performance, as I'd very much fancy the ability to use CUVID deinterlacing again.

Last edited by leeperry; 15th February 2014 at 15:31.
leeperry is offline   Reply With Quote
Old 15th February 2014, 15:50   #23190  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
Well, I think my code is alright, but have a look yourself:

gamma light:
Code:
float3 calcPixelGamma(float3 gammaPixel, float3 collectedError, float randomValue, out float3 newError)
{
  float3 tempPixel = gammaPixel * 255.0f;
  float3 floorG = floor(tempPixel) / 255.0f;
  float3  ceilG = ceil (tempPixel) / 255.0f;
  float3 result = (gammaPixel + randomValue + collectedError < 0.5 * (floorG + ceilG)) ? floorG : ceilG;
  newError = gammaPixel + collectedError - result;
  return result;
}
linear light:
Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 1.0 / 0.45);
}

float3 calcPixelLinear(float3 gammaPixel, float3 collectedError, float randomValue, out float3 newError)
{
  float3 tempPixel = gammaPixel * 255.0f;
  float3 floorG = floor(tempPixel) / 255.0f;
  float3  ceilG = ceil (tempPixel) / 255.0f;
  float3 floorL = convertToLinear(floorG);
  float3  ceilL = convertToLinear( ceilG);
  bool3 useFloor = convertToLinear(gammaPixel + randomValue) + collectedError < 0.5 * (floorL + ceilL);
  newError = clamp(convertToLinear(gammaPixel) + collectedError, floorL, ceilL) - ((useFloor) ? floorL : ceilL);
  return (useFloor) ? floorG : ceilG;
}
One thing I'm not happy about is the clamp() call. I think it could potentially introduce a small error into the math. However, it was necessary to make the limiter work. Without the clamp() call the limiter produced artifacts in linear light. Of course I could remove both the limiter and the clamp() call, but then we would reintroduce some rare stray dots.
FWIW you are using a very different version of linear light than I was thinking about. I also thought that gammaLight used gamma correction, but that doesn't seem to be the case. I'm also not sure why the call to clamp() is necessary; ideally it shouldn't be there. You mention that it is necessary to make the limiter work, but as far as I can tell the code you used doesn't need a limiter, since it will always output either floor(gammaValue) or ceil(gammaValue).

Anyway, the version of linear light I expected was the following:

Code:
float3 convertToLinear(float3 gammaValue)
{
  return (gammaValue <= 0.04045) ? gammaValue / 12.92 : pow((gammaValue + 0.055) / 1.055, 2.4);
}
as opposed to simple gamma correction, which would be:

Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 2.2);
}
If the gammaLight version does not use any gamma correction, then that would explain why it performs better on darker colours, because actual linear light has a gamma of 1.0 near black, not the gamma of 1/0.45 that was used in the code. This also explains why they seemed to behave exactly the opposite way from what I expected: I assumed gammaLight used gamma correction with a gamma of 2.2, but instead it didn't use gamma correction at all, which is 'correct' for values near black.
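
To make that concrete, here is a rough numeric sketch (the helper is hypothetical and the values approximate): near black the sRGB curve is linear, so it yields a far larger linear-light value than the pure power curve used in the code does.

Code:
// Approximate near-black comparison, illustration only.
float compareNearBlack()
{
  float v = 1.0 / 255.0;                   // darkest non-zero 8bit code value
  float srgbLinear  = v / 12.92;           // sRGB linear segment -> ~3.0e-4
  float powerLinear = pow(v, 1.0 / 0.45);  // pure power curve    -> ~4.5e-6
  return srgbLinear / powerLinear;         // roughly a factor of 60-70
}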
Shiandow is offline   Reply With Quote
Old 15th February 2014, 16:05   #23191  |  Link
xabregas
Registered User
 
Join Date: Jun 2011
Posts: 121
Hi

Why does everyone say that downscaling with linear light checked is better? What does this option do?

I tried downscaling with Lanczos 4 AR and LL, with NNEDI3 16 for chroma, and got dropped frames; after unchecking linear light I get no dropped frames, with all other settings equal...

TIA
xabregas is offline   Reply With Quote
Old 15th February 2014, 16:43   #23192  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Aikibana View Post
Only NNEDI3 up to 32 neurons for chroma upscaling works; anything higher than that eats up all GPU power and results in dropped frames.

Image doubling is just impossible, even just for luma at 16 neurons. I get massive dropped frames and the video stutters heavily.
Try switching all algorithms (chroma upscaling, image upscaling, image downscaling) to Bilinear, and then enable NNEDI3 image doubling with 16 neurons. That should work just fine. At least it does on my PC with my HD7770. Once you get that working smoothly, you can set the other algorithms to better algorithms one-by-one.

Quote:
Originally Posted by bacondither View Post
Could you try a gamma compensation value of 1/2.0=0.5
Yes, can try that.

Quote:
Originally Posted by cyberbeing View Post
How does this differ from the gamma->linear->gamma conversion you do for linear light scaling which has a large performance impact?
Not sure what you mean? Error diffusion and scaling are completely different algorithms.

Quote:
Originally Posted by cyberbeing View Post
Also isn't that reversed?

If you are converting from 2.2 Gamma to 1.0 Linear shouldn't it be something like the following:

Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 1.0 / 2.2);
}
No. That would be "convertLinearToGamma()". See also here:

http://www.avsforum.com/t/912720/color-correction-with-a-htpc-simpler-solution-and-now-it-really-works/360

The pixel shader there shows you the right order of the pow() factors.
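
In other words, roughly (a minimal sketch of that convention, reusing the pure power curve and the 0.45 constant from the error diffusion snippets above):

Code:
// gamma-encoded -> linear light: apply the power curve
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 1.0 / 0.45);   // ~ pow(x, 2.22)
}

// linear light -> gamma-encoded: the inverse, which is what the quoted snippet actually computed
float3 convertLinearToGamma(float3 linearValue)
{
  return pow(saturate(linearValue), 0.45);        // ~ pow(x, 1/2.22)
}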

Quote:
Originally Posted by Shiandow View Post
FWIW you are using a very different version of linear light than I was thinking about. I also thought that gammaLight used gamma correction, but that doesn't seem to be the case. [...]

Anyway, the version of linear light I expected was the following:

Code:
float3 convertToLinear(float3 gammaValue)
{
  return (gammaValue <= 0.04045) ? gammaValue / 12.92 : pow((gammaValue + 0.055) / 1.055, 2.4);
}
as opposed to simple gamma correction, which would be:

Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 2.2);
}
If the gammaLight version does not use any gamma correction, then that would explain why it performs better on darker colours, because actual linear light has a gamma of 1.0 near black, not the gamma of 1/0.45 that was used in the code. This also explains why they seemed to behave exactly the opposite way from what I expected: I assumed gammaLight used gamma correction with a gamma of 2.2, but instead it didn't use gamma correction at all, which is 'correct' for values near black.
The formula you're suggesting is the BT.601/709 transfer function. That is the function the original content was encoded with. But it's not the function the content is supposed to be viewed with. BT.601/709 were made at a time when CRTs still ruled the world, and CRTs have a pure power curve transfer function. Even BT.1886, the latest spec on how to calibrate your display, suggests a pure power curve calibration of the display - if the display has perfect black levels. Ok, the BT.1886 calibration differs from a pure power curve if the display black levels are not perfectly zero, so in most cases. But the BT.601/709 transfer function is usually only used for encoding, and then never used again.

It's a valid question which gamma curve to use for converting data between gamma <-> linear, for the purpose of processing the data in linear light. Personally, I think using a pure power curve for that is the way to go, because that's the way the content should ideally be displayed, but that's just my personal opinion.
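
For reference, a sketch of the BT.1886 EOTF being described (the standard formula from the spec; the parameter names are just illustrative): with a black level of zero it collapses to a pure power 2.4 curve, and only a non-zero black level bends it away from that.

Code:
// BT.1886 EOTF sketch: V is the normalized 0..1 signal, Lw/Lb the display's white/black luminance.
// With Lb = 0 this reduces to L = Lw * pow(V, 2.4), i.e. a pure power curve.
float bt1886(float V, float Lw, float Lb)
{
  const float gamma = 2.4;
  float a = pow(pow(Lw, 1.0 / gamma) - pow(Lb, 1.0 / gamma), gamma);
  float b = pow(Lb, 1.0 / gamma) / (pow(Lw, 1.0 / gamma) - pow(Lb, 1.0 / gamma));
  return a * pow(max(V + b, 0.0), gamma);
}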

Quote:
Originally Posted by Shiandow View Post
I'm also not sure why the call to clamp() is necessary; ideally it shouldn't be there. You mention that it is necessary to make the limiter work, but as far as I can tell the code you used doesn't need a limiter, since it will always output either floor(gammaValue) or ceil(gammaValue).
The "limiter" functionality is the code which uses either floor() or ceil() of the gammaValue. When using this code, there are artifacts if you don't do the clamp(). If I want to get rid of clamp() I'd have to drop the floor/ceil solution and instead round the pixel to the nearest gamma value, which can sometimes be lower than floor(gammaValue) or higher than ceil(gammaValue).

Quote:
Originally Posted by xabregas View Post
Why does everyone say that downscaling with linear light checked is better? What does this option do?

I tried downscaling with Lanczos 4 AR
The usual recommendation is to not use Lanczos4 for downscaling, but instead simple Catmull-Rom AR with linear light. Linear light improves certain things, but also increases the danger of getting ringing artifacts. Because of that a 2-tap scaling algorithm (like Catmull-Rom) is probably the better choice, when using linear light downscaling.
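
For reference, a sketch of the standard Catmull-Rom weight function (the Keys cubic with a = -0.5), just to illustrate the kernel shape; this is the textbook form, not necessarily the exact code used internally. Its support is only 2 pixels on each side, which is part of why it rings less than Lanczos4.

Code:
// Catmull-Rom (Keys cubic, a = -0.5) interpolation weight as a function of distance x
float catmullRomWeight(float x)
{
  x = abs(x);
  if (x <= 1.0)
    return (1.5 * x - 2.5) * x * x + 1.0;            //  1.5x^3 - 2.5x^2 + 1
  else if (x < 2.0)
    return ((-0.5 * x + 2.5) * x - 4.0) * x + 2.0;   // -0.5x^3 + 2.5x^2 - 4x + 2
  return 0.0;
}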

Last edited by madshi; 15th February 2014 at 16:55.
madshi is offline   Reply With Quote
Old 15th February 2014, 17:04   #23193  |  Link
Aikibana
Registered User
 
Join Date: Aug 2012
Posts: 12
Quote:
Originally Posted by tschi View Post
I had the same problem with a Gigabyte P55-UD4 (i7 860) and an R9 280X GPU.
I tried a clean Win7 install and it was still broken with 13.12.
I tried an nvidia GTX 560 Ti and had no problem.
I tried the R9 280X on another motherboard (Asus Z77, i7 3770k) and had no problem.
I guess there are some issues with the combination of AMD, P55, OpenCL and the 7xxx series.
I have a Gigabyte P55-UD3 mobo as well!

Maybe others with this config could confirm or dispute these findings?
Aikibana is offline   Reply With Quote
Old 15th February 2014, 17:06   #23194  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
The formula you're suggesting is the BT.601/709 transfer function. That is the function the original content was encoded with. But it's not the function the content is supposed to be viewed with. BT.601/709 were made at a time when CRTs still ruled the world, and CRTs have a pure power curve transfer function. Even BT.1886, the latest spec on how to calibrate your display, suggests a pure power curve calibration of the display - if the display has perfect black levels. Ok, the BT.1886 calibration differs from a pure power curve if the display black levels are not perfectly zero, so in most cases. But the BT.601/709 transfer function is usually only used for encoding, and then never used again.

It's a valid question which gamma curve to use for converting data between gamma <-> linear. Personally, I think using a pure power curve for that is the way to go, because that's the way the content should ideally be displayed, but that's just my personal opinion.
As far as I know that transfer function is the one that is used for sRGB; in theory this should correspond to the brightness produced by the monitor. Unless the monitor is using a pure gamma curve, but I thought most would follow the sRGB standard. Unfortunately it seems unlikely that you can do this in a way that looks right for all screens, and the brightened examples don't really help, since the brightening step itself needs to make assumptions about the way the screen displays the values.

I fear that you'd need more than one version to make it look right for all screens; ideally this should depend on the calibration settings for your screen in madVR.

Quote:
Originally Posted by madshi View Post
The "limiter" functionality is the code which uses either floor() or ceil() of the gammaValue. When using this code, there are artifacts if you don't do the clamp(). If I want to get rid of clamp() I'd have to drop the floor/ceil solution and instead round the pixel to the nearest gamma value, which can sometimes be lower than floor(gammaValue) or higher than ceil(gammaValue).
I'm still not entirely sure why this clamp() is needed to avoid artifacts; it should only very rarely have an effect. The best explanation I've got is that randomValue is biased for some reason. The reason I think so is that without randomValue the clamp shouldn't have any effect at all.
Shiandow is offline   Reply With Quote
Old 15th February 2014, 17:26   #23195  |  Link
bacondither
Registered User
 
Join Date: Oct 2013
Location: Sweden
Posts: 128
This might be relevant to the linear light discussion.

http://forum.doom9.org/showthread.php?t=162323
bacondither is offline   Reply With Quote
Old 15th February 2014, 17:28   #23196  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 496
Quote:
Originally Posted by Shiandow View Post
As far as I know that transfer function is the one that is used for sRGB; in theory this should correspond to the brightness produced by the monitor. Unless the monitor is using a pure gamma curve, but I thought most would follow the sRGB standard. Unfortunately it seems unlikely that you can do this in a way that looks right for all screens, and the brightened examples don't really help, since the brightening step itself needs to make assumptions about the way the screen displays the values.
Not entirely relevant, but since you mentioned sRGB.

Just FYI, we had talks about including an sRGB calibration setting in madVR some time ago, but ultimately madshi decided it wasn't worth it; the argument was that it's not explicitly video-related, which means that if you want madVR to behave perfectly, you'd need to calibrate to the offered video targets.

The problem with all screenshot comparisons out of madVR (the same problem I had in my last two posts) is that there are lots of people with different calibration targets and madVR calibration settings already, which are specifically optimized for madVR. They are not meant for desktop usage at all.

Some of us use BT.1886, some use BT.1886 with adjusted black levels, some use BT.709 (even though sRGB would be a lot better suited to the actual display characteristics, including a gamma of 2.2), some use SMPTE-C (e.g. some CRT users), and so on. Additionally, not everyone even uses madVR's 3DLUTs.

But outside of madVR, at least on PC monitors, we usually have a pure power gamma again, so I'm not even sure everyone is looking at the comparisons correctly. Additionally, not all monitors can even show all 0-255 levels, even when calibrated correctly.

@madshi:
Adding the DCI-P3 bug report right now.

Last edited by iSunrise; 15th February 2014 at 17:45.
iSunrise is offline   Reply With Quote
Old 15th February 2014, 17:38   #23197  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by iSunrise View Post
not everyone even uses madVR's 3DLUTs
A very small minority does, actually. Personally, I've calibrated my TV to D65 using HCFR, an i1d2 and the TV's built-in 10/12bit settings, and then used that PS script to automatically roll gamuts in my media player based on resolution/fps. I know it's supposedly not as good as 3DLUTs, but it's up and running in two minutes, and screenshot comparisons against tritical's ddcc() didn't show any difference. It also opens instantly, though I guess 3DLUTs on a ramdisk would load equally fast.

The vast majority of madVR users probably didn't bother calibrating their display with a sensor.

Last edited by leeperry; 15th February 2014 at 17:40.
leeperry is offline   Reply With Quote
Old 15th February 2014, 17:48   #23198  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
No. That would be "convertLinearToGamma()".
Then I guess my understanding of linear gamma is reversed? The point isn't to change the display gamma of the source from 2.2 gamma to 1.0 linear gamma, but rather to remove the inverse transfer function so the source values will be linear on a 1.0 linear gamma device?

Quote:
Originally Posted by madshi View Post
Not sure what you mean? Error diffusion and scaling are completely different algorithms.
My question is whether your R'G'B' to RGB to R'G'B' operation is the same for both. In both cases you apply the conversion to linear light on textures(?), perform the processing, then convert back to gamma-corrected light? Or is something different? Essentially, could the problem with near-black tones that we are seeing with linear light error diffusion also be detrimental to the linear light scaling performed in madVR?

Quote:
Originally Posted by madshi View Post
Linear light improves certain things, but also increases the danger of getting ringing artifacts.
Linear light scaling increases the danger of aliasing as well.

Last edited by cyberbeing; 15th February 2014 at 18:22.
cyberbeing is offline   Reply With Quote
Old 15th February 2014, 17:55   #23199  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 496
Quote:
Originally Posted by leeperry View Post
The vast majority of madVR users probably didn't bother calibrating their display with a sensor.
Indeed. And even if you do that, the thing with calibration is that, even with very high-end LCD screens, you would need to re-calibrate at least every couple of months, since the CCFL backlights lose their light intensity over time.

But this would be a strict requirement if we're doing such comparisons, so I'm not sure that we will ever come to the same conclusion when we are comparing a lot of the lower-level bars.

But it definitely is as Ver Greeneyes posted here: GL is closer to optimal than LL (at least for me), for whatever reason.

Maybe madshi has another idea about that, because I trust his algorithms; otherwise someone would have noticed something really wrong.

Last edited by iSunrise; 15th February 2014 at 17:59.
iSunrise is offline   Reply With Quote
Old 15th February 2014, 18:43   #23200  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by iSunrise View Post
Not entirely relevant, but since you mentioned sRGB.

Just FYI, we had talks about including an sRGB calibration setting in madVR some time ago, but ultimately madshi decided it wasn't worth it; the argument was that it's not explicitly video-related, which means that if you want madVR to behave perfectly, you'd need to calibrate to the offered video targets
Indeed, it seems that madVR does not actually have a setting to use an sRGB gamma curve, although you can specify that it should use a BT.709 gamma curve with gamma 2.2, which (I think) should be identical. Although I'm not exactly sure, since as far as I can tell a BT.709 gamma curve should correspond to a gamma of 2.4 by definition.

But it wasn't my intention to suggest that madshi should use the algorithms I posted. I mainly posted them to try and explain the difference between what I expected and what actually happened. My main point was that you'd expect gamma light to behave better when viewed on a monitor calibrated to an sRGB gamma curve. This explained both why people preferred gamma-light (since good behaviour around black is preferable to good behaviour around white) and why linear light was darker than I expected (since I misunderstood and thought gamma-light used a pure power gamma correction). I do think that in an ideal world madVR would contain an option to specify that your display is calibrated to an sRGB gamma curve, and error dithering would use this setting to convert values to linear light. But given that most people haven't calibrated their displays, or don't even know which gamma curve is closest to their display's, this may be overly complicated and not particularly useful.

FWIW, as I mentioned, I had attempted to reconstruct bacondither's images, and if I use sRGB linear light then gamma-light is preferable, but if I use madshi's gamma correction then linear-light is better. What this means for actual viewing depends heavily on which calibration is most commonly used by displays.
Shiandow is offline   Reply With Quote