Old 15th February 2014, 02:11   #23161  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 752
Quote:
Originally Posted by Asmodian View Post
I prefer LL over GL dithering and I do not think these test images can be used to decide given the above.
Actually, I tried to reconstruct what bacondither did, but with the brightening performed in linear light; as far as I can tell this makes GL perform better (less banding, more accurate). I'll post some images in the near future; I still need to make sure that I haven't made any silly mistakes.
Shiandow is offline   Reply With Quote
Old 15th February 2014, 02:15   #23162  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 445
I did the same thing bacondither did but with device RGB spacing, and generating an expanded version of the original myself (it's my program, after all). There's no trickiness going on here: every image is expanded from 0-6 to 0-255, except the expanded original which I generated. Let me know if this helps clear up the difference between GL and LL.

no dithering
random dithering
gamma light error diffusion
linear light error diffusion
expanded original

(I think it shows that Gamma Light is more true to the source. How is the Linear Light conversion being done?)
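In case anyone wants to reproduce the comparison themselves: the expansion is nothing more than a linear scale from 0-6 to 0-255. A minimal sketch (give or take the exact rounding my program uses):

Code:
#include <algorithm>
#include <cstdint>
#include <vector>

// Expand an 8-bit image whose values only span 0..6 to the full 0..255 range,
// so the dither pattern becomes visible. Plain linear scale, round to nearest.
std::vector<uint8_t> expandRange(const std::vector<uint8_t>& src)
{
    std::vector<uint8_t> dst(src.size());
    for (size_t i = 0; i < src.size(); ++i)
    {
        int v = std::min<int>(src[i], 6);                  // clamp, just in case
        dst[i] = static_cast<uint8_t>((v * 255 + 3) / 6);  // round(v * 255 / 6)
    }
    return dst;
}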

Last edited by Ver Greeneyes; 15th February 2014 at 02:18.
Ver Greeneyes is offline   Reply With Quote
Old 15th February 2014, 02:36   #23163  |  Link
sajara
Registered User
 
Join Date: Jan 2013
Posts: 18
Quote:
Originally Posted by iSunrise View Post
madshi, I wanted to do some further testing with some TIF images (8bit/16bit per component), but I only get a black screen when loading them into madVR. LAV doesn't seem to be the problem, because the file is being loaded and LAV also shows the file properties correctly, but the picture just stays black no matter what I do (disabled smooth motion to be sure).

Can I solve this somehow?

Here are two examples (first one 8bit, second one 16bit):
http://www.mediafire.com/download/6n7ji6q9f2hi2jp/BrightnessCal.rar
http://www.mediafire.com/download/9sbb5terv272meo/ColourRamp-1.rar
opens here without problems

Quote:
Originally Posted by Ver Greeneyes View Post

(I think it shows that Gamma Light is more true to the source. How is the Linear Light conversion being done?)
Indeed it is. I also see that. Thanks for the original image.
sajara is offline   Reply With Quote
Old 15th February 2014, 02:43   #23164  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 497
Quote:
Originally Posted by Ver Greeneyes View Post
I did the same thing bacondither did but with device RGB spacing, and generating an expanded version of the original myself (it's my program, after all). There's no trickiness going on here: every image is expanded from 0-6 to 0-255, except the expanded original which I generated. Let me know if this helps clear up the difference between GL and LL.

no dithering
random dithering
gamma light error diffusion
linear light error diffusion
expanded original

(I think it shows that Gamma Light is more true to the source. How is the Linear Light conversion being done?)
I'm not sure how you come to that conclusion based on these examples.

If you compare your expanded original with the dark grays/blacks at the top left or bottom right edge of the gamma light example, it brightens the dark grays/blacks up quite a lot, while linear light has an extremely smooth transition down to black and finally reference black that closely resembles that of your expanded original.

Why would we suddenly accept something that brightens up all the important blacks by such a considerable amount? Black isn't like the black in your original anymore, it's a block of gray. This would seriously hurt dark movies or dark scenes.

Also, when you look closely, with gamma light I can see big repeating blocks of gray levels, while with linear light this is barely visible. The transitions themselves are way smoother with linear light.

This perfectly resembles bacondither's findings, indeed, because in his examples I also preferred linear light for both of these reasons.

Other than that though, for some reason the gamma light build gives the impression of a more dynamic and vibrant picture, sharper edges and it seems to be a bit better in terms of detail.

Quite frustrating, really, because when looking at still pictures like that, linear light wins (IMHO), but when pictures start to move, that's when our eyes seem to like gamma light more.

Quote:
Originally Posted by sajara View Post
opens here without problems
Thanks.

Last edited by iSunrise; 15th February 2014 at 03:21.
iSunrise is offline   Reply With Quote
Old 15th February 2014, 03:21   #23165  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 445
Quote:
Originally Posted by iSunrise View Post
If you compare your expanded original with the dark grays/blacks at the top left or bottom right edge of the gamma light example, it brightens the dark grays/blacks up quite a lot, while linear light has an extremely smooth transition down to black and finally reference black that closely resembles that of your expanded original.

Why would we suddenly accept something that brightens up all the important blacks by such a considerable amount? Black isn't like the black in your original anymore, it's a block of gray. This would seriously hurt dark movies or dark scenes.
I don't see this at all on my calibrated monitor. Both the expanded original and Gamma Light only go to black right near the end (matching a power law of x^(1/2.2), more or less), whereas Linear Light goes to black much sooner. It would be nicer to have a perceptual sample and perceptually expand everything so the transition isn't so sudden, but this will take a bit more work.
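To put rough numbers on that (a back-of-the-envelope check assuming a pure power 2.2 curve, not anything measured from the images):

\[
\left(\tfrac{1}{255}\right)^{2.2} \approx 5\times10^{-6}, \qquad 0.5^{2.2} \approx 0.22, \qquad 0.5^{1/2.2} \approx 0.73
\]

In other words, the first code value above black carries only a few millionths of peak luminance, and half of the gamma-coded range covers barely a fifth of the linear range, so nearly all of the near-black shades sit in a tiny sliver of linear light.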

Quote:
Originally Posted by iSunrise View Post
Also, when you look closely, with gamma light I can see big repeating blocks of gray levels, while with linear light this is barely visible. The transitions themselves are way smoother with linear light.
I see them pretty clearly for both Gamma Light and Linear Light - but for the latter they stand out more near black, so I think Gamma Light looks smoother.
Ver Greeneyes is offline   Reply With Quote
Old 15th February 2014, 03:41   #23166  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 497
Quote:
Originally Posted by Ver Greeneyes View Post
I don't see this at all on my calibrated monitor. Both the expanded original and Gamma Light only go to black right near the end (matching a power law of x^(1/2.2), more or less), whereas Linear Light goes to black much sooner. It would be nicer to have a perceptual sample and perceptually expand everything so the transition isn't so sudden, but this will take a bit more work.

I see them pretty clearly for both Gamma Light and Linear Light - but for the latter they stand out more near black, so I think Gamma Light looks smoother.
Argh, I actually forgot to switch modes on my hardware-calibrated monitor to a pure power gamma of 2.20 when doing these picture comparisons. That includes my last answer to bacondither, so everyone please ignore those posts. Sorry about that. You reminded me of it when you wrote that you're also on a pure power curve of 2.20.

Yes, you are perfectly right: gamma light is indeed more representative of the expanded original, because only the outer edge is pure black.

At least for me this is settled now, because I also strongly preferred gamma light when watching actual movies anyway. My confusion has suddenly left the building. Hurray!

@madshi:
Just stumbled over a possible bug by accident. When I enable DCI-P3 calibration in the settings, I get this:


Going to bed now, way too late.

Last edited by iSunrise; 15th February 2014 at 04:14.
iSunrise is offline   Reply With Quote
Old 15th February 2014, 06:14   #23167  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by 6233638 View Post
I would much rather the algorithm be chosen based on objective tests for distortion/accuracy
I would as well, but madshi never set any such guidelines for testing such things. With it being a majority vote, the algorithm with the most preferred subjective appearance was always destined to win.

Quote:
Originally Posted by leeperry View Post
Nitpicking about grossly magnified and overexposed static screenshots is one thing but at the end of the day what matters is the subjective impression from a distance.
And this is why I decided to take a step back and just watch from a distance to see which conclusions people came to through their own subjective testing. Here and there I'd give little tidbits from my objective tests to encourage closer examination in various regards, but that's about it. The irony is that way back in the beginning of testing, leeperry was strongly in favor of ED5 (floyd-stein, weight sum 1.00, old random generator), while at the end my top two favorites, NL5 (floyd-stein, 0.97, old random generator, bugfixes) & NL3 (floyd-stein, weight sum 1.00, old random generator, bugfixes), were probably closest to his initial preference. We both switched algorithms starting with the second build set.
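For anyone who hasn't followed the naming: the "floyd-stein" part that all of these variants share is plain Floyd-Steinberg error diffusion. A minimal grayscale sketch with the textbook 7/16, 3/16, 5/16, 1/16 kernel - the madVR builds differ in weight sums, added random noise and bug fixes, none of which is shown here:

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Plain Floyd-Steinberg error diffusion: grayscale, float input in 0..1, 8-bit output.
// Textbook kernel only - this does not reproduce any particular madVR build.
void floydSteinberg(const std::vector<float>& src, std::vector<uint8_t>& dst, int width, int height)
{
    std::vector<float> work(src);  // working copy that accumulates the diffused error
    dst.resize(work.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            int i = y * width + x;
            float q = std::round(std::clamp(work[i], 0.0f, 1.0f) * 255.0f);  // quantize to 8 bit
            dst[i] = static_cast<uint8_t>(q);
            float err = work[i] - q / 255.0f;  // push the quantization error to unprocessed neighbours
            if (x + 1 < width)      work[i + 1]         += err * 7.0f / 16.0f;
            if (y + 1 < height)
            {
                if (x > 0)          work[i + width - 1] += err * 3.0f / 16.0f;
                                    work[i + width]     += err * 5.0f / 16.0f;
                if (x + 1 < width)  work[i + width + 1] += err * 1.0f / 16.0f;
            }
        }
}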


Quote:
Originally Posted by madshi View Post
Modified the shader code a bit, but image output should be bit by bit identical to NL6.
Ah, in any case my testing on the linear and gamma builds was rather inconclusive.

Quote:
Originally Posted by Ver Greeneyes View Post
That image Ver Greeneyes posted of the linear light build expansion from 0-6 to 0-255 is rather concerning though, since there are 6 very distinct bands with multiple pixel wide noiseless areas on their borders. On the gamma light build there were also some much less distinct bands in the same areas that were slightly visible, but the transition between them was much smoother with smaller level jumps per pixel.


Quote:
Originally Posted by iSunrise View Post
@madshi:
Just stumbled over a possible bug by accident. When I enable DCI-P3 calibration in the settings
I have a similar bug-report regarding DCI-P3 and 3DLUTs here.

Last edited by cyberbeing; 15th February 2014 at 06:28.
cyberbeing is offline   Reply With Quote
Old 15th February 2014, 06:24   #23168  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by cyberbeing View Post
I would as well, but madshi never set any such guidelines for testing such things. With it being a majority vote, the algorithm with the most preferred subjective appearance was always destined to win.
Which I think is a bad thing when you consider that there are factors which could make the image look subjectively better but are actually caused by worse dithering performance (e.g. masking flaws in the source by adding unnecessary noise).

Quote:
Originally Posted by cyberbeing View Post
That image Ver Greeneyes posted of the linear light build expansion from 0-6 to 0-255 is rather concerning though, since there are 6 massive bands with multiple pixel wide noiseless areas on their borders.
I wonder if it could be caused by this, which may be beneficial with random dither, but harmful with error diffusion?

Quote:
Originally Posted by madshi View Post
madVR v0.87
* madVR doesn't dither, anymore, when a pixel doesn't need dithering
Edit: or possibly the limiter which restricts the range of values being used.
I notice that you essentially have a "background" of a middle gray tone, and on either side there is dither using lighter or darker values on top of it - but there is no crossing point that blends all three values.

There are also stray green/magenta pixels, which seems odd.

Last edited by 6233638; 15th February 2014 at 07:07.
6233638 is offline   Reply With Quote
Old 15th February 2014, 06:32   #23169  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 445
Quote:
Originally Posted by cyberbeing View Post
That image Ver Greeneyes posted of the linear light build expansion from 0-6 to 0-255 is rather concerning though, since there are 6 massive bands with multiple pixel wide noiseless areas on their borders.
These may just be areas where the 16-bit source happens to be an integer multiple of 1/255, so no dithering is needed. Considering the whole image only spans 7/256, it's possible we're reaching the limit of even 16-bit per channel encoding. In fact, because 65535 = 257*255, the set of 16-bit values contains all 256 8-bit values.
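A toy check of that arithmetic (not madVR's code, just the divisibility argument):

Code:
#include <cstdint>

// 65535 = 255 * 257, so the 8-bit code v8 corresponds exactly to the 16-bit code v8 * 257.
// A 16-bit sample therefore needs no dithering precisely when it is a multiple of 257.
inline bool isExactly8Bit(uint16_t v16)  { return v16 % 257 == 0; }
inline uint8_t to8BitExact(uint16_t v16) { return static_cast<uint8_t>(v16 / 257); }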

Edit: Here's another comparison, using a perceptual pattern and scaling the luma in a perceptually uniform space:
no dithering (luma only)
random dithering (luma only)
error diffusion - gamma light (luma only)
error diffusion - linear light (luma only)
expanded original (luma only)

I'm aware that the luma only images are a bit... beige... but I'm not sure what the right way to deal with that is. Anyway, this comparison clearly shows that there's something wrong with linear light.

madshi, is it possible you're adding gamma instead of removing it? It turns out I was doing this with my perceptual test pattern due to a copy-paste mistake (I'll upload new versions soon), so I thought I'd ask :P

Last edited by Ver Greeneyes; 15th February 2014 at 07:42.
Ver Greeneyes is offline   Reply With Quote
Old 15th February 2014, 07:14   #23170  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by Ver Greeneyes View Post
These may just be areas where the 16-bit source happens to be an integer multiple of 1/255, so no dithering is needed. Considering the whole image only spans 7/256, it's possible we're reaching the limit of even 16-bit per channel encoding. In fact, because 65535 = 257*255, the set of 16-bit values contains all 256 8-bit values.
I think it may be a result of the change made in build 87 to disable dithering when a pixel doesn't need dithering - it happens with random dither too. I'd check with an older version, but every build prior to 87.x seems to be crashing immediately for me.
6233638 is offline   Reply With Quote
Old 15th February 2014, 07:21   #23171  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
Quote:
Originally Posted by 6233638 View Post
Which I think is a bad thing when you consider that there are factors which could make the image look subjectively better but are actually caused by worse dithering performance (e.g. masking flaws in the source by adding unnecessary noise).
More "analog", "tube-like"? I watched old Antarctica 1983 movie yesterday with GL and had these impressions. Pure speculations though, I haven't much time to spend on tests. Some digital sources look too *digital* and may benefit from *analog* filter, but not all of them i suspect.
Qaq is offline   Reply With Quote
Old 15th February 2014, 07:29   #23172  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by Qaq View Post
More "analog", "tube-like"? I watched old Antarctica 1983 movie yesterday with GL and had these impressions. Pure speculations though, I haven't much time to spend on tests. Some digital sources look too *digital* and may benefit from *analog* filter, but not all of them i suspect.
That's not the job of a dither algorithm.
But from my continued testing... I don't mind how NL6 looks. At least not when testing gradients - though I'm still not convinced it was the best choice.

Last edited by 6233638; 15th February 2014 at 07:38.
6233638 is offline   Reply With Quote
Old 15th February 2014, 07:30   #23173  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by Ver Greeneyes View Post
These may just be areas where the 16-bit source happens to be an integer multiple of 1/255, so no dithering is needed. Considering the whole image only spans 7/256, it's possible we're reaching the limit of even 16-bit per channel encoding. In fact, because 65535 = 257*255, the set of 16-bit values contains all 256 8-bit values.
Yet the linear light version had a rather huge error near-black in your images compared to the gamma light version.

Gamma Light difference vs Reference | Linear Light difference vs Reference.

How did you generate these images with madVR? The only thing which could logically explain this is if there was a mismatch somewhere between gamma light and linear light processing.

Quote:
Originally Posted by Ver Greeneyes View Post
Anyway, this comparison clearly shows that there's something wrong with linear light.
I see your other comparison now, and it adds weight to my suspicion. I agree that something seems wrong with how linear light is being applied here. It's like madVR is forgetting to convert the video from gamma light to linear light, apply dithering, and then convert back from linear light to gamma light again. But who knows. If you are using shaders, maybe it's something like the 0-6 -> 0-255 expansion being performed in gamma light instead of linear light.

Last edited by cyberbeing; 15th February 2014 at 07:50.
cyberbeing is offline   Reply With Quote
Old 15th February 2014, 07:48   #23174  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 445
Quote:
Originally Posted by cyberbeing View Post
How did you generate these images with madVR? The only thing which could logically explain this is if there was a mismatch somewhere between gamma light and linear light processing.
I used a test pattern I made myself (updated version uploading now; I used frame 29), took screenshots, cropped them to just the video, saved them as BMP images, then fed those into another program to boost their brightness in a perceptually uniform way (unfortunately I'm not sure how to deal with the chroma properly). I did this with both the linear light and the gamma light build. I updated my earlier post with some more images, by the way, if you'd like to see a non-beige but very colorful version.

Edit: New test pattern is up if you want to try it yourself, link in my signature (the first one).

Quote:
Originally Posted by cyberbeing View Post
If you are using shaders, maybe it's something like the 0-6 -> 0-255 expansion being performed in gamma light instead of linear light.
I'm not using shaders, no; I'm directly modifying the data using C++ code. The first comparison I made did use a straight 0-6 -> 0-255 expansion without any gamma processing, but the difference was fairly obvious even there.

Last edited by Ver Greeneyes; 15th February 2014 at 07:59.
Ver Greeneyes is offline   Reply With Quote
Old 15th February 2014, 08:00   #23175  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
Quote:
Originally Posted by 6233638 View Post
That's not the job of a dither algorithm.
Right. Just like "pop effect", "opened window" or "3D look".
Quote:
Originally Posted by 6233638 View Post
I don't mind how NL6 looks. At least not when testing gradients - though I'm still not convinced it was the best choice.
There could be no best choice at all, at least not for very different sources. But it's hard to tell with short-term testing; that's why I didn't even try it. Whatever we expect, a dither algorithm may act like an image filter - that's my exact point.
Qaq is offline   Reply With Quote
Old 15th February 2014, 08:31   #23176  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
@Ver Greeneyes

I see the test pattern now, though I still don't understand exactly what you are doing to produce those results.

When I take a screenshot of frame 29 of gradient-perceptual-v2.mkv and then expand from 0-6 to 0-255, I end up with the following results:

madVR Gamma Dither | madVR Linear Dither
cyberbeing is offline   Reply With Quote
Old 15th February 2014, 08:39   #23177  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,201
Quote:
Originally Posted by 6233638 View Post
That's not the job of a dither algorithm.
But from my continued testing... I don't mind how NL6 looks. At least not when testing gradients - though I'm still not convinced it was the best choice.
I'm interested in seeing images showing NL5/4 or whatever doing a better job than NL6. I'm for accuracy and tidiness first and foremost; Leeperry could take that with a touch of grain. XD
ryrynz is offline   Reply With Quote
Old 15th February 2014, 09:04   #23178  |  Link
bacondither
Registered User
 
Join Date: Oct 2013
Location: Sweden
Posts: 125
Quote:
Originally Posted by Ver Greeneyes View Post
I did the same thing bacondither did but with device RGB spacing, and generating an expanded version of the original myself (it's my program, after all). There's no trickiness going on here: every image is expanded from 0-6 to 0-255, except the expanded original which I generated. Let me know if this helps clear up the difference between GL and LL.

no dithering
random dithering
gamma light error diffusion
linear light error diffusion
expanded original

(I think it shows that Gamma Light is more true to the source. How is the Linear Light conversion being done?)
I took your images and did a Gaussian blur in linear light (GIMP 2.9, 32-bit float linear, radius 3.5) and then converted back to 8-bit gamma light.

Doing a Gaussian blur in gamma light is wrong, and I am guilty of having done it before.
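For anyone who wants to repeat this without GIMP, the whole trick is just: linearize, blur, re-apply gamma. A rough sketch (grayscale, a pure 2.2 power curve instead of the exact sRGB curve, and a 3x3 box filter standing in for the Gaussian):

Code:
#include <cmath>
#include <cstdint>
#include <vector>

// Blur an 8-bit grayscale image in linear light: decode gamma, blur, re-encode gamma.
// Only the linearize/delinearize steps matter here; the filter itself is a placeholder.
std::vector<uint8_t> blurLinearLight(const std::vector<uint8_t>& src, int width, int height)
{
    std::vector<float> lin(src.size());
    for (size_t i = 0; i < src.size(); ++i)
        lin[i] = std::pow(src[i] / 255.0f, 2.2f);           // gamma -> linear

    std::vector<float> blurred(lin.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float sum = 0.0f; int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < width && yy >= 0 && yy < height)
                    { sum += lin[yy * width + xx]; ++n; }
                }
            blurred[y * width + x] = sum / n;                // averaging happens in linear light
        }

    std::vector<uint8_t> dst(src.size());
    for (size_t i = 0; i < src.size(); ++i)
        dst[i] = static_cast<uint8_t>(std::round(std::pow(blurred[i], 1.0f / 2.2f) * 255.0f));  // linear -> gamma
    return dst;
}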

The results:

SOURCE

GAMMA light

LINEAR light

Then I ran tests on the blurred results in MATLAB:

PSNR, higher is better!
Code:
out = psnr(GAMMA,SOURCE)

out =

   38.1919

out = psnr(LINEAR,SOURCE)

out =

   42.4802
Linear light is closer to the source image. It looks like gamma is being overcompensated a bit too much when converting to linear light (slightly darker around black).
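For reference, the psnr() call above is just the usual definition (peak value 255 for 8-bit images):

\[
\mathrm{PSNR} = 10\log_{10}\!\frac{MAX_I^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(x_i - y_i\bigr)^2
\]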

Last edited by bacondither; 15th February 2014 at 09:07.
bacondither is offline   Reply With Quote
Old 15th February 2014, 09:40   #23179  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
I'll share my unprofessional objective opinion.

Neither of them looks like the source: one is too dark, the other is too bright.

That goes for all of the tests you guys posted.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.

Last edited by James Freeman; 15th February 2014 at 09:44.
James Freeman is offline   Reply With Quote
Old 15th February 2014, 09:41   #23180  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by Anime Viewer View Post
What stat do you want us to report? Average stats - rendering and present ms?

during a video opening scene
gamma version
6.28 ms (rendering)
1.53 ms (present)

linear version
6.44 ms (rendering)
1.51 ms (present)

when the show actually started the numbers increased a little:
gamma version
6.86 ms
1.53 ms

linear version
6.98 ms
1.50 ms

With such a small difference I don't care which way you go. Whichever produces better quality is what I'd vote for in this case.
Interesting that for you linear is slower than gamma, while it's the other way around for some other users. Anyway, you're right, the drop in your measurements isn't worth worrying about.

Quote:
Originally Posted by har3inger View Post
Sorry for the tangent, but I have a quick update on my troubleshooting with DirectCompute dithering.

Given that my laptop uses two GPUs, I decided to try what would happen if I disabled the Intel HD 4000.

Well, it turns out madVR will try to run in software mode or something, as simple playback of 8-bit content without scaling slows down to a <1 fps slideshow. However, DirectCompute dithering now "works", in that it doesn't result in a black screen. I am still unsure if it's automatically ignoring the DirectCompute settings in the absence of GPU acceleration, or if I've somehow bypassed a bug in the drivers. I don't know why MPC-HC/madVR won't use my discrete GPU in the absence of the integrated one, but I did notice that madVR is displaying through an "unknown generic monitor" with an orange icon rather than the generic monitor that indicates my laptop screen.

This leads me to believe that there is likely something wrong in the 13.12 Catalyst drivers or the OEM Intel HD 4000 drivers I'm using. Updating the HD 4000's drivers is tricky because every installer I have found refuses to install. I doubt the 13.12 drivers are the culprit, since virtually no one else has my problem.
Not sure what I can say. madVR searches for Direct3D devices that are connected to the monitor on which the rendering window is positioned, and then uses those for pixel shader and also for DirectCompute processing. It seems that Direct3D doesn't work properly in your configuration when you disable the HD4000. I'm not sure why. That's outside of my control.

Quote:
Originally Posted by iSunrise View Post
Just stumbled over a possible bug by accident. When I enable DCI-P3 calibration in the settings, I get this
I don't have the time to look into that atm. Can you create a bug report for this in the bug tracker, with a description how to reproduce the problem? Thanks.

Quote:
Originally Posted by cyberbeing View Post
I would as well, but madshi never set any such guidelines for testing such things. With it being a majority vote, the algorithm with the most preferred subjective appearance was always destined to win.
I didn't set guidelines because both approaches (scientific analysis and just trusting your eyes) have their merit. I've often found that trusting your eyes can sometimes be the best instrument. But I also find scientific analysis very important. So I'm not biased either way. Personally, I didn't care too much which algorithm won because I thought they were all reasonably close. So I just wanted a decision, and the "easiest" and fairest way to make a decision is a simple vote. Of course a vote comes with certain dangers, but in the end a decision had to be made, and without a vote I would have had to choose myself which of the two approaches is the right one. And I didn't want to make that decision because I don't know which is the right approach in this case.

I did make several blind tests, though, to make sure that people using either approach were consistent in their choices. And they were. So it seems to me both approaches do have their merit in this case.

Quote:
Originally Posted by 6233638 View Post
I notice that you essentially have a "background" of a middle gray tone, and on either side there is dither using lighter or darker values on top of it - but there is no crossing point that blends all three values.
Are we talking about random dithering or error diffusion? I'm asking because the changelog entry you're quoting only applies to random dithering.

Quote:
Originally Posted by Shiandow View Post
In my opinion it is most likely that something, somewhere has gone wrong. Unless I made a serious mistake in my reasoning somewhere the linear light build should be brighter for values near black.

The only way I can explain that linear light makes the dark regions brighter is if the linear gamma build used a gamma lower than 1, and I'm pretty sure this would have been noticeable.
Quote:
Originally Posted by Ver Greeneyes View Post
I'm aware that the luma only images are a bit... beige... but I'm not sure what the right way to deal with that is. Anyway, this comparison clearly shows that there's something wrong with linear light.

madshi, is it possible you're adding gamma instead of removing it? It turns out I was doing this with my perceptual test pattern due to a copy-paste mistake (I'll upload new versions soon), so I thought I'd ask :P
Well, I think my code is alright, but have a look yourself:

gamma light:
Code:
float3 calcPixelGamma(float3 gammaPixel, float3 collectedError, float randomValue, out float3 newError)
{
  // nearest 8-bit code values below and above the pixel, still in gamma light
  float3 tempPixel = gammaPixel * 255.0f;
  float3 floorG = floor(tempPixel) / 255.0f;
  float3  ceilG = ceil (tempPixel) / 255.0f;
  // pick floor or ceil by comparing against the midpoint, in gamma light
  float3 result = (gammaPixel + randomValue + collectedError < 0.5 * (floorG + ceilG)) ? floorG : ceilG;
  // quantization error (random offset excluded) to be diffused to neighbouring pixels
  newError = gammaPixel + collectedError - result;
  return result;
}
linear light:
Code:
float3 convertToLinear(float3 gammaValue)
{
  // pure power curve, gamma = 1/0.45 (about 2.22)
  return pow(saturate(gammaValue), 1.0 / 0.45);
}

float3 calcPixelLinear(float3 gammaPixel, float3 collectedError, float randomValue, out float3 newError)
{
  // nearest 8-bit code values below and above the pixel, in gamma light...
  float3 tempPixel = gammaPixel * 255.0f;
  float3 floorG = floor(tempPixel) / 255.0f;
  float3  ceilG = ceil (tempPixel) / 255.0f;
  // ...and the same two candidates converted to linear light
  float3 floorL = convertToLinear(floorG);
  float3  ceilL = convertToLinear( ceilG);
  // pick floor or ceil by comparing against the midpoint, in linear light
  bool3 useFloor = convertToLinear(gammaPixel + randomValue) + collectedError < 0.5 * (floorL + ceilL);
  // error is tracked in linear light; the clamp() is needed for the limiter (see below)
  newError = clamp(convertToLinear(gammaPixel) + collectedError, floorL, ceilL) - ((useFloor) ? floorL : ceilL);
  // output stays an 8-bit gamma light value
  return (useFloor) ? floorG : ceilG;
}
One thing I'm not happy about is the clamp() call. I think it could potentially introduce a small error into the math. However, it was necessary to make the limiter work. Without the clamp() call the limiter produced artifacts in linear light. Of course I could remove both the limiter and the clamp() call, but then we would reintroduce some rare stray dots.
madshi is offline   Reply With Quote