Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 15th February 2014, 07:24   #23161  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by cyberbeing View Post
I would as well, but madshi never set any such guidelines for testing such things. With it being a majority vote, the algorithm with the most preferred subjective appearance was always destined to win.
Which I think is a bad thing, when you consider that there are factors which could make the image look subjectively better but are actually caused by worse dithering performance (e.g. masking flaws in the source by adding unnecessary noise).

Quote:
Originally Posted by cyberbeing View Post
That image Ver Greeneyes posted of the linear light build expansion from 0-6 to 0-255 is rather concerning though, since there are 6 massive bands with multiple pixel wide noiseless areas on their borders.
I wonder if it could be caused by this, which may be beneficial with random dither, but harmful with error diffusion?

Quote:
Originally Posted by madshi View Post
madVR v0.87
* madVR doesn't dither, anymore, when a pixel doesn't need dithering
Edit: or possibly the limiter which restricts the range of values being used.
I notice that you essentially have a "background" of a middle gray tone, and on either side there is dither using lighter or darker values on top of it - but there is no crossing point that blends all three values.

There are also stray green/magenta pixels, which seems odd.

Last edited by 6233638; 15th February 2014 at 08:07.
Old 15th February 2014, 07:32   #23162  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by cyberbeing View Post
That image Ver Greeneyes posted of the linear light build expansion from 0-6 to 0-255 is rather concerning though, since there are 6 massive bands with multiple pixel wide noiseless areas on their borders.
These may just be areas where the 16-bit source happens to be an integer multiple of 1/255, so no dithering is needed. Considering the whole image only spans 7/256, it's possible we're reaching the limit of even 16-bit per channel encoding. In fact, because 65535 = 257*255, the set of 16-bit values contains all 256 8-bit values.
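[Editor's note: the 65535 = 257 × 255 identity above is easy to verify directly; a quick Python sketch, not from the thread:]

```python
# Because 65535 = 257 * 255, every 8-bit code value v has an exact
# 16-bit counterpart v * 257, i.e. v/255 == (v*257)/65535 with no error.
def exact_16bit(v8: int) -> int:
    """Exact 16-bit representation of an 8-bit code value."""
    return v8 * 257

# Cross-multiplied equality avoids any floating-point rounding.
assert all(exact_16bit(v) * 255 == v * 65535 for v in range(256))
```

So any pixel of the 16-bit source that happens to sit exactly on one of these 256 values needs no dithering at all.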

Edit: Here's another comparison, using a perceptual pattern and scaling the luma in a perceptually uniform space:
no dithering (luma only)
random dithering (luma only)
error diffusion - gamma light (luma only)
error diffusion - linear light (luma only)
expanded original (luma only)

I'm aware that the luma-only images are a bit... beige... but I'm not sure what the right way to deal with that is. Anyway, this comparison clearly shows that there's something wrong with linear light.

madshi, is it possible you're adding gamma instead of removing it? It turns out I was doing this with my perceptual test pattern due to a copy-paste mistake (I'll upload new versions soon), so I thought I'd ask :P

Last edited by Ver Greeneyes; 15th February 2014 at 08:42.
Old 15th February 2014, 08:14   #23163  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by Ver Greeneyes View Post
These may just be areas where the 16-bit source happens to be an integer multiple of 1/255, so no dithering is needed. Considering the whole image only spans 7/256, it's possible we're reaching the limit of even 16-bit per channel encoding. In fact, because 65535 = 257*255, the set of 16-bit values contains all 256 8-bit values.
I think it may be a result of the change made in build 87 to disable dithering when a pixel doesn't need dithering - it happens with random dither too. I'd check with an older version, but every build prior to 87.x seems to be crashing immediately for me.
Old 15th February 2014, 08:21   #23164  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
Quote:
Originally Posted by 6233638 View Post
Which I think is a bad thing, when you consider that there are factors which could make the image look subjectively better but are actually caused by worse dithering performance (e.g. masking flaws in the source by adding unnecessary noise).
More "analog", "tube-like"? I watched the old Antarctica (1983) movie yesterday with GL and had these impressions. Pure speculation though; I haven't much time to spend on tests. Some digital sources look too *digital* and may benefit from an *analog* filter, but not all of them, I suspect.
Old 15th February 2014, 08:29   #23165  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by Qaq View Post
More "analog", "tube-like"? I watched the old Antarctica (1983) movie yesterday with GL and had these impressions. Pure speculation though; I haven't much time to spend on tests. Some digital sources look too *digital* and may benefit from an *analog* filter, but not all of them, I suspect.
That's not the job of a dither algorithm.
But from my continued testing... I don't mind how NL6 looks. At least not when testing gradients - though I'm still not convinced it was the best choice.

Last edited by 6233638; 15th February 2014 at 08:38.
Old 15th February 2014, 08:30   #23166  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by Ver Greeneyes View Post
These may just be areas where the 16-bit source happens to be an integer multiple of 1/255, so no dithering is needed. Considering the whole image only spans 7/256, it's possible we're reaching the limit of even 16-bit per channel encoding. In fact, because 65535 = 257*255, the set of 16-bit values contains all 256 8-bit values.
Yet the linear light version had a rather huge error near black in your images compared to the gamma light version.

Gamma Light difference vs Reference | Linear Light difference vs Reference.

How did you generate these images with madVR? The only thing which could logically explain this is if there were a mismatch somewhere between gamma light and linear light processing.

Quote:
Originally Posted by Ver Greeneyes View Post
Anyway, this comparison clearly shows that there's something wrong with linear light.
I see your other comparison now, and it adds weight to my suspicion. I agree that something seems wrong with how linear light is being applied here. It's like madVR is forgetting to convert the video from gamma light to linear light, apply dithering, and then convert back from linear light to gamma light again. But who knows. If you are using shaders, maybe it's something like the 0-6 -> 0-255 expansion being performed in gamma light instead of linear light.

Last edited by cyberbeing; 15th February 2014 at 08:50.
Old 15th February 2014, 08:48   #23167  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by cyberbeing View Post
How did you generate these images with madVR? The only thing which could logically explain this is if there were a mismatch somewhere between gamma light and linear light processing.
I used a test pattern I made myself (updated version uploading now; I used frame 29), took screenshots, cropped them to just the video, saved them as BMP images, then fed those into another program to boost their brightness in a perceptually uniform way (unfortunately I'm not sure how to deal with the chroma properly). I did this with both the linear light and the gamma light builds. By the way, I updated my earlier post with some more images, if you'd like to see a non-beige but very colorful version.

Edit: New test pattern is up if you want to try it yourself, link in my signature (the first one).

Quote:
Originally Posted by cyberbeing View Post
If you are using shaders, maybe it's something like the 0-6 -> 0-255 expansion being performed in gamma light instead of linear light.
I'm not using shaders, no, I'm directly modifying the data using C++ code. The first comparison I made did use straight up 0-6 -> 0-255 without any gamma processing, but the difference was fairly obvious even there.
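[Editor's note: for reference, the "straight up" 0-6 -> 0-255 expansion described here can be sketched as below. This is a hypothetical Python stand-in, not the actual C++ tool; the `expand` helper and its rounding are assumptions.]

```python
# Hypothetical sketch of a plain 0-6 -> 0-255 expansion with no
# gamma processing; not the actual C++ code used in the thread.
def expand(v: int, src_max: int = 6, dst_max: int = 255) -> int:
    """Linearly rescale a code value from [0, src_max] to [0, dst_max]."""
    return int(v * dst_max / src_max + 0.5)  # round half up

# The 7 input levels map to: [0, 43, 85, 128, 170, 213, 255]
```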

Last edited by Ver Greeneyes; 15th February 2014 at 08:59.
Old 15th February 2014, 09:00   #23168  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
Quote:
Originally Posted by 6233638 View Post
That's not the job of a dither algorithm.
Right. Just like "pop effect", "opened window" or "3D look".
Quote:
Originally Posted by 6233638 View Post
I don't mind how NL6 looks. At least not when testing gradients - though I'm still not convinced it was the best choice.
There may be no best choice at all, at least for very different sources. But it's hard to tell with short-time testing; that's why I didn't even try it. Whatever we expect, a dither algorithm may act like an image filter - that's my exact point.
Old 15th February 2014, 09:31   #23169  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
@Ver Greeneyes

I see the test pattern now, though I still don't understand exactly what you are doing to produce those results.

When I take a screenshot of frame 29 of gradient-perceptual-v2.mkv, then expand from 0-6->0-255 I end up with the following results:

madVR Gamma Dither | madVR Linear Dither
Old 15th February 2014, 09:39   #23170  |  Link
ryrynz
Registered User
 
 
Join Date: Mar 2009
Posts: 3,646
Quote:
Originally Posted by 6233638 View Post
That's not the job of a dither algorithm.
But from my continued testing... I don't mind how NL6 looks. At least not when testing gradients - though I'm still not convinced it was the best choice.
I'm interested in seeing images showing NL5/4 or whatever doing a better job than NL6. I'm for accuracy and tidiness first and foremost; Leeperry could take that with a touch of grain. XD
Old 15th February 2014, 10:04   #23171  |  Link
bacondither
Registered User
 
Join Date: Oct 2013
Location: Sweden
Posts: 128
Quote:
Originally Posted by Ver Greeneyes View Post
I did the same thing bacondither did but with device RGB spacing, and generating an expanded version of the original myself (it's my program, after all). There's no trickiness going on here: every image is expanded from 0-6 to 0-255, except the expanded original, which I generated. Let me know if this helps clear up the difference between GL and LL.

no dithering
random dithering
gamma light error diffusion
linear light error diffusion
expanded original

(I think it shows that Gamma Light is more true to the source. How is the Linear Light conversion being done?)
I took your images and did a Gaussian blur in linear light (GIMP 2.9, 32-bit float linear, radius 3.5), then converted back to 8-bit gamma light.

Doing a Gaussian blur in gamma light is wrong, and I am guilty of doing it before.

The results:

SOURCE

GAMMA light

LINEAR light

Then I ran tests on the blurred results in MATLAB:

PSNR, higher is better!
Code:
out = psnr(GAMMA,SOURCE)

out =

   38.1919

out = psnr(LINEAR,SOURCE)

out =

   42.4802
Linear light is closer to the source image. It looks like the gamma is being overcompensated a bit too much when converting to linear light (slightly darker around black).
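[Editor's note: the MATLAB psnr() comparison above is easy to reproduce; here is a minimal Python sketch of the same metric, fed with made-up toy data rather than the real images:]

```python
import math

def psnr(img, ref, peak=255.0):
    """PSNR in dB between two equal-length 8-bit pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(img, ref)) / len(ref)
    return math.inf if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# Identical images give infinite PSNR; otherwise higher = closer to the
# source, as with the 42.48 dB (linear) vs 38.19 dB (gamma) result above.
```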

Last edited by bacondither; 15th February 2014 at 10:07.
Old 15th February 2014, 10:40   #23172  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
I'll share my unprofessional objective opinion.

Neither of them looks like the source.
One is too dark, the other is too bright.

In all of the tests you guys posted.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.

Last edited by James Freeman; 15th February 2014 at 10:44.
Old 15th February 2014, 10:41   #23173  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Anime Viewer View Post
What stat do you want us to report? Average stats - rendering and present ms?

during a video opening scene
gamma version
6.28 ms (rendering)
1.53 ms (present)

linear version
6.44 ms (rendering)
1.51 ms (present)

when the show actually started the numbers increased a little:
gamma version
6.86 ms
1.53 ms

linear version
6.98 ms
1.50 ms

With such a small difference I don't care which way you go. Whichever produces better quality would be what I vote for in this case.
Interesting that for you linear is slower than gamma, while it's the other way around for some other users. Anyway, you're right, the drop in your measurements isn't worth worrying about.

Quote:
Originally Posted by har3inger View Post
Sorry for the tangent, but I have a quick update on my troubleshooting with Directcompute dithering.

Given that my laptop uses two GPUs, I decided to try what would happen if I disabled the intel 4000 HD.

Well, it turns out madVR will try to run in software mode or something, as simple playback of 8-bit content without scaling slows down to a <1 fps slideshow. However, DirectCompute dithering now "works", in that it doesn't result in a black screen. I am still unsure if it's automatically ignoring DirectCompute settings in the absence of GPU acceleration, or if I've somehow bypassed a bug in the drivers. I don't know why MPC-HC/madVR won't use my discrete GPU in the absence of the integrated one, but I did notice that madVR is displaying through an "unknown generic monitor" with an orange icon rather than the generic monitor that indicates my laptop screen.

This leads me to believe that there is likely something wrong in the 13.12 catalyst drivers or the OEM intel 4000 drivers I'm using. Updating the 4000's drivers is tricky because every installer I have found refuses to install. I doubt the 13.12 drivers are the culprit, since virtually no one else has my problem.
Not sure what I can say. madVR searches for Direct3D devices that are connected to the monitor on which the rendering window is positioned, and then uses those for pixel shader and also for DirectCompute processing. It seems that Direct3D doesn't work properly in your configuration when you disable the HD4000. I'm not sure why. That's outside of my control.

Quote:
Originally Posted by iSunrise View Post
Just stumbled over a possible bug by accident. When I enable DCI-P3 calibration in the settings, I get this
I don't have the time to look into that atm. Can you create a bug report for this in the bug tracker, with a description how to reproduce the problem? Thanks.

Quote:
Originally Posted by cyberbeing View Post
I would as well, but madshi never set any such guidelines for testing such things. With it being a majority vote, the algorithm with the most preferred subjective appearance was always destined to win.
I didn't set guidelines because both approaches (doing scientific analysis and just trusting your eyes) have their merit. I've often found that trusting your eyes can sometimes be the best instrument. But I also find scientific analysis very important. So I'm not biased either way. Personally, I didn't care too much which algorithm won, because I thought they were all reasonably close. So I just wanted a decision, and the "easiest" and fairest way to make a decision is a simple vote. Of course a vote comes with certain dangers, but in the end a decision had to be made, and without a vote I would have had to choose myself which of the two approaches is the right one. And I didn't want to make that decision, because I don't know which is the right approach in this case.

I did make several blind tests, though, to make sure that people using either approach were consistent in their choices. And they were. So it seems to me both approaches do have their merit in this case.

Quote:
Originally Posted by 6233638 View Post
I notice that you essentially have a "background" of a middle gray tone, and on either side there is dither using lighter or darker values on top of it - but there is no crossing point that blends all three values.
Are we talking about random dithering or error diffusion? I'm asking because the changelog entry you're quoting only applies to random dithering.

Quote:
Originally Posted by Shiandow View Post
In my opinion it is most likely that something, somewhere has gone wrong. Unless I made a serious mistake in my reasoning somewhere the linear light build should be brighter for values near black.

The only way I can explain that linear light makes the dark regions brighter is if the linear gamma build used a gamma lower than 1, and I'm pretty sure this would have been noticeable.
Quote:
Originally Posted by Ver Greeneyes View Post
I'm aware that the luma-only images are a bit... beige... but I'm not sure what the right way to deal with that is. Anyway, this comparison clearly shows that there's something wrong with linear light.

madshi, is it possible you're adding gamma instead of removing it? It turns out I was doing this with my perceptual test pattern due to a copy-paste mistake (I'll upload new versions soon), so I thought I'd ask :P
Well, I think my code is alright, but have a look yourself:

gamma light:
Code:
float3 calcPixelGamma(float3 gammaPixel, float3 collectedError, float randomValue, out float3 newError)
{
  float3 tempPixel = gammaPixel * 255.0f;
  float3 floorG = floor(tempPixel) / 255.0f;
  float3  ceilG = ceil (tempPixel) / 255.0f;
  float3 result = (gammaPixel + randomValue + collectedError < 0.5 * (floorG + ceilG)) ? floorG : ceilG;
  newError = gammaPixel + collectedError - result;
  return result;
}
linear light:
Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 1.0 / 0.45);
}

float3 calcPixelLinear(float3 gammaPixel, float3 collectedError, float randomValue, out float3 newError)
{
  float3 tempPixel = gammaPixel * 255.0f;
  float3 floorG = floor(tempPixel) / 255.0f;
  float3  ceilG = ceil (tempPixel) / 255.0f;
  float3 floorL = convertToLinear(floorG);
  float3  ceilL = convertToLinear( ceilG);
  bool3 useFloor = convertToLinear(gammaPixel + randomValue) + collectedError < 0.5 * (floorL + ceilL);
  newError = clamp(convertToLinear(gammaPixel) + collectedError, floorL, ceilL) - ((useFloor) ? floorL : ceilL);
  return (useFloor) ? floorG : ceilG;
}
One thing I'm not happy about is the clamp() call. I think it could potentially introduce a small error into the math. However, it was necessary to make the limiter work. Without the clamp() call the limiter produced artifacts in linear light. Of course I could remove both the limiter and the clamp() call, but then we would reintroduce some rare stray dots.
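[Editor's note: for readers who don't speak HLSL, the core floor/ceil decision of the linear-light path can be sketched in Python. This is a simplified single-channel toy assuming a pure 1/0.45 power curve, ignoring the error, noise, and limiter terms of the shader above; it is not madshi's actual code.]

```python
import math

def to_linear(g: float) -> float:
    """Pure-power approximation of gamma -> linear (exponent 1/0.45)."""
    return min(max(g, 0.0), 1.0) ** (1.0 / 0.45)

def quantize_linear(gamma_pixel: float) -> float:
    """Pick the nearer of the two 8-bit neighbours, measured in linear light."""
    floor_g = math.floor(gamma_pixel * 255.0) / 255.0
    ceil_g = math.ceil(gamma_pixel * 255.0) / 255.0
    midpoint = 0.5 * (to_linear(floor_g) + to_linear(ceil_g))
    return floor_g if to_linear(gamma_pixel) < midpoint else ceil_g

# Because the power curve is convex, the linear-light midpoint corresponds
# to a gamma value above the gamma-light midpoint, so some values that
# gamma-light rounding would send to ceil land on floor instead.
```

For example, quantize_linear(3.5/255) picks the floor value 3/255, even though 3.5 sits exactly midway between its neighbours in gamma light.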
Old 15th February 2014, 10:57   #23174  |  Link
bacondither
Registered User
 
Join Date: Oct 2013
Location: Sweden
Posts: 128
@madshi
Could you try a gamma compensation value of 1/2.0 = 0.5?
Code:
return pow(saturate(gammaValue), 1.0 / 0.5);
It should compensate for the lowered brightness around black (or maybe I'm thinking backwards?).

Quote:
OECF standards: ITU-R Rec. BT.709, Parameter values for the HDTV standard for the studio and for international programme exchange (Geneva: ITU).
BT.709 standardizes a reference camera transfer function (OECF), graphed in Figure 3. BT.709's encoding equation incorporates a power function exponent of 0.45 – what I call the "advertised" gamma. However, BT.709's equation incorporates a linear segment near black; the power function portion is scaled and offset to achieve function and slope continuity at the breakpoint (which lies at about 2% relative luminance). The scaling and offsetting cause the effective exponent to be higher than the exponent "advertised" in the equation. If BT.709's curve is approximated as a single pure power function, the best exponent is about 0.5: BT.709's encoding is effectively a square root.
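[Editor's note: the BT.709 OECF quoted above can be written down directly (using the standard 0.018 breakpoint and 1.099/0.099 constants); it shows how close the curve comes to a square root:]

```python
def bt709_oetf(lum: float) -> float:
    """BT.709 reference OETF: scene-linear light (0..1) -> video signal."""
    if lum < 0.018:
        return 4.5 * lum  # linear segment near black
    return 1.099 * lum ** 0.45 - 0.099  # scaled/offset power segment

# bt709_oetf(0.25) is ~0.490, close to 0.25 ** 0.5 == 0.5: effectively a
# square root, which is why the best single-exponent fit is about 0.5.
```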

Last edited by bacondither; 15th February 2014 at 12:22.
Old 15th February 2014, 11:10   #23175  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
Well, I think my code is alright, but have a look yourself
How does this differ from the gamma -> linear -> gamma conversion you do for linear light scaling, which has a large performance impact?
Old 15th February 2014, 11:34   #23176  |  Link
Aikibana
Registered User
 
Join Date: Aug 2012
Posts: 12
Can't figure out why my system can't handle NNEDI3.

System:
AMD HD 7770 (1GB GDDR5)
Intel i5-750 - 16GB RAM
Win 7 64
Madvr (0.87.4) + MP-HC (latest nightly build) including LAV
Catalyst 14.1
I play mostly high quality videos (720p/1080p) via extended desktop on a 55 inch plasma.

Only NNEDI3 up to 32 neurons for chroma upscaling works; anything higher than that eats up all GPU power and results in dropped frames.

Image doubling is just impossible, even just for Luma at 16 neurons. I get massive dropped frames and the video stutters heavily.

I've tried some trade-offs (using random dithering, Lanczos or Bicubic 50 for up- and downscaling), but it doesn't resolve anything.

GPU-Z says OpenCL is fully operational and anything but NNEDI3 works just fine (Jinc3/4, new debanding feature, no trade-offs).

I've read madshi can pull off NNEDI3, and many of you seem to have no problems testing all the different builds, so I'm just wondering if it's a software setting or something really obvious I've missed...

Any help would be greatly appreciated!
Old 15th February 2014, 11:58   #23177  |  Link
noee
Registered User
 
Join Date: Jan 2007
Posts: 530
Quote:
Originally Posted by Aikibana View Post
Can't figure out why my system can't handle NNEDI3.
Go back to latest stable, 13.12. I've had some problems with the 14.1 beta as well, not worth struggling with right now. 14.2 is coming soon anyway, you could try again with that later.
__________________
Win7Ult || RX560/4G || Ryzen 5
Old 15th February 2014, 12:02   #23178  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 1.0 / 0.45);
}
Also isn't that reversed?

If you are converting from 2.2 Gamma to 1.0 Linear shouldn't it be something like the following:

Code:
float3 convertToLinear(float3 gammaValue)
{
  return pow(saturate(gammaValue), 1.0 / 2.2);
}

I know if I make an MPC-HC shader like the following, madVR will convert the gamma from 2.2 -> 1.0 gamma:
Code:
sampler s0;

float4 main(float2 tex : TEXCOORD0) : COLOR
{
  return pow(tex2D(s0, tex), 1.0 / 2.2);
}
Yet if I make an MPC-HC shader like the following, madVR will convert the gamma from 2.2 -> 4.88 gamma (or 1.0 -> 2.2):
Code:
sampler s0;

float4 main(float2 tex : TEXCOORD0) : COLOR
{
  return pow(tex2D(s0, tex), 1.0 / 0.45);
}
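[Editor's note: a neutral way to keep the two pow() directions straight, assuming a pure 2.2 power curve and ignoring BT.709's linear segment: an exponent of 1/2.2 raises mid-tones (gamma encode), an exponent of 2.2, roughly 1/0.45, lowers them (gamma decode). This sketch takes no side in the question above.]

```python
# Gamma encode (linear -> gamma) brightens mid-tones; gamma decode
# (gamma -> linear) darkens them. Pure power curves, toy illustration only.
def encode(x: float) -> float:
    return x ** (1.0 / 2.2)

def decode(x: float) -> float:
    return x ** 2.2

mid = 0.5
assert encode(mid) > mid > decode(mid)        # ~0.730 vs ~0.218
assert abs(encode(decode(mid)) - mid) < 1e-9  # the two are inverses
```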

Last edited by cyberbeing; 15th February 2014 at 12:15.
Old 15th February 2014, 12:18   #23179  |  Link
Aikibana
Registered User
 
Join Date: Aug 2012
Posts: 12
Quote:
Originally Posted by noee View Post
Go back to latest stable, 13.12. I've had some problems with the 14.1 beta as well, not worth struggling with right now. 14.2 is coming soon anyway, you could try again with that later.
Actually, I just updated to the 14.1 beta because 13.12 didn't enable smooth NNEDI3 playback either. For me, 14.1 works 'as good/bad' as 13.12.

Also tried overclocking my 7770 GHz Edition, but it doesn't change much and makes my system unstable.

From what I understood, a recent AMD GPU shouldn't have too much trouble with OpenCL and NNEDI3, right?
Old 15th February 2014, 12:18   #23180  |  Link
mithra66
Registered User
 
Join Date: Jan 2013
Location: Savoie FRANCE
Posts: 13
Sorry if I misinterpret the code, but shouldn't the LL value be calculated from the gamma settings (i.e. the fields "the display is calibrated to the following transfer function / gamma" and/or "desired display gamma / transfer function")?

Unless this does not make much difference with 2.2 gamma?