Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.

 

Old 28th February 2014, 07:28   #23981  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
@Asmodian

I don't understand your post, but I really want to, because it's in my interest too.
Do you mean you want to leave the VideoLUT that's loaded in the video card untouched, and have the 3DLUT do the color correction?

Quote:
I want madVR to ignore the GPU's gamma ramps (use only the 3DLUT for calibration) while not resetting the gamma ramps for everything else.
Isn't that what "disable GPU gamma ramps" does when NOT selected?
Leaving VideoLUT untouched, while applying the 3DLUT.
Quote:
I discovered "disable GPU gamma ramps" only has an effect if the 3DLUT does not have an attached set of ramps.
Does NOT have, or DO have?
Effect on what? VideoLUT loaded in the GPU?
Quote:
"disable GPU gamma ramps" does exactly what I want when used with a 3DLUT without attached ramps.
Quote:
The issue is that to get a good white point with argyllcms I need to include the calibrated gamma ramps in the 3DLUT creation,
if I do that linear gamma ramps are appended to the 3DLUT which get applied by madVR (changing the white point for Windows).
Please explain your post more clearly.
It's not much use to anyone if you're the only one who can understand it...


As I understand it:
The 3DLUT should have an embedded Gamma/Calibration ramp (Linear, if you do NOT select "Apply Calibration" in the 3DLUT creator).
"disable GPU gamma ramps" should disable the VideoLUT ramp.
If you do not select "disable GPU gamma ramps" while a 3DLUT is active, you'll get two calibration ramps applied one on top of the other.

You can always just Profile in ArgyllCMS without Calibrating.
This will embed a totally flat linear calibration file into the resulting profile/3DLUT.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.

Last edited by James Freeman; 28th February 2014 at 07:47.
James Freeman is offline   Reply With Quote
Old 28th February 2014, 07:56   #23982  |  Link
Mangix
Audiophile
 
Join Date: Oct 2006
Posts: 353
Quote:
Originally Posted by Ver Greeneyes View Post
Converting from YUV420 to RGB, chroma doubling, image upscaling and downscaling will all generate values with a fractional part, so dithering will almost always be needed. Does 6-bit + dithering look good to you, or does it just look noisier than 7 or 8 bit without dithering?
6-bit + dithering looks slightly worse than 7-bit with no dithering. But honestly this feels like I'm grasping at straws here. The difference is barely visible.

w/e. My eyes hurt from staring at the monitor too close to compare these images.
Mangix is offline   Reply With Quote
Old 28th February 2014, 08:06   #23983  |  Link
QBhd
QB the Slayer
 
QBhd's Avatar
 
Join Date: Feb 2011
Location: Toronto
Posts: 697
I have a quick question... and staring at the screen for numerous settings has my eyes all fuzzy now... so I thought I would throw it out there for the experts to chime in. I push my R9 270x to max if possible and I have come to a fine hair's edge and can't decide the best of two scenarios:

ED (option 2) + Dynamic and use NNEDI3 x16 for Chroma upscaling

OR

Ordered Dithering and use NNEDI3 x32 for Chroma upscaling

the difference between the two is so small, but it's the difference between a few dropped frames to none.

Is the added level of NNEDI3 for Chroma upscaling worth the down-grade in dithering?

QB
__________________
QBhd is offline   Reply With Quote
Old 28th February 2014, 08:39   #23984  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
Quote:
Originally Posted by QBhd View Post
ED (option 2) + Dynamic and use NNEDI3 x16 for Chroma upscaling

OR

Ordered Dithering and use NNEDI3 x32 for Chroma upscaling

the difference between the two is so small, but it's the difference between a few dropped frames to none.

Is the added level of NNEDI3 for Chroma upscaling worth the down-grade in dithering?

QB
If it came down to lowering the neurons for chroma upscaling to enable higher-quality dithering, I'd rather do that than keep the higher neuron count for chroma doubling, or you could just overclock a touch and have both.

Last edited by ryrynz; 28th February 2014 at 08:51.
ryrynz is offline   Reply With Quote
Old 28th February 2014, 09:00   #23985  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Quote:
Originally Posted by James Freeman View Post
@Asmodian
Isn't that what "disable GPU gamma ramps" does when NOT selected?
Leaving VideoLUT untouched, while applying the 3DLUT.
No, with "disable GPU gamma ramps" selected madVR bypasses the gamma ramps. It doesn't matter what they are set to (inverted even), the color is the same in madVR. This setting does not change weather or not they are loaded. If they are included in the 3DLUT they are always loaded no matter what and this setting does not bypass them.

Quote:
Originally Posted by James Freeman View Post
Does NOT have, or DO have?
Effect on what? VideoLUT loaded in the GPU?
Does NOT have. If you have a set of gamma ramps attached to the 3DLUT, the option "disable GPU gamma ramps" does nothing: madVR looks the same with or without it checked. The LUT is always applied, but it is linear if it should be, so everything looks OK in madVR.

Quote:
Originally Posted by James Freeman View Post
As I understand it:
The 3DLUT should have an embedded Gamma/Calibration ramp (Linear, if you do NOT select "Apply Calibration" in the 3DLUT creator).
"disable GPU gamma ramps" should disable the VideoLUT ramp.
If you do not select "disable GPU gamma ramps" while a 3DLUT is active, you'll get two calibration ramps applied one on top of the other.
I do not know about a 3DLUT creator, I use command line. I am talking about the collink options:
-a file.cal Apply calibration curves to link output and append linear
-H file.cal Append calibration curves to 3dlut

Both of these situations are correct, but if you want to use madVR's 3DLUT instead of the video card's LUT, you have to have Windows also use linear gamma ramps while madVR is running. I want to be able to use "-a" but still have Windows calibrated all the time. Edit: I can do this if I un-append the linear curves (hex edit) and use madVR's "disable GPU gamma ramps".

Quote:
Originally Posted by James Freeman View Post
You can always just Profile in ArgyllCMS without Calibrating.
This will embed a totally flat linear calibration file into the resulting profile/3DLUT.
If you profile with ArgyllCMS without calibrating your white point will be off. Also, it will not embed a totally flat linear calibration file into the 3DLUT, it will not embed any gamma ramps at all (no -a or -H).

I hope I made more sense this time.

Last edited by Asmodian; 28th February 2014 at 09:36.
Asmodian is offline   Reply With Quote
Old 28th February 2014, 09:21   #23986  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by James Freeman View Post
I can't reproduce the green tint effect with OD32 & 3-bit.
I have equal balance between Green and Magenta.
Did you disable 3DLUT?

Here is a screenshot from my setup:
3bit OD32 Color
In that screenshot the majority of the pixels seem to be green; this is somewhat hard to see without processing the image, but it's definitely there. The clearest way to show it would be to increase the saturation so that the green/magenta pixels stand out, blur the result so that any area with more green pixels than magenta pixels becomes green, and then optionally increase the saturation again to make it even easier to see which regions are green and which are magenta.
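The saturate-blur-saturate trick Shiandow describes can be sketched in a few lines of numpy. The toy image and the simple box blur below are my own stand-ins for illustration, not Shiandow's actual processing:

```python
import numpy as np

def green_magenta_map(rgb, radius=2, boost=4.0):
    """Exaggerate green/magenta dither noise: boost saturation,
    box-blur, then boost again so each region's dominant tint shows."""
    x = rgb.astype(np.float64)

    def saturate(img, k):
        mean = img.mean(axis=-1, keepdims=True)  # per-pixel gray level
        return np.clip(mean + k * (img - mean), 0, 255)

    x = saturate(x, boost)
    # simple box blur, so areas with more green than magenta pixels average green
    kernel = 2 * radius + 1
    pad = np.pad(x, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    blurred = np.zeros_like(x)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    x = saturate(blurred / kernel**2, boost)
    return x.astype(np.uint8)

# toy 4x4 gray image with a slight green bias and one magenta-ish pixel
img = np.full((4, 4, 3), 128, np.uint8)
img[..., 1] += 2
img[0, 0] = (130, 126, 130)
out = green_magenta_map(img)
```

A real test would load the linked screenshot instead of the toy image; the point is that after the two saturation boosts the dominant tint of each region becomes obvious.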
Shiandow is offline   Reply With Quote
Old 28th February 2014, 10:42   #23987  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by ryrynz View Post
I guess random dither might drop off then?
Not right now. Let's keep all the options for a while and then after a couple of weeks or so we can revisit who likes which algorithm the most and which algorithm might not be used at all, anymore, and then maybe drop some options based on that...

Quote:
Originally Posted by TheLion View Post
just a quick heads up - after updating from Catalyst 13.12 to 14.2 Beta 1.3 NNEDI scaling is working now. It really was "falling back" to Jinc chroma upsampling before - for whatever reason.

All is good now
Good to know!

Quote:
Originally Posted by leeperry View Post
Just ran a few more tests:
-colorful noise still looks uneasy to watch
-dynamic is still very obvious
-A4's EE is annoying to look at on sharp & clear 1080p but it works wonderfully on blurry/noisy/low-bitrate videos, sharpening up without white halos, very cool stuff
-ED11 is unforgiving, making it dynamic makes pristine 30fps 1080p look veiled as the noise is dancing around on top of the video but when going 64x NNEDI 24p 720p@1080p the dynamic noise dance still adds a noise veil but the latter is also injecting some much required "fake" grain and it wouldn't appear to be much of an issue from a distance...anyway don't shoot the messenger.
-my Sammy TV's anti-glare layers are quite thick(even my cat's claws couldn't rip it ) but it comes with a semi-glossy finish so it's also veiling the picture anyway. OTOH glossy panels are unmanageable IME(the DELL S2440L that I briefly owned comes to mind) so it's all about compromises.
So which algorithm do you like most now? You're talking about "annoying" with A4 and about "look veiled" with dynamic ED11. So static ED11 it is for you?

Quote:
Originally Posted by GREG1292 View Post
I settled on 5bit with my projector ED11 OCD
can see a difference on a large screen. Nice work
madshi. And leepery Oblivion was great last night.
Had to watch that movie one more time and never
looked better.
Ouch. Your projector looks better with madVR set to 5bit compared to setting madVR to 6bit? That's quite interesting. I'd say it probably means that your projector isn't doing a very good job. Or maybe you just like the "grain" added by 5bit dithering?

Quote:
Originally Posted by YxP View Post
Using ED2 with both options ticked and I'm completely sold. I didn't like the previous opposite-colour dynamic test build, but I just can't see the extra "energy" anymore, even if I kinda wanted to like OD more.
Great!

Quote:
Originally Posted by cyberbeing View Post
madshi, I'd be interested in hearing your subjective impression on mono dynamic vs color dynamic with ordered dithering?
After the green tint fix (see below) it's the same as with mono dynamic vs color dynamic for error diffusion: Using "color" lowers the luma noise level, which I personally find more pleasing. Using "dynamic" is IMHO a must have with ordered dithering. Static ordered dithering simply doesn't work too well for video, IMHO. This is with testing at 5bit, though, since I can't see a difference at 8bit, when looking at my computer monitor at least.

Quote:
Originally Posted by cyberbeing View Post
ED11 mono dynamic is rather interesting. Unlike A4, I so far do not seem to be instantly taking notice of the dynamic movement. If I notice it at all, would still require more testing.

Which leaves me with needing to do a comparison of A4 mono static against ED11 mono dynamic which I've yet to do.
Please let me know about your final preferences, thanks.

Quote:
Originally Posted by aufkrawall View Post
Yes, this avoids dropped frames (but I think I still had presentation glitches with my GTX 670, will test this as soon as the Radeon is sent back).
But somehow also setting default debanding strength from low to high (same level like strength during fade in/outs) solves frame drops.
Isn't this very odd? The drops occur btw. during fade in/outs.
It's not *that* odd. Without that trade quality option, madVR rerenders the past 5 frames every time a fade is detected. The fade detection only recognizes a fade after 5 frames in a row have faded, so at the moment the fade is detected, madVR has to go back, throw away the rendering of the past 5 frames and rerender them. That will immediately cut the render queue down from 7-8/8 to 2-3/8. This *can* produce dropped frames once in a while if your GPU has a lot to do and circumstances are bad. Generally it might be a good idea to increase the GPU queue size if you want to use debanding with full fade functionality. E.g. with 12 GPU frames, the fade rerendering will only lower the GPU queue from 11-12/12 to 6-7/12, which should not be a problem.

Quote:
Originally Posted by aufkrawall View Post
Yes, I'm having sporadic presentation glitches with the Geforce. Happens both with either "don't rerender frames when fade in/out is detected" enabled and default debanding strength at high.
And you don't have any such glitches without debanding enabled at all? In the moment when those glitches occur (with the trade quality option activated), which state do the queues have? Are they all full? In any case, I'd suggest increasing the GPU queue size to 12, if your GPU has enough RAM for that. There isn't really any downside to using a higher GPU queue size, as far as I'm aware, except that "delay playback start until all queues are full" will delay a bit longer. In order to avoid presentation glitches you could also try increasing the number of "frames that shall be presented in advance".

Quote:
Originally Posted by 6233638 View Post
I'm not sure if it's a bug, or just something that's being magnified by viewing in 3-bit, but I've been doing some comparisons and images seem to have a slight green tint to them when using Ordered Dither and colored noise.
This does not seem to be a problem with the error diffusion builds.
Yes, thanks for reporting this, it's a bug. Basically the random value I added had the range 0..1 instead of -0.5..+0.5, so when the whole weight set was inverted for "oppositeColor", the green channel got a higher weight compared to red/blue. This problem only occurred with ordered dithering. Should be fixed in this build:

http://madshi.net/madVRfinalDither4.rar
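The off-center random offset madshi describes is easy to demonstrate numerically. This is a generic toy illustration of the bias, not madVR's actual dithering code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = np.full(n, 100.0)  # a constant mid-gray level

# buggy: random offset in 0..1 has mean +0.5, a systematic positive bias
buggy = np.round(signal + rng.random(n))
# fixed: offset in -0.5..+0.5 is zero-mean, so no shift on average
fixed = np.round(signal + (rng.random(n) - 0.5))

bias_buggy = buggy.mean() - 100.0   # about +0.5
bias_fixed = fixed.mean() - 100.0   # about 0
```

Applied per channel with inverted weights, a bias like this lands asymmetrically on green vs. red/blue, which matches the reported green tint.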

Quote:
Originally Posted by Mangix View Post
well this sucks...

I've been playing around with the madVR settings and have noticed that on my monitor, I cannot see a difference between 7 and 8 bits with dithering off. Dithering only makes a visual difference at 6-bit and below. Does this mean that my monitor is dithering internally? If so, is there any point in using madVR's dithering algorithms on this monitor? I can't see a difference between any of them at 7-bits(even None).
Soukyuu had exactly the same problem/question 2 pages ago. Please see this post (and the following) for my reply:

http://forum.doom9.org/showpost.php?p=1670818&postcount=23921

Quote:
Originally Posted by Mangix View Post
Right now I have madVR set to 7-bit with NO dithering. Turning dithering on seems to be counter-productive as it costs performance and I see absolutely no difference in the video, even when I look at the video very closely.

My initial thought about dithering was that it helps mainly with madVR's processing of the video(upscaling, debanding, etc...) but now it looks like it doesn't.
I strongly recommend *always* enabling dithering. You may not see the difference in many situations, but I guarantee that in some movies, in some scenes, there will be a visible difference. And enabled dithering is simply more correct, in any case. Look at my reply to Soukyuu for more information on how to properly test this.

Scientifically, your eyes should be able to differentiate about 11-12 bits of brightness information in a smooth gray ramp. So if you set madVR to 7 bit without dithering, you're throwing away TONS of potential brightness steps for no good reason. Maybe current content is often not good enough to show a clear benefit, but as I said, that depends on the exact movie/scene. And we'll hopefully soon get 4K content with higher native bitdepths.
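madshi's point about throwing away brightness steps can be shown numerically. Below is a toy sketch of 7-bit quantization with and without one-LSB rectangular dither, not madVR's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
bits = 7
step = 1 / (2**bits - 1)          # size of one 7-bit brightness step

# quantizing a smooth gray ramp without dithering collapses it into
# 2^7 = 128 distinct bands
ramp = np.linspace(0.0, 1.0, 1024)
banded = np.round(ramp / step) * step

# a level between two 7-bit steps: undithered it snaps to the nearest
# step, while with one-LSB rectangular dither the *average* of many
# quantized samples recovers the true level
target = 0.374
undithered = np.round(target / step) * step
dithered = np.round((target + (rng.random(200_000) - 0.5) * step) / step) * step

err_plain = abs(undithered - target)        # about half a step
err_dither = abs(dithered.mean() - target)  # close to zero
```

Spatially and temporally, the eye does the averaging, which is why dithered output can carry more effective brightness resolution than the output bit depth alone suggests.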

Quote:
Originally Posted by James Freeman View Post
If you are watching a movie (which is already dithered by the studios), then everything is fine.
Please don't *ever* recommend or support disabling dithering. It's a bad bad bad idea. The hair on my neck is standing up. Seriously.

Quote:
Originally Posted by The 8472 View Post
Environmentally friendly madVR, conserves bits.
Quote:
Originally Posted by Shiandow View Post
Apparently it is green as well...
Haha!

Quote:
Originally Posted by Asmodian View Post
I finally figured out how to make madVR handle GPU gamma ramps and 3DLUTs in the way I want. The only problem is that I needed to use a hex editor to remove the linear gamma ramps from the 3DLUT created by argyllcms.

I want madVR to ignore the GPU's gamma ramps (use only the 3DLUT for calibration) while not resetting the gamma ramps for everything else.

I discovered "disable GPU gamma ramps" only has an effect if the 3DLUT does not have an attached set of ramps. "disable GPU gamma ramps" does exactly what I want when used with a 3DLUT without attached ramps. The issue is that to get a good white point with argyllcms I need to include the calibrated gamma ramps in the 3DLUT creation, if I do that linear gamma ramps are appended to the 3DLUT which get applied by madVR (changing the white point for Windows). I think these ramps should not be applied if "disable GPU gamma ramps" is checked, that setting should over-write argyll instead of the other way around.
I'm a bit confused here. You should be doing the ArgyllCMS calibration with linear ramps loaded into the GPU. If you don't do that, you'll lose quality because the GPU ramps only work in 8bit (when using digital output), so that would screw up the whole calibration accuracy.

And if you perform the ArgyllCMS calibration with linear ramps loaded into the GPU, then the GPU ramps also have to be linear when playing the video, otherwise you'll get incorrect colors.

Am I missing something?

Quote:
Originally Posted by QBhd View Post
I have a quick question... and staring at the screen for numerous settings has my eyes all fuzzy now... so I thought I would throw it out there for the experts to chime in. I push my R9 270x to max if possible and I have come to a fine hair's edge and can't decide the best of two scenarios:

ED (option 2) + Dynamic and use NNEDI3 x16 for Chroma upscaling

OR

Ordered Dithering and use NNEDI3 x32 for Chroma upscaling

the difference between the two is so small, but it's the difference between a few dropped frames to none.

Is the added level of NNEDI3 for Chroma upscaling worth the down-grade in dithering?
I don't know. If I've learned anything in this thread then it is that every user's eyes/brain works a bit differently. It was unfortunately necessary to add so many new dithering options simply because we couldn't come to a clear agreement which algorithm was best.

So to answer your question, I fear you will need to try both configurations you suggested yourself and check which one your eyes/brain prefer. Which dithering algorithm looks best might also be dependent on your display. And whether more neurons help will also depend on the movie/scene. So I really wouldn't know what to recommend to you...

-------

So here's another "final" build which fixes the green tint when using "colored" ordered dithering. The fix may also modify/improve mono colored ordered dithering ever so slightly compared to the previous "final" builds:

http://madshi.net/madVRfinalDither4.rar
madshi is offline   Reply With Quote
Old 28th February 2014, 11:03   #23988  |  Link
Owyn
Registered User
 
Join Date: Sep 2010
Posts: 35
madshi,

Hello, I think I have an audio noise issue related to madVR - https://trac.mpc-hc.org/ticket/4076
madVR somehow forces the audio to stay in perfect sync with the video, and this creates noise when the two can't be perfectly synced due to CPU overuse (it starts when the player is using 50-60% CPU). I tried old madVR versions (09 and 020) and those don't seem to have this: when I lowered the priority of MPC-HC and forced the video to lag (it didn't want to on its own, since those builds use far fewer resources than madVR does now), the audio desynced but didn't make any noise. From madVR040.zip and above the noise is there, as I tested.

So could you please (please please please) add an option to turn off the perfect audio-to-video syncing, so it stops making that noise?
Owyn is offline   Reply With Quote
Old 28th February 2014, 11:05   #23989  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Quote:
Originally Posted by madshi View Post
And if you perform the ArgyllCMS calibration with linear ramps loaded into the GPU, then the GPU ramps also have to be linear when playing the video, otherwise you'll get incorrect colors.

Am I missing something?
Well can't I get the effect of linear ramps by using "disable GPU gamma ramps" without actually having to load linear ramps?
Asmodian is offline   Reply With Quote
Old 28th February 2014, 11:14   #23990  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Owyn View Post
Hello, I think I have an audio noise issue related to madVR - https://trac.mpc-hc.org/ticket/4076
madVR somehow forces the audio to stay in perfect sync with the video, and this creates noise when the two can't be perfectly synced due to CPU overuse (it starts when the player is using 50-60% CPU). I tried old madVR versions (09 and 020) and those don't seem to have this: when I lowered the priority of MPC-HC and forced the video to lag (it didn't want to on its own, since those builds use far fewer resources than madVR does now), the audio desynced but didn't make any noise. From madVR040.zip and above the noise is there, as I tested.

So could you please (please please please) add an option to turn off the perfect audio-to-video syncing, so it stops making that noise?
madVR has zero feedback into the video player or audio system. Basically the upstream filters (splitter, decoder etc) send stuff to madVR and madVR then renders it. I don't see how any of this would affect audio playback. The audio playback components don't even know whether madVR renders anything or nothing at all. So I think it's unlikely that this is madVR's fault. Except maybe because madVR is consuming some CPU resources.

Quote:
Originally Posted by Asmodian View Post
Well can't I get the effect of linear ramps by using "disable GPU gamma ramps" without actually having to load linear ramps?
From a technical point of view there is no way to "disable the GPU gamma ramps". Basically they are always used. So disabling them practically means filling them with linear values. So disable=linear.
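A quick sketch of madshi's "disable = linear" point. The ramp layout below (256 16-bit entries per channel, as in GDI's SetDeviceGammaRamp) is my assumption about how Windows stores it; the identity property is what matters:

```python
# A "disabled" gamma ramp is just the identity (linear) ramp: every
# input level maps to itself. 65535/255 = 257, so entry i is i*257.
linear_ramp = [i * 65535 // 255 for i in range(256)]

def apply_ramp(level, ramp):
    """Send an 8-bit level through a 16-bit ramp and back to 8 bits."""
    return round(ramp[level] * 255 / 65535)

# with the linear ramp loaded, every level passes through unchanged,
# which is exactly the effect of "disabling" the ramps
unchanged = all(apply_ramp(v, linear_ramp) == v for v in range(256))
```

Any non-linear ramp would remap at least some of the 256 levels, which is why the hardware ramps cannot be turned off, only neutralized.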
madshi is offline   Reply With Quote
Old 28th February 2014, 11:23   #23991  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Quote:
Originally Posted by madshi View Post
From a technical point of view there is no way to "disable the GPU gamma ramps". Basically they are always used. So disabling them practically means filling them with linear values. So disable=linear.
If I check/uncheck "disable the GPU gamma ramps" the white point in Windows does not change while the white point in madVR does.

If I use a linear ramp attached to the 3DLUT the white point in Windows changes too.

The video in MadVR looks the same either way (global linear or just "disable the GPU gamma ramps" checked).

Is this not what you are expecting with that option? It doesn't load linear ramps into Windows for me (Win 8.1).

Last edited by Asmodian; 28th February 2014 at 11:29.
Asmodian is offline   Reply With Quote
Old 28th February 2014, 11:27   #23992  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by mzso View Post
I was tinkering around with encoding full-range RGB UT Video to YUV 4:4:4 H.264, and I kept getting results that looked a bit different. I tried using only ffmpeg, then AviSynth with x264. As it turns out it might be a renderer issue. Here's my source and one of my encoded files. People claimed it looks like something that might be related to BT.601/BT.709 conversion.
Here's what I got:
http://screenshotcomparison.com/comparison/64616 - the color always got more orangish.
Someone claimed that my encoded file looks correct, so I made screenshots with PotPlayer's EVR-CP, where I got the opposite results:
http://screenshotcomparison.com/comparison/64643
(plain EVR rendered most obviously wrong - colors too dark)

So is this a renderer bug? Or something else?
Quote:
Originally Posted by sneaker_ger View Post
@mzso @aufkrawall (@madshi)
It's not an issue with wrong flags, the file is correctly detected as full range BT.709. I made a lossless encode and the YUV 4:4:4 part looks different through madVR:
http://abload.de/img/ll_dither_madvr_ccujp.png (MPC-HC snapshot function of avs script opened with madVR)
http://abload.de/img/ll_dither_imagewriter5iu8w.png (screen from avs script through ImageWriter())
http://abload.de/img/ll_112_madvr2hu21.png (MPC-HC snapshot function of sample opened with madVR)

http://www.file-upload.net/download-...ut_ll.mkv.html

Script for the screenshots:
Code:
lwlibavvideosource("output_ll.mkv")
Dither_convert_yuv_to_rgb(matrix="709", tv_range=false, output="rgb24")
Not sure if it's an error so I'll leave it up for discussion.
I can see the color mismatch problem on my PC, too. However, try forcing LAV to NV12 output - and the colors still stay the same. So this has nothing to do with madVR getting 4:4:4 content. Must be something else.

My best guess is that x264 uses buggy math to convert the video from RGB to (fullrange) YCbCr. But of course this is only a wild guess, nothing else. I did see (and report) bugs in x264's color conversion code in the past already, though. So this being x264's fault would not come as a surprise to me.

The only way this could possibly be madVR's fault would be if madVR generally would handle all YCbCr fullrange input incorrectly. I think somebody would have noticed that already, if that were the case? Anyway, anybody willing to put this to a test? I've no experience with encoding stuff, so I'm probably not the right person to do the testing/experimentation...
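One way to sanity-check the full-range vs. limited-range suspicion is to compare the two conversions directly. This is a minimal sketch of the standard BT.709 luma formulas, not x264's or madVR's actual code:

```python
def rgb_to_y_bt709(r, g, b, full_range=True):
    """Luma of an 8-bit RGB pixel using the BT.709 coefficients.
    Full range maps white to Y=255; limited range squeezes Y into 16..235."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # 0..255
    if not full_range:
        y = 16 + y * 219 / 255
    return round(y)

# if the encoder writes one range but the renderer decodes the other,
# every level shifts: white alone is off by 20 codes
y_full = rgb_to_y_bt709(255, 255, 255, full_range=True)
y_limited = rgb_to_y_bt709(255, 255, 255, full_range=False)
```

A mismatch like this shows up as globally shifted brightness/saturation rather than a hue-specific error, which is consistent with the "orangish" screenshots being a levels or matrix disagreement somewhere in the chain.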

Quote:
Originally Posted by mzso View Post
It didn't let me change the color primaries though (gamut conversion disabled). Why is that? Is it important?
We're just trying to play the encode with the same colors as the original file. Since the encoding process is very unlikely to change the primaries, I don't think we have to play with different primaries in madVR. The color encoding/decoding matrix and the levels (limited range vs full range) is probably where it's at.
madshi is offline   Reply With Quote
Old 28th February 2014, 11:33   #23993  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Asmodian View Post
If I check/uncheck "disable the GPU gamma ramps" the white point in Windows does not change while the white point in madVR does.
Not enough information. Is this with a 3dlut loaded or not? Does the 3dlut have GPU ramps attached or not? Are we talking about overlay mode or windowed mode or FSE mode? All of these will influence the results.

This is an extremely complex topic because so many variables are involved. Right now I'm pretty confused about what you were/are doing, and why you manually modified the 3dlut file etc. It makes no sense to me right now. Have you read and understood what I wrote about the ramps having to be linear=disabled while doing ArgyllCMS calibration? Is that the way you were doing the measurements/calibration?
madshi is offline   Reply With Quote
Old 28th February 2014, 11:33   #23994  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
Originally Posted by madshi View Post
My best guess is that x264 uses buggy math to convert the video from RGB to (fullrange) YCbCr.
This has nothing to do with x264 as my screenshots are all from the same lossless sample, not from mzso's original files.
sneaker_ger is offline   Reply With Quote
Old 28th February 2014, 11:38   #23995  |  Link
Owyn
Registered User
 
Join Date: Sep 2010
Posts: 35
Quote:
Originally Posted by madshi View Post
madVR has zero feedback into the video player or audio system. Basically the upstream filters (splitter, decoder etc) send stuff to madVR and madVR then renders it. I don't see how any of this would effect audio playback. The audio playback components don't even know whether madVR renders anything or nothing at all. So I think it's unlikely that this is madVR's fault. Except maybe because madVR is consuming some CPU resources.
Weird, because old versions of madVR just desync the audio without making noise, while new ones make audio noise and won't let the audio desync - both tested under artificial lag and without the needed CPU resources.

so it's the LAV video decoder \ splitter which does this?

LAV has some if\else conditions for MadVR version then to know when to make the noise and when not to?

Last edited by Owyn; 28th February 2014 at 11:57.
Owyn is offline   Reply With Quote
Old 28th February 2014, 11:54   #23996  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Quote:
Originally Posted by madshi View Post
Not enough information. Is this with a 3dlut loaded or not? Does the 3dlut have GPU ramps attached or not? Are we talking about overlay mode or windowed mode or FSE mode? All of these will influence the results.

This is an extremely complex topic because so many variables are involved. Right now I'm pretty confused about what you were/are doing, and why you manually modifed the 3dlut file etc etc. Makes all no sense to me right now. Have you read and understood what I wrote about the ramps having to be linear=disabled while doing ArgyllCMS calibration? Is that the way you were doing the measurements/calibration?
I am using a 3DLUT either with a set of linear ramps attached or not (manually removed). Windows 8.1 Overlay Mode only.

I do start calibration with linear ramps, dispwin -c. I then profile with linear ramps loaded but using dispread -K:

-K file.cal Apply calibration file to test values while reading

I then create the 3DLUT with collink -a:

-a file.cal Apply calibration curves to link output and append linear

I then have a calibration which looks great with linear ramps loaded and very odd with my "normal" ramps from dispcal loaded. I want linear ramps for madVR and my calibrated ramps in Windows. I had thought this was impossible too... but it seems to work.

Quote:
Originally Posted by madshi View Post
From a technical point of view there is no way to "disable the GPU gamma ramps". Basically they are always used. So disabling them practically means filling them with linear values. So disable=linear.
In this mode (Win 8.1 overlay) I can see the white point obviously change toggling "disable the GPU gamma ramps" while Windows doesn't change at all. I will have to test in other modes, I will report back soon.

If it only works in this exact situation I will be happy with my hack and you can ignore me.
Asmodian is offline   Reply With Quote
Old 28th February 2014, 11:55   #23997  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Quote:
Originally Posted by Owyn View Post
Weird, because old versions of madVR just desync the audio without making noise, while new ones make audio noise and won't let the audio desync - both tested under artificial lag and without the needed CPU resources.

so it's the LAV video decoder \ splitter which does this?

LAV has some if\else conditions for MadVR version then to know when to make the noise and when not to?
I always noticed this with Reclock, are you using something similar?
Asmodian is offline   Reply With Quote
Old 28th February 2014, 11:57   #23998  |  Link
QBhd
QB the Slayer
 
QBhd's Avatar
 
Join Date: Feb 2011
Location: Toronto
Posts: 697
Quote:
Originally Posted by ryrynz View Post
If it came down to lowering the neurons for the chroma upscaling to enable higher quality dithering I'd rather do that than have higher neurons for chroma doubling, or you could just overclock a touch and have both
LOL... it's already a factory OC'd R9 270x (1120 MHz)... and I'm not sure I want to push it even further... madVR is even more demanding than the latest Thief game :P

Quote:
Originally Posted by madshi View Post
I don't know. If I've learned anything in this thread, it is that every user's eyes/brain work a bit differently. It was unfortunately necessary to add so many new dithering options simply because we couldn't come to a clear agreement on which algorithm was best.

So to answer your question, I fear you will need to try both configurations you suggested yourself and check which one your eyes/brain prefer. Which dithering algorithm looks best might also be dependent on your display. And whether more neurons help will also depend on the movie/scene. So I really wouldn't know what to recommend to you...
Double LOL... I had a feeling you might say that! I'm pretty sure that the ED (option 2) gave me slightly better results than the extra neurons for chroma upscaling... I just hoped for an educated guess, since I was blurry-eyed by the time I was done tweaking and posted this :P

BTW... ED option 2 + Mono Dynamic is awesome!

QB
__________________

Last edited by QBhd; 28th February 2014 at 12:02.
QBhd is offline   Reply With Quote
Old 28th February 2014, 11:59   #23999  |  Link
Owyn
Registered User
 
Join Date: Sep 2010
Posts: 35
Quote:
Originally Posted by Asmodian View Post
I always noticed this with Reclock, are you using something similar?
Just the standard MPC-HC player, official build, with everything internal. If I had something specific that was causing this, I'd be happy to get rid of it.
Owyn is offline   Reply With Quote
Old 28th February 2014, 12:01   #24000  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by sneaker_ger View Post
This has nothing to do with x264 as my screenshots are all from the same lossless sample, not from mzso's original files.
Just found this old thread related to similar chroma problems:

http://forum.doom9.org/showthread.php?t=161915

I hope that after reading that thread you'll have more trust in madVR than in swscale...

Quote:
Originally Posted by Owyn View Post
Weird, because old versions of madVR just desync with audio without making noise, while new ones make audio noise and won't let the audio desync. Both tested under artificial lag and with insufficient CPU resources.

So it's the LAV video decoder / splitter which does this?

Does LAV have some if/else conditions on the madVR version to know when to make the noise and when not to?
I've no idea where the noise comes from, but madVR has pretty much nothing to do with the audio side of things. So whether there's noise or not is outside of my control, from what I can see.

Quote:
Originally Posted by Asmodian View Post
I am using a 3DLUT either with a set of linear ramps attached or not (manually removed). Windows 8.1 Overlay Mode only.

I do start calibration with linear ramps, dispwin -c. I then profile with linear ramps loaded but using dispread -K:

-K file.cal Apply calibration file to test values while reading

I then create the 3DLUT with collink -a:

-a file.cal Apply calibration curves to link output and append linear

I then have a calibration which looks great with linear ramps loaded and very odd with my "normal" ramps from dispcal loaded. I want linear ramps for madVR and my calibrated ramps in Windows. I had thought this was impossible too... but it seems to work.
Overlay mode makes a BIG difference because Overlay mode totally ignores the GPU gamma ramps. madVR has to do extra work to manually apply the GPU gamma ramps in Overlay mode. Yeah, ok, I guess madVR could make use of this Overlay specialty by leaving the GPU gamma ramps alone when using Overlay mode. I'm not sure if madVR currently does that or not.
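To illustrate what "disable = linear" means in practice, here is a toy Python sketch (not madVR's actual code) of applying a GDI-style 256-entry, 16-bit gamma ramp in software, the way a renderer has to when the overlay path bypasses the GPU LUT. With a linear (identity) ramp every pixel passes through unchanged, which is exactly why disabling the ramps is the same as loading linear ones:

```python
# Toy illustration of software gamma-ramp application (not madVR's code).
# Ramps follow the SetDeviceGammaRamp convention: 256 entries, 16-bit values.

def make_linear_ramp():
    # Identity ramp: entry i maps the 8-bit value i to the 16-bit value i*257.
    return [i * 257 for i in range(256)]

def make_gamma_ramp(gamma):
    # A simple power-law ramp, e.g. gamma=2.2.
    return [round(((i / 255.0) ** gamma) * 65535) for i in range(256)]

def apply_ramp(pixel8, ramp):
    # Look up the 8-bit value and scale the 16-bit result back to 8 bits.
    return ramp[pixel8] >> 8

# With a linear ramp, applying it is a no-op: disable == linear.
linear = make_linear_ramp()
assert all(apply_ramp(v, linear) == v for v in range(256))
```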

Quote:
Originally Posted by QBhd View Post
Double LOL... I had a feeling you might say that! I'm pretty sure that the ED (option 2) gave me slightly better results than the extra neurons for Chroma upscaling... just hoped for an educated guess, since I was blurry eyed by the time I was done tweaking :P
One thing in favor of ED is that *IF* your eyes can see a difference at all with ED, then you should see that in most movies and most scenes. While improved chroma upscaling will only be visible in specific scenes. If your eyes do generally prefer ED over ordered dithering, then I'd recommend going that way.
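For readers wondering why ED differences show up almost everywhere: error diffusion carries each pixel's quantization error into its neighbours, so even flat areas turn into patterns whose local average matches the source. A minimal one-dimensional Python sketch (vastly simplified compared to madVR's implementation) shows the idea:

```python
# Minimal 1-D error diffusion, just to illustrate the principle.
def diffuse_1d(values, levels):
    """Quantize floats in [0, 1] to `levels` steps, pushing each pixel's
    quantization error onto the next pixel."""
    out, err = [], 0.0
    step = levels - 1
    for v in values:
        v += err
        q = round(v * step) / step   # nearest representable level
        err = v - q                  # carry the leftover error forward
        out.append(q)
    return out

# A flat mid-grey row dithered to 1 bit alternates between 0.0 and 1.0,
# so the local average stays at 0.5 instead of snapping to one level.
print(diffuse_1d([0.5] * 8, 2))
```

Ordered dithering instead uses a fixed threshold pattern, so on some content it quantizes the same way everywhere and the difference only becomes visible in particular scenes.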
madshi is offline   Reply With Quote