Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Go Back   Doom9's Forum > Hardware & Software > Software players

Old 14th July 2016, 14:19   #38701  |  Link
zoyd
Registered User
 
Join Date: Sep 2009
Posts: 43
Quote:
Originally Posted by madshi View Post
So basically I'd be using something like dE to calculate the best compromise between saturation loss and luminance loss (while keeping the hue angle constant)?
Yes but that assumes the hue angle is obtainable at the desired saturation location. This may not always be the case if for example you are mapping P3 primaries down to Rec709 primaries. In that case you have to vary all three to find the best dE-based compromise, because out-of-gamut now can involve the wrong hue, saturation and luminance.
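The "vary all three" search zoyd describes can be brute-forced offline. A minimal Python sketch, with all helper names hypothetical: linear sRGB stands in for the Rec.709 target gamut, CIE76 for the metric (the thread later discusses better metric choices), and the search simply scans lightness and chroma along the constant-hue line; dropping the fixed-hue constraint would just add a third loop.

```python
import math

# D65 white point (standard CIE constants)
XN, YN, ZN = 0.95047, 1.0, 1.08883

def lab_to_linear_srgb(L, a, b):
    """CIELAB -> linear sRGB (D65). Channels outside [0,1] are out of gamut."""
    def f_inv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    fy = (L + 16) / 116
    x = XN * f_inv(fy + a / 500)
    y = YN * f_inv(fy)
    z = ZN * f_inv(fy - b / 200)
    return ( 3.2406 * x - 1.5372 * y - 0.4986 * z,
            -0.9689 * x + 1.8758 * y + 0.0415 * z,
             0.0557 * x - 0.2040 * y + 1.0570 * z)

def in_gamut(lab, eps=1e-6):
    return all(-eps <= c <= 1 + eps for c in lab_to_linear_srgb(*lab))

def map_into_gamut(target, steps=64):
    """Brute-force the in-gamut color closest (CIE76) to `target`,
    varying lightness and chroma while keeping the hue angle constant."""
    L0, a0, b0 = target
    best, best_de = None, float("inf")
    for i in range(steps + 1):
        L = 100 * i / steps
        for j in range(steps + 1):
            s = j / steps                   # chroma scale along the hue line
            cand = (L, s * a0, s * b0)
            if in_gamut(cand):
                de = math.dist(cand, target)
                if de < best_de:
                    best, best_de = cand, de
    return best, best_de
```

For a pixel like Lab(50, 100, -100), which falls outside sRGB, the search returns an in-gamut color noticeably closer than simply collapsing to gray.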
Old 14th July 2016, 14:23   #38702  |  Link
StinDaWg
Registered User
 
Join Date: Jan 2014
Posts: 216
Quote:
Originally Posted by madshi View Post
This can happen with weird refresh rates, especially in 10bit mode, especially with AMD GPU drivers. I don't think I've seen this reported with NVidia drivers yet. One possible fix is to lower the number of prepresented frames to e.g. 4-6. This reportedly has helped for most users in the past. I believe it's a GPU driver issue, which is related to the mixture of using D3D11, weird refresh rates, a high number of prepresented frames, and maybe 10bit output.
I tried changing prepresented frames to 1, 2, 3, and 4, as well as switching to 8-bit output, and it made no difference. Even though the queues are near empty and stay that way, madVR is not reporting any dropped frames, so I'm not sure what to think really.

Any idea why it does the same thing in D3D11 FSE after pausing the video and the Windows screen saver activates? This happens with all files, not just 30 fps. Should I just ignore the stats as long as I don't visually see stuttering?

Last edited by StinDaWg; 14th July 2016 at 14:25.
Old 14th July 2016, 14:42   #38703  |  Link
ShiftyFella
Registered User
 
Join Date: Apr 2016
Posts: 17
Quote:
Originally Posted by madshi View Post
Yes.


Ok. To be fair, both the resolution and the refresh rate differ when comparing HDMI and DisplayPort in your screenshots. So I'm not sure it's proof yet that DisplayPort doesn't have the issue.
Ah, sorry. It's two different devices with different resolutions: the TV is connected via HDMI and the monitor via DisplayPort. The monitor can only do 60 Hz, while the TV has different display modes set up.

Ok, I just did a quick test again with display modes turned off for the TV, and with the monitor resolution dropped to 1080p (using either DP or HDMI) to make the comparison fairer. Now I'm not sure what to make of it. With display modes turned off for the TV, and using HDMI or DisplayPort with the monitor, FSE produces no issues and all queues fill up, with presentation times looking normal. Turning display modes on for the TV produces the above-mentioned glitch.

Next, I left only 1080p60 in the display modes list and turned display modes on for the monitor as well. Having display modes on or off for the monitor produced no issues with FSE over either HDMI or DisplayPort. Meanwhile the TV, even with a display mode of only 1080p60, produces glitches in FSE, yet with display modes turned off and running at the same 1080p60 it has no issues.

To make sure there was no bandwidth limitation or multi-monitor issue, I had only one device connected to the video card at a time and used the same cable for the HDMI connection. This is really weird, and I'm not sure it's even the same issue StinDaWg is experiencing, if it's an issue at all and not just random AMD driver glitches.

Last edited by ShiftyFella; 14th July 2016 at 14:48.
Old 14th July 2016, 15:16   #38704  |  Link
Sunset1982
Registered User
 
Join Date: Sep 2014
Posts: 280
Quote:
This is really what the media player should do. I do know that not all media players provide a nice exclusive mode GUI, though, which is the reason why I implemented the seekbar in the first place. Maybe at some point I'll add more GUI controls for media players that don't have their own in exclusive mode. But it's not a high priority atm.

Is there a player which supports madVR FSE mode while using a GUI OSD? If so, which one is it? A player with support for madVR FSE D3D11 (for 10-bit output) and a good skinnable GUI would be great...
__________________
Intel i5 6600, 16 GB DDR4, AMD Vega RX56 8 GB, Windows 10 x64, Kodi DS Player 17.6, MadVR (x64), LAV Filters (x64), XySubfilter .746 (x64)
LG 4K OLED (65C8D), Denon X-4200 AVR, Dali Zensor 5.1 Set
Old 14th July 2016, 15:42   #38705  |  Link
SpoCk0nd0pe
Registered User
 
Join Date: Jun 2015
Posts: 25
Quote:
Originally Posted by madshi View Post
My control over what the GPU outputs is very limited. I can either output frame packed 3D, or conventional 2D. I have no other options. When outputting conventional 2D, I can do some fancy pixel resorting, which allows me to render 3D for those line or column alternate displays (passive IPS LCDs). But I don't see how I could do frame sequential 3D. Neither Windows nor the GPU manufacturers offer any kind of API for that.
Frame sequential 3d in conventional 2d is really what I am looking for.

So for example: madvr gets frame packed 3d material at 47.952 Hz. What I would like to be able to do is: split the right and left frames and send them one after the other at 95.904 Hz to the TV as conventional 2d frames.
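The splitting step itself is simple; a hypothetical sketch of the idea follows. Real HDMI frame packing inserts a blanking band between the two eye views, which is ignored here, and as madshi's reply notes there is no API to tell the display which output frame belongs to which eye.

```python
import numpy as np

def split_frame_packed(packed_frames, fps):
    """Split top-bottom frame-packed 3D frames (left eye on top) into a
    frame-sequential 2D stream at double the refresh rate. Hypothetical
    sketch: ignores the blanking band of real HDMI frame packing."""
    half = packed_frames[0].shape[0] // 2
    sequential = []
    for frame in packed_frames:
        sequential.append(frame[:half])   # left-eye view
        sequential.append(frame[half:])   # right-eye view
    return sequential, fps * 2            # e.g. 47.952 Hz -> 95.904 Hz
```

Note that any frame drop in such a stream must remove a left/right pair, otherwise the eye order swaps, which is exactly the complication madshi raises below.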
Old 14th July 2016, 15:55   #38706  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by colinhunt View Post
- Radeon RX 480 (HDCP 2.2 and HDMI 2.0 with support for HDR), connected via HDMI to an HDR 4K television.

I'm trying to play HDR 4K demo clips on MPC-HC which has been configured to use MadVR but the screen remains black. Audio plays, though.
HEVC decoding with Radeon 480 has been reported to be problematic before. Does this work with EVR?

Quote:
Originally Posted by zoyd View Post
Yes but that assumes the hue angle is obtainable at the desired saturation location. This may not always be the case if for example you are mapping P3 primaries down to Rec709 primaries. In that case you have to vary all three to find the best dE-based compromise, because out-of-gamut now can involve the wrong hue, saturation and luminance.
Doing this kind of stuff in real time in a pixel shader does come with some limitations. Minimizing dE doesn't seem to be a simple one-step math calculation, at least I don't see how I would do that. So I'll probably stick to keeping the hue angle constant at all times and just try to find the right compromise between reducing saturation and luminance.
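While the dE-optimal point does need a search, the pure-desaturation end of that compromise (keep hue and luminance, give up saturation) has a one-step closed form: scale chroma about the luma axis by the largest factor that puts every channel back in range. A sketch in linear light; the BT.2020 luma weights and the assumption that luma itself is in [0, 1] are mine, not madVR's actual math.

```python
def clamp_into_gamut(r, g, b):
    """Constant-hue desaturation toward the luma axis (hypothetical sketch,
    linear light). Scales chroma by the largest factor that brings all
    channels into [0, 1]. Assumes 0 <= luma <= 1."""
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b   # BT.2020 luma weights (assumption)
    s = 1.0
    for c in (r, g, b):
        if c > 1.0 and c != y:
            s = min(s, (1.0 - y) / (c - y))    # channel overshoots white
        elif c < 0.0 and c != y:
            s = min(s, (0.0 - y) / (c - y))    # channel undershoots black
    return tuple(y + s * (c - y) for c in (r, g, b))
```

Trading away some luminance as well would mean also lowering y before clamping, which is exactly where the compromise search comes in.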

Quote:
Originally Posted by StinDaWg View Post
Any idea why it does the same thing in D3D11 FSE after pausing the video and the Windows screen saver activates? This happens with all files, not just 30 fps. Should I just ignore the stats as long as I don't visually see stuttering?
I don't really know. I suppose as long as there are no frame drops reported, it's ok.

Quote:
Originally Posted by Sunset1982 View Post
Is there a player which supports madvr fse mode while using a gui osd? If so, which one is it?
I think there are several players, but why don't you try for yourself? The list of supported players is available on www.madVR.com.

Quote:
Originally Posted by SpoCk0nd0pe View Post
Frame sequential 3d in conventional 2d is really what I am looking for.

So for example: madvr gets frame packed 3d material at 47.952 Hz. What I would like to be able to do is: split the right and left frames and send them one after the other at 95.904 Hz to the TV as conventional 2d frames.
But how would the display know that it's receiving frame sequential 3D instead of 2D? And how would the display know which frame is for the left eye and which for the right eye? Probably HDMI supports metadata to pass this information to the display somehow. But there's no API available I could use to send this information to the display.

It's also not really all that simple to implement this feature. If audio and video run out of sync, I'll need to drop a frame. Usually I only drop one frame. But with frame sequential output I'd have to make sure I always drop 2 frames, so the eye order doesn't swap.

Why do you need this? Does your TV not support frame packed 3D?
Old 14th July 2016, 18:27   #38707  |  Link
colinhunt
Registered User
 
Join Date: Dec 2002
Posts: 1,022
Quote:
Originally Posted by madshi View Post
HEVC decoding with Radeon 480 has been reported to be problematic before. Does this work with EVR?
HEVC 10bit UHD/59.94p (SDR), RX 480 connected via HDMI to a TV, MPC-HC with madVR: black screen, both fullscreen and windowed.

Same source, GPU etc. but with EVR: plays perfectly, CPU usage 2-5%

EVR doesn't support HDR output to HDR TV either.
Old 14th July 2016, 19:00   #38708  |  Link
jerryleungwh
Registered User
 
Join Date: Jun 2016
Posts: 39
Quote:
Originally Posted by madshi View Post
The log is pretty hard to interpret. It's made for my eyes, so I'm not sure if you can find what you need. Those 2 lines you mentioned are "good"; they should be there and don't indicate a problem.

If you upload the (zipped) log somewhere I can have a look if I can see something. Please activate the Ctrl+J OSD and keep it active while creating the log, then try to enter exclusive mode. Having the Ctrl+J OSD turned on is important because in this specific case it adds more information to the debug log.
Much appreciated. What I did was turn on the OSD, enter fullscreen, see the "exclusive mode failed" message, and leave it for 5 seconds before closing. I hope that's correct.

https://drive.google.com/file/d/0By8...ew?usp=sharing

And if you don't mind, may I also upload another log file later for you to take a look at? I can't seem to play frame-packed 3D properly either. My TV detects the 3D signal, but the image output is still 2D.
Old 14th July 2016, 20:52   #38709  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by madshi View Post
Doing this kind of stuff in real time in a pixel shader does come with some limitations. Minimizing dE doesn't seem to be a simple one-step math calculation, at least I don't see how I would do that. So I'll probably stick to keeping the hue angle constant at all times and just try to find the right compromise between reducing saturation and luminance.
Couldn't you generate a 3DLUT that precomputes all this stuff, and then you just apply the 3DLUT to every frame? In fact, that's exactly what happens when someone generates a 3DLUT using e.g. Argyll's collink and applies it in madVR - collink computes the gamut mapping between, say, BT.709 and the (measured) gamut of the output device, precomputes the conversion for every possible color, and outputs a 3DLUT to be used in madVR. This is literally the exact same problem. In fact one could theoretically use collink for HDR to SDR conversion today, though I don't know how well it would work.
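Applying such a precomputed 3DLUT per pixel is just a lattice lookup with trilinear interpolation between the eight surrounding grid points (GPUs do this in hardware via 3D texture sampling). A minimal sketch:

```python
import numpy as np

def apply_3dlut(pixel, lut):
    """Trilinearly interpolate an (N, N, N, 3) LUT at an RGB pixel in [0, 1].
    Minimal sketch of per-pixel application of a precomputed 3DLUT."""
    n = lut.shape[0]
    p = np.clip(np.asarray(pixel, float), 0.0, 1.0) * (n - 1)
    i0 = np.floor(p).astype(int)          # lower lattice corner
    i1 = np.minimum(i0 + 1, n - 1)        # upper lattice corner
    f = p - i0                            # fractional position in the cell
    out = np.zeros(3)
    # accumulate the 8 surrounding lattice points, weighted trilinearly
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - f[0], f[0])[dx]
                     * (1 - f[1], f[1])[dy]
                     * (1 - f[2], f[2])[dz])
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                out += w * lut[idx]
    return out
```

An identity lattice maps every pixel to itself, which makes a convenient sanity check for the interpolation.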

(Generating that 3DLUT in a reasonable amount of time, on the other hand, could prove challenging. collink for example can take a while to run. Waiting such a long time on playback start could be annoying. However it might be possible to make the process much faster by making a few reasonable performance tradeoffs. Graeme would probably know a lot more about this, since he wrote this stuff.)

By the way, I see we've come full circle in this discussion, we're back to discussing the use of DeltaE which I already suggested 7 pages back. There was some talk about using CAM02-SCD as well. I believe the main reason why you were not very enthusiastic about this idea was because calculating dE94 or dE2000 is done in CIELAB, which goes against the ICtCp trend that you mentioned. Maybe there is a robust color difference formula that can be used in ICtCp somewhere?

(By the way: if you look at the DeltaE formulas themselves, you might notice that the problem of minimizing dE94 can be rephrased in 3D space as finding the closest point that is within the gamut volume in the CIELAB system of coordinates. Maybe that could help, especially considering this is done on a GPU? My math skills are not good enough to allow me to elaborate I'm afraid.)

Last edited by e-t172; 14th July 2016 at 21:03.
Old 14th July 2016, 21:22   #38710  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by colinhunt View Post
HEVC 10bit UHD/59.94p (SDR), RX 480 connected via HDMI to a TV, MPC-HC with madVR: black screen, both fullscreen and windowed.

Same source, GPU etc. but with EVR: plays perfectly, CPU usage 2-5%

EVR doesn't support HDR output to HDR TV either.
Hmmmm... Does it work with madVR if you use DXVA copyback instead of native DXVA decoding?

Quote:
Originally Posted by jerryleungwh View Post
Much appreciated. What I did was turn on the OSD, enter fullscreen, see the "exclusive mode failed" message, and leave it for 5 seconds before closing. I hope that's correct.
According to the log madVR *tries* to enter fullscreen exclusive mode, but DXGI/Direct3D reports "DXGI_STATUS_OCCLUDED". Which according to Microsoft means:

> DXGI_STATUS_OCCLUDED if you request full-screen mode
> and it is unavailable

Whatever that means. I don't think there's much I can do about it.

Quote:
Originally Posted by jerryleungwh View Post
And if you don't mind, may I also upload another log file later for you to take a look at? I can't seem to play frame-packed 3D properly either. My TV detects the 3D signal, but the image output is still 2D.
A log file wouldn't help there.

I suspect all these problems are caused by Optimus. I don't really know how to help there, unfortunately.

Quote:
Originally Posted by e-t172 View Post
Couldn't you generate a 3DLUT that precomputes all this stuff, and then you just apply the 3DLUT to every frame?
That's not possible, unfortunately, because the values I get are sometimes so far out of range that they don't fit into the 3dlut input range, even if I use "video levels" for the 3dlut, which provides BTB + WTW headroom. E.g. consider a blue pixel with 400 nits, on a 400 nits display. For a legal RGB value (16-235) the blue pixel can be max 24 nits (on a 400 nits display). I'm not sure which RGB value the blue pixel would have with 400 nits, but it would be WAY *WAY* out of range.

Quote:
Originally Posted by e-t172 View Post
By the way, I see we've come full circle in this discussion, we're back to discussing the use of DeltaE which I already suggested
Yes, but you suggested it in a different way. Basically you suggested to replace the out-of-gamut color with color "x". This color "x" would be the one in-gamut color which has the lowest dE difference to the original out-of-gamut color. I'm not aware of a simple math formula to calculate "color x" from the original out-of-gamut color. I'm not even sure how to calculate it at all, other than using trial-and-error.

We're now discussing using dE just to find a decent compromise between losing saturation vs losing luminance, while keeping the hue angle constant. That significantly simplifies the math. Although I'm still not sure how to do the math right now.
Old 14th July 2016, 21:54   #38711  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by madshi View Post
That's not possible, unfortunately, because the values I get are sometimes so far out of range that they don't fit into the 3dlut input range, even if I use "video levels" for the 3dlut, which provides BTB + WTW headroom. E.g. consider a blue pixel with 400 nits, on a 400 nits display. For a legal RGB value (16-235) the blue pixel can be max 24 nits (on a 400 nits display). I'm not sure which RGB value the blue pixel would have with 400 nits, but it would be WAY *WAY* out of range.
Have you considered widening the input range of the 3DLUT to account for all possible input values? That would probably require a less-than-perfectly-granular 3DLUT (i.e. applying the 3DLUT would require interpolating between 3DLUT values), but that doesn't sound like a big deal - in fact, AFAIK color-managed software never uses a fully specified 3DLUT like madVR does; they use something like a 64x64x64 cube and interpolate the missing points. ICC profiles, for example, contain such lower-precision 3DLUTs (cLUTs in ICC parlance) so that people don't end up with ICC profiles that are 100MB in size.
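The size argument is easy to quantify. Assuming 16-bit entries per channel (an assumption, but a common choice), a fully specified LUT for 8-bit input compares to a 64-cube like this:

```python
def lut_bytes(n, channels=3, bytes_per_entry=2):
    """Raw size of an n x n x n LUT, assuming 16-bit entries per channel."""
    return n ** 3 * channels * bytes_per_entry

full_lut = lut_bytes(256)   # fully specified for 8-bit input
small_lut = lut_bytes(64)   # ICC-style cLUT
```

That works out to roughly 96 MiB for the fully specified table, in line with the ~100 MB figure above, versus about 1.5 MiB for the 64x64x64 cube.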

Quote:
Originally Posted by madshi View Post
We're now discussing using dE just to find a decent compromise between losing saturation vs losing luminance, while keeping the hue angle constant. That significantly simplifies the math.
Okay. Hopefully you'll be able to find some way to lift this constraint eventually, because as you said preserving the hue angle is sometimes counter-productive:

Quote:
Originally Posted by madshi View Post
Graeme makes the excellent point that desaturating a pixel too much doesn't really preserve hue. E.g. an almost white pixel doesn't really preserve any hue at all, although strictly the hue angle might still be perfect.

Last edited by e-t172; 14th July 2016 at 21:56.
Old 14th July 2016, 22:08   #38712  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by e-t172 View Post
Have you considered widening the input range of the 3DLUT to account for all possible input values? That will probably require having a less-than-perfectly-granular 3DLUT (i.e. applying the 3DLUT will require interpolating between 3DLUT values), but that doesn't sound like a big deal - in fact, AFAIK color-managed software never use a fully specified 3DLUT like madVR does, they use something like a 64x64x64 cube and interpolate the missing points.
The widening would have to be ultra extreme to handle all possible situations. I don't think that makes much sense. The widening would dramatically increase the needed 3dlut size, or alternatively noticeably decrease 3dlut accuracy. Furthermore the widening would also dramatically slow down offline calculation, due to all the possible extremely wide out-of-gamut situations. I expect calculating such a 3dlut offline would *at least* cost multiple seconds, after heavy optimizations. Maybe minutes, without heavy optimizations. It just doesn't make sense.
Old 14th July 2016, 22:28   #38713  |  Link
colinhunt
Registered User
 
Join Date: Dec 2002
Posts: 1,022
Quote:
Originally Posted by madshi View Post
Hmmmm... Does it work with madVR if you use DXVA copyback instead of native DXVA decoding?
No. Green screen instead of a black one.
Old 14th July 2016, 22:34   #38714  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by madshi View Post
The widening would have to be ultra extreme to handle all possible situations. I don't think that makes much sense. The widening would dramatically increase the needed 3dlut size, or alternatively noticeably decrease 3dlut accuracy. Furthermore the widening would also dramatically slow down offline calculation, due to all the possible extremely wide out-of-gamut situations. I expect calculating such a 3dlut offline would *at least* cost multiple seconds, after heavy optimizations. Maybe minutes, without heavy optimizations. It just doesn't make sense.
Actually, strike my last suggestion - I just realized that you were referring to RGB values, which are indeed problematic. What about generating a 3DLUT that maps YCbCr (or ICtCp) values? That would be just as valid, and I presume that wouldn't cause any range issues since your inputs are, almost by definition, in range. 3DLUTs can be applied in any color space, and in fact can even be used to convert between color spaces (e.g. YCbCr to RGB) in addition to doing color processing. So, for example, you could have a 3DLUT that takes ICtCp values as input, applies HDR processing such as gamut mapping or gamma ramps, other/various color processing stuff, and at the same time outputs the final RGB values.

I agree that still doesn't address the issue of calculation time though. However, I would still argue that decreasing 3DLUT accuracy would be a good tradeoff for solving that problem. After all, every color-managed application does this, and they don't suffer from accuracy problems. Argyll even has a tool, profcheck, that verifies the agreement between interpolated values from an ICC profile's cLUT and the original, full-precision measured values. If you can verify that the difference is less than 1 dE, it's unlikely anyone would be able to notice the difference.
Old 14th July 2016, 22:43   #38715  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Granted, doing it in YCbCr or ICtCp would be more reasonable. But I still think calculating the 3dlut offline would take much too long. And I don't really see the advantage right now: The math I'm using now works just fine. Except for this one specific problem, and I'm not sure right now how to solve it mathematically, either offline or online. Once I know how to solve it, it's probably possible to do it online, too. So why do it offline? It would cost many hours extra work, delay playback start, have less precision. I simply see no reason why I should do that.
Old 14th July 2016, 22:55   #38716  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Sure, this is all mostly speculation. It's a matter of processing budget, though. Say you have an algorithm or some math formula that, given an input color X, gives you the best output color Y to map to. Say this algorithm takes 4µs to complete for each color. There's no way to do this in real time for every pixel of every frame, but it would take only about 1 second to generate a 64x64x64 3DLUT using that algorithm.
Old 14th July 2016, 23:04   #38717  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Ok, but GPUs are massively parallel, while CPUs are not. If the algo takes 4µs per color per thread on the CPU, then with a quad core CPU it would actually only take 0.25 seconds to calculate a 64^3 3dlut. However, a GPU doesn't calculate just 4 colors at once, but hundreds or thousands at once (not sure how many). So even if the GPU needs 4µs per color, because of the massive parallel calculation, the algo would still run in real time without any problems. But I'd say 4µs on the CPU is very optimistic. It'd probably be much slower than that.
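The back-of-the-envelope numbers in this exchange hold up (madshi's 0.25 s is the same calculation, rounded):

```python
colors = 64 ** 3               # lattice points in a 64^3 3DLUT
t_single = colors * 4e-6       # at 4 us per color on one CPU thread
t_quad = t_single / 4          # ideal scaling across four cores
```

This gives 262,144 colors, about 1.05 s single-threaded and about 0.26 s on an ideal quad core.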
Old 14th July 2016, 23:11   #38718  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
We're now discussing using dE just to find a decent compromise between losing saturation vs losing luminance, while keeping the hue angle constant. That significantly simplifies the math. Although I'm still not sure how to do the math right now.
Which dE are you referring to by the way? From the ones defined on this page, CIE76 seems doable, CIE94 only with constant hue, and CIEDE2000 is just ridiculous.
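For reference, CIE76 is indeed the simple one: plain Euclidean distance in CIELAB.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors."""
    return math.dist(lab1, lab2)
```

CIE94 then divides the chroma and hue terms by chroma-dependent weights, which is still cheap in a shader; CIEDE2000 adds the rotation and compensation terms that make it, as noted, rather ridiculous for real-time use.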
Old 14th July 2016, 23:16   #38719  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Good question. I was thinking of using either CIEDE2000 with all the hue stuff removed (which should bring it near to CIE94, I think), or straight CIE94. Alternatively, there's a dE formula available in the first google link for the search term "de color difference ipt". In any case, my plan was to stick to constant hue.
Old 14th July 2016, 23:49   #38720  |  Link
Xaurus
Registered User
 
Join Date: Jun 2011
Posts: 288
Is anyone else having problems with a Pascal GPU (1070, 1080) not reaching the proper P-state when using madVR in MPC-HC or MPC-BE? I can't seem to make it happen. I've observed this with several programs, and GPU-Z confirms the clock speeds are reduced.
Setting "Prefer maximum performance" in the Nvidia control panel doesn't make any difference. I've tried using DDU and reinstalling the latest drivers, without any effect.
Could anyone else run the built-in graphics test in GPU-Z and watch the clock speeds, then compare them to running a full-screen video with madVR? Notably, the RAM speed is reduced.
__________________
SETUP: Win 10/MPC-HC/LAV/MadVR
HARDWARE: Fractal Design Node 804 | Xeon E3-1260L v5 | Supermicro X11SSZ-TLN4F | Samsung 2x8GB DDR4 ECC | Samsung 850 EVO 1TB | MSI GTX 1650 Super | EVGA G2 750

Tags
direct compute, dithering, error diffusion, madvr, ngu, nnedi3, quality, renderer, scaling, uhd upscaling, upsampling
