Old 13th November 2013, 21:39   #20881  |  Link
flashmozzg
Registered User
 
Join Date: May 2013
Posts: 77
Quote:
Originally Posted by kasper93 View Post
It's a known issue, and it's an ISR bug. Moreover, I can say that one day it will be fixed on the MPC-HC side. For now, if you want to use the ISR, you have two options: either disable the subtitle queue in the MPC-HC settings, or disable madVR's "fading" feature by setting both debanding options to the same preset.
I heard that MPC-HC is switching to XySubFilter in the next release.
Old 13th November 2013, 22:41   #20882  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
Originally Posted by kasper93 View Post
It's a known issue, and it's an ISR bug. Moreover, I can say that one day it will be fixed on the MPC-HC side. For now, if you want to use the ISR, you have two options: either disable the subtitle queue in the MPC-HC settings, or disable madVR's "fading" feature by setting both debanding options to the same preset.

If you want to use another subtitle renderer, you need to download the subs and load them with XySubFilter; there is no other option at this time.
Not denying that there's a bug in the ISR, but why should it affect madVR? Why would madVR's fade debanding feature affect the way it asks for the ISR's subs?
Old 13th November 2013, 22:56   #20883  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
Quote:
Originally Posted by huhn View Post
madVR output is changed by color profiles, and "disable GPU gamma ramps" disables this. In windowed mode the whole desktop is sRGB then.

And nearly all games are affected, too.
madVR's output is changed by color profiles only insofar as everything in Windows is; gamut correction does not apply. Windows assumes you have a display with an sRGB gamut, but it can load part of an ICC profile into the video card as a calibrated gamma curve for red, green, and blue (the videoLUT). All this does is correct the white point and grayscale. This is why a wide-gamut display is a bad thing unless you are on a Mac or using an application which supports color correction on its own (like madVR).

Games are unaffected only when they load their own LUT into the video card. I have no idea why some games do this, and I hate them for it.
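For illustration, here is a minimal sketch of the videoLUT mechanism described above, built on the Win32 SetDeviceGammaRamp API (the fixed 2.2 gamma and the standalone loader are assumptions for the example, not what Windows actually derives from an ICC profile):

Code:
#include <windows.h>
#include <cmath>

int main() {
    // Three 256-entry ramps (red, green, blue) - this is the videoLUT.
    // It can correct white point and grayscale per channel, but it cannot
    // remap gamut; that's why gamut correction needs a 3DLUT instead.
    WORD ramp[3][256];
    const double gamma = 2.2; // assumed curve; a real profile stores
                              // separately measured curves per channel
    for (int i = 0; i < 256; ++i) {
        WORD w = (WORD)(std::pow(i / 255.0, 1.0 / gamma) * 65535.0 + 0.5);
        ramp[0][i] = ramp[1][i] = ramp[2][i] = w;
    }
    HDC dc = GetDC(nullptr);                // DC for the primary display
    BOOL ok = SetDeviceGammaRamp(dc, ramp); // load the LUT into the video card
    ReleaseDC(nullptr, dc);
    return ok ? 0 : 1;
}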
Old 13th November 2013, 23:14   #20884  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by huhn View Post
madVR output is changed by color profiles, and "disable GPU gamma ramps" disables this. In windowed mode the whole desktop is sRGB then.

And nearly all games are affected, too.
They are affected by the videoLUT calibration, but not by the rest of the profile. It's pretty easy to test this with a profile that simply inverts all the colors - you'll see that madVR doesn't care, but EVR with color management enabled does. For proper color management in madVR you need a 3DLUT.
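To make the distinction concrete, here is a rough sketch of what a 3DLUT is (nearest-neighbour lookup for brevity, and illustrative names - not madVR's actual format, which also interpolates between grid points):

Code:
#include <vector>

struct RGB { float r, g, b; };

// A 3DLUT tabulates the full RGB->RGB mapping on a coarse grid, so unlike
// the per-channel videoLUT it can express gamut remapping.
struct Lut3D {
    int size;               // grid points per axis, e.g. 65
    std::vector<RGB> table; // size^3 entries, red index varying fastest

    RGB apply(RGB c) const {
        auto idx = [&](float v) {
            int i = (int)(v * (size - 1) + 0.5f); // nearest grid point
            return i < 0 ? 0 : (i >= size ? size - 1 : i);
        };
        return table[(idx(c.b) * size + idx(c.g)) * size + idx(c.r)];
    }
};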

Quote:
Originally Posted by Asmodian View Post
Games are unaffected only when they load their own LUT into the video card. I have no idea why some games do this, and I hate them for it.
Yeah, it's annoying. I use Monitor Calibration Wizard to keep my LUT loaded (games that do reset the LUT generally do it only once, or only when you alt-tab into them), but what I really want is an overlay that loads a madVR 3DLUT for all applications. They all assume something like sRGB anyway, so why not?

Last edited by Ver Greeneyes; 13th November 2013 at 23:20.
Old 13th November 2013, 23:32   #20885  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by 6233638 View Post
I actually did some testing on this, and with low-resolution videos and high-strength debanding, it was actually beneficial, as far as artifacts are concerned, to upscale externally and then have madVR process the image. That way you have pixel-sized "dither/noise" rather than giant upscaled noise. (I know you said you don't dither, but it does something like dithering to smooth out those gradients.)
Oh well, the algorithm doesn't really use anything like dithering. It basically just takes the average of 4 surrounding pixels at a random distance. But I guess if the debanding correction amount is high enough, it could actually end up looking like dithering.
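A rough sketch of that idea (the names, RNG, and threshold test are guesses for illustration, not madVR's actual shader code):

Code:
#include <algorithm>
#include <cmath>
#include <cstdlib>

// Average 4 surrounding pixels at a random distance; only use the average
// when it is close to the original value, so real detail is left alone.
float debandPixel(const float* img, int w, int h, int x, int y,
                  int maxDist, float threshold) {
    auto at = [&](int px, int py) {
        return img[std::clamp(py, 0, h - 1) * w + std::clamp(px, 0, w - 1)];
    };
    int d = 1 + std::rand() % maxDist;   // random sampling distance
    float avg = (at(x - d, y) + at(x + d, y) +
                 at(x, y - d) + at(x, y + d)) * 0.25f;
    float orig = at(x, y);
    // small deviations are treated as banding, large ones as real detail
    return std::fabs(avg - orig) < threshold ? avg : orig;
}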

Quote:
Originally Posted by 6233638 View Post
However, debanding was considerably less effective after doing this - it needed much higher settings to work, and some things just weren't seen as gradients.
Well, that's to be expected, I guess, but some adjustments to the algorithm to take the zoom factor into account should be possible without increasing detail loss.

Do you have a couple of low resolution samples where debanding in the source resolution is noticeably worse than debanding after external scaling? Maybe that could convince me to look into this.

Quote:
Originally Posted by DarkSpace View Post
How about just "scaling" the debanding values from video size to output size and applying debanding after scaling?
Can the debanding range (what pixels are looked at) be scaled for x and y dimensions separately? Can the AngleBoost algorithm be adapted to this? How about the banding/detail decision?
AngleBoost should not be affected, nor any of the other thresholds. However, after upscaling, the max random distance of the 4 surrounding pixels should be increased.
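In code terms, presumably something along these lines (a hypothetical helper with made-up names, not madVR's code), which would also allow separate x and y factors for anamorphic content:

Code:
// Grow the maximum random sampling distance with the zoom factor, per axis,
// so the deband radius still covers the same source-pixel span after scaling.
int scaledMaxDist(int baseMaxDist, int videoDim, int outputDim) {
    return baseMaxDist * outputDim / videoDim; // e.g. base 2 -> 4 at 2x zoom
}

// separate factors for x and y would handle anamorphic content (e.g. DVDs):
// int maxDistX = scaledMaxDist(base, videoWidth,  outputWidth);
// int maxDistY = scaledMaxDist(base, videoHeight, outputHeight);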

Quote:
Originally Posted by turbojet View Post
I updated the bug tracker with the crash report. Switching madvr to debug was the only way to get the log.
No crash report from madVR? So the crash is in potplayer and not in madVR? That might be hard to figure out, but I'll have a look when I find some time.

Quote:
Originally Posted by turbojet View Post
It doesn't appear noticeably faster for me.
What does speed have to do with anything? The topic at hand was totally unrelated to that.

Quote:
Originally Posted by Gagorian View Post
I'm considering getting the R9 270 GPU, does anyone have any other suggestions?
This question is asked every couple of pages.

Quote:
Originally Posted by Gagorian View Post
Awaiting the convergence adjustment feature like a kid waiting for Christmas
Haha. Yes, I'm looking forward to that myself. And to some other features I'm planning which should benefit my CIH setup.

Quote:
Originally Posted by sneaker_ger View Post
Not denying that there's a bug in the ISR, but why should it affect madVR? Why would madVR's fade debanding feature affect the way it asks for the ISR's subs?
The same reason why I have to hide the MPC-HC "pause" OSD message: if I ask the ISR to rerender an older frame, the ISR misbehaves. And the "fade in/out" detection of the debanding algorithm does require the first few frames of the fade in/out to be rerendered. There are different workarounds available for this problem; see the other posts.
Old 13th November 2013, 23:50   #20886  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
So you don't do fade detection before the rendering?
Old 13th November 2013, 23:57   #20887  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by Gagorian View Post
Awaiting the convergence adjustment feature
FWIW, this PS script can do it, using 6 different zones at that.

Last edited by leeperry; 14th November 2013 at 00:01.
Old 13th November 2013, 23:58   #20888  |  Link
turbojet
Registered User
 
Join Date: May 2008
Posts: 1,840
Quote:
Originally Posted by madshi
Quote:
Originally Posted by turbojet
I updated the bug tracker with the crash report. Switching madvr to debug was the only way to get the log.
No crash report from madVR? So the crash is in potplayer and not in madVR? That might be hard to figure out, but I'll have a look when I find some time.
It's a madVR crash, but it doesn't create a crash report on the desktop when I press 'close application'. Check your PM for screenshots of the crash.

Quote:
Originally Posted by madshi
Quote:
Originally Posted by turbojet
It doesn't appear noticeably faster for me.
What does speed have to do with anything? The topic at hand was totally unrelated to that.
It would be really nice if this forum nested previous messages without a bunch of work, but I thought it was asking for opinions on the last part of http://forum.doom9.org/showthread.ph...21#post1652721
__________________
PC: FX-8320 GTS250 HTPC: G1610 GTX650
PotPlayer/MPC-BE LAVFilters MadVR-Bicubic75AR/Lanczos4AR/Lanczos4AR LumaSharpen -Strength0.9-Pattern3-Clamp0.1-OffsetBias2.0

Last edited by turbojet; 14th November 2013 at 00:25.
Old 14th November 2013, 00:23   #20889  |  Link
e-t172
Registered User
 
Join Date: Jan 2008
Posts: 589
Quote:
Originally Posted by Ver Greeneyes View Post
Yeah, it's annoying. I use Monitor Calibration Wizard to keep my LUT loaded (games that do reset the LUT generally do it only once, or only when you alt-tab into them)
Personally, I use a scheduled task that runs xcalib every minute. That does the trick. There are games (e.g. Crysis) that reset the LUTs at every level change, in which case my scheduled task resets it back to the real calibration less than a minute later.
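For reference, such a task can be created with something like this one-liner (the task name and paths are placeholders; xcalib takes the profile to load as its argument):

Code:
schtasks /create /tn "ReloadCalibration" /sc minute /mo 1 /tr "C:\tools\xcalib.exe C:\profiles\display.icc"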

Quote:
Originally Posted by Ver Greeneyes View Post
but what I really want is an overlay that loads a madVR 3DLUT for all applications They all assume something like sRGB anyway, so why not?
Yes. Yes please. Unfortunately we all know that's not gonna happen; it could be implemented in the driver, but I don't think NVidia/ATI care. Theoretically it's not that hard for application developers to Do The Right Thing themselves; there are lots of libraries available to do ICC transformations (such as the easy-to-use LittleCMS), and there's even a Windows API. It would be great if the developers of the main game engines (e.g. Unreal Engine, CryEngine, Source, Frostbite...) could generate a transformation using these libraries and then store the 3DLUT on the GPU for real-time correction of the game colors (like madVR), but good luck convincing them to spend time implementing that.
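As a sketch of how little code the LittleCMS route takes (the profile filename is a placeholder; a game engine would bake the result into a 3D texture once and apply it in a shader, rather than transform per pixel on the CPU):

Code:
#include <lcms2.h>

int main() {
    cmsHPROFILE src = cmsCreate_sRGBProfile();  // what the game assumes
    cmsHPROFILE dst = cmsOpenProfileFromFile("display.icc", "r");
    cmsHTRANSFORM xform = cmsCreateTransform(src, TYPE_RGB_8,
                                             dst, TYPE_RGB_8,
                                             INTENT_PERCEPTUAL, 0);
    unsigned char in[3]  = {255, 0, 0};  // sRGB red
    unsigned char out[3];
    cmsDoTransform(xform, in, out, 1);   // -> device RGB for this display

    cmsDeleteTransform(xform);
    cmsCloseProfile(src);
    cmsCloseProfile(dst);
    return 0;
}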

Last edited by e-t172; 14th November 2013 at 00:25.
Old 14th November 2013, 00:33   #20890  |  Link
mzso
Registered User
 
Join Date: Oct 2009
Posts: 930
Quote:
Originally Posted by madshi View Post
I'm not aware of such a problem. The Potplayer developer didn't talk to me about this.
He actually talks to anyone without being asked?
Anyway, he wasn't particularly helpful. He just told me to "Use old player~~".
Old 14th November 2013, 02:38   #20891  |  Link
andybkma
Registered User
 
Join Date: Sep 2006
Posts: 212
Quote:
Originally Posted by madshi View Post
Does reducing the number of windowed mode backbuffers and/or the number of exclusive mode prepresented frames help in any way?
madshi, I have since narrowed down the culprit to the Avisynth+ dll, which I started using for real-time script sharpening right around the time I started using the new debanding madVR build. I replaced the Avisynth+ dll with the original 2.6 Alpha 5 dll, and the problem hasn't recurred in two days. Guess I will let the Avisynth+ thread know about this problem. Cheers
Old 14th November 2013, 03:09   #20892  |  Link
DarkSpace
Registered User
 
Join Date: Oct 2011
Posts: 204
Quote:
Originally Posted by madshi View Post
Quote:
Originally Posted by DarkSpace View Post
How about just "scaling" the debanding values from video size to output size and applying debanding after scaling?
Can the debanding range (what pixels are looked at) be scaled for x and y dimensions separately? Can the AngleBoost algorithm be adapted to this? How about the banding/detail decision?
AngleBoost should not be affected, nor any of the other thresholds. However, after upscaling, the max random distance of the 4 surrounding pixels should be increased.
I see. My main concern was content with non-1:1 scaling (e.g. DVDs). I must admit, though, that after reading your answer, I realized that the boost itself should indeed remain unchanged, as only the angles themselves are changing in that case.
Old 14th November 2013, 06:42   #20893  |  Link
Ver Greeneyes
Registered User
 
Join Date: May 2012
Posts: 447
Quote:
Originally Posted by e-t172 View Post
Yes. Yes please. Unfortunately we all know that's not gonna happen; it could be implemented in the driver, but I don't think NVidia/ATI care. Theoretically it's not that hard for application developers to Do The Right Thing themselves; there are lots of libraries available to do ICC transformations (such as the easy-to-use LittleCMS), and there's even a Windows API. It would be great if the developers of the main game engines (e.g. Unreal Engine, CryEngine, Source, Frostbite...) could generate a transformation using these libraries and then store the 3DLUT on the GPU for real-time correction of the game colors (like madVR), but good luck convincing them to spend time implementing that.
I was thinking of something like a d3d9.dll hook for Direct3D 9 games and maybe a DWM hack for windowed applications. Theoretically it shouldn't even be that hard! I haven't gotten around to trying it myself, though, so I shouldn't talk.
Old 14th November 2013, 09:49   #20894  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by sneaker_ger View Post
So you don't do fade detection before the rendering?
I do, but I only consider a fade a fade when it lasts at least 5 frames. So it's only after the 5th frame of a fade that I know that this really is a fade. So in that moment when that 5th frame runs through the fade detection algorithm, the previous 4 frames are usually already rendered. madVR is heavily multi-threaded. Everything works in parallel. The fade detection can't introduce a 5 frame delay. So it lets every frame pass until a fade is detected. At that point in time it has to go back and rerender the last 4 frames.

Of course it's not ideal to have to go back to rerender past frames. But it's the best solution I came up with. Every other solution had worse disadvantages than this one.
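A simplified sketch of that scheme (the structure and the luma-delta heuristic are guesses for illustration; the real detector is certainly more involved):

Code:
#include <cmath>
#include <deque>
#include <vector>

struct Frame { int index; double avgLuma; };

class FadeDetector {
    std::deque<Frame> recent; // the last 4 frames, already rendered normally
    int fadeRun = 0;          // consecutive frames that look like a fade step
public:
    // Returns indices of frames to rerender with fade settings; frames are
    // never held back, so no delay is introduced into the render queue.
    std::vector<int> onFrame(const Frame& f) {
        bool fadeStep = !recent.empty() &&
            std::fabs(f.avgLuma - recent.back().avgLuma) > 0.01; // assumed
        fadeRun = fadeStep ? fadeRun + 1 : 0;
        std::vector<int> rerender;
        if (fadeRun == 5)                 // only the 5th frame confirms a fade:
            for (const Frame& p : recent) // go back and redo the previous 4
                rerender.push_back(p.index);
        recent.push_back(f);
        if (recent.size() > 4) recent.pop_front();
        return rerender;
    }
};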

Quote:
Originally Posted by leeperry View Post
FWIW This PS script can do it, using 6 different zones at that.
Yes, but with rather bad quality. On a quick check, I think that if the 6 zones differ noticeably with that PS script, you might actually see the zone boundaries. Furthermore, that script doesn't work in linear light, so you'll get discoloration in pixel-on/off test patterns. Finally, the convergence correction seems to be done with bilinear filtering, which will soften the image. madVR's convergence correction should have none of these problems (I hope).

Quote:
Originally Posted by turbojet View Post
It would be really nice if this forum nested previous messages without a bunch of work, but I thought it was asking for opinions on the last part of http://forum.doom9.org/showthread.ph...21#post1652721
So when you say "It doesn't appear noticeably faster for me", what do you mean exactly? That the fade doesn't appear faster for you with Werewolfy's video sample when using "medium" instead of "high" debanding for fade ins/outs? Or are you talking about the madVR rendering times? Or what?

The feedback I was hoping to get is whether I should use "medium" or "high" as a default option for debanding during fade ins/outs.

Quote:
Originally Posted by mzso View Post
He actually talks to anyone without being asked?
He did contact me in the past, at least once, maybe twice. I don't remember.

Last edited by madshi; 14th November 2013 at 09:57.
Old 14th November 2013, 10:48   #20895  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
Originally Posted by madshi View Post
The fade detection can't introduce a 5 frame delay.
Why not? Or is this because of OSD messages or similar yet again? Personally I think fluid playback should have the highest priority.
Old 14th November 2013, 11:01   #20896  |  Link
ryrynz
Registered User
 
Join Date: Mar 2009
Posts: 3,646
Quote:
Originally Posted by sneaker_ger View Post
Why not?
Methinks it would be because of the heavily threaded nature madshi mentioned.

Quote:
Originally Posted by sneaker_ger View Post
Personally I think fluid playback should have the highest priority.
From the sounds of it, this core madVR feature (fluid playback) is not compromised by having to rerender four frames.
Old 14th November 2013, 11:04   #20897  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
Originally Posted by ryrynz View Post
Me thinks it would be because of the heavily threaded nature madshi mentioned.
Another buffer does not hurt multi-threading. On the contrary, the current solution hurts performance, as evidenced by user reports and the nature of the solution. It will only hurt multi-threading under additional requirements I'm currently unaware of. Madshi says he "can't" have another delay, but the multi-threading answer does not explain that.

Quote:
Originally Posted by ryrynz View Post
From the sounds of it, this core madVR feature (fluid playback) is not compromised by having to rerender four frames.
It's not compromised if your system is fast enough.

Last edited by sneaker_ger; 14th November 2013 at 11:06.
Old 14th November 2013, 11:41   #20898  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by sneaker_ger View Post
Why not? Or is this because of OSD messages or similar yet again?
It has nothing to do with the OSD. Adding a 5-frame delay would mean that the rendering queue would never reach its max. E.g. with a rendering queue of 16 frames, if you have a 5-frame delay, the rendering queue would never fill beyond 11/16. I think we both agree this would be a bad solution?

What I could do is introduce a separate queue just for fade detection. So the fade queue would then be 5 frames longer than the rendering queue. E.g. the fade queue would then be 21 frames long and the render queue would be 16 frames long. In that case the 5 frame delay would still allow the render queue to fill completely. But this design would require the upload queue (and the decoder queue) to be 5 frames longer, too, which would (together with the new fade queue) increase the GPU RAM consumption.

With the current solution the render queue is always nicely filled, except in the very moment when the fifth frame of a fade is detected. In that moment the render queue temporarily goes down to 11/16, but fills up quickly again. All this without any increased GPU RAM consumption.

I'm not sure if it makes sense to discuss this in detail. Trust me, I've thought all possibilities through and chosen the best possible implementation. The one and only problem with this implementation is that it causes problems with the ISR. But that's really a bug in the ISR, and I'm not willing to use an inferior render design just to work around an ISR bug. If you insist on using the ISR, either disable the fade detection, or disable the ISR queue, or ask the MPC-HC/BE devs to fix the ISR bug. Or make the switch to XySubFilter. Many options for you.

Quote:
Originally Posted by sneaker_ger View Post
Personally I think fluid playback should have the highest priority.
I've chosen the solution which gives the highest queue saturation with the lowest GPU RAM consumption, which in the end aims at the most reliable/fluid playback.

Quote:
Originally Posted by sneaker_ger View Post
On the contrary, the current solution hurts performance, as evidenced by user reports
Huh!? Where did you get *that* impression from? If you're talking about the reported frame drops, that's not a performance problem, but a simple logic bug in the current build, which is already fixed in my sources, and which btw never introduced any visible stuttering. The simple bug was that a smooth motion blended frame which should have been deleted by the fade detection wasn't deleted. So there suddenly were two blended frames for the same situation. So one was dropped, without causing a visual problem. So the only real "problem" was that the OSD reported a dropped frame. But the dropped frame was superfluous, anyway.

Quote:
Originally Posted by sneaker_ger View Post
It's not compromised if your system is fast enough.
The speed of the system has nothing to do with it.
Old 14th November 2013, 12:00   #20899  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
I was indeed thinking about just adding an additional queue for the fade and, in turn, increasing its length. I didn't think about the RAM, and thought you just didn't feel like adding yet another queue or didn't want to make it too long. I wasn't affected, just wondering.

Quote:
Originally Posted by madshi View Post
Huh!? Where did you get *that* impression from? If you're talking about the reported frame drops, that's not a performance problem, but a simple logic bug in the current build, which is already fixed in my sources, and which btw never introduced any visible stuttering. The simple bug was that a smooth motion blended frame which should have been deleted by the fade detection wasn't deleted. So there suddenly were two blended frames for the same situation. So one was dropped, without causing a visual problem. So the only real "problem" was that the OSD reported a dropped frame. But the dropped frame was superfluous, anyway.
Yes, I mixed that up.

Quote:
Originally Posted by madshi View Post
I've chosen the solution which would give the highest queue saturation with the lowest GPU RAM consumption. Which in the end aims at the most reliable/fluid playback.
Well, I thought queues are there to ensure fluid playback, i.e. to provide a safety net/buffer for avoiding dropped frames. Resetting a queue fully or partly, or reprocessing frames, decreases that safety and results in more dropouts on average, even if most users aren't affected and it can easily be worked around or turned off. Maybe I'm missing something.
Old 14th November 2013, 12:36   #20900  |  Link
Vyral
Registered User
 
Join Date: Oct 2012
Posts: 70
Hi everyone,
I currently use the sRGB profile on my monitor and the default settings in Windows Color Management (no ICC profile). I've set "this display is already calibrated" (while it's not) with BT.709 and a pure power curve of 2.20 in the calibration tab. However, some calibration guides advise setting the pure power curve to 2.40 with sRGB. Is that correct or not?
Another question: should I keep my current settings, or switch to "disable calibration controls for this display" in the calibration tab and enable gamma processing in the color & gamma tab?
Thanks for your help.
__________________
iiyama prolite xb2483hsu 1080p60 Gamma=2.25 - Intel Core i3-2100 3.10GHz - AMD Radeon HD 6850, RGB 4:4:4 Full range - MPC-HC + XYSubFilter + madVR

Last edited by Vyral; 14th November 2013 at 12:38.