Old 4th April 2011, 02:09   #16801  |  Link
Rain1
Registered User
 
 
Join Date: Nov 2008
Posts: 36
I set Auto-zoom: 100% in Playback and it works well for videos whose resolution is smaller than my monitor's 1920x1200 (they play at 100% of their native resolution). With full-HD videos, however, MPC-HC plays in a maximized window and still shows the border. Isn't it supposed to play at full screen, without any border at all?

I know there's a full-screen option, but that setting overrides the Auto-zoom: 100% setting and always plays videos in full screen, regardless of their resolution.

Is there any way to make the player switch to full screen automatically for full-HD clips when Auto-zoom is set to 100%?

Thanks!
__________________
» Core i7 920 OC @ 4Ghz, 1.28v | Asus P6T | 6GB DDR3 @ 1531Mhz, 7-7-7-24-1T | GTX260 | HP LP2475w 24" & X-Rite OPTIX-XR
» Win7 Ultimate SP1 x64 | MPC-HC latest | LAV filters latest | madVR latest

Last edited by Rain1; 4th April 2011 at 02:11.
Old 4th April 2011, 03:35   #16802  |  Link
JanWillem32
Registered User
 
 
Join Date: Oct 2010
Location: The Netherlands
Posts: 1,083
@Rain1: Only the "D3D Fullscreen Mode" option will make videos render in full screen on startup, as far as I know.

@Ger: Thank you for testing. I already thought my changes to the bicubic resizer wouldn't make much of a difference (the output assembly didn't change much). Bicubic resizing in a single pass with the current matrix method is sub-optimal anyway; I hope you have more luck when the other two-pass resizing modes are added. Luckily, native GPU bilinear filtering (although blurry) is the default.
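
For reference, the resizer options refer to the standard one-parameter (Keys) cubic kernel. A minimal CPU-side sketch of that kernel, not the actual resizer code:

Code:
// One-parameter (Keys) bicubic kernel; A is the user-selectable parameter
// (such as the A=-1.00 option). Illustrative sketch only.
#include <cmath>

double BicubicWeight(double x, double A)
{
    x = std::fabs(x);
    if (x <= 1.0)
        return ((A + 2.0) * x - (A + 3.0)) * x * x + 1.0; // center segment
    if (x < 2.0)
        return (((x - 5.0) * x + 8.0) * x - 4.0) * A;     // outer segment
    return 0.0; // the kernel spans 4 taps (radius 2)
}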

I do have some remarks, of course. (I'm in a technical mood/mode today.)

Chroma up-sampling is something best left to a custom mixer. Native surface blitting from Y'CbCr to RGB on GPUs has inconsistent results. Newer nVidia models force a bilinear filter when doubling the dimensions of the chroma data, and it can't be disabled. Older nVidia, ATi and Intel models don't filter at all, so the chroma is resized with the nearest-neighbor method. Neither forcing bilinear filtering nor skipping filtering is a good solution. However, I can understand that adding a GUI option in the video section of the drivers, offering a range of chroma resizing filters from unfiltered to something like Lanczos256, would be rather much to ask of GPU driver developers.
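
To illustrate the two behaviors with a CPU-side reference sketch (illustrative only, ignoring exact 4:2:0 chroma siting; on the GPU this happens during the blit):

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>

// Nearest neighbor: each chroma sample is simply replicated 2x2.
uint8_t ChromaNearest(const uint8_t* c, int stride, int x, int y)
{
    return c[(y / 2) * stride + (x / 2)];
}

// Bilinear: blend the four chroma samples surrounding the target pixel.
uint8_t ChromaBilinear(const uint8_t* c, int stride, int cw, int ch, int x, int y)
{
    // map the destination pixel center into chroma-plane coordinates
    float fx = (x + 0.5f) / 2.0f - 0.5f, fy = (y + 0.5f) / 2.0f - 0.5f;
    int   x0 = (int)std::floor(fx),      y0 = (int)std::floor(fy);
    float ax = fx - (float)x0,           ay = fy - (float)y0;
    auto at = [&](int cx, int cy) {      // clamp reads at the plane edges
        cx = std::max(0, std::min(cx, cw - 1));
        cy = std::max(0, std::min(cy, ch - 1));
        return (float)c[cy * stride + cx];
    };
    float v = (1 - ax) * ((1 - ay) * at(x0, y0)     + ay * at(x0, y0 + 1))
            +      ax  * ((1 - ay) * at(x0 + 1, y0) + ay * at(x0 + 1, y0 + 1));
    return (uint8_t)(v + 0.5f);
}
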
The only thing I can do at the moment is integrate the shaders I've written in a similar way to the color management and dithering functions, making in effect half a mixer. That would work okay as a user-selectable option, but not as an automatic one.
The other (preferred) option would be to discard the EVR and VMR-9 mixer and give the shared renderer part of VMR-9 (renderless) and EVR CP a custom mixer. The hard parts are that I haven't seen any mixers with a GPL license around, that I can't just write one on my own without months' worth of lessons in media handling in my spare time, and lastly, that I wouldn't know a good name for the renderer, as "(renderless)" and "CP" are all that would remain of the original names.

I'm the one who pushed conversion to RGB32 or RGB24 to the very bottom of the mixer input priority list. All the current decoders are natively NV12, YV12, I420 or YUY2 anyway. With conversion to RGB32 or RGB24, after the conversion from Y'CbCr to RGB the resulting floating-point values are rounded to 8-bit integers. With some luck there's some dithering involved (which can't be compared to the internal 32² and 128² dither maps and randomized dithering methods). Either way, this kind of conversion is quite lossy; it's completely unacceptable for any renderer chain to be designed like that.
Conversion to RGB32 or RGB24 before the mixer is just a compatibility option for old GPUs that can't provide any suitable Y'CbCr surfaces for the VMR-9 or EVR mixer, which for some reason demand such support.
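
A toy example of that last rounding step, and of how dithering spreads the error (a sketch with a small 4x4 Bayer matrix; the internal 32² and 128² maps and randomized dithering are of course more elaborate):

Code:
#include <cstdint>

static const float kBayer4x4[4][4] = { // classic 4x4 ordered-dither matrix
    {  0.0f,  8.0f,  2.0f, 10.0f },
    { 12.0f,  4.0f, 14.0f,  6.0f },
    {  3.0f, 11.0f,  1.0f,  9.0f },
    { 15.0f,  7.0f, 13.0f,  5.0f }
};

uint8_t QuantizeRound(float v)          // plain rounding: causes banding
{
    float s = v * 255.0f + 0.5f;
    return (uint8_t)(s < 0.0f ? 0.0f : s > 255.0f ? 255.0f : s);
}

uint8_t QuantizeDithered(float v, int x, int y) // ordered dither: trades the
{                                               // rounding error for fine noise
    float t = (kBayer4x4[y & 3][x & 3] + 0.5f) / 16.0f; // threshold in (0,1)
    float s = v * 255.0f + t;                           // truncation dithers
    return (uint8_t)(s < 0.0f ? 0.0f : s > 255.0f ? 255.0f : s);
}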

Absolutely all of the current internal shaders can be replaced by a set I've written (plus a selection of a few more). I highly doubt anyone will deny that the versions I've written are superior; I'll happily defend that statement if someone wants to challenge me. The "YV12" shader was one of the first I re-wrote for MPC-HC (last September). See the OP and the first few posts of the shaders thread for reference.

Advanced deinterlacing methods are not exclusive to ATi hardware; the nVidia and (recent) Intel solutions include them, too. Software deinterlacing has included several advanced methods for more than a decade.
It seems current GPUs just prefer NV12 input for deinterlacing. The deinterlacing capabilities of a GPU can be checked with the DXVAChecker program.
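
(For those who want to check programmatically rather than with DXVAChecker: a rough sketch of this kind of query, assuming an existing D3D9 device, with all error handling omitted; link against dxva2.lib.)

Code:
#include <d3d9.h>
#include <dxva2api.h>

void ListDeinterlacers(IDirect3DDevice9* pDev)
{
    IDirectXVideoProcessorService* pService = nullptr;
    DXVA2CreateVideoService(pDev, IID_IDirectXVideoProcessorService,
                            reinterpret_cast<void**>(&pService));

    DXVA2_VideoDesc desc = {};            // describe interlaced 1080i NV12
    desc.SampleWidth  = 1920;
    desc.SampleHeight = 1080;
    desc.Format = static_cast<D3DFORMAT>(MAKEFOURCC('N', 'V', '1', '2'));
    desc.SampleFormat.SampleFormat = DXVA2_SampleFieldInterleavedEvenFirst;

    UINT  count  = 0;
    GUID* pGuids = nullptr;
    pService->GetVideoProcessorDeviceGuids(&desc, &count, &pGuids);
    // each returned GUID is one processor device (bob, adaptive,
    // motion/vector adaptive, ...), comparable to the DXVAChecker listing
    CoTaskMemFree(pGuids);
    pService->Release();
}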
__________________
development folder, containing MPC-HC experimental tester builds, pixel shaders and more: http://www.mediafire.com/?xwsoo403c53hv
Old 4th April 2011, 06:57   #16803  |  Link
oddball
Registered User
 
Join Date: Jan 2002
Posts: 1,264
I'm quite happy with things the way they are now. I can now use the D3D tickbox and get madVR with exclusive mode on a secondary display. All menus and controls appear to function correctly. The only snag was that my GPU is not fast enough for subtitle-heavy 720p video (desktop res and power of 2). I wanted to use softcubic 100 on chroma and spline 4-tap on luma, both up and down, but it was dropping too many frames. I've ended up overclocking my G210 and things are a LOT better (see the link below for the overclock)!

http://www.techpowerup.com/gpuz/79mv/

I'll probably upgrade my card at some point though, as it gets quite hot since it's passively cooled. Max temp after a 25-minute episode was 82 °C, but the max operating temp is 105 °C, so I'm not too worried.
Old 4th April 2011, 07:13   #16804  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by JanWillem32 View Post
Native surface blitting from Y'CbCr to RGB on GPUs has inconsistent results. Newer nVidia models force a bilinear filter when doubling the dimensions of the chroma data, and it can't be disabled. Older nVidia, ATi and Intel models don't filter at all, so the chroma is resized with the nearest-neighbor method.
I've recently asked madVR users to run tests for me on their GPUs. The result is that all DX9 NVidia GPUs that have been tested have forced some kind of chroma upsampling with no way to turn it off. On the other hand, with ATI all GPUs have used nearest neighbor when using StretchRect with D3DTEXF_NONE. You can however make ATI GPUs use bilinear chroma upsampling by using D3DTEXF_LINEAR in the NV12 -> RGB StretchRect call.
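
The call in question, for reference (a bare sketch, error handling omitted):

Code:
#include <d3d9.h>

void BlitNV12ToRGB(IDirect3DDevice9*  pDev,
                   IDirect3DSurface9* pNV12, // offscreen surface, fourcc NV12
                   IDirect3DSurface9* pRGB)  // render target, D3DFMT_X8R8G8B8
{
    // D3DTEXF_NONE   -> nearest neighbor chroma on ATI
    // D3DTEXF_LINEAR -> bilinear chroma on ATI
    // DX9 NVidia drivers force their own filtering either way
    pDev->StretchRect(pNV12, nullptr, pRGB, nullptr, D3DTEXF_LINEAR);
}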

Quote:
Originally Posted by oddball View Post
I'm quite happy with things the way they are now. I can now use the D3D tickbox and get madVR with exclusive mode on a secondary display. All menus and controls appear to function correctly.
Good to hear, thanks for feedback.

Last edited by madshi; 4th April 2011 at 07:15.
Old 4th April 2011, 07:38   #16805  |  Link
Ger
Registered User
 
Join Date: May 2007
Location: Norway
Posts: 192
Quote:
Originally Posted by JanWillem32 View Post
@Ger: Thank you for testing. I already thought my changes to the bicubic resizer wouldn't make much of a difference (the output assembly didn't change much). Bicubic resizing in a single pass with the current matrix method is sub-optimal anyway; I hope you have more luck when the other two-pass resizing modes are added.
I'm happy as long as A=-1.00 works. I don't remember seeing much of a difference between that and the other two when I was using Nvidia. I don't remember anyone else reporting this issue either, so I don't think it's widespread; feel free not to worry about it and prioritize other stuff.

Quote:
Originally Posted by JanWillem32 View Post
I do have some remarks, of course. (I'm in a technical mood/mode today.)
Thanks for trying. Don't get your hopes up that I will actually understand it, though.

Quote:
Originally Posted by JanWillem32 View Post
Chroma up-sampling is something best left to a custom mixer. Native surface blitting from Y'CbCr to RGB on GPUs has inconsistent results. Newer nVidia models force a bilinear filter when doubling the dimensions of the chroma data, and it can't be disabled. Older nVidia, ATi and Intel models don't filter at all, so the chroma is resized with the nearest-neighbor method. Neither forcing bilinear filtering nor skipping filtering is a good solution. However, I can understand that adding a GUI option in the video section of the drivers, offering a range of chroma resizing filters from unfiltered to something like Lanczos256, would be rather much to ask of GPU driver developers.
The only thing I can do at the moment is integrate the shaders I've written in a similar way to the color management and dithering functions, making in effect half a mixer. That would work okay as a user-selectable option, but not as an automatic one.
The other (preferred) option would be to discard the EVR and VMR-9 mixer and give the shared renderer part of VMR-9 (renderless) and EVR CP a custom mixer. The hard parts are that I haven't seen any mixers with a GPL license around, that I can't just write one on my own without months' worth of lessons in media handling in my spare time, and lastly, that I wouldn't know a good name for the renderer, as "(renderless)" and "CP" are all that would remain of the original names.

I'm the one who pushed conversion to RGB32 or RGB24 to the very bottom of the mixer input priority list. All the current decoders are natively NV12, YV12, I420 or YUY2 anyway. With conversion to RGB32 or RGB24, after the conversion from Y'CbCr to RGB the resulting floating-point values are rounded to 8-bit integers. With some luck there's some dithering involved (which can't be compared to the internal 32² and 128² dither maps and randomized dithering methods). Either way, this kind of conversion is quite lossy; it's completely unacceptable for any renderer chain to be designed like that.
Conversion to RGB32 or RGB24 before the mixer is just a compatibility option for old GPUs that can't provide any suitable Y'CbCr surfaces for the VMR-9 or EVR mixer, which for some reason demand such support.
A lot of this is too technical for me and over my head, because I don't know what goes on inside the renderer, or even understand the mixer concept. I still don't quite understand why the old method was apparently bad and the new method good, when only the new method has blocky reds without shaders. I guess it's for other reasons, then.

But let me try to guess/understand from an end-user's perspective, and please let me know if I'm even close to grasping this:

Are you saying that in old, unaffected builds, like 2833, and still in EVR Sync or plain EVR, when the renderer gets NV12 from the decoder (according to the pin info) it performs a (high-quality) conversion to RGB32 first and uses that as the "mixer input"? And that you wanted NV12 mixer input and changed this, to prioritize feeding the mixer the same NV12 colorspace the renderer gets from the decoder, in either 2837 or 2839 but not in 2863? Is that why the bad chroma is already visible in r2840?

Quote:
Originally Posted by JanWillem32 View Post
The "YV12" shader was one of the first I've re-written for MPC-HC (last September). See the OP and the first few posts from the shaders thread for reference.
Obviously if you (re)wrote it you should get credited, and of course I take your word for it. I just said "by Leak" because I think I remember he posted the original in this thread a few years ago because of similar blocky reds with ATI issues. But I'm not even sure I remember that part correctly. I was using Nvidia at the time and didn't really pay attention.

Quote:
Originally Posted by JanWillem32 View Post
Absolutely all of the current internal shaders can be replaced by a set I've written (plus a selection of a few more). I highly doubt anyone will deny that the versions I've written are superior; I'll happily defend that statement if someone wants to challenge me.
I have no reason or intention to challenge that. I have only tried a few of your shaders.

The reason I think that particular "YV12" shader is the easiest solution for most people for the time being (which is maybe what you are responding to here?) is that it is already embedded in all builds, and so is easy to use. It fixes the blocky reds in EVR-CP and doesn't break anything (hardly any change at all) if you switch to the already "not blocky" EVR Sync or to RGB32 (Full/HQ) output from the decoder.

OTOH, if I use, for example, either of your optimized paths with the fp/integer trios of shaders (which fix the blocky reds when present), they will actually cause blocky reds in EVR Sync or with Full/HQ-converted RGB32 from the decoder (at least they did for me). So we have to remember to switch shaders when we switch settings. Of course, if you know or can recommend a better shader (or set of shaders) than the YV12 one, one that doesn't break anything when changing settings/renderers, I'm open to suggestions for my own use; but it will still be too complicated to recommend custom/external shaders to the average ATI/EVR-CP user with blocky reds until they are integrated, IMHO.

Quote:
Originally Posted by JanWillem32 View Post
Advanced deinterlacing methods are not exclusive to ATi hardware; the nVidia and (recent) Intel solutions include them, too. Software deinterlacing has included several advanced methods for more than a decade.
It seems current GPUs just prefer NV12 input for deinterlacing. The deinterlacing capabilities of a GPU can be checked with the DXVAChecker program.
I don't know if that was meant for me, as I never said anything different, and I actually knew that already. We have tested with DXVAChecker before: ATI needs NV12 for Vector Adaptive; Nvidia will work with YV12, NV12 and many more for its best "spatial-temporal" method, though not with RGB32. None of the software methods in ffdshow look as good as hardware. Yadif2x is pretty good, but has artifacts and increased CPU usage that can make a difference if you're already decoding in software. The DScaler IVTC mod is reportedly good for IVTC, but not much use for my interlaced content (which is 99% DVB/PAL).

When your tester builds are used, ATI's hardware deinterlacing with the internal MPEG-2 decoder looks noticeably better than in trunk builds with all of the EVR renderers. The decoder outputs NV12 rather than YUY2 as in the trunk builds, so "VectorAdaptiveDevice" (and "MotionAdaptiveDevice") can be used, while YUY2 is restricted to the third-best ATI method (AdaptiveDevice). Maybe whatever change you made to accomplish that could be committed sooner than the other stuff, if it's not too integrated with your other features?
Old 4th April 2011, 07:55   #16806  |  Link
Ger
Registered User
 
Join Date: May 2007
Location: Norway
Posts: 192
Quote:
Originally Posted by madshi View Post
I've recently asked madVR users to run tests for me on their GPUs. The result is that all DX9 NVidia GPUs that have been tested have forced some kind of chroma upsampling with no way to turn it off. On the other hand, with ATI all GPUs have used nearest neighbor when using StretchRect with D3DTEXF_NONE. You can however make ATI GPUs use bilinear chroma upsampling by using D3DTEXF_LINEAR in the NV12 -> RGB StretchRect call.
Do you think that would fix the blocky reds in EVR-CP with no manually applied shaders? If it does, you all know what my vote is, unless it ruins any significant future plans.

Edit: Maybe something happened here?

Last edited by Ger; 4th April 2011 at 08:19.
Old 4th April 2011, 12:25   #16807  |  Link
pirlouy
_
 
Join Date: May 2008
Location: France
Posts: 692
Quote:
Originally Posted by janos666 View Post
You don't need to run the GUI itself all the time, but you need it to be installed, and you shouldn't kill its background processes (AMD External Events Service/Client Module...).

Of course you can live without it, just as you can live without the driver itself, but you are only asking for trouble, because it is needed for full functionality.
Honestly, I doubt this. CCC is optional, so I really think it's just a GUI. I install it anyway, but I have disabled all its services and CCC itself, and I don't have problems.
Maybe I'm missing something and you're right. But it would be cool to know EXACTLY what the dependencies are.
Old 4th April 2011, 13:35   #16808  |  Link
mr.duck
quack quack
 
Join Date: Apr 2009
Posts: 259
I have some experience with AMD's drivers. DON'T install the CCC; go to the Services control panel and block all AMD Radeon related services from loading, like the External Events service and so on. This setup gets the best performance and fastest boot times, with fewer problems (for instance, the issue where changing the AA mode stops madVR from working properly won't happen without CCC). I even worked out how to manually edit some of the settings in the registry to turn off all the sh!t video enhancements that ruin picture quality.
__________________
Media Player Classic Home Cinema Icon Library: NORMAL VERSION / GLOWING VERSION
Old 4th April 2011, 13:38   #16809  |  Link
CruNcher
Registered User
 
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by madshi View Post
I've recently asked madVR users to run tests for me on their GPUs. The result is that all DX9 NVidia GPUs that have been tested have forced some kind of chroma upsampling with no way to turn it off. On the other hand, with ATI all GPUs have used nearest neighbor when using StretchRect with D3DTEXF_NONE. You can however make ATI GPUs use bilinear chroma upsampling by using D3DTEXF_LINEAR in the NV12 -> RGB StretchRect call.


Good to hear, thanks for feedback.
I guess you're referring to the lossy surface states in madnv12test? According to your test they indeed just change depending on the Control Panel settings, but stay lossy overall. It might be possible to go deeper into those settings via NVAPI, or to ask Nvidia to make them available there. I didn't try adding madnv12test as a 3D application and changing the 3D performance settings for it, though, and the driver was on the default quality level (which is "application settings" and should have no impact) when I supplied my test results.

One strange thing, for example, is this:

Inverse telecine off:

D3D9 Surface StretchRect:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-235)
D3DFMT_X8R8G8B8: lossy (16-235)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-236)
D3DFMT_A2B10G10R10: lossy (16-236)
D3DFMT_A16B16G16R16: StretchRect failed
D3DFMT_A16B16G16R16F: lossy (16-235)
D3DFMT_A32B32G32R32F: StretchRect failed

D3D9 Surface VideoProcessor:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-235)
D3DFMT_X8R8G8B8: lossy (16-235)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-236)
D3DFMT_A2B10G10R10: lossy (16-236)
D3DFMT_A16B16G16R16: lossy (16-235)
D3DFMT_A16B16G16R16F: lossy (16-235)
D3DFMT_A32B32G32R32F: lossy (16-235)

Inverse telecine on:

D3D9 Surface StretchRect:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-255)
D3DFMT_X8R8G8B8: lossy (16-255)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-255)
D3DFMT_A2B10G10R10: lossy (16-255)
D3DFMT_A16B16G16R16: StretchRect failed
D3DFMT_A16B16G16R16F: lossy (16-191)
D3DFMT_A32B32G32R32F: StretchRect failed

D3D9 Surface VideoProcessor:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-255)
D3DFMT_X8R8G8B8: lossy (16-255)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-255)
D3DFMT_A2B10G10R10: lossy (16-255)
D3DFMT_A16B16G16R16: lossy (16-255)
D3DFMT_A16B16G16R16F: lossy (16-191)
D3DFMT_A32B32G32R32F: lossy (16-192)

How can this have an impact on chroma in the test, or can results differ per run? It seems so: I just enabled it again and get results different from the two previous ones. Hehe, funny.

after reenabling Inverse Telecine:

D3D9 Surface StretchRect:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-190)
D3DFMT_X8R8G8B8: lossy (16-190)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-191)
D3DFMT_A2B10G10R10: lossy (16-191)
D3DFMT_A16B16G16R16: StretchRect failed
D3DFMT_A16B16G16R16F: lossy (16-178)
D3DFMT_A32B32G32R32F: StretchRect failed

D3D9 Surface VideoProcessor:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-190)
D3DFMT_X8R8G8B8: lossy (16-190)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-191)
D3DFMT_A2B10G10R10: lossy (16-191)
D3DFMT_A16B16G16R16: lossy (16-190)
D3DFMT_A16B16G16R16F: lossy (16-178)
D3DFMT_A32B32G32R32F: lossy (16-178)

So now the question would be: how reliable are these measurements in reality, or is Nvidia's driver non-deterministic?
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 4th April 2011 at 14:20.
Old 4th April 2011, 13:55   #16810  |  Link
Damien147
Registered User
 
Join Date: Mar 2011
Posts: 380
Quote:
Originally Posted by Ger View Post
Also tested the chroma upsampling issue (blocky reds) with that Nvidia laptop and the same monitor connected via HDMI, and it's not affected at all, so that is apparently also (still) an ATI-only issue.
Quote:
In the meantime the first workaround, the "YV12 chroma upsampling shader", is a quick and easy solution for affected ATI/EVR-CP users.
This is the answer I've been looking for.

Is this the patch that changed things?

I'm a noob and I can't quite follow the description. When using ffdshow for decoding, I get NV12 output in MPC-HC; this is intended according to the patch, right?
But when using MPC-HC's internal decoding on the same video, I get RGB32 output with a different, but still bad, chroma result.
Old 4th April 2011, 14:21   #16811  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by CruNcher View Post
How can this have an impact on chroma in the test, or can results differ per run? It seems so: I just enabled it again and get results different from the two previous ones. Hehe, funny.

So now the question would be: how reliable are these measurements in reality?
The measurements should reliably show what the StretchRect operation produced. The results are stable with ATI and unstable with NVidia. I don't think it's a bug in madNV12Test. I think it just shows that with NVidia drivers, NV12 -> RGB StretchRect is not always reliable. Which means that madVR for example is never going to use NV12 -> RGB StretchRect with NVidia hardware/drivers.
Old 4th April 2011, 14:43   #16812  |  Link
CruNcher
Registered User
 
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Yep, no doubt about that: if your measurements are correct, then it seems to be unreliable and should be replaced. This is also an interesting inside look into ATI's and Nvidia's video handling; some old-school "ATI is better than Nvidia at video" attitude seems to be still alive deep down in the driver. Seeing some UVD 3 results lately also makes me wonder whether they aren't back on track in video too, after overtaking Nvidia in 3D efficiency.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004
Old 4th April 2011, 15:56   #16813  |  Link
Ger
Registered User
 
Join Date: May 2007
Location: Norway
Posts: 192
Quote:
Originally Posted by Damien147 View Post
This is the answer I've been looking for.
Yeah, I think you asked some of the questions I was referring to in that post, so I'm glad it helped.

Regarding your other questions: my tests only show roughly when it happened. I'm no dev either, and I have no idea what specific change caused it, why it was changed, or why it is or isn't a good idea to change it again. That's what I'm trying to ask/understand myself. I'm just testing/reporting and speculating/asking based on the little tidbits I understand.

My theory right now, based on what JanW and madshi said, involves the bilinear filtering vs. nearest neighbor difference and the changes in r2837, but I'm utterly clueless, so you don't want to know all of my ever-changing theories. Please ask someone else, or just wait and see how it all develops.

Last edited by Ger; 4th April 2011 at 15:59.
Old 4th April 2011, 16:06   #16814  |  Link
JanWillem32
Registered User
 
 
Join Date: Oct 2010
Location: The Netherlands
Posts: 1,083
@madshi and CruNcher: StretchRect is in EVR CP and VMR-9 (renderless) as an "all methods failed" backup solution. It disables all further shaders, 10-, 16- and 32-bit surfaces, dithering and color management. It was removed from EVR Sync for a good reason, as the problems with StretchRect are well known.

@Damien147: That's one of my older works. In my builds I used the format merit list that was already in EVR Sync as a base, and expanded it to include all possible formats the EVR mixer can accept.
I mostly wrote the patch in that link to eliminate the YV12/I420/IYUV to YUY2 conversion; doing half of the chroma up-sampling before the mixer is just dumb. I added conversion to NV12 as a preferred option, and set conversion to RGB32 as a backup if that fails.
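
Sketched, the merit idea looks like this (fourcc names for readability; the real list uses the corresponding MEDIASUBTYPE_* GUIDs and covers more formats):

Code:
// Mixer input formats, tried from top to bottom; conversion to RGB32/RGB24
// only happens when nothing above it is accepted. Sketch only.
static const char* const kMixerInputMerit[] = {
    "NV12", "YV12", "I420", "YUY2",  // native Y'CbCr decoder outputs
    "RGB32", "RGB24"                 // lossy compatibility fallback, last
};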

@Ger: That code in the link was part of my first contribution to the renderer; it was mostly to fix the assignment of surfaces and the 10-bit output mode.
I didn't commit any of the visible shaders in the trunk build. I want to remove them all as a first step, and insert new ones with proper names.
The lack of chroma up-sampling has always been there for those who use NV12 output from external decoders, by the way; it's not that new.
For the problem with RGB32 input: once I figure out how to create message windows, I can add a simple warning to indicate that the video stream format in the file doesn't match the input stream of the mixer. (And of course add warnings when items in the renderer fail.)
The reason I don't dare to commit anything at the moment is that I broke the VSync offset function in my build and it's very hard to repair; the code for the timing items is complex.
__________________
development folder, containing MPC-HC experimental tester builds, pixel shaders and more: http://www.mediafire.com/?xwsoo403c53hv
Old 4th April 2011, 16:14   #16815  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by JanWillem32 View Post
StretchRect is in EVR CP and VMR-9 (renderless) as an "all methods failed" backup solution. It disables all further shaders, 10-, 16- and 32-bit surfaces, dithering and color management. It was removed from EVR Sync for a good reason, as the problems with StretchRect are well known.
What is used instead for the YV12/NV12 -> RGB conversion?
Old 4th April 2011, 16:43   #16816  |  Link
Ger
Registered User
 
Join Date: May 2007
Location: Norway
Posts: 192
Quote:
Originally Posted by JanWillem32 View Post
The lack of chroma up-sampling has always been there for those who use NV12 output from external decoders, by the way; it's not that new.
No, it's not a problem in r2833, or in EVR Sync, when outputting NV12 from ffdshow to the renderer with no manual shaders applied; but it is in r2840. That's what I've been trying to tell you for a while now. It was broken in EVR-CP only, and in that revision range (2834-2840). That's why I'm saying several ATI owners are noticing a regression, and also why I listed two suspected culprit revisions, especially that last piece of code I linked to.

The situation with YUY2 output from the decoder is exactly the same as with NV12: OK in EVR Sync and in old builds, broken (as in very blocky) in EVR-CP 2840 and up.

It may not be quite as good as madVR or vanilla EVR in those old builds or in EVR Sync, but it's much, much better than EVR-CP in 2840 and later, and not that far from the other two.

The YV12 shader makes a massive difference when the output is broken/blocky, and hardly any difference at all when it's already OK; see my previous posts for details. Unlike the YV12 shader, your optimized shader path inverts the situation: it makes blocky output OK and OK output blocky.

You have an ATI card, right? Provided all ATI cards behave the same with this issue, which I'm assuming until told otherwise, try feeding NV12 from ffdshow to EVR Sync in any build, or to EVR-CP in 2833 (from my uploaded zip or from xvidvideo.ru), and see for yourself with a suitable red logo or something like that.

Last edited by Ger; 4th April 2011 at 16:50.
Old 4th April 2011, 17:15   #16817  |  Link
v0lt
Registered User
 
Join Date: Dec 2008
Posts: 1,959
@Ger
The bug appeared in revision 2837. Here are versions 2836 and 2837; you can check them out.
Old 4th April 2011, 17:21   #16818  |  Link
Ger
Registered User
 
Join Date: May 2007
Location: Norway
Posts: 192
Thank you v0lt!

I can confirm 2837 is the culprit. That was the revision I linked to last, with the removed/changed StretchRect bilinear filtering stuff madshi was talking about, and one of the two on my first list of suspects.

Finally we're getting somewhere.

Last edited by Ger; 4th April 2011 at 17:27.
Old 4th April 2011, 19:05   #16819  |  Link
JanWillem32
Registered User
 
 
Join Date: Oct 2010
Location: The Netherlands
Posts: 1,083
@madshi: ARGB output is requested from the external EVR/VMR-9 mixer, as far as I can see. Neither mixer limits or rounds the outputs, so with floating-point conversions, values beyond the regular [0, 1] interval are left intact. The transfer matrix for the conversion is generally selected automatically by EVR: http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx I expect a similar item exists in VMR-9.

@Ger: http://forum.doom9.org/showthread.ph...32#post1463132 I could also replicate that later on.
EVR Sync uses StretchRect (unfortunately I only just found that out; it's a bit hidden in the code). I haven't paid much attention to Sync in development. We could use a developer who can help integrate the sync clock into the shared EVR CP/VMR-9 (renderless) renderer that the entire renderer stems from. (And clean up its many flaws in the process.)
I've looked up what the YV12 shader does again (I hadn't looked at it in a while). It converts an area of 9 pixels with an inaccurate, truncated BT.601 (SD video) matrix and averages all the values. My "4÷2÷0 chroma blur for SD&HD video input on old and slow PS 2.0 hardware" shader at least blurs with a bilinear filter, using 4 pixels and the correct SD and HD color matrices. It indeed hurts quality a lot if the chroma input is filtered with a bilinear filter during color conversion and then filtered again by this type of shader. (I say "blurring", as "up-sampling" is better reserved for the windowed scalers.)
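
For reference, the two coefficient sets in question, with approximate textbook limited-range (TV levels) values rather than the exact shader constants:

Code:
// Approximate 8-bit limited-range Y'CbCr -> R'G'B' coefficients for SD
// (BT.601) and HD (BT.709) video; results are not clamped here. Using the
// SD matrix on HD material (or vice versa) visibly shifts colors.
struct YCbCrToRGB { float y, crR, cbG, crG, cbB; };

static const YCbCrToRGB kBT601 = { 1.164f, 1.596f, -0.392f, -0.813f, 2.017f };
static const YCbCrToRGB kBT709 = { 1.164f, 1.793f, -0.213f, -0.533f, 2.112f };

void Convert(const YCbCrToRGB& m, float Y, float Cb, float Cr,
             float& R, float& G, float& B)
{
    const float y = Y - 16.0f, cb = Cb - 128.0f, cr = Cr - 128.0f;
    R = m.y * y + m.crR * cr;
    G = m.y * y + m.cbG * cb + m.crG * cr;
    B = m.y * y + m.cbB * cb;
}
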
I would much prefer no filtering by default at all over having to waste resources on a default or forced bilinear filter on the chroma data. Options to up-sample chroma can be added like the regular scaling options; it will need a warning for those with an nVidia card not to enable the option.
Having swscale convert every input format for the mixer to RGB32, as it was before, would be pretty bad. On low-end CPUs it wasted a lot of processing on the bilinear filtering, RGB conversion and rounding to 8-bit. Output of 10-, 16- or 32-bit color formats is impossible with this method, so color accuracy also suffers (on top of losing the advanced deinterlacing filters available with NV12 in the mixer).
__________________
development folder, containing MPC-HC experimental tester builds, pixel shaders and more: http://www.mediafire.com/?xwsoo403c53hv
Old 4th April 2011, 19:23   #16820  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by JanWillem32 View Post
ARGB output is requested from the external EVR/VMR-9 mixer, as far as I can see.
So basically the conversion from YV12/NV12 to RGB is done by some internal Microsoft EVR/VMR code? So we don't know how it's technically done? Could be StretchRect again?