4th April 2011, 02:09 | #16801 | Link |
Registered User
Join Date: Nov 2008
Posts: 36
|
I set Auto-zoom: 100% in Playback and it works well for videos with a resolution smaller than my monitor's 1920x1200 (they play at 100% of their native size). However, with full-HD videos MPC-HC plays in a maximized window and still shows the border. Isn't it supposed to play full screen (without any border at all)?
I know there's a full-screen option, but that setting overrides the Auto-zoom: 100% setting and always plays videos in full screen, regardless of their resolution. Is there any way to make the player switch to full screen automatically for full-HD clips while Auto-zoom is set to 100%? Thanks.
__________________
» Core i7 920 OC @ 4Ghz, 1.28v | Asus P6T | 6GB DDR3 @ 1531Mhz, 7-7-7-24-1T | GTX260 | HP LP2475w 24" & X-Rite OPTIX-XR » Win7 Ultimate SP1 x64 | MPC-HC latest | LAV filters latest | madVR latest Last edited by Rain1; 4th April 2011 at 02:11. |
4th April 2011, 03:35 | #16802 | Link |
Registered User
Join Date: Oct 2010
Location: The Netherlands
Posts: 1,083
|
@Rain1: Only the "D3D Fullscreen Mode" option will make videos render in full screen on startup, as far as I know.
@Ger: Thank you for testing. I already suspected my changes to the bicubic resizer wouldn't make much of a difference (the output assembly barely changed). Bicubic resizing in a single pass with the current matrix method is sub-optimal anyway. I hope you have more luck when other two-pass resizing modes are added. Luckily, native GPU bilinear filtering (although blurry) is the default.

I do have some remarks, of course. (I'm in a technical mood/mode today.) Chroma up-sampling is something best left to a custom mixer. Native surface blitting from Y'CbCr to RGB on GPUs gives inconsistent results: newer nVidia models force a bilinear filter when doubling the dimensions of the chroma data, which can't be disabled, while older nVidia models and ATI and Intel models don't filter at all, so the chroma is resized with the nearest-neighbor method. Neither forcing bilinear filtering nor skipping filtering is a good solution. However, I can understand that adding a GUI option in the video section of the drivers with a range of chroma resizing filters, from unfiltered up to something like Lanczos256, would be rather much to ask of GPU driver developers.

The only thing I can do at the moment is integrate the shaders I've written in a similar way as the color management and dithering functions, making in effect half a mixer. That would work well as a user-selectable option, but not as an automatic one. The other (preferred) option would be to discard the EVR and VMR-9 mixers and give the shared renderer part of VMR-9 (renderless) and EVR CP a custom mixer. The hard parts are that I haven't seen any mixers with a GPL license around, that I can't just write one on my own without months' worth of lessons in media handling like this in my spare time, and lastly, that I wouldn't know a good name for the renderer, as "(renderless)" and "CP" are what would remain of the original names.

I'm the one who pushed conversion to RGB32 or RGB24 to the very bottom of the mixer input priority list.
All the current decoders natively output NV12, YV12, I420 and YUY2 anyway. With conversion to RGB32 or RGB24, after the Y'CbCr-to-RGB conversion the resulting floating-point values are rounded to 8-bit integers. With some luck there's some dithering involved (nothing comparable to the internal 32² and 128² dither maps and randomized dithering methods). Either way, this kind of conversion is quite lossy; it's completely unacceptable for any renderer chain to be designed like that. Conversion to RGB32 or RGB24 before the mixer is just a compatibility option for old GPUs that can't provide any suitable Y'CbCr surfaces for the VMR-9 or EVR mixer, which for some reason demand such support.

All of the current internal shaders can be replaced by a set I've written (plus a selection of a few more). I highly doubt anyone will deny that the versions I've written are superior; I'll happily defend that statement if someone wants to challenge me. The "YV12" shader was one of the first I re-wrote for MPC-HC (last September). See the OP and the first few posts of the shaders thread for reference.

Advanced deinterlacing methods are not exclusive to ATI hardware; the nVidia and (recent) Intel solutions include them, too, and software deinterlacing has included several advanced methods for more than a decade. Current GPUs just seem to prefer to do deinterlacing with NV12 input. The deinterlacing capabilities of a GPU can be checked with the DXVAchecker program.
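To make the rounding loss concrete, here is a rough, illustrative Python sketch (not MPC-HC code — only the BT.601 limited-range constants are standard; the round-trip harness and sampling grid are hypothetical): it converts sampled Y'CbCr triplets to RGB, stores the intermediate at 8 bits per channel as an RGB32 surface would, converts back, and counts how many triplets no longer round-trip exactly.

```python
# Illustrative sketch (not MPC-HC code): why converting Y'CbCr to 8-bit
# RGB32 before the mixer is lossy. Only the BT.601 limited-range constants
# are standard; the round-trip harness and sampling grid are hypothetical.

def ycbcr_to_rgb_bt601(y, cb, cr):
    """Limited-range BT.601 Y'CbCr -> R'G'B' floats on a 0..1 scale."""
    yf = (y - 16) / 219.0
    cbf = (cb - 128) / 224.0
    crf = (cr - 128) / 224.0
    r = yf + 1.402 * crf
    g = yf - 0.344136 * cbf - 0.714136 * crf
    b = yf + 1.772 * cbf
    return r, g, b

def rgb_to_ycbcr_bt601(r, g, b):
    """Inverse conversion back to limited-range Y'CbCr floats."""
    yf = 0.299 * r + 0.587 * g + 0.114 * b
    return (16 + 219 * yf,
            128 + 224 * (b - yf) / 1.772,
            128 + 224 * (r - yf) / 1.402)

def quantize8(x):
    """Clip and round a 0..1 float channel to 8 bits, as an 8-bit surface does."""
    return max(0, min(255, round(x * 255)))

# Count sampled Y'CbCr triplets that no longer round-trip exactly once the
# intermediate RGB is stored at 8 bits per channel (no dithering).
lossy = total = 0
for y in range(16, 236, 5):
    for c in range(16, 241, 15):
        r, g, b = ycbcr_to_rgb_bt601(y, c, c)
        r8, g8, b8 = (quantize8(v) / 255.0 for v in (r, g, b))
        y2, cb2, cr2 = rgb_to_ycbcr_bt601(r8, g8, b8)
        total += 1
        if (round(y2), round(cb2), round(cr2)) != (y, c, c):
            lossy += 1
print(f"{lossy} of {total} sampled triplets fail to round-trip exactly")
```

With dithering or 10-/16-/32-bit intermediate surfaces, far more of this information would survive, which is the point being made above.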
__________________
development folder, containing MPC-HC experimental tester builds, pixel shaders and more: http://www.mediafire.com/?xwsoo403c53hv |
4th April 2011, 06:57 | #16803 | Link |
Registered User
Join Date: Jan 2002
Posts: 1,264
|
I'm quite happy with things the way they are now. I can now use the D3D tickbox and get madVR with exclusive mode on my secondary display. All menus and controls appear to function correctly. The only snag was that my GPU is not fast enough to handle subtitle-heavy 720p video (desktop res and power of 2). I wanted to use SoftCubic 100 on chroma and Spline 4-tap on luma up and down, but it was dropping too many frames. I've ended up overclocking my G210 and things are a LOT better (see the link below for the overclock)!
http://www.techpowerup.com/gpuz/79mv/ I'll probably upgrade my card at some point though, as it gets quite hot since it's passively cooled. Max temp after a 25-minute episode was 82 °C, but the max operating temp is 105 °C, so I'm not too worried. |
4th April 2011, 07:13 | #16804 | Link | |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Good to hear, thanks for the feedback. Last edited by madshi; 4th April 2011 at 07:15. |
|
4th April 2011, 07:38 | #16805 | Link |
Registered User
Join Date: May 2007
Location: Norway
Posts: 192
|
But let me try to guess/understand from an end-user's perspective, and please let me know if I'm even close to grasping this: are you saying that in old, unaffected builds, like 2833, and still in EVR Sync or plain EVR, when the renderer gets NV12 from the decoder (according to the pin info) it first performs a (high-quality) conversion to RGB32 and uses that as the "mixer input"? And that you wanted NV12 mixer input and changed this — in either 2837 or 2839, but not in 2863 — to prioritize feeding the same NV12 colorspace the renderer gets from the decoder to the mixer? Is that why the bad chroma is already visible in r2840?
The reason I think that particular "YV12" shader is the easiest solution for most people for the time being (which is maybe what you are responding to here?) is that it is already embedded in all builds, and so is easy to use. It fixes the blocky reds in EVR-CP and doesn't break anything (hardly any change at all) if you switch to the already "not blocky" EVR Sync or to RGB32 (Full/HQ) output from the decoder. OTOH, if for example I use either of your optimized paths for the fp/integer trios of shaders (which fix the blocky reds when present), they will (or at least did for me) actually cause blocky reds in EVR Sync or with Full/HQ-converted RGB32 from the decoder, so we have to remember to switch shaders when we switch settings. Of course, if you know/recommend a better shader (or set of shaders) than the YV12 one that doesn't break anything when changing settings/renderers, then I'm open to suggestions for my own use, but it will still be too complicated to recommend custom/external shaders for the average ATI/EVR-CP user with blocky reds until they are integrated, IMHO.
When your tester builds are used, ATI's hardware deinterlacing with the internal MPEG-2 decoder looks noticeably better than in trunk builds with all of the EVR renderers. The decoder outputs NV12 instead of YUY2 as in the trunk builds, so "VectorAdaptiveDevice" (and "MotionAdaptiveDevice") can be used, while YUY2 is restricted to the third-best ATI method (AdaptiveDevice). Maybe whatever change you made to accomplish that could be committed sooner than the other stuff, if it's not too entangled with your other features? |
4th April 2011, 07:55 | #16806 | Link | |
Registered User
Join Date: May 2007
Location: Norway
Posts: 192
|
Edit: Maybe something happened here? Last edited by Ger; 4th April 2011 at 08:19. |
|
4th April 2011, 12:25 | #16807 | Link | |
_
Join Date: May 2008
Location: France
Posts: 692
|
Maybe I'm missing something and you're right. But it would be cool to know EXACTLY what the dependencies are.
|
4th April 2011, 13:35 | #16808 | Link |
quack quack
Join Date: Apr 2009
Posts: 259
|
I have some experience with AMD's drivers. DON'T install the CCC; instead, go to the Services control panel and block all AMD Radeon related services from loading, like External Events and so on. This setup gets the best performance and the fastest boot times with fewer problems (for instance, "changing the AA mode breaks madVR" won't happen without CCC). I even worked out how to manually edit some of the settings in the registry to turn off all the sh!t video "enhancements" that ruin picture quality.
|
4th April 2011, 13:38 | #16809 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
|
One strange thing, for example, is this:

Inverse telecine off:

D3D9 Surface StretchRect:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-235)
D3DFMT_X8R8G8B8: lossy (16-235)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-236)
D3DFMT_A2B10G10R10: lossy (16-236)
D3DFMT_A16B16G16R16: StretchRect failed
D3DFMT_A16B16G16R16F: lossy (16-235)
D3DFMT_A32B32G32R32F: StretchRect failed

D3D9 Surface VideoProcessor:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-235)
D3DFMT_X8R8G8B8: lossy (16-235)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-236)
D3DFMT_A2B10G10R10: lossy (16-236)
D3DFMT_A16B16G16R16: lossy (16-235)
D3DFMT_A16B16G16R16F: lossy (16-235)
D3DFMT_A32B32G32R32F: lossy (16-235)

Inverse telecine on:

D3D9 Surface StretchRect:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-255)
D3DFMT_X8R8G8B8: lossy (16-255)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-255)
D3DFMT_A2B10G10R10: lossy (16-255)
D3DFMT_A16B16G16R16: StretchRect failed
D3DFMT_A16B16G16R16F: lossy (16-191)
D3DFMT_A32B32G32R32F: StretchRect failed

D3D9 Surface VideoProcessor:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-255)
D3DFMT_X8R8G8B8: lossy (16-255)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-255)
D3DFMT_A2B10G10R10: lossy (16-255)
D3DFMT_A16B16G16R16: lossy (16-255)
D3DFMT_A16B16G16R16F: lossy (16-191)
D3DFMT_A32B32G32R32F: lossy (16-192)

How can this have an impact on chroma in the test, or can results differ per run?
It seems so. I just enabled it again and got different results from the two previous runs, funnily enough. After re-enabling inverse telecine:

D3D9 Surface StretchRect:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-190)
D3DFMT_X8R8G8B8: lossy (16-190)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-191)
D3DFMT_A2B10G10R10: lossy (16-191)
D3DFMT_A16B16G16R16: StretchRect failed
D3DFMT_A16B16G16R16F: lossy (16-178)
D3DFMT_A32B32G32R32F: StretchRect failed

D3D9 Surface VideoProcessor:
D3DFMT_R8G8B8: creating GPU texture failed
D3DFMT_A8R8G8B8: lossy (16-190)
D3DFMT_X8R8G8B8: lossy (16-190)
D3DFMT_A8B8G8R8: creating GPU texture failed
D3DFMT_X8B8G8R8: creating GPU texture failed
D3DFMT_A2R10G10B10: lossy (16-191)
D3DFMT_A2B10G10R10: lossy (16-191)
D3DFMT_A16B16G16R16: lossy (16-190)
D3DFMT_A16B16G16R16F: lossy (16-178)
D3DFMT_A32B32G32R32F: lossy (16-178)

So now the question is how reliable these measurements really are, or whether Nvidia's driver is non-deterministic.
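For what it's worth, here is one plausible reading of those "lossy (a-b)" summaries, sketched in Python. The converter below is a toy stand-in, not the real driver path madNV12Test probes: if a conversion treats the input as limited range and expands Y' 16-235 to 0-255, everything at or below code 16 is crushed to black and everything at or above 235 is clipped to white, and a full-ramp test would report exactly that interval.

```python
# Toy stand-in for a driver that treats the input surface as limited range
# (Y' 16-235 expanded to 0-255); NOT the real StretchRect path.
def simulate_stretchrect(luma):
    scaled = (luma - 16) * 255.0 / (235 - 16)
    return max(0, min(255, round(scaled)))

def crushed_range(convert):
    """Feed a full 0-255 ramp through convert() and report the last input
    crushed to 0 and the first input clipped to 255 — one plausible reading
    of a "lossy (lo-hi)" summary line."""
    outputs = [convert(y) for y in range(256)]
    lo = max(y for y in range(256) if outputs[y] == 0)    # assumes some value maps to 0
    hi = min(y for y in range(256) if outputs[y] == 255)  # assumes some value maps to 255
    return lo, hi

lo, hi = crushed_range(simulate_stretchrect)
print(f"lossy ({lo}-{hi})")  # lossy (16-235)
```

Under that reading, the varying intervals above (16-235, 16-255, 16-190, ...) would simply be different clipping behaviors per run.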
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 Last edited by CruNcher; 4th April 2011 at 14:20. |
|
4th April 2011, 13:55 | #16810 | Link | ||
Registered User
Join Date: Mar 2011
Posts: 380
|
Is this the patch that changed things? I'm a noob and I can't understand the description exactly. When using ffdshow for decoding I get NV12 output in MPC-HC; that's intended according to the patch, right? But when using MPC-HC for decoding on the same video I get RGB32 output, with a different but still bad chroma result.
4th April 2011, 14:21 | #16811 | Link |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
The measurements should reliably show what the StretchRect operation produced. The results are stable with ATI and unstable with NVidia. I don't think it's a bug in madNV12Test. I think it just shows that with NVidia drivers, NV12 -> RGB StretchRect is not always reliable. Which means that madVR for example is never going to use NV12 -> RGB StretchRect with NVidia hardware/drivers.
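The check madshi describes boils down to a simple harness: run the same conversion on the same input several times and flag it as unreliable if any two runs disagree. A toy Python sketch (both converters are invented stand-ins, not real driver calls):

```python
import itertools

def is_deterministic(convert, sample, runs=5):
    """Call convert() several times on the same input; True if all runs agree."""
    first = convert(sample)
    return all(convert(sample) == first for _ in range(runs - 1))

def stable_convert(y):
    # Fixed limited-range expansion: the same answer on every call.
    return (y - 16) * 255 // 219

_drift = itertools.count()

def flaky_convert(y):
    # Toy nondeterminism: the answer shifts by one on alternate calls,
    # standing in for the varying NVidia StretchRect results above.
    return stable_convert(y) + next(_drift) % 2

print(is_deterministic(stable_convert, 100))  # True
print(is_deterministic(flaky_convert, 100))   # False
```

A renderer that runs such a probe at startup could then refuse the unreliable path, which is effectively what madVR does by never using NV12 -> RGB StretchRect on that hardware.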
|
4th April 2011, 14:43 | #16812 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
|
Yep, no doubt: if your measurements are correct then it does seem to be unreliable and should be replaced. It's also an interesting inside look into ATI's and Nvidia's video handling. Some old-school "ATI is better than Nvidia at video" attitude seems to be still alive deep down in the driver, and seeing some UVD 3 results lately makes me wonder if they aren't back on track in video as well, after overtaking Nvidia in 3D efficiency.
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 |
4th April 2011, 15:56 | #16813 | Link |
Registered User
Join Date: May 2007
Location: Norway
Posts: 192
|
Yeah, I think you asked some of the questions I was referring to in that post, so I'm glad it helped.
Regarding your other questions: my tests only show roughly when it happened. I'm no dev either, and I have no idea which specific change caused it, why it was changed, or why it would or wouldn't be a good idea to change it again. That's what I'm trying to ask/understand myself. I'm just testing/reporting and speculating/asking based on the little tidbits I understand. My theory right now, based on what JanW and madshi said, is the bilinear filtering vs. nearest-neighbor issue and the changes in r2837, but I'm utterly clueless, so you don't wanna know all my ever-changing theories. So please ask someone else, or just wait and see how it all develops. Last edited by Ger; 4th April 2011 at 15:59. |
4th April 2011, 16:06 | #16814 | Link |
Registered User
Join Date: Oct 2010
Location: The Netherlands
Posts: 1,083
|
@madshi and CruNcher: StretchRect is in EVR CP and VMR-9 (renderless) as an "all methods failed" backup solution. It disables all further shaders, 10-, 16- and 32-bit surfaces, dithering and color management. It was removed from EVR Sync for a good reason, as the problems with StretchRect are well known.
@Damien147: That's one of my older works. In my builds I used the format merit list that was already in EVR Sync as a base, and expanded it to include all possible formats that the EVR mixer could accept. I mostly wrote the patch in that link to eliminate the YV12/I420/IYUV-to-YUY2 conversion; doing half of the chroma up-sampling before the mixer is just dumb. I added conversion to NV12 as the preferred option and kept conversion to RGB32 as a backup if that fails.

@Ger: The code in that link was part of my first contribution to the renderer. It was mostly to fix the assignment of surfaces and the 10-bit output mode. I didn't commit any of the visible shaders in the trunk build; I want to remove them all as a first step and insert new ones with proper names. The lack of chroma up-sampling has always been there for those who use NV12 output from external decoders, by the way. It's not that new. For the problem with RGB32 input: once I figure out how to make message windows, I can add a simple warning to indicate that the video stream format in the file doesn't match the input stream of the mixer (and of course add warnings when items in the renderer fail). The reason I don't dare to commit anything at the moment is that I broke the VSync offset function in my build and it's very hard to repair; the code for the timing items is complex.
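The format merit idea described above reads like a simple priority walk. A hedged Python sketch — the `accepts()` callback and format strings are hypothetical; the real negotiation goes through DirectShow media types and the EVR mixer:

```python
# Hedged sketch of a mixer input-format merit list: native Y'CbCr formats
# first, RGB32 only as a last-resort compatibility fallback. The accepts()
# callback is hypothetical, standing in for DirectShow pin negotiation.

FORMAT_MERIT = ["NV12", "YV12", "I420", "YUY2", "RGB32"]

def negotiate_input(accepts):
    """Return the first format in the merit list the mixer accepts."""
    for fmt in FORMAT_MERIT:
        if accepts(fmt):
            return fmt
    return None  # no common format at all

# A modern GPU path that takes NV12 directly:
modern = negotiate_input(lambda f: f in {"NV12", "YUY2", "RGB32"})
# An old GPU that only exposes RGB surfaces to the mixer:
legacy = negotiate_input(lambda f: f == "RGB32")
print(modern, legacy)  # NV12 RGB32
```

Putting RGB32 last means the lossy pre-mixer conversion only happens when nothing better connects, which matches the patch's intent as described.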
__________________
development folder, containing MPC-HC experimental tester builds, pixel shaders and more: http://www.mediafire.com/?xwsoo403c53hv |
4th April 2011, 16:14 | #16815 | Link | |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
|
4th April 2011, 16:43 | #16816 | Link | |
Registered User
Join Date: May 2007
Location: Norway
Posts: 192
|
The situation with YUY2 output from the decoder is exactly the same as with NV12: OK in EVR Sync and in old builds, broken (as in very blocky) in EVR-CP 2840 and up. It may not be quite as good as madVR or vanilla EVR in those old builds or in EVR Sync, but it's much, much better than EVR-CP in 2840 and later, and not that far from the other two. The YV12 shader makes a massive difference when it's broken/blocky and hardly any difference at all when it's already OK; see my previous posts for details. Unlike the YV12 shader, your optimized shader path inverts the situation: it makes blocky stuff OK and OK stuff blocky. You have an ATI card, right? Provided all ATI cards behave the same with this issue, which I'm assuming until I'm told otherwise, try feeding NV12 from ffdshow to EVR Sync in any build, or to EVR-CP in 2833 (from my uploaded zip or from xvidvideo.ru), and see for yourself with a suitable red logo or something like that. Last edited by Ger; 4th April 2011 at 16:50. |
|
4th April 2011, 17:15 | #16817 | Link |
Registered User
Join Date: Dec 2008
Posts: 1,959
|
@Ger
The bug appeared in version 2837. Here are versions 2836 and 2837; you can check them out.
4th April 2011, 17:21 | #16818 | Link |
Registered User
Join Date: May 2007
Location: Norway
Posts: 192
|
Thank you v0lt!
I can confirm 2837 is the culprit. That was the revision I linked to last, with the removed/changed StretchRect bilinear filtering stuff madshi was talking about, and one of the two on my first list of suspects. Finally we're getting somewhere. Last edited by Ger; 4th April 2011 at 17:27.
4th April 2011, 19:05 | #16819 | Link |
Registered User
Join Date: Oct 2010
Location: The Netherlands
Posts: 1,083
|
@madshi: ARGB output is requested from the external EVR/VMR-9 mixer, as far as I can see. Neither mixer limits or rounds the outputs, so values beyond the regular [0, 1] interval from floating-point conversions are left intact. The transfer matrix for the conversion is generally selected automatically by EVR. http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx I expect a similar mechanism exists in VMR-9.
@Ger: http://forum.doom9.org/showthread.ph...32#post1463132 I could replicate that later on, too. EVR Sync uses StretchRect (unfortunately I only just found that out; it's a bit hidden in the code). I haven't paid much attention to Sync in development. We could use a developer who can help integrate the sync clock into the shared EVR CP/VMR-9 (renderless) renderer from which the entire renderer stems (and clean up its many flaws in the process).

I've looked up what the YV12 shader does again. (I hadn't looked at it in a while.) It converts an area of 9 pixels with an inaccurate, truncated BT.601 (SD video) matrix and averages all values. My "4÷2÷0 chroma blur for SD&HD video input on old and slow PS 2.0 hardware" shader at least blurs with a bilinear filter, using 4 pixels and the correct SD and HD color matrices. It indeed hurts quality a lot if the chroma input is filtered with a bilinear filter during color conversion and then filtered again by this type of shader. (I use "blurring", as "up-sampling" is better reserved for the windowed scalers.) I much prefer no filtering taking place by default at all over having to waste resources on a default or forced bilinear filter on chroma data. Options to up-sample chroma can be added like the regular scaling options; that will need a warning for those with an nVidia card not to enable it.

Having swscale convert every input format for the mixer to RGB32, as it did before, would be pretty bad. On low-end CPUs it wasted a lot of processing on the bilinear filtering, RGB conversion and rounding to 8-bit. Output of 10-, 16- or 32-bit color formats is impossible with this method, so color accuracy also suffers (on top of losing the advanced deinterlacing filters available with NV12 in the mixer).
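The SD-vs-HD matrix point is easy to demonstrate numerically. A Python sketch — only the Kr/Kb constants are the standard BT.601/BT.709 values; the helper function and the sample triplet are illustrative, not shader code — shows that decoding the same Y'CbCr value with the SD matrix instead of the HD one visibly shifts the result:

```python
# Illustrative sketch: why a hard-coded BT.601 matrix is wrong for HD video.
# Only the Kr/Kb constants are standard; the helper is not real shader code.

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Limited-range Y'CbCr -> R'G'B' floats on a 0-255 scale (unclipped)."""
    kg = 1.0 - kr - kb
    yf = (y - 16) / 219.0
    cbf = (cb - 128) / 224.0
    crf = (cr - 128) / 224.0
    r = yf + 2.0 * (1.0 - kr) * crf
    b = yf + 2.0 * (1.0 - kb) * cbf
    g = (yf - kr * r - kb * b) / kg  # from Y' = Kr*R' + Kg*G' + Kb*B'
    return r * 255.0, g * 255.0, b * 255.0

BT601 = (0.299, 0.114)    # Kr, Kb for SD video
BT709 = (0.2126, 0.0722)  # Kr, Kb for HD video

sample = (81, 90, 240)  # a saturated red, expressed in BT.601 terms
r601, g601, b601 = ycbcr_to_rgb(*sample, *BT601)
r709, g709, b709 = ycbcr_to_rgb(*sample, *BT709)
print(round(g709 - g601))  # green shifts by roughly 25 of 255 levels
```

Decoding HD chroma with the SD matrix (or vice versa) therefore distorts saturated colors, which is why the blur shader above carries both matrices and picks the correct one per input.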
__________________
development folder, containing MPC-HC experimental tester builds, pixel shaders and more: http://www.mediafire.com/?xwsoo403c53hv |
Tags |
dxva, h264, home cinema, media player classic, mpc-hc |