23rd November 2012, 16:38 | #15721
QuickSync Decoder author
Join Date: Apr 2011
Location: Atlit, Israel
Posts: 916
Can you specify what the problem is?
__________________
Eric Gur, Processor Application Engineer for Overclocking and CPU technologies, Intel Corp.; Intel QuickSync Decoder author
23rd November 2012, 16:54 | #15722
Registered User
Join Date: Feb 2004
Posts: 399
Quote:
I guess how Overlay passes DXVA1 bitstream packets to the GPU isn't documented either?
Quote:
I never had a BSOD when DXVA decoding (mpeg2) either ^^;
__________________
XP SP3 / GeForce 8500 / Zoom Player
23rd November 2012, 16:59 | #15723
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,348
Quote:
I've also heard rumors of some people actually getting DXVA2 decoding to work on XP, though I'm not sure there's any truth to that.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
23rd November 2012, 17:26 | #15725
Registered User
Join Date: Feb 2004
Posts: 399
Quote:
__________________
XP SP3 / GeForce 8500 / Zoom Player
23rd November 2012, 17:52 | #15726
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,348
Deinterlacing is independent of decoding. If you can get DXVA Deinterlacing to work in madVR (it should work according to madshi), then you can also use software decoding if you want.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
23rd November 2012, 18:05 | #15727
Registered Developer
Join Date: Sep 2006
Posts: 9,140
Quote:
No. The log clearly says that madVR is not receiving VSync scanline information. I don't know why madVR would receive it for some videos but not for others. In any case, I don't think there's anything I can do about it...
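For reference, the scanline a D3D9 renderer waits on is typically polled via IDirect3DDevice9::GetRasterStatus. A minimal sketch of such a query follows; it is illustrative only, not madVR's actual code, and the QueryScanline wrapper is a hypothetical name:

Code:
#include <d3d9.h>

// Poll the raster position on swap chain 0. If the driver reports
// nothing useful here, the renderer has no scanline info to act on.
bool QueryScanline(IDirect3DDevice9* dev, UINT& scanline, bool& inVBlank)
{
    D3DRASTER_STATUS rs;
    if (FAILED(dev->GetRasterStatus(0, &rs)))
        return false;               // no raster status available
    inVBlank = rs.InVBlank != FALSE;
    scanline = rs.ScanLine;         // typically 0 while inside vblank
    return true;
}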
Quote:
FWIW, I'm asking DXVA2 to scale NV12 -> NV12, so I'm wondering why NVidia thinks it should do a matrix change!
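For context, an NV12 -> NV12 stretch goes through IDirectXVideoProcessor::VideoProcessBlt, and the transfer matrix is declared on both the input sample and the output format. A hedged sketch (not madVR's actual code; StretchNv12 is a hypothetical wrapper) with both sides set to the same matrix, so in theory no conversion should happen:

Code:
#include <d3d9.h>
#include <dxva2api.h>

// pVP is assumed to be an IDirectXVideoProcessor created for
// NV12 input and NV12 output.
HRESULT StretchNv12(IDirectXVideoProcessor* pVP,
                    IDirect3DSurface9* src, IDirect3DSurface9* dst,
                    RECT srcRect, RECT dstRect)
{
    DXVA2_VideoSample sample = {};
    sample.SrcSurface  = src;
    sample.SrcRect     = srcRect;
    sample.DstRect     = dstRect;
    sample.SampleFormat.VideoTransferMatrix = DXVA2_VideoTransferMatrix_BT709;
    sample.PlanarAlpha = DXVA2_Fixed32OpaqueAlpha();

    DXVA2_VideoProcessBltParams blt = {};
    blt.TargetRect = dstRect;
    // Same matrix on the output side, so no conversion should occur.
    blt.DestFormat.VideoTransferMatrix = DXVA2_VideoTransferMatrix_BT709;
    blt.Alpha      = DXVA2_Fixed32OpaqueAlpha();

    return pVP->VideoProcessBlt(dst, &blt, &sample, 1, NULL);
}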
Quote:
Typo? Why? DXVA2 deinterlacing and scaling are supported on XP, just not DXVA2 decoding.
Quote:
Sure. When rendering with D3D9, the usual way to do things is to first call IDirect3DDevice9::BeginScene(), then you initialize the texture samplers by calling IDirect3DDevice9::SetTexture(), then you do some other stuff and finally you call IDirect3DDevice9::EndScene().

Now the first problem is: IDirect3DDevice9::SetTexture() wants a texture. It doesn't accept a surface. So I can't use the NV12 surface I got from DXVA2 here. Which means I need to convert the DXVA2 surface to a texture somehow.

And here comes the next problem: textures don't support NV12. So I can't just create an NV12 texture and copy the surface to the texture. Simply not possible. I could try to create a YUY2 or UYVY texture, which the GPU might (or might not) support. But YUY2 and UYVY are 4:2:2, so if I copy the NV12 surface to the YCbCr texture, the GPU driver already has to upsample chroma from 4:2:0 to 4:2:2 somehow, and I have no control over how that is done.

But the problems don't end here. If I use IDirect3DDevice9::SetTexture() with any kind of YCbCr texture, my pixel shaders (which do all the work) will never see the true YCbCr data. Instead Direct3D will convert the YCbCr data into RGB behind my back. So again chroma upsampling is done behind my back, this time from 4:2:2 to 4:4:4, and on top of that a YCbCr -> RGB conversion with an unknown color matrix.

What I really want is to be able to access the original YCbCr data in my pixel shaders. But there's no clean way to do this. When using software decoding, madVR stores the original YCbCr data in an RGB texture. The GPU doesn't know it's actually YCbCr data, only madVR knows that. So I can process the data without someone converting the hell out of it behind my back. Unfortunately this only works by using the CPU to force the NV12 data into an RGB texture. DXVA2 stores its results in GPU RAM, so if I want to use the CPU to force the NV12 data untouched into an RGB texture, I have to use "copy-back", which is bad for performance.

I believe I can solve this with Intel by using OpenCL, and with NVidia by using CUDA. I don't see any other solution. If the Intel drivers have a secret "SplitNv12SurfaceToRgbTextureWithoutTouchingTheData" API then please let me know...
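To make the copy-back detour concrete, here is a rough sketch under the assumptions above. It is illustrative only, not madVR's actual code: CopyBackLuma is a hypothetical helper, and whether LockRect succeeds on a decoder surface is driver-dependent. The luma plane is stored byte-for-byte in an L8 texture; the interleaved CbCr plane would go into a second, half-sized texture the same way:

Code:
#include <d3d9.h>
#include <cstring>

// Note: dev->SetTexture(0, decoded) wouldn't even compile, because
// SetTexture() wants an IDirect3DBaseTexture9*, not a surface.
HRESULT CopyBackLuma(IDirect3DDevice9* dev, IDirect3DSurface9* decoded,
                     UINT width, UINT height, IDirect3DTexture9** outTex)
{
    IDirect3DTexture9* tex = NULL;
    HRESULT hr = dev->CreateTexture(width, height, 1, 0, D3DFMT_L8,
                                    D3DPOOL_MANAGED, &tex, NULL);
    if (FAILED(hr)) return hr;

    D3DLOCKED_RECT src, dst;
    hr = decoded->LockRect(&src, NULL, D3DLOCK_READONLY); // driver-dependent
    if (FAILED(hr)) { tex->Release(); return hr; }
    tex->LockRect(0, &dst, NULL, 0);

    // This GPU -> CPU -> GPU round trip is the "copy-back" cost.
    for (UINT y = 0; y < height; y++)
        memcpy((BYTE*)dst.pBits + y * dst.Pitch,
               (BYTE*)src.pBits + y * src.Pitch, width);

    tex->UnlockRect(0);
    decoded->UnlockRect();
    *outTex = tex;  // shaders sample this; only the renderer knows it's Y
    return S_OK;
}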
23rd November 2012, 18:07 | #15728
Registered Developer
Join Date: Sep 2006
Posts: 9,140
Quote:
(You have to install .NET 3.0 or higher to get DXVA2 working on XP at all.)
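On XP the .NET 3.0 installer is what puts dxva2.dll on the system, so a player would typically load it dynamically and fall back gracefully when it's missing. A hedged sketch: DXVA2CreateVideoService is the real entry point from dxva2api.h, while the CreateDxva2Service wrapper is hypothetical:

Code:
#include <windows.h>
#include <d3d9.h>
#include <dxva2api.h>

typedef HRESULT (WINAPI* PFN_DXVA2CreateVideoService)(
    IDirect3DDevice9* pDD, REFIID riid, void** ppService);

bool CreateDxva2Service(IDirect3DDevice9* dev,
                        IDirectXVideoProcessorService** svc)
{
    HMODULE mod = LoadLibraryW(L"dxva2.dll");  // absent on a bare XP install
    if (!mod)
        return false;                          // fall back to software paths

    PFN_DXVA2CreateVideoService create = (PFN_DXVA2CreateVideoService)
        GetProcAddress(mod, "DXVA2CreateVideoService");
    if (!create)
        return false;

    return SUCCEEDED(create(dev, __uuidof(IDirectXVideoProcessorService),
                            (void**)svc));
}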
23rd November 2012, 18:22 | #15729
Registered User
Join Date: Apr 2009
Posts: 1,019
Quote:
23rd November 2012, 19:06 | #15731
Registered Developer
Join Date: Sep 2006
Posts: 9,140
Quote:
NVidia supports OpenCL, but only an outdated version which doesn't support the features I need. That said, the current CUDA version doesn't support what I need, either (bug report posted to NVidia a couple of weeks ago). <sigh>
23rd November 2012, 19:09 | #15732
Kid for Today
Join Date: Aug 2004
Posts: 3,477
I couldn't get any further detail, apart from some broken English repeatedly telling me that "it cannot work". You provided details on how not to break FSE in the mVR distribution files IIRC, and you said that you were in contact with them, so I thought that you could raise the subject someday, nothing more. If it doesn't break the D3D exclusive mode of EVR, it should be possible to do the same with mVR.
23rd November 2012, 20:26 | #15733
MPC-HC Project Manager
Join Date: Mar 2007
Posts: 2,317
Quote:
However, it is usually limited to only one of the codecs. In my case, my HD 4770 only had VC-1 DXVA2 on XP. There is no real technical limit AFAIK; it is just that the drivers don't support the capability. Rumors back then were that this was done on purpose, to push Vista.
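For illustration, the set of decoder profiles a driver exposes can be listed with IDirectXVideoDecoderService::GetDecoderDeviceGuids. A hedged sketch (not MPC-HC's actual code; ListDecoderProfiles is hypothetical) where such an XP driver might return only the VC-1 GUID:

Code:
#include <windows.h>
#include <dxva2api.h>

// Link dxguid.lib (or #include <initguid.h> first) so the mode GUIDs
// below are defined.
void ListDecoderProfiles(IDirectXVideoDecoderService* decSvc)
{
    UINT count = 0;
    GUID* guids = NULL;
    if (FAILED(decSvc->GetDecoderDeviceGuids(&count, &guids)))
        return;

    for (UINT i = 0; i < count; i++)
    {
        if (guids[i] == DXVA2_ModeVC1_D)
            OutputDebugStringA("VC-1 decoding exposed\n");
        else if (guids[i] == DXVA2_ModeH264_E)
            OutputDebugStringA("H.264 VLD decoding exposed\n");
        else if (guids[i] == DXVA2_ModeMPEG2_VLD)
            OutputDebugStringA("MPEG-2 VLD decoding exposed\n");
    }
    CoTaskMemFree(guids);  // the service allocates the array
}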
__________________
MPC-HC, an open source project everyone can improve. Want to help? Test Nightly Builds, submit patches or bugs and chat on IRC
23rd November 2012, 22:29 | #15735
Registered User
Join Date: Jul 2011
Posts: 57
Quote:
Anybody?
23rd November 2012, 22:37 | #15736
Registered User
Join Date: Jul 2005
Posts: 359
Quote:
Every time native DXVA2 decoding is on, colors look different from the reference (2 and 4). Colors, gamma, everything look the same between 1 and 3, and between 2 and 4. As far as I can tell, DXVA2 scaling looks similar to bilinear scaling. (edit: nm, no scaling was done, video was 1920x1080 on a 1080p display)

Driver: 12.10 with AMD Vision CP installed
GPU: MSI Radeon 5570
OS: Win 7 64-bit

Intel rig: This one was a bit more interesting. My AMD rig is connected to a 1080p display and I took my screenshots using a 1920x1080 video, so no scaling was ever done; the color differences were caused by software vs. DXVA2 decoding. My Intel rig is connected to a 32" LCD TV with a native resolution of 1360x768 that accepts input up to 1080p. At first I took all of my screenshots at the display's native resolution (1360x768), and every screenshot (2, 3 and 4) had different colors than the reference #1 shot. I thought that was odd, since on my AMD rig only the screenshots using DXVA2 decoding had different colors. Because screenshots with DXVA2 scaling are taken at the display/scaled resolution while madVR's bilinear scaling takes the screenshot at the source's native resolution, I thought that maybe the DXVA2 scaling caused the differences in color. So I then switched my display to 1080p and took screenshots, so no scaling was done. Sure enough, just like on my AMD rig, 1 and 3 looked the same and 2 and 4 looked the same.

Something else I noticed on my Intel rig: when I used DXVA2 scaling, I saw a green bar at the top.

Driver: 9.17.10.2867
GPU: Intel HD 4000
OS: Win 7 64-bit

FWIW, because you have to right-click to take a screenshot, every screenshot was taken not in FSE mode but in full-screen non-exclusive mode.

Last edited by rahzel; 24th November 2012 at 00:36.
23rd November 2012, 22:56 | #15737
Registered User
Join Date: Oct 2012
Posts: 27
Quote:
GPU: Radeon HD 5670
Driver: 11.5 without Catalyst, default driver from MS Update
OS: Windows 7 64-bit
23rd November 2012, 23:03 | #15738
Registered User
Join Date: Mar 2007
Posts: 934
Quote:
Using Windows 8, nVidia GTS 250, latest drivers (306.97).
__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7

Last edited by DragonQ; 23rd November 2012 at 23:08.
24th November 2012, 01:32 | #15739
Registered User
Join Date: Oct 2012
Posts: 27
Quote:
24th November 2012, 01:48 | #15740
Registered User
Join Date: May 2008
Posts: 1,840
Green line at the top here too, with an Intel HD 3000. Something that may help track down the problem: it's not limited to resizing. If madVR isn't resizing the image but the DXVA resizer is selected, the green line shows up when decoding with Intel QuickSync; switch to another decoder and it disappears. It also happens when resizing, no matter what the decoder. MPC also takes about twice as long to start up with madVR 0.85.x as it does with 0.84.x and EVR on the Intel HD 3000; after the initial start it's quick again when switching files. With NVidia, startup is about the same speed as it was with 0.84.x.

DXVA resize is pretty impressive on an NVidia 9500 GT though: sharp, with very few artifacts. Chroma resize is the culprit for most of the artifacts; switching to softcubic100 greatly reduced them but caused other image problems, so I switched back to lanczos3ar. Is it planned to make DXVA available for chroma resizing? As for defaults, DXVA is the second-largest load on the two GPUs here, so unless it can scale with the performance of the GPU it might not be good for preventing slide shows; nor is Lanczos, really. Any of the bicubics are about half the load of Lanczos, but maybe more important than the default resizer is the GPU queue. While Process Explorer doesn't show a significant load or RAM increase with 8 GPU queues, frame drops start around 60% load, whereas with 4 GPU queues they start around 80%. Then there's the problem I had way back with a frozen player using the default queues; luckily I was able to drop them to 4 after a few minutes and the issue disappeared. Is there any advantage to using 8 over 4? Dropping frames at a lower load, plus noticeably slower startup and seeks on lower-end hardware, are the disadvantages. 6/4 queues has never been a problem for me.

Last edited by turbojet; 24th November 2012 at 01:51.