Old 23rd November 2012, 16:38   #15721  |  Link
egur
QuickSync Decoder author
 
Join Date: Apr 2011
Location: Atlit, Israel
Posts: 916
Quote:
Originally Posted by madshi View Post
The key problem I'm currently facing is that DXVA2 outputs its NV12 results in IDirect3DSurface9 and I can't directly access that in my HLSL PixelShader rendering pipeline.
Can you specify what the problem is?
__________________
Eric Gur,
Processor Application Engineer for Overclocking and CPU technologies
Intel QuickSync Decoder author
Intel Corp.
Old 23rd November 2012, 16:54   #15722  |  Link
TheShadowRunner
Registered User
 
 
Join Date: Feb 2004
Posts: 399
Quote:
All of that talks about how to write a DXVA1 decoder, not a renderer. (..) It's not documented anywhere how VMR passes the DXVA1 bitstream packets to the GPU for decoding, so I don't know how to do that.
Ahhh OK, I get it.
I guess how Overlay passes DXVA1 bitstream packets to the GPU isn't documented either?

Quote:
Anyway, DXVA decoding on XP is problematic. It's always been very unstable (blue screens etc.) on my XP PC. And there's a fundamental problem when resizing the window, too. Resizing the window results in madVR trying to reset the D3D9 device, but in XP that is only possible if all GPU resources are destroyed first. That makes things very, very complicated when using external DXVA decoders, because they hold some GPU resources as well.
Hmm, when I resize the ZoomPlayer window, or zoom the video surface with the zoom function, while playing back MPEG2 content with the Cyberlink decoder in DXVA mode and any of the compatible renderers (Overlay, VMR7/9), there is no issue whatsoever compared to software decoding..
I've never had a BSOD when DXVA decoding (MPEG2) either ^^;

Quote:
Even if suddenly documentation for writing a DXVA1 renderer showed up, I'd probably not support it for XP because of the D3D9 device reset problem.
Damn.. guess DVD DXVA decode + perfect madVR vsync won't ever happen then :/

Quote:
FWIW, I fully support DXVA2 deinterlacing and scaling on XP and it works very stable for me.
Wait.. what? Typo? XD
__________________
XP SP3 / Geforce 8500 / Zoom Player
Old 23rd November 2012, 16:59   #15723  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,348
Quote:
Originally Posted by TheShadowRunner View Post
Damn.. guess DVD DXVA decode + perfect madVR vsync won't ever happen then :/
CUVID in LAV is the only accelerated decoding that's available for madVR on XP.
I've also heard rumors of some people actually getting DXVA2 decoding to work on XP, but I'm not sure if there's any truth to that.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 23rd November 2012, 17:08   #15724  |  Link
JarrettH
Registered User
 
Join Date: Aug 2004
Location: Canada
Posts: 860
What does the change log mean by external DXVA2 decoders? Hasn't LAV always had DXVA2 decoding or did it not work properly until now?
Old 23rd November 2012, 17:26   #15725  |  Link
TheShadowRunner
Registered User
 
 
Join Date: Feb 2004
Posts: 399
Quote:
Originally Posted by nevcairiel View Post
CUVID in LAV is the only accelerated decoding that's available for madVR on XP.
Yes Nev, but no equivalent of DXVA's Pixel Adaptive deinterlacing when going the CUVID way, right?

Quote:
I've also heard rumors of some people actually getting DXVA2 decoding to work on XP, but I'm not sure if there's any truth to that.
Interestingu.. ^^;
__________________
XP SP3 / Geforce 8500 / Zoom Player
Old 23rd November 2012, 17:52   #15726  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,348
Quote:
Originally Posted by TheShadowRunner View Post
Yes Nev, but no equivalent of DXVA's Pixel Adaptive deinterlacing when going the CUVID way, right?
Deinterlacing is independent of decoding. If you can get DXVA Deinterlacing to work in madVR (it should work according to madshi), then you can also use software decoding if you want.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 23rd November 2012, 18:05   #15727  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by DragonQ View Post
It'll be interesting to see the difference between, say, Jinc3 and DXVA2 luma scaling once the differences in cropping and colours are fixed. I bet I won't notice any difference, lol.
Whether there's a big difference or not depends a lot on the source material and the scaling factor.

Quote:
Originally Posted by ajp2k11 View Post
Any idea why some files work while others don't?
No. The log clearly says that madVR is not receiving VSync scanline information. I don't know why madVR would receive them for some videos but not for others. In any case, I don't think there's anything I can do about it...

Quote:
Originally Posted by 6233638 View Post
On my system with Nvidia, DXVA2 scaling is resulting in the wrong colours. DXVA native decoding is fine, when compared to software decoding.

It looks like it could be a colour matrix issue. (switching matrix in madVR doesn't fix it)
It appears to be using the BT.709 matrix on BT.601 content once it's scaled over a certain size.
Interesting! Could you make that test video available to me, please?

FWIW, I'm asking DXVA2 to scale NV12 -> NV12, so I'm wondering why NVidia thinks it should do a matrix change!

Quote:
Originally Posted by 6233638 View Post
Now that I know it's working correctly, I also ended up using the new saturation control last night, and it seems to do a good job.

While I am generally in the "watch the film as accurately as possible" group, there are some films where I wonder what kind of display they were mastered on, because colour looks dead and lifeless. (they tend to be shot on digital, often Red) A slight bump in saturation seems to help.

Before, After
Yeah, "After" definitely looks better...

Quote:
Originally Posted by leeperry View Post
I recently asked if they could get the D3D GUI of PotP to work with mVR; they replied "D3D UI cannot work with madVR"... It seems to be working with EVR (and possibly VMR9, not sure), so why not mVR?
What do I know? Ask them!

Quote:
Originally Posted by TheShadowRunner View Post
Ahhh OK, I get it.
I guess how Overlay passes DXVA1 bitstream packets to the GPU isn't documented either?
Correct.

Quote:
Originally Posted by TheShadowRunner View Post
Damn.. guess DVD DXVA decode + perfect madVR vsync won't ever happen then :/
Of course it will. Just update to win7.

Quote:
Originally Posted by TheShadowRunner View Post
Wait.. what? Typo? XD
Typo? Why? DXVA2 deinterlacing and scaling is supported on XP, just not DXVA2 decoding.

Quote:
Originally Posted by JarrettH View Post
What does the change log mean by external DXVA2 decoders? Hasn't LAV always had DXVA2 decoding or did it not work properly until now?
It was working, but only when using "copy-back" mode, not "native" mode. And all the other DXVA2 decoders didn't work at all with madVR because they don't have a "copy-back" mode. With the new madVR build, LAV now works in native mode, too, and all the other DXVA decoders should now work as well (at least in theory).

Quote:
Originally Posted by egur View Post
Can you specify what the problem is?
Sure. When rendering with D3D9, the usual way to do things is to first call IDirect3DDevice9::BeginScene(), then you initialize the texture samplers by calling IDirect3DDevice9::SetTexture(), then you do some other stuff and finally you call IDirect3DDevice9::EndScene().
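Roughly, that render pass looks like this (a minimal sketch only; device, myTexture and pixelShader are placeholder names, and all error handling is omitted):

Code:
#include <d3d9.h>

// Minimal sketch of a typical D3D9 per-frame render pass.
void RenderFrame(IDirect3DDevice9* device,
                 IDirect3DTexture9* myTexture,
                 IDirect3DPixelShader9* pixelShader)
{
    device->BeginScene();

    // Bind the source texture to sampler 0 so the pixel shader can read it.
    device->SetTexture(0, myTexture);

    // Bind the pixel shader that does the actual processing.
    device->SetPixelShader(pixelShader);

    // Draw a full-screen quad (two triangles); the vertex buffer / vertex
    // declaration setup is assumed to have been done during initialization.
    device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);

    device->EndScene();
}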

Now the first problem is: IDirect3DDevice9::SetTexture() wants a texture. It doesn't accept a surface. So I can't use the NV12 surface I got from DXVA2 here. Which means I need to convert the DXVA2 surface to a texture somehow. And here comes the next problem: Textures don't support NV12. So I can't just create an NV12 texture and copy the surface to the texture. Simply not possible. I could try to create a YUY2 or UYVY texture, which the GPU might (or might not) support. But YUY2 and UYVY are 4:2:2, so if I copy the NV12 surface to the YCbCr texture, the GPU driver already has to upsample chroma from 4:2:0 to 4:2:2 somehow and I have no control over how that is done. But the problems don't end here. If I use IDirect3DDevice9::SetTexture() with any kind of YCbCr texture, my pixel shaders (which do all the work) will never see the true YCbCr data. Instead Direct3D will convert the YCbCr data into RGB behind my back. So again chroma upsampling is done from 4:2:2 to 4:4:4 behind my back, and on top also YCbCr -> RGB conversion with an unknown color matrix.
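Whether a driver exposes any YCbCr format as a texture at all can at least be probed up front, along these lines (a sketch; the FOURCC formats and flags are the standard D3D9 ones, but actual support is entirely driver-dependent):

Code:
#include <d3d9.h>

// Probe whether the driver will let us create a texture in a FOURCC
// YCbCr format (NV12, YUY2, UYVY). Returns true if the format is usable
// as a texture on the default adapter; NV12 is typically rejected.
bool IsTextureFormatSupported(IDirect3D9* d3d, D3DFORMAT fourcc)
{
    return SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,          // current display mode format (assumed)
        0,                        // no special usage flags
        D3DRTYPE_TEXTURE,
        fourcc));
}

// Usage:
//   const D3DFORMAT NV12 = (D3DFORMAT)MAKEFOURCC('N','V','1','2');
//   const D3DFORMAT YUY2 = (D3DFORMAT)MAKEFOURCC('Y','U','Y','2');
//   bool nv12Tex = IsTextureFormatSupported(d3d, NV12);  // usually false
//   bool yuy2Tex = IsTextureFormatSupported(d3d, YUY2);  // driver-dependent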

What I really want is to be able to access the original YCbCr data in my pixel shaders. But there's no clean way to do this. When using software decoding, madVR stores the original YCbCr data into an RGB texture. The GPU doesn't know it's actually YCbCr data, only madVR knows that. So I can process the data without someone converting the hell out of the data behind my back. Unfortunately this only works by using the CPU to force the NV12 data into an RGB texture. DXVA2 stores its results in GPU RAM, so if I want to use the CPU to force the NV12 data untouched into an RGB texture, I have to use "copy-back" which is bad for performance.
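To illustrate the general idea of that software path (only a rough sketch, not madVR's actual texture layout): the CPU locks a plain lockable texture and copies the raw plane bytes into it untouched, so the GPU never knows it is holding YCbCr data:

Code:
#include <d3d9.h>
#include <cstring>

// Copy the NV12 luma plane (width x height bytes) from system memory into
// an L8 texture, byte for byte. A second texture could hold the interleaved
// CbCr plane the same way. Purely illustrative; the texture is assumed to
// have been created in a lockable pool.
void UploadLumaPlane(IDirect3DTexture9* lumaTex,
                     const BYTE* srcLuma, int width, int height, int srcPitch)
{
    D3DLOCKED_RECT lr;
    if (FAILED(lumaTex->LockRect(0, &lr, nullptr, 0)))
        return;

    BYTE* dst = static_cast<BYTE*>(lr.pBits);
    for (int y = 0; y < height; ++y)
    {
        // Row-by-row copy, because texture pitch and source pitch may differ.
        memcpy(dst + y * lr.Pitch, srcLuma + y * srcPitch, width);
    }
    lumaTex->UnlockRect(0);
}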

I believe I can solve this with Intel by using OpenCL, and with NVidia by using CUDA. I don't see any other solution. If the Intel drivers have a secret "SplitNv12SurfaceToRgbTextureWithoutTouchingTheData" API then please let me know...
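The CUDA route would presumably go through the D3D9 interop API, along these lines (purely a sketch of the idea, assuming the CUDA context is already associated with the D3D9 device; whether a DXVA2 decode surface can actually be registered like this is exactly the open question, and error handling is omitted):

Code:
#include <cuda_runtime.h>
#include <cuda_d3d9_interop.h>
#include <d3d9.h>

// Register a D3D9 surface and an RGB texture with CUDA, then let a kernel
// (not shown) repack the NV12 bytes into the texture without the driver
// doing any colour conversion. Names and flow are illustrative only.
void CopyNv12ToTextureViaCuda(IDirect3DSurface9* nv12Surface,
                              IDirect3DTexture9* rgbTexture,
                              cudaStream_t stream)
{
    cudaGraphicsResource* res[2] = {};
    cudaGraphicsD3D9RegisterResource(&res[0], nv12Surface, cudaGraphicsRegisterFlagsNone);
    cudaGraphicsD3D9RegisterResource(&res[1], rgbTexture,  cudaGraphicsRegisterFlagsNone);

    cudaGraphicsMapResources(2, res, stream);

    cudaArray_t srcArray = nullptr, dstArray = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&srcArray, res[0], 0, 0);
    cudaGraphicsSubResourceGetMappedArray(&dstArray, res[1], 0, 0);

    // ... launch a kernel here that reads srcArray and writes dstArray,
    // copying the raw sample values byte for byte ...

    cudaGraphicsUnmapResources(2, res, stream);
    cudaGraphicsUnregisterResource(res[0]);
    cudaGraphicsUnregisterResource(res[1]);
}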
Old 23rd November 2012, 18:07   #15728  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by nevcairiel View Post
Deinterlacing is independent of decoding. If you can get DXVA Deinterlacing to work in madVR (it should work according to madshi), then you can also use software decoding if you want.
Yes, DXVA2 deinterlacing and scaling in highest supported GPU quality works perfectly fine in XP. Stable and (relatively) good quality. At least with my AMD 3850 and 7770 cards.

(You have to install .NET 3.0 or higher to get DXVA2 working on XP at all.)
Old 23rd November 2012, 18:22   #15729  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by madshi View Post
Interesting! Could you make that test video available to me, please?

FWIW, I'm asking DXVA2 to scale NV12 -> NV12, so I'm wondering why NVidia thinks it should do a matrix change!
http://www.datafilehost.com/download-4fdb9a1d.html
Old 23rd November 2012, 18:40   #15730  |  Link
mbordas
Registered User
 
Join Date: Jul 2011
Posts: 65
Quote:
Originally Posted by madshi View Post
I believe I can solve this with Intel by using OpenCL, and with NVidia by using CUDA.

Don't the latest nvidia drivers support OpenCL? So you could add support for both with one implementation?
Old 23rd November 2012, 19:06   #15731  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by 6233638 View Post
http://www.datafilehost.com/download-4fdb9a1d.html
Thanks. This is really bad. I've changed my code to explicitly tell DXVA2 that source and destination matrix/gamut/transfer/levels are identical, but NVidia still insists on switching the matrix depending on resolution. Not sure if there's any way for me to work around this. Fortunately this doesn't seem to happen with AMD. Not tested with Intel yet.
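For reference, "telling DXVA2 that source and destination are identical" amounts to filling the same DXVA2_ExtendedFormat into both the video sample and the blt parameters before VideoProcessBlt; a sketch with assumed example values (the fields madVR actually sets may differ):

Code:
#include <d3d9.h>
#include <dxva2api.h>

// Fill one DXVA2_ExtendedFormat and use it for both the input sample and
// the blt destination, so the driver has no reason to convert anything.
// The concrete values below are just an example (BT.601 SD content).
void DescribeIdenticalFormats(DXVA2_VideoSample* sample,
                              DXVA2_VideoProcessBltParams* blt)
{
    DXVA2_ExtendedFormat fmt = {};
    fmt.SampleFormat           = DXVA2_SampleProgressiveFrame;
    fmt.VideoChromaSubsampling = DXVA2_VideoChromaSubsampling_MPEG2;
    fmt.NominalRange           = DXVA2_NominalRange_16_235;
    fmt.VideoTransferMatrix    = DXVA2_VideoTransferMatrix_BT601;
    fmt.VideoPrimaries         = DXVA2_VideoPrimaries_SMPTE170M;
    fmt.VideoTransferFunction  = DXVA2_VideoTransFunc_709;

    sample->SampleFormat = fmt;   // how the source should be interpreted
    blt->DestFormat      = fmt;   // destination described identically

    // SrcSurface, SrcRect, DstRect, TargetRect etc. are filled in elsewhere,
    // followed by IDirectXVideoProcessor::VideoProcessBlt(...).
}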

Quote:
Originally Posted by mbordas View Post
Don't the latest nvidia drivers support OpenCL? So you could add support for both with one implementation?
NVidia supports OpenCL, but only an outdated version which doesn't support the features I need. That said, the current CUDA version doesn't support what I need, either (bug report posted to NVidia a couple of weeks ago). <sigh>
Old 23rd November 2012, 19:09   #15732  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by madshi View Post
What do I know? Ask them
I couldn't get any further details apart from some broken English telling me that "it cannot work". You provided details on how not to break FSE in the mVR distribution files IIRC, and you said you were in contact with them, so I thought you could raise the subject someday, nothing more. If it doesn't break the D3D exclusive mode of EVR, it should be possible to do the same with mVR.
Old 23rd November 2012, 20:26   #15733  |  Link
tetsuo55
MPC-HC Project Manager
 
Join Date: Mar 2007
Posts: 2,317
Quote:
Originally Posted by nevcairiel View Post
I've also heard rumors of some people actually getting DXVA2 Decoding to work on XP, not sure if there is truth in that.
Depending on hardware and driver versions you can get DXVA2 on XP.
However, it is usually limited to only one of the codecs. In my case my HD 4770 only had VC1 DXVA2 on XP.

There is no real technical limit AFAIK, it is just that the drivers don't support the capability. Rumors back then were that this was on purpose, to push Vista.
__________________
MPC-HC, an open source project everyone can improve. Want to help? Test Nightly Builds, submit patches or bugs and chat on IRC
Old 23rd November 2012, 21:23   #15734  |  Link
JarrettH
Registered User
 
Join Date: Aug 2004
Location: Canada
Posts: 860
I keep meaning to tell you: since 0.85, MPC acts as if the window is always on top. I have to click minimize to get it out of the way. Is there a way to change this behaviour?
Old 23rd November 2012, 22:29   #15735  |  Link
ajp2k11
Registered User
 
Join Date: Jul 2011
Posts: 57
Quote:
Originally Posted by madshi View Post
No. The log clearly says that madVR is not receiving VSync scanline information. I don't know why madVR would receive them for some videos but not for others. In any case, I don't think there's anything I can do about it...
It seems to manifest mostly with 1080p files, although some work. Especially fullscreen 16:9 1080p seems hard. Very weird, I didn't have any problems at all with Win7. Any suggestions where I should start looking? I'm desperate...

Anybody?
Old 23rd November 2012, 22:37   #15736  |  Link
rahzel
Registered User
 
Join Date: Jul 2005
Posts: 359
Quote:
Originally Posted by madshi View Post
Can you guys please test the following:

(1) Use software decoding and e.g. Bilinear scaling.
(2) Use native DXVA2 decoding and e.g. Bilinear scaling.
(3) Use software decoding and DXVA2 scaling.
(4) Use native DXVA2 decoding and DXVA2 scaling.
AMD rig:

Every time native DXVA2 decoding is on, colors look different from the reference (2 and 4). Colors/gamma, everything looks the same between 1 and 3, and between 2 and 4. As far as I can tell, DXVA2 scaling looks similar to Bilinear scaling. (edit: nm, no scaling was done, the video was 1920x1080 on a 1080p display)

Driver: 12.10 with AMD Vision CP installed
GPU: MSI Radeon 5570
OS: Win 7 64-bit

----------------------------------------

Intel Rig:

This one was a bit more interesting. My AMD rig is connected to a 1080p display. I took my screenshots using a 1920x1080 video so no scaling was ever done. The color differences were caused by the software vs DXVA2 decoding.

My Intel rig is connected to a 32" LCD TV with a native resolution of 1360x768 that accepts up to 1080p. At first I took all of my screenshots at my display's native resolution (1360x768), and every screenshot (2, 3 and 4) had different colors than the reference #1 shot. I thought that was odd since on my AMD rig only the screenshots using DXVA2 decoding had different colors. Because DXVA2 scaling takes screenshots at the display/scaled resolution while madVR's bilinear scaling took the screenshot at the native resolution, I thought that maybe the DXVA2 scaling caused the differences in color. So I then switched my display to 1080p and took screenshots, so no scaling was done. Sure enough, just like on my AMD rig, 1 and 3 looked the same and 2 and 4 looked the same.

Something else I noticed is that on my Intel rig, when I used DXVA2 scaling, I saw a green bar at the top.

Driver: 9.17.10.2867
GPU: Intel HD 4000
OS: Win 7 64-bit

FWIW, because you have to right-click to take a screenshot, none of the screenshots was taken in FSE mode, but in full-screen non-exclusive mode instead.

Last edited by rahzel; 24th November 2012 at 00:36.
Old 23rd November 2012, 22:56   #15737  |  Link
crotecun
Registered User
 
Join Date: Oct 2012
Posts: 27
Quote:
Originally Posted by madshi View Post
Can you guys please test the following:

(1) Use software decoding and e.g. Bilinear scaling.
(2) Use native DXVA2 decoding and e.g. Bilinear scaling.
(3) Use software decoding and DXVA2 scaling.
(4) Use native DXVA2 decoding and DXVA2 scaling.

The reference is (1) for colors, brightness, contrast and gamma. This is how the image must look. Please check which of (2), (3) and (4) are different from the reference and which are identical. Also please check if maybe (4) is even more different than (2) and (3) are.

(I don't need screenshots.)

Please also list your GPU, your drivers, and your OS.

Thank you!!!
DXVA2 decoding and software decoding show no difference in colors/gamma. DXVA2 scaling looks slightly but noticeably sharper/clearer than bilinear scaling.

GPU: Radeon HD 5670
Driver: 11.5 without Catalyst, default driver from MS update
OS: Windows 7 64-bit
Old 23rd November 2012, 23:03   #15738  |  Link
DragonQ
Registered User
 
Join Date: Mar 2007
Posts: 934
Quote:
Originally Posted by madshi
Can you guys please test the following:

(1) Use software decoding and e.g. Bilinear scaling.
(2) Use native DXVA2 decoding and e.g. Bilinear scaling.
(3) Use software decoding and DXVA2 scaling.
(4) Use native DXVA2 decoding and DXVA2 scaling.
I can't test 2 & 4 on the material I was testing since it doesn't work with DXVA2 for me (576p/25). 3 has different colours (yellower) than 1, and 3 is also slightly squished horizontally compared to 1 (by a few pixels at most).

Using Windows 8, nVidia GTS 250, latest drivers (306.97).
__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7

Last edited by DragonQ; 23rd November 2012 at 23:08.
Old 24th November 2012, 01:32   #15739  |  Link
crotecun
Registered User
 
Join Date: Oct 2012
Posts: 27
Quote:
Originally Posted by TheLion View Post
My thoughts regarding default scalers:

As long as DXVA scaling isn't 100% stable and predictable I would vote for Catmull-Rom for all scaling as default. Bilinear defeats the purpose of using a high quality renderer. And the more elaborate algorithms may be too demanding for "casual hardware". I think Catmull-Rom (+ AR?) is the sweet spot here.
Quote:
Originally Posted by madshi View Post
How about Lanczos3 for upscaling, Catmull-Rom for downscaling and Bilinear for chroma? All without AR and without linear light. That should be fast enough for most (but not all) GPUs.
What counts as "casual hardware" anyway? And with these settings, what types of content could we expect to play smoothly on this hardware?
Old 24th November 2012, 01:48   #15740  |  Link
turbojet
Registered User
 
Join Date: May 2008
Posts: 1,840
I'm seeing the green line at the top here too, with an Intel HD 3000. Something that may help track down the problem: it's not limited to resizing. If madVR isn't resizing the image but the DXVA resizer is selected, the green line shows up when decoding with Intel QuickSync; switch to another decoder and it disappears. It also happens when resizing, no matter what the decoder. MPC also takes about twice as long to start up with madVR 0.85.x as it does with 0.84.x and EVR on the Intel HD 3000; after the initial start it's quick again when switching files. With NVidia, startup is about the same speed as it was with 0.84.x.

DXVA resize is pretty impressive on the NVidia 9500GT though: sharp, with very few artifacts. Chroma resize is the culprit for most of the artifacts; switching to SoftCubic100 greatly reduced them but caused other image problems, so I switched back to Lanczos3 AR. Is it planned to have DXVA available for chroma resizing?

As for defaults, DXVA is the 2nd-largest load on the 2 GPUs here, so unless it can scale with the performance of the GPU it might not be good for preventing slide shows, nor is Lanczos really. Any of the bicubics are about half the load of Lanczos, but maybe more important than the default resizer is the GPU queue. While Process Explorer doesn't show a significant load or RAM increase when using 8 GPU queues, frame drops start around 60% load, while with 4 GPU queues they start around 80%. Then there's the problem I had way back when with a frozen player with the default queues; luckily I was able to drop them to 4 after a few minutes and the issue disappeared. Is there any advantage to using 8 over 4? Dropping frames at a lower load and noticeably slower startup and seeks on lower-end hardware are disadvantages. 6/4 queues has never been a problem for me.

Last edited by turbojet; 24th November 2012 at 01:51.