Doom9's Forum > Capturing and Editing Video > New and alternative a/v containers
Old Yesterday, 18:10   #22141  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,484
Quote:
Originally Posted by nevcairiel View Post
"not native mode" means copy-back, just like with DXVA2.
I thought so. Well, I don't think that's a problem, since a copy has to happen anyway, and it probably won't put additional load on the dGPU, unlike copy-back on a single GPU that handles both rendering and decoding.
Or am I missing something?
Old Yesterday, 18:59   #22142  |  Link
XinHong
Registered User
 
Join Date: Jan 2011
Location: France
Posts: 29
Is it possible to get D3D11VA without a D3D11 graphics card?

Because with an Intel HD3300 (DirectX 10.1) and madVR 0.92.1 I get a green screen. If I switch to the GT 525M (DirectX 11), it's OK.

Thanks
Old Yesterday, 19:03   #22143  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,484
Yes, I tested it on a 9600 GT on Windows 10. However, madVR isn't working with DX11 on this card, and with DX9 the decoder falls back to copy mode (the same happens with EVR).
Edit: oh, you mentioned a green screen. That's the result I got as well.
So either native mode doesn't work without DX11 hardware, or madVR's DX11 path doesn't work on non-DX11 hardware (I'd bet on the latter).

Edit: the Windows 10 video app uses DX11, so it uses D3D11VA decoding as well. It works without a green screen on the 9600 GT. Very likely a madVR issue then.

Last edited by aufkrawall; Yesterday at 20:02.
Old Yesterday, 20:44   #22144  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 8,797
I'm not sure whether that's my problem or LAV's. Could be that my D3D11 pixel shader is somehow not compatible with DX9 GPUs. Or could be that the D3D11 device created by LAV needs to be created with different flags? I don't know. nevcairiel, what do you think?
Old Yesterday, 20:45   #22145  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 8,806
No idea without any hardware at hand. I can go dig in my GPU bin to see if I still have something that old which still has decoding support.
Check with EVR whether copy-back works, as an intermediate step? That'll tell you if decoding works.

The best, er.., worst, I can do is apparently a GTX 260-216, which is a DX10 GPU. Will test that tomorrow.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; Yesterday at 22:09.
Old Yesterday, 22:09   #22146  |  Link
Great Dragon
Registered User
 
Join Date: Feb 2005
Location: Ukraine, Lviv
Posts: 120
Hello,

Is it normal that using the NVIDIA CUVID decoder makes the GPU run at maximum performance (clock speed)?
With the regular DXVA decoder, the GPU stays in regular desktop mode. I'm using a GTX 1070.

Do I have some control over the CUVID decoder settings?
And what's the difference between CUVID and DXVA?
Old Yesterday, 22:43   #22147  |  Link
strumf666
Registered User
 
Join Date: Jan 2012
Posts: 63
D3D11 native on an Intel HD 2500 (i5-3470) works for me, if that helps. Win 10 Pro, PotPlayer 64-bit, Intel drivers 10.18.10.4425.
In a very quick H.264 test, CPU utilization is a few percent lower than with DXVA2 copy-back, but render times are much higher.

Last edited by strumf666; Yesterday at 22:59.
Old Yesterday, 23:30   #22148  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,484
Quote:
Originally Posted by Great Dragon View Post
Is it OK that using NVIDIA CUVID decoder makes GPU to run on maximum performance (clock speed)?
That's normal for all CUDA apps; unfortunately CUVID isn't an exception.
DXVA2/D3D11VA, on the other hand, will give you lower GPU clocks than even CPU decoding with madVR, because the driver apparently chooses lower clock speeds when hardware decoding is active than when it isn't.

I don't really see any reason to use CUVID anymore. IMHO both CUVID and QuickSync could probably even be removed from LAV Filters in favor of the new D3D11VA decoding.
It's probably not too harsh to say that CUVID and the QuickSync decoder are end of life, or even already dead.
Old Today, 02:10   #22149  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,484
With the new build, I can confirm that selecting the IGP for decoding works with D3D11VA, great.
Buuut: would it be possible to make copy-back selectable like for DXVA2? Or to do that automatically when the selected device is not the primary graphics adapter?
It's kind of odd that it currently only works with renderers that refuse a direct D3D11VA connection.
Old Today, 02:22   #22150  |  Link
AngelGraves13
Registered User
 
Join Date: Dec 2010
Posts: 207
VC-1 playback doesn't work for me with DX11 LAV Video (0.70.2-45 nightly) and madVR 0.92.1b. Can someone else confirm this on their end? My GPU is a 1080 Ti. DXVA2 native and copy-back work fine. Software decoding works fine too, as does CUDA.

Last edited by AngelGraves13; Today at 02:34.
Old Today, 08:55   #22151  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 8,806
Quote:
Originally Posted by aufkrawall View Post
Would it be possible to make copyback selectable like for DXVA2? Or do that automatically when a device is selected which is not the primary graphics adapter?
I didn't want a separate entry in the decoder box, because that doesn't allow seamless fallback between native and copy-back.

However, I have now changed it to only use native mode when you select "Automatic", and updated the instruction hint to that effect. It probably makes the most sense to always obey user settings, and for testing purposes it's probably a good thing as well.

Quote:
Originally Posted by AngelGraves13 View Post
VC-1 playback doesn't work for me with DX11 LAV Video (0.70.2-45 nightly) and madvr 0.92.1b. Can someone else confirm this on their end? My GPU is a 1080 Ti. DXVA2 N and CB work fine. Also, software works fine too, as well as CUDA.
VC-1 works just fine here. Make sure you don't use far too many buffers (i.e. the input queue size in madVR); that can sometimes cause decoding failures.

I should probably put an artificial limit in place for VC-1, since it seems to be more affected by this than the others, but I would generally not recommend using extremely large decoder queues in madVR either way.

Last edited by nevcairiel; Today at 13:10.
Old Today, 13:58   #22152  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,484
I think with native hardware decoding it's safe to stick to a GPU-side queue that is as short as possible (i.e. 4 frames with madVR).
With a short GPU/presentation queue, presentation glitches are sometimes reported right after playback starts, but I haven't yet seen any at a later point. It also reduces input lag when e.g. changing the volume via the mouse wheel.
However, setting the CPU queue (isn't it actually a D3D11VA queue in our case?) to only 4 frames as well gives me some weird, intense stuttering which isn't detected by madVR's stats. It already seems to be gone with 5 frames, but to be safe I set it to 8.

Only 12 more hours till next nightly build.
Old Today, 14:08   #22153  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 8,806
4 is probably too short, especially because it also limits the GPU queues then; I would personally go with 8, or 16 at most.

I've been thinking about limiting the maximum number of surfaces more strictly in all cases, to something reasonable like 32 (which is what VC-1 uses now). There is a hard limit of 127 (i.e. 7-bit), which is required for DXVA to function at all, but using that many buffers really has no advantage in native mode.

Last edited by nevcairiel; Today at 14:11.
Tags
decoders, directshow, filters, splitter
All times are GMT +1. The time now is 17:16.