6th September 2019, 06:24 | #581 | Link |
Registered User
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,901
|
CUVID always runs at maximum 3D clocks, and it feels a tad faster than any other decoder.
Also, CUVID exposes more codecs than D3D11 or DXVA2, such as MPEG-4 ASP (DivX) and others, although those are not very common nowadays.
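As a quick illustration (LAV's CUVID path talks to NVIDIA's NVCUVID/NVDEC API, and ffmpeg exposes the same hardware decoders, so its list gives a rough idea, assuming a build with CUVID support):

    ffmpeg -decoders | findstr cuvid

mpeg4_cuvid shows up in that list; as said above, the DXVA2/D3D11 paths give you no equivalent for it.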
__________________
Win 10 x64 (19042.572) - Core i5-2400 - Radeon RX 470 (20.10.1) HEVC decoding benchmarks H.264 DXVA Benchmarks for all |
6th September 2019, 12:05 | #582 | Link |
Registered User
Join Date: Oct 2016
Posts: 896
|
garson looked at the clock speeds and said they were the same, so presumably it's a situation where rendering itself maxes out the GPU even with DXVA. With such a low-power card, it's possible the API could make a small difference, even if it's an edge case?
__________________
HTPC: Windows 10 22H2, MediaPortal 1, LAV Filters/ReClock/madVR. DVB-C TV, Panasonic GT60, Denon 2310, Core 2 Duo E7400 oc'd, GeForce 1050 Ti 536.40 |
6th September 2019, 13:17 | #583 | Link | |
Registered User
Join Date: May 2011
Posts: 51
|
Quote:
I ran GPU-Z for 30 minutes and logged the results to a log file. Then I compared the log files; everything looked pretty much the same, except for video memory usage (~850 MB with D3D11 vs ~900 MB with CUVID).
9th September 2019, 09:36 | #586 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
I have tried every combination I can of CUVID vs DXVA2 CB & DXVA2 Native with LAV Filters and Zoomplayer, and...
EVR, MadVR, MadVR deinterlacing on/off, Video forced, Film, Centre or All, exclusive, windowed, DX11 presentation on/off, and... I cannot get DXVA to deinterlace the same as CUVID. Only DXVA2 copy-back with software deinterlacing (Weston or YADIF) looks similar on my Australian DTV broadcasts, but this uses more CPU. Windows also reports ~40% GPU usage for most of these tests (much lower for EVR of course, as MadVR is not scaling etc.). I understand the GPU clocks might still be higher for CUVID, but it's also worth noting that my RTX fans do not spin up for any of the tests, CUVID or not. I can upload a test video if anyone is interested. Perhaps our broadcasts use a strange mixed mode, but the combing is quite pronounced if not using CUVID or software deinterlacing. Maybe there is something in my setup that does not turn on the VA HW deinterlacing in the renderer? MadVR shows deinterlacing as on (which looks the same as when it's off).
9th September 2019, 14:02 | #588 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
Not sure of anything at the moment.
I tried force film and force video and neither makes any difference to the combing during motion. I also did a bunch of tests with GPU-Z tonight and found that CUVID does not set the clock to max. It does set it higher than the other modes, but it still scales with workload. If I run Zoomplayer in a small window the clock runs at about 1400 MHz; at full screen it goes to 1900 MHz. The power draw is also only ~50 W with a small window and goes up to ~175 W at full screen (3840x2160). I would expect the extra load to be mainly MadVR's scaling workload (checking with EVR next time I have a play around will confirm this). So comparing CUVID with DXVA2 (and D3D11 for that matter), all ended up around 70% GPU load, all at about 160-175 W, and all set the clocks to near max to achieve the MadVR settings I have. The only difference I am left with is that CUVID is not giving me any deinterlacing artifacts.
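If anyone wants to double-check the clock and power numbers without GPU-Z, the NVIDIA driver's nvidia-smi tool can log the same counters from the command line (these are standard nvidia-smi query fields; run it while the video is playing):

    nvidia-smi --query-gpu=clocks.gr,clocks.mem,power.draw,utilization.gpu --format=csv -l 1

It prints one CSV line per second, which makes it easy to diff the decoder modes.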
9th September 2019, 14:16 | #589 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
The test clip I am using is a TV show with the lotto numbers rolling across the bottom of the screen. These numbers break apart into combed half-frames if not using CUVID.
It's possible the lotto overlay is at a different cadence to the main show. I will need to dissect the clip some more to see what's actually there, but CUVID definitely handles it correctly, whatever it is. Software deinterlacers also work fine on it.
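One way I could dissect it (a sketch, assuming an ffmpeg build is at hand; sample.ts stands in for the real file name): ffmpeg's idet filter classifies each frame as progressive or TFF/BFF interlaced, which should show whether the stream changes cadence while the overlay is on screen:

    ffmpeg -i sample.ts -an -vf idet -f null -

At the end of the run it prints cumulative single-frame and multi-frame detection totals; nonzero TFF/BFF counts on top of a mostly progressive total would support the mixed-cadence theory (changing the filter to idet,metadata=mode=print dumps the per-frame classification if I need to pin it to the lotto segment).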
10th September 2019, 03:24 | #591 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
Here is a sample (~110 MB); at the 27-second mark the lotto numbers roll in, which show combing artifacts with everything but CUVID or software deinterlacing for me.
https://1drv.ms/v/s!AhrKCcZdae3QgY8E...CHmEA?e=X1c4HK

Note: My settings were:
Splitters: TsReader from MediaPortal & LAV Splitter
Decoders: LAV Video Decoder - DXVA2 CB, DXVA2 Native, D3D11 Native, CUVID
Renderers: EVR, MadVR
Player: Zoomplayer

Last edited by Calvi; 10th September 2019 at 05:21.
10th September 2019, 20:18 | #592 | Link | |
Registered User
Join Date: Aug 2008
Posts: 343
|
Quote:
11th September 2019, 03:03 | #593 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
Thanks for taking a look, littleD.
I have repeated the same observations on a totally different machine (a laptop with an NVIDIA GT 330). I also repeated them with MPC-HC instead of Zoomplayer, and again only CUVID is 100% working for me. I also tried LAV output at 25 vs 50 frames, MadVR again as film vs video, forcing deinterlacing in LAV, etc. The results are always the same on both machines, even though a huge number of other things are different. Here are three screen caps from MPC-HC using the GT330 and internal LAV (0.70.2.1-git).

Note: My main system is an RTX 2070, driver 436.15, using LAV 0.74 and Zoomplayer 14.5, and it exhibits the same behaviour, but the above tests are from my laptop with a GT330, driver 341.02, MPC-HC & internal LAV 0.70.2.1.

Last edited by Calvi; 11th September 2019 at 03:10.
11th September 2019, 05:37 | #594 | Link |
Registered User
Join Date: Aug 2008
Posts: 343
|
You did not mention this explicitly, sorry, that's why I ask: did you set DXVA CB AND choose the NVIDIA adapter? You should not leave "Hardware device to use" in LAV Filters on automatic GPU selection.
Last edited by littleD; 11th September 2019 at 05:39.
11th September 2019, 09:36 | #595 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
Yes, I tried all combinations of that as well. There is only one adapter in my HTPC; on Auto it chooses the RTX 2070.
I also set it myself and the result was the same. I repeated this with my laptop, which has two displays, and tried Auto (it chooses the GT330), GT330 #1 and GT330 #2. All the same results again.

Note: your adapter is Intel, so it may behave completely differently. I'll try my desktop when I get a chance, as it has an ATI card.
11th September 2019, 13:40 | #598 | Link |
Registered User
Join Date: Jul 2019
Posts: 38
|
I have also tried my desktop now with an ATI 5700 (yes, it's old). Of course I cannot test CUVID on it, but DXVA2 CB deinterlaces this clip *almost* correctly.
It fails on the first lotto ball, then manages to deal with all the middle ones, and then fails on the last one. It's as if the HW deinterlacer turns on late and off early (probably the best the HW deinterlacer in this card can do).

Note: Again, with software deinterlacing (e.g. YADIF) all frames are correctly deinterlaced (unsurprising, as this is independent of the video card, and YADIF is probably better than this old card's HW deinterlacing).

Note: The above tests were with EVR. With MadVR the results are the same, except that forcing film mode causes all the lotto balls to not be deinterlaced (as expected). If correctly set to deinterlace, the results are identical to EVR, so the deinterlacing in the renderer is working fine here (as well as this card can manage, anyway).

All of the above is what I would expect for an ATI card (and explains why littleD sees DXVA2 working on Intel), but I am still left with a discrepancy between CUVID and DXVA2 for NVIDIA cards. I would love to know whether I am doing something wrong or if there is an actual difference in the HW deinterlacing here. Has anyone else tried my sample clip with an NVIDIA card?

Last edited by Calvi; 11th September 2019 at 13:54.
11th September 2019, 17:24 | #599 | Link |
Registered User
Join Date: Oct 2016
Posts: 896
|
I can confirm the issue with my 1050 Ti, and I also have a 'solution' for you: in the NVIDIA control panel, uncheck 'Use inverse telecine', and DXVA will have the same deinterlacing behaviour as CUVID.
The problem isn't really the deinterlacing quality; it's that it just doesn't activate reliably. The programme being played is 2:2 pulldown, so there's no need to deinterlace it, and apparently NVIDIA doesn't reliably detect the small video insert with the lotto balls being overlaid onto it, while CUVID in video mode obviously does, as it treats every frame as video. Note that I say 'solution' because doing this will disable progressive cadence detection in NVIDIA's deinterlacer, but if you only watch modern 25 fps programming I'd say it's safe. The quality of 2:2 pulldown material will be slightly diminished, however (you could sometimes see shimmering or other artifacts). On my Radeon, the GPU detects the interlaced overlay when the 2nd ball appears (which is acceptable IMO) and correctly deinterlaces the rest after that, except the white ball. With cadence detection enabled, NVIDIA detects the interlacing in the middle of the second ball's travel, but then loses it again pretty much immediately, and then does the same again with the 3rd ball.
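For anyone wondering how detection can miss a small insert like this, here's a toy sketch of the general idea (an illustration of the principle only, not NVIDIA's actual algorithm, and the threshold is made up): cadence detectors score each frame for combing and only switch to deinterlacing when the score crosses a threshold, and a thin interlaced strip on an otherwise progressive frame barely moves a frame-global score.

    import numpy as np

    def comb_score(luma):
        # Mean absolute difference between each line and the average of
        # its two vertical neighbours; interlaced motion makes adjacent
        # lines disagree, so combed content drives this score up.
        l = luma.astype(np.int32)
        interp = (l[:-2] + l[2:]) // 2
        return float(np.abs(l[1:-1] - interp).mean())

    def looks_interlaced(luma, threshold=6.0):
        # Frame-global decision: a small combed overlay gets averaged
        # away over the whole frame, so the threshold is never crossed.
        # (The threshold here is arbitrary; real detectors tune it and
        # add hysteresis, which would also explain the on-late/off-early
        # behaviour seen on the ATI card.)
        return comb_score(luma) > threshold

A block- or region-based score would catch the strip, which would fit the Radeon behaviour of locking on once the balls are actually moving.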
__________________
HTPC: Windows 10 22H2, MediaPortal 1, LAV Filters/ReClock/madVR. DVB-C TV, Panasonic GT60, Denon 2310, Core 2 Duo E7400 oc'd, GeForce 1050 Ti 536.40
Last edited by el Filou; 11th September 2019 at 17:28.