Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 25th January 2014, 16:32   #21821  |  Link
pie1394
Registered User
 
Join Date: May 2009
Posts: 212
Quote:
Originally Posted by madshi View Post
Hmmmm... I've tried all of this on my NVidia 9400 GPU which had problems with black rectangles when using XySubFilter in the past. But no problems here, whatsoever. How can I reproduce these issues? Maybe you can provide me with small samples to test with, in case it has something to do with the samples somehow?
It looks like a HW or OS issue on my side. Somehow P10 no longer works even with the 0.86.10 version. Reverting the driver back to 320.49 does not help either. I guess it is time to reinstall the OS ... @_@


0.87.1 works fine on my HTPC, with HD7970 + Catalyst 13.12 -- Win7x64SP1. Just noticed the GPU's deinterlacing time has increased by around 100%...

[720x480i60]
0.86.10 --> 0.83ms
0.87.1 --> 1.64ms

[1440x1080i60]
0.86.10 --> 1.44ms
0.87.1 --> 4.21ms

Regarding the Image Doubling function on the above 720x480i60 content with a 1920x1080p60 display mode, even the HD7970 is not quick enough to handle 60fps. It needs more than 17ms of processing time (Debanding_with_AngleDetect + NNEDI3 2x + Jinc3AR) vs 6.2ms (Debanding_with_AngleDetect + Jinc3AR).
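A quick arithmetic check of the deinterlacing timings quoted above (values in ms from this post) shows the 720x480i60 case is indeed roughly +100%, while the 1440x1080i60 case is closer to +190%:

```python
# Deinterlacing times (ms) reported above: 0.86.10 vs 0.87.1.
timings = {
    "720x480i60":   (0.83, 1.64),
    "1440x1080i60": (1.44, 4.21),
}
for mode, (old, new) in timings.items():
    increase = (new - old) / old * 100  # percentage increase
    print(f"{mode}: {old}ms -> {new}ms (+{increase:.0f}%)")
# 720x480i60: +98%, 1440x1080i60: +192%
```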
pie1394 is offline   Reply With Quote
Old 25th January 2014, 16:45   #21822  |  Link
Deim0s
Registered User
 
Join Date: Jul 2012
Posts: 20
flashmozzg,
Quote:
You can download GPU-Z and monitor your GPU RAM usage during playback.
I can do this with the open-source Process Hacker.
If it helps madshi, here are the results:
When starting playback in windowed mode
After switching to full screen mode and getting a black screen
Deim0s is offline   Reply With Quote
Old 25th January 2014, 16:57   #21823  |  Link
PetitDragon
Registered User
 
Join Date: Sep 2006
Posts: 80
Quote:
Originally Posted by cyberbeing View Post
Conclusion: NVIDIA R331 branch drivers and newer broke madVR display with OpenCL dither. madVR OpenCL NNEDI3 code may have never been functional on recent NVIDIA GPU architectures (Fermi & Kepler?). Remember my earlier post where I mentioned that your version of the OpenCL code for the Avisynth plugin produces a "failed to allocate OpenCL resources" error, while SEt's original OpenCL code works fine on my GPU.
In the beginning I was wondering too why the new madVR doesn't work, while SEt's plugin works just fine.
PetitDragon is offline   Reply With Quote
Old 25th January 2014, 16:57   #21824  |  Link
michkrol
Registered User
 
Join Date: Nov 2012
Posts: 167
Quote:
Originally Posted by Soukyuu View Post
Stupid question: how do I create/use profiles? I don't see any configuration options for that in the tray icon settings.
Open settings, left click any settings group (processing/scaling ...), click create profile group. I'm sure you'll work it out from there
When created, the profiles get switched automagically with your auto-select rules or by customizable key shortcuts. Be sure to read the help for auto-select rules scripting.
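For illustration, an auto-select rule for switching profiles by source resolution looks roughly like this (a sketch from memory, not authoritative — the exact variable names and syntax are documented in madVR's help on rule scripting):

```
if (srcWidth <= 1050) and (srcHeight <= 768) "SD"
else "HD"
```

where "SD" and "HD" are the names of profiles you created in that group.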
michkrol is offline   Reply With Quote
Old 25th January 2014, 17:00   #21825  |  Link
PixelH8
Registered User
 
Join Date: Dec 2012
Posts: 5
Quote:
Originally Posted by cyberbeing View Post
NVIDIA GTX 770 + madVR 0.87g

OpenCL Dither
R304 branch = Functional
R310 branch = Functional
R313 branch = Functional
R319 branch = Functional
R325 branch = Functional
R331 branch = Broken
R334 branch = Broken

OpenCL NNEDI3
R304 branch = Broken
R310 branch = Broken
R313 branch = Broken
R319 branch = Broken
R325 branch = Broken
R331 branch = Broken
R334 branch = Broken

R304 branch release tested = 309.00 (10/28/2013)
R310 branch release tested = 310.90 (12/29/2012) & 312.69 (10/28/2013)
R313 branch release tested = 314.22 (03/14/2013)
R319 branch release tested = 321.10 (12/05/2013)
R325 branch release tested = 327.23 (09/12/2013)
R331 branch release tested = 331.40 BETA (09/27/2013) & 332.21 (12/19/2013)
R334 branch release tested = 334.67 (01/15/2014)

Conclusion: NVIDIA R331 branch drivers and newer broke madVR display with OpenCL dither. madVR OpenCL NNEDI3 code may have never been functional on recent NVIDIA GPU architectures (Fermi & Kepler?). Remember my earlier post where I mentioned that your version of the OpenCL code for the Avisynth plugin produces a "failed to allocate OpenCL resources" error, while SEt's original OpenCL code works fine on my GPU.
^This. My own testing mirrors this exactly.

On my system with the 560 Ti, the last driver to have OpenCL error diffusion working is 327.23.
Every version after this is a total FAIL. Something went very wrong indeed. Nothing in the release notes for 331.40 indicates any changes that would adversely affect OpenCL applications, but who knows? Maybe whoever compiled the latest drivers forgot to flip the OpenCL switch.

On the plus side, Nvidia does make installing and uninstalling drivers as pain-free as possible, so there's that.

EDIT: I forgot to mention I was using the latest 0.87.1 build of madvr. Also, I'm not 100% sure if this is only a problem that affects Fermi cards and newer so I really have to get my GTS 250 system up and running to find out for sure. I also have a stockpile of drivers going back to 2009, so let the good times roll.

Last edited by PixelH8; 25th January 2014 at 17:32. Reason: missing details
PixelH8 is offline   Reply With Quote
Old 25th January 2014, 17:10   #21826  |  Link
noee
Registered User
 
Join Date: Jan 2007
Posts: 530
Quote:
Originally Posted by pie1394
Just noticed the GPU's deinterlacing time has increased by around 100%...
I notice the same thing using a 1080i video:

.86: Queues all full, no drops, deint=7.3ms
.87.1: Render and present queues struggle ~3-4, 50+ drops after 2 min of playback, deint=18.13ms

HD6570

Edit: Should add, this is a VC1 video using DXVA2 decode
noee is offline   Reply With Quote
Old 25th January 2014, 17:22   #21827  |  Link
DragonQ
Registered User
 
Join Date: Mar 2007
Posts: 930
Quote:
Originally Posted by noee View Post
I notice the same thing using a 1080i video:

.86: Queues all full, no drops, deint=7.3ms
.87.1: Render and present queues struggle ~3-4, 50+ drops after 2 min of playback, deint=18.13ms

HD6570

Edit: Should add, this is a VC1 video using DXVA2 decode
Glad it's not just me and not just Intel GPUs!
__________________
HTPC Hardware: Intel Celeron G530; nVidia GT 430
HTPC Software: Windows 7; MediaPortal 1.19.0; Kodi DSPlayer 17.6; LAV Filters (DXVA2); MadVR
TV Setup: LG OLED55B7V; Onkyo TX-NR515; Minix U9-H
DragonQ is offline   Reply With Quote
Old 25th January 2014, 18:00   #21828  |  Link
Soukyuu
Registered User
 
Soukyuu's Avatar
 
Join Date: Apr 2012
Posts: 169
Quote:
Originally Posted by michkrol View Post
Open settings, left click any settings group (processing/scaling ...), click create profile group. I'm sure you'll work it out from there
When created, the profiles get switched automagically with your auto-select rules or by customizable key shortcuts. Be sure to read the help for auto-select rules scripting.
Thanks, I saw the "add new device" button, but not the "add profile" on the groups

I was holding off testing debanding until the final version, and man, does it make a difference. Though low seems to be the only non-destructive setting. What does the "don't analyze gradient angles" option affect? I didn't really see any difference
__________________
AMD Phenom II X4 970BE | 12GB DDR3 | nVidia 260GTX | Arch Linux / Windows 10 x64 Pro (w/ calling home shut up)
Soukyuu is offline   Reply With Quote
Old 25th January 2014, 18:04   #21829  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,985
I just checked the deint ms too; they are high at the start, but after some time they are about the same as 86.9.

But while I was testing this I found a new bug introduced in 87.x.
When I change the deinterlacer mode with Shift+Ctrl+Alt+T I get a "creating Direct3D device failed (80070005)" picture.
Once it even switched to 8 bit color depth.
It doesn't matter whether it's film-to-video or video-to-film.

Edit: it works fine when changed in the tray options, strangely, but it doesn't try to change the resolution this way.

Quote:
I was holding off testing debanding until the final version, and man, does it make a difference. Though low seems to be the only non-destructive setting. What does the "don't analyze gradient angles" option affect? I didn't really see any difference
It checks for parts that need debanding and doesn't apply it to the whole image (very short version).
huhn is online now   Reply With Quote
Old 25th January 2014, 18:15   #21830  |  Link
Thunderbolt8
Registered User
 
Join Date: Sep 2006
Posts: 2,171
OpenCL crashes (bluescreen) on a laptop with a Radeon 7650 with the 13.12 drivers (Leshcat drivers v3).
__________________
Laptop Acer Aspire V3-772g: i7-4202MQ, 8GB Ram, NVIDIA GTX 760M (+ Intel HD 4600), Windows 8.1 x64, madVR (x64), MPC-HC (x64), LAV Filter (x64), XySubfilter (x64)
Thunderbolt8 is offline   Reply With Quote
Old 25th January 2014, 18:16   #21831  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by HolyWu View Post
Error Diffusion works on 327.23, but NNEDI chroma upscaling or doubling doesn't.
Quote:
Originally Posted by Soukyuu View Post
Code:
        dith | doubl | upsc
314.07:  yes |  yes  | yes*
314.22:  yes |  yes  | yes*
320.18:  yes |  yes  | yes*
320.49:  yes |  yes  | yes*
327.23:  yes |  yes  | yes*
331.58:   no | yes** | yes***
331.65:   no | yes** | yes***
331.82:   no | yes** | yes***
332.81:   no | yes** | yes***

  * yellow tint on 10bit h264 only
 ** no green channel (and no luma?) on 10bit h264 only
*** green tint on 10bit h264 only
Examples of yellow, green tints and missing green channel.

327.23 is the first driver officially released for win8.1, so I'm glad it works. Reverting to it until the issue is fixed.
Oh and... openCL dithering makes me drop frames, rendering queue is running out very fast. Same applies to all NNEDI3 features.
I guess my GPU is just too slow.

edit: openCL dithering also makes the ctrl+J osd an opaque black box.

@everyone: don't forget to mention your GPU when you report the results (or just put it in your sig)
Quote:
Originally Posted by HolyWu View Post
Tested a GTX460 with drivers 327.23 and 331.40. Error Diffusion only works on 327.23, not on 331.40. NNEDI3 chroma upscaling and doubling don't work at all on either version.
Quote:
Originally Posted by cyberbeing View Post
NVIDIA GTX 770 + madVR 0.87g

OpenCL Dither
R304 branch = Functional
R310 branch = Functional
R313 branch = Functional
R319 branch = Functional
R325 branch = Functional
R331 branch = Broken
R334 branch = Broken

OpenCL NNEDI3
R304 branch = Broken
R310 branch = Broken
R313 branch = Broken
R319 branch = Broken
R325 branch = Broken
R331 branch = Broken
R334 branch = Broken

R304 branch release tested = 309.00 (10/28/2013)
R310 branch release tested = 310.90 (12/29/2012) & 312.69 (10/28/2013)
R313 branch release tested = 314.22 (03/14/2013)
R319 branch release tested = 321.10 (12/05/2013)
R325 branch release tested = 327.23 (09/12/2013)
R331 branch release tested = 331.40 BETA (09/27/2013) & 332.21 (12/19/2013)
R334 branch release tested = 334.67 (01/15/2014)

Conclusion: NVIDIA R331 branch drivers and newer broke madVR display with OpenCL dither.
Quote:
Originally Posted by PixelH8 View Post
^This. My own testing mirrors this exactly.

On my system with the 560 Ti, the last driver to have OpenCL error diffusion working is 327.23.
Every version after this is a total FAIL. Something went very wrong indeed. Nothing in the release notes for 331.40 indicates any changes that would adversely affect OpenCL applications, but who knows? Maybe whoever compiled the latest drivers forgot to flip the OpenCL switch.

On the plus side, Nvidia does make installing and uninstalling drivers as pain-free as possible, so there's that.

EDIT: I forgot to mention I was using the latest 0.87.1 build of madvr. Also, I'm not 100% sure if this is only a problem that affects Fermi cards and newer so I really have to get my GTS 250 system up and running to find out for sure. I also have a stockpile of drivers going back to 2009, so let the good times roll.
Thanks for doing all that testing, guys! It's interesting to see that some features work with some drivers and some GPUs. I especially "like" Soukyuu's results, where NNEDI3 produces different color artifacts depending on source color space and driver version. That pretty much shows how unreliable NVidia OpenCL is. It doesn't even go from working to non-working; seemingly the color channels are interpreted differently depending on driver version. That's a nightmare!

Quote:
Originally Posted by Soukyuu View Post
What does the "don't analyze gradient angles" option affect? I didn't really see any difference
Enabling it makes debanding stronger for image areas which are likely large gradients, while keeping debanding strength the same for areas which probably contain image detail. Basically it should improve debanding strength where it's needed without losing a lot of additional detail in other image areas. It costs quite a bit of added GPU performance, though.

Quote:
Originally Posted by pie1394 View Post
Regarding the Image Doubling function on the above 720x480i60 contents to 1920x1080p60 display mode, even HD7970 is not quick enough to handle 60fps. It needs more than 17ms (Debanding_with_AngleDetect + NNEDI3 2x + Jinc3AR) processing time vs 6.2ms (Debanding_with_AngleDetect + Jinc3AR)
Image Doubling doesn't have just one setting. How many neurons did you use? And did you tell madVR to double the chroma resolution, too? Furthermore: if you do use image doubling, I'd recommend using a cheaper follow-up algorithm for "image upscaling", e.g. Bicubic50AR instead of Jinc3AR.

Quote:
Originally Posted by DragonQ View Post
Improved over 0.87. Hard to say whether it's really usable without the 10-bit chroma & image buffers. Certainly not better than 0.86.11. Average stats:

0.87.1 Software:
Deinterlace: 18.65 ms
Split: 18.85 ms
Rendering: 8.64 ms
Present: 0.42 ms
Dropped Frames: 63
Delayed Frames: 2

0.86.11 Software:
Deinterlace: 8.29 ms
Split: 9.15 ms
Rendering: 4.70 ms
Present: 0.17 ms
Dropped Frames: 14
Delayed Frames: 0

0.87.1 DXVA2 Native:
Deinterlace: 13.96 ms
Split: 16.53 ms
Rendering: 8.38 ms
Present: 0.41 ms
Dropped Frames: 4
Delayed Frames: 1

0.86.11 DXVA2 Native:
Deinterlace: 7.58 ms
Split: 8.08 ms
Rendering: 5.02 ms
Present: 0.16 ms
Dropped Frames: 8
Delayed Frames: 0

Using: HD4000, MPC-HC, LAV Filters, Smooth Motion, Bicubic75 Chroma upscaling, Catmull-Rom downscaling, playing 1080i/25 @ slightly under 1080p.
Quote:
Originally Posted by pie1394 View Post
0.87.1 works fine on my HTPC, with HD7970 + Catalyst 13.12 -- Win7x64SP1. Just noticed the GPU's deinterlacing time has increased by around 100%...

[720x480i60]
0.86.10 --> 0.83ms
0.87.1 --> 1.64ms

[1440x1080i60]
0.86.10 --> 1.44ms
0.87.1 --> 4.21ms
Quote:
Originally Posted by noee View Post
I notice the same thing using a 1080i video:

.86: Queues all full, no drops, deint=7.3ms
.87.1: Render and present queues struggle ~3-4, 50+ drops after 2 min of playback, deint=18.13ms

HD6570

Edit: Should add, this is a VC1 video using DXVA2 decode
Ok.

-------

So for all users who have:

(1) Stability problems (D3D9 error messages, freezing when switching video files in FSE mode etc).
(2) Performance issues with deinterlacing.

Please try the various test builds you can find here:

http://madshi.net/madVRtestbuilds.rar

Every test build reverts one change back to v0.86.11. This way hopefully we can identify which change is responsible for which problem.
madshi is offline   Reply With Quote
Old 25th January 2014, 18:24   #21832  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
That pretty much shows how unreliable NVidia OpenCL is. It doesn't even go from working to non-working, but seemingly the color channels are interpreted differently depending on driver version. That's a nightmare!
This is NVIDIA's way of telling you to use CUDA, instead of depending on their automated OpenCL to CUDA driver translation functioning properly. As a bonus, you can actually debug and profile CUDA kernels on NVIDIA as well, while no such thing exists for OpenCL on NVIDIA.
cyberbeing is offline   Reply With Quote
Old 25th January 2014, 18:30   #21833  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,838
Quote:
Originally Posted by madshi View Post
Thanks for doing all that testing, guys! It's interesting to see that some features work with some drivers and some GPUs. I especially "like" Soukyuu's results, where NNEDI3 produces different color artifacts depending on source color space and driver version. That pretty much shows how unreliable NVidia OpenCL is. It doesn't even go from working to non-working; seemingly the color channels are interpreted differently depending on driver version. That's a nightmare!
Should've gone with CUDA, the API is much easier to use, too!
More importantly, I don't think NVIDIA (or quite possibly any other GPU vendor) cares much about such (tiny) compute use-cases.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 25th January 2014 at 18:41.
nevcairiel is offline   Reply With Quote
Old 25th January 2014, 18:34   #21834  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 497
0.87.1 is working great here (speed-wise it seems at least on par with 0.86.11, if not faster; haven't tested with deinterlacing yet, though).

I found one bug though, which is reproducible every time:

1) enable smooth motion - enable always
2) play a video file
3) enter fullscreen-exclusive
4) leave fullscreen-exclusive and return to windowed mode
-> black screen
5) going back to fullscreen-exclusive the picture is back
6) leave fullscreen-exclusive again
-> black screen
...

GTX580, 331.82 drivers (quadro)

@madshi:
I don't know if this is helpful for you, but on the 3DCenter forum there is a member called Blaire who seems to be part of the NV beta-testing program; he seems to know about a lot of related (known) issues and reports them directly to NV (driver-related problems; primarily he reports game issues). He always seems to be up to date when it comes to new drivers, too. It seems that when he reports bugs directly, they have a very high chance of getting fixed ASAP. That's only an observation on my part though; I didn't actually ever have a conversation with him, since I never had serious problems, thankfully.

Also, if you tell him that a lot of madVR users are going to consider switching to AMD in the future because of the lackluster OpenCL drivers, NV may listen a bit more closely. Seems worth a try, at least. It's not your problem if they cannot provide stable drivers for such things, anyway.

Last edited by iSunrise; 25th January 2014 at 18:44.
iSunrise is offline   Reply With Quote
Old 25th January 2014, 18:36   #21835  |  Link
cca
Anime Otaku
 
Join Date: Oct 2002
Location: Somewhere in Cyberspace...
Posts: 437
CUDA, though, works exclusively on Nvidia; it's a proprietary interface. OpenCL isn't. Nvidia's attitude forces developers to support their own interface, so they have to do double the work.

Sent from my Nexus 7 using Tapatalk
__________________
AMD FX8350 on Gigabyte GA-970A-D3 / 8192 MB DDR3-1600 SDRAM / AMD R9 285 with Catalyst 1.5.9.1/ Asus Xonar D2X / Windows 10 pro 64bit
cca is offline   Reply With Quote
Old 25th January 2014, 18:37   #21836  |  Link
noee
Registered User
 
Join Date: Jan 2007
Posts: 530
Some more feedback....OpenCL...HD6570

I've been testing SD video (and interlaced) and film, upscaling to 1080, with NNEDI3 Double Luma, 64 neurons. I can't use secondary scaling options that use pixel shaders, but DXVA2 scaling works great in this case. I can still see some ringing around hair and people's heads, but it seems to be working very well, using about 25-30% less GPU than Jinc3/AR with SD film.

Edit: % perf numbers

Last edited by noee; 25th January 2014 at 18:44.
noee is offline   Reply With Quote
Old 25th January 2014, 18:42   #21837  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,985
Quote:
So for all users who have:

(1) Stability problems (D3D9 error messages, freezing when switching video files in FSE mode etc).
(2) Performance issues with deinterlacing.

Please try the various test builds you can find here:

http://madshi.net/madVRtestbuilds.rar

Every test build reverts one change back to v0.86.11. This way hopefully we can identify which change is responsible for which problem.
So all 6 didn't work: all got the same error when switching from video mode to film mode, and all freeze/crash when switching files in fullscreen exclusive mode.

BUT! After that, 86.9 didn't work either, at least for the film/video mode switching; the error is different there, the player crashes completely (sending the bug report didn't work yet again). To fix this I had to reset all settings.

So I'll try all 6 again, but this time I'll reset them all each time... and type the display modes in each time X-)

Maybe it's just a problem with a registry entry?
huhn is online now   Reply With Quote
Old 25th January 2014, 18:42   #21838  |  Link
Soukyuu
Registered User
 
Soukyuu's Avatar
 
Join Date: Apr 2012
Posts: 169
Quote:
Originally Posted by madshi View Post
Enabling it makes debanding stronger for image areas which are likely large gradients, while keeping debanding strength the same for areas which probably contain image detail. Basically it should improve debanding strength where it's needed without losing a lot of additional detail in other image areas. It costs quite a bit of added GPU performance, though.
Hmm, so with deband set to, say, "low", that option will increase it to "high" where needed?

As for nVidia's OpenCL, I guess I just have to be happy that other OpenCL code seems to function properly, i.e. FLACCL still being lossless. But that kind of scares me, to be honest. Maybe I'll go with AMD for my next GPU this iteration...
__________________
AMD Phenom II X4 970BE | 12GB DDR3 | nVidia 260GTX | Arch Linux / Windows 10 x64 Pro (w/ calling home shut up)
Soukyuu is offline   Reply With Quote
Old 25th January 2014, 18:44   #21839  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by cyberbeing View Post
This is NVIDIA's way of telling you to use CUDA, instead of depending on their automated OpenCL to CUDA driver translation functioning properly. As a bonus, you can actually debug and profile CUDA kernels on NVIDIA as well, while no such thing exists for OpenCL on NVIDIA.
Quote:
Originally Posted by nevcairiel View Post
Should've gone with CUDA, the API is much easier to use, too!
More importantly, I don't think NVIDIA (or quite possibly any other GPU vendor) cares much about such compute use-cases.
That's all nice and fine. But think about the future, too. What if I add a dozen new features relying on compute? Am I really expected to create, debug and profile two versions of every kernel? It might not be twice the work, but it would definitely be a lot of extra work. I'd really *REALLY* prefer to stick with one API. I'm considering maybe at some point adding a full OpenCL rendering queue, dropping D3D9 rendering completely (just for GPUs which support it, of course). That would mean several dozen pixel shaders would need to be converted to OpenCL. Converting them all to OpenCL *and* CUDA would increase my workload quite a lot. So much that I might actually forget about the whole idea.

Quote:
Originally Posted by iSunrise View Post
0.87.1 is working great here (speed-wise it seems at least on par with 0.86.11, if not faster, havenīt tested with de-interlacing yet, though).

I found one bug though, which is reproducible every time:

1) enable smooth motion - enable always
2) play a video file
3) enter fullscreen-exclusive
4) leave fullscreen-exclusive and return to windowed mode
-> black screen
5) going back to fullscreen-exclusive the picture is back
6) leave fullscreen-exclusive again
-> black screen
...

GTX580, 331.82 drivers (quadro)
Everybody please, if you find a bug, always double check with v0.86.11 for now. I need to know which bugs are new bugs and which already existed in v0.86.11. I'm mostly only interested in fixing new bugs for now.

@iSunrise, if this issue did not occur in v0.86.11 (please double check), could you please also check the test builds I posted a couple posts above to see if any one of them fixes this problem or not? Thanks.

Quote:
Originally Posted by noee View Post
Some more feedback....OpenCL...HD6570

I've been testing SD video (and interlaced) and film, upscaling to 1080, with the NNEDI Double Luma, 64neurons. I can't use secondary scaling options that use pixel shaders, but DXVA2 scaling works great in this case. I can still see some ringing around hair and people's heads, but it seems to be working very well, using about 12-15% less GPU than Jinc3/AR.
Unfortunately, if you activate DXVA2 scaling, NNEDI3 is not used at all. Sorry, I should have said so. But it would be technically quite difficult to combine NNEDI3 and DXVA2 scaling. Try NNEDI3 for luma only + Bilinear.

Last edited by madshi; 25th January 2014 at 18:51.
madshi is offline   Reply With Quote
Old 25th January 2014, 18:46   #21840  |  Link
noee
Registered User
 
Join Date: Jan 2007
Posts: 530
haha! Yes, that explains a lot.....

Okay, I see, yeah. I can only run 32 neurons with Bilinear without drops, but it works, GPU is >80%.

Last edited by noee; 25th January 2014 at 18:50.
noee is offline   Reply With Quote