Old 10th March 2016, 15:42   #36801  |  Link
RainyDog
Registered User
 
Join Date: May 2009
Posts: 172
Quote:
Originally Posted by aufkrawall View Post
Sometimes my GTX 980 gets stuck in the wrong power state when opening videos; it doesn't run at full clocks then and drops lots of frames. I suspect this is a randomly triggered bug in the OpenCL driver set off by NNEDI3, which is a bit annoying.
Has anyone else observed anything similar? It happens with the current driver (364.51) and some previous ones as well. Win 10 x64.
Add MPC-HC, or whichever video player you use, to the 3D application settings in the Nvidia control panel and set its power management mode to prefer maximum performance.
RainyDog is offline   Reply With Quote
Old 10th March 2016, 15:53   #36802  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,716
I don't think that this would help since the use of OpenCL/CUDA forces the card into a specific powerstate anyway.
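
Either way, a quick way to confirm whether the card really is stuck in a low power state during playback is to poll the P-state and SM clock. A minimal sketch, assuming nvidia-smi is on the PATH (the query fields used are the standard nvidia-smi ones):

Code:
# Poll the GPU's performance state, SM clock and utilization via nvidia-smi.
# P0 is the highest performance state; if playback sits in P8 with low clocks,
# the card is stuck in a low power state.
import subprocess
import time

def query_gpu():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=pstate,clocks.sm,utilization.gpu",
        "--format=csv,noheader",
    ], text=True)
    return out.strip()

if __name__ == "__main__":
    for _ in range(10):
        print(query_gpu())   # e.g. "P0, 1303 MHz, 45 %"
        time.sleep(1)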
aufkrawall is offline   Reply With Quote
Old 10th March 2016, 17:44   #36803  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,463
Quote:
Originally Posted by markanini View Post
Enhance details: 1.0 is amazing for resolution
Kinda late to the party and I dunno what voodoo magic is behind "enhance details" but it really looks outstanding @0.7 with NEDI+2X SR@1+FRC+SSIM 2D 25% on SD
leeperry is offline   Reply With Quote
Old 10th March 2016, 17:45   #36804  |  Link
Werewolfy
Registered User
 
Join Date: Feb 2013
Posts: 135
Quote:
Originally Posted by madshi View Post
Are you using DXVA deinterlacing or forced film mode? Does the problem occur with both? And which queue is the first one which is getting into trouble (less than half full)?
I use forced film mode. It seems that the decoder queue and the IVTC queue limit the other queues; they never fill completely. The moment I get dropped frames is when the render queue drops down.
I tested DXVA and it's not really better. The decoder queue goes higher than before, but I still get dropped frames when the render queue drops down.

Here are some videos of my tests; I hope they will be useful and make it easier for you to understand.

http://www.filedropper.com/testdeinterlacing1_1

(It's the first time I'm using this file hosting website, so I don't know how good it is.)

Quote:
Originally Posted by madshi View Post
Would be great if the new NVidia drivers improved the situation there, of course!!
Unfortunately, on my side it still doesn't work as intended even with the new Nvidia drivers, whether glitch handling mode is enabled or not.

Quote:
Originally Posted by Uoppi View Post
Maybe try upping the GPU queue size even more?

I need to use a GPU queue of at least 20 to avoid dropped frames with interlaced NTSC DVDs. Any less gives me occasional and "unexpected" dropped frames. After extensive testing, 20-20-6 works for me perfectly.

(My setup is Win10, factory-overclocked GTX 960 and a dual core Ivy Bridge Pentium, using D3D11 windowed mode.)
A higher GPU queue doesn't help because it's limited by the decoder queue, which never fills. In my videos you can see that I even tried 128-24-15 without success.
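
To see why raising the downstream queues can't fix this, here is a back-of-the-envelope sketch with made-up per-stage timings (illustrative numbers only, not measurements from this setup): the slowest upstream stage caps the sustained frame rate, and larger queues after it only delay the moment they drain.

Code:
# Rough illustration (hypothetical numbers): in a chained pipeline the slowest
# upstream stage limits throughput, so enlarging the queues after it never
# raises the sustained frame rate.
def sustained_fps(stage_ms):
    """Frames per second the whole chain can sustain."""
    return 1000.0 / max(stage_ms)

decoder_ms, ivtc_ms, render_ms = 45.0, 20.0, 30.0   # made-up per-frame costs
print(f"pipeline sustains {sustained_fps([decoder_ms, ivtc_ms, render_ms]):.1f} fps")
# -> ~22.2 fps: below the 23.976/25 fps source rate, so the render queue will
#    eventually empty no matter how large the GPU queue is set.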
__________________
Windows 8.1 x64 - Intel Core i5-4670K (4.2 GHz) - 8 GB DDR3 - MSI Geforce GTX 1080 8 GB - Sony KD-55A1

Last edited by Werewolfy; 10th March 2016 at 20:43.
Werewolfy is offline   Reply With Quote
Old 10th March 2016, 18:22   #36805  |  Link
Georgel
Visual Novel Dev.
 
Georgel's Avatar
 
Join Date: Nov 2015
Location: Bucharest
Posts: 196
Quote:
Originally Posted by leeperry View Post
Kinda late to the party and I dunno what voodoo magic is behind "enhance details" but it really looks outstanding @0.7 with NEDI+2X SR@1+FRC+SSIM 2D 25% on SD
It works nicely with real-life videos.

It does not work well with animated videos, introducing grain and what could be perceived as noise.
Georgel is offline   Reply With Quote
Old 10th March 2016, 21:03   #36806  |  Link
Uoppi
Registered User
 
Join Date: Oct 2015
Posts: 99
Quote:
Originally Posted by huhn View Post
There are problems with deinterlacing on Windows 10.

Of course this is not a madVR error.
I was just about to say Windows 10 + GTX 960 has worked quite nicely for me too.

Are you referring to some DXVA deinterlacing quality issues? At least deinterlacing seems to work glitch-free to my eyes, so I'm just curious what problems you are referring to.
Uoppi is offline   Reply With Quote
Old 10th March 2016, 22:49   #36807  |  Link
iSeries
Registered User
 
Join Date: Jan 2009
Posts: 607
Quote:
Originally Posted by ryrynz View Post
Only 2 ms difference between 1D and 2D? I think something's broken; you should maybe look into resetting your settings, as my GTX 960 has about 8 ms render times when downscaling 4K content and 16-18 ms when using 2D (with AR and LL).
It depends on how much downscaling is required, but yes, there is always some performance cost over 1D. However, specifically when supersampling, 2D adds 40 ms to rendering times over 1D. Using 'Everest' as a test:

Example 1 (1080p supersampling):
Chroma - jinc
Doubling - superxbr 100
Enhancements - crispen edges 1, enhance detail 1, superres 3
Downscaling - ssim 1d
=30ms rendering time

Example 2 (1080p supersampling):
Chroma - jinc
Doubling - superxbr 100
Enhancements - crispen edges 1, enhance detail 1, superres 3
Downscaling - ssim 2d
=70ms rendering time

As you can see, the only difference between the two examples is 1D vs 2D, with 2D adding 40 ms to rendering times. Surely that can't be right? Like I said, in every other downscaling (or doubling and then downscaling) scenario, 2D is perfectly usable.
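
For context on why 70 ms is fatal while 30 ms is fine, the per-frame render budget is simply the reciprocal of the frame rate. A quick check, nothing madVR-specific:

Code:
# Per-frame rendering budget = 1000 ms / frame rate. Anything above the budget
# guarantees dropped frames (ignoring queueing headroom).
for fps in (23.976, 25.0, 29.97, 59.94):
    print(f"{fps:>6.3f} fps -> {1000.0 / fps:5.1f} ms per frame")
# 23.976 fps -> ~41.7 ms: 30 ms fits comfortably, 70 ms cannot keep up.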
iSeries is offline   Reply With Quote
Old 10th March 2016, 23:13   #36808  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,977
Quote:
Originally Posted by Uoppi View Post
I was just about to say Windows 10 + GTX 960 has worked quite nicely for me too.

Are you referring to some DXVA deinterlacing quality issues? At least deinterlacing seems to work glitch-free to my eyes, so I'm just curious what problems you are referring to.
AMD Crimson deinterlacing is simply broken; it is using a low quality bob. I haven't tested the newest beta yet.

And Nvidia deinterlacing was kind of "broken" the last time I checked, with issues with telecine/hybrid content. The high quality CUVID deinterlacing is not usable at all under Windows 10.
huhn is offline   Reply With Quote
Old 10th March 2016, 23:14   #36809  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,977
Quote:
Originally Posted by aufkrawall View Post
I don't think that this would help since the use of OpenCL/CUDA forces the card into a specific powerstate anyway.
And that power state is the highest possible, so it can help here.
huhn is offline   Reply With Quote
Old 11th March 2016, 00:24   #36810  |  Link
Manni
Registered User
 
Join Date: Jul 2014
Posts: 733
Quote:
Originally Posted by RyuzakiL View Post
Has anyone tried the latest Crimson driver 16.3 on Win10 x64 with madVR?

And does madVR work properly on Win10 x64, or is Win 8.1 still the only version on which madVR works perfectly?
I quickly tested Crimson 16.3 on 8.1 x64 tonight and it seems to be working fine (see my sig for my detailed config).

The white crush in 3D seems to be improved; it now resolves up to 231-232, so it's not too bad, but still not as good as with 14.12.

My only issue is that I get a crash of MPC-BE almost every time I launch a movie for the first time; the second time it works fine. This appeared relatively recently, but I haven't been able to trace which version of madVR or MPC-BE introduced it.

It could be a Crimson issue (though not 16.3 specific, as I had it earlier). I still have to try reverting to 14.12 to see if it goes away, but since that would mean losing 3D support with LAV/madVR, which I really like, I'd rather find a solution. There is no error message or crash dump, so debugging isn't easy.

Otherwise all seems fine.
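
Regarding the missing crash dump: Windows Error Reporting can be told to write local dumps for a specific executable via its documented LocalDumps registry keys. A rough sketch using Python's winreg follows; the executable name and dump folder are assumptions (adjust them to your install), and it needs an elevated prompt:

Code:
# Sketch: enable Windows Error Reporting "LocalDumps" for MPC-BE so a crash
# produces a .dmp file to inspect. The per-application subkey name
# ("mpc-be64.exe") and the dump folder are assumptions. Run elevated.
import winreg

APP = "mpc-be64.exe"          # assumed executable name; check your MPC-BE folder
DUMP_DIR = r"C:\CrashDumps"   # any writable folder

key_path = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\%s" % APP
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "DumpFolder", 0, winreg.REG_EXPAND_SZ, DUMP_DIR)
    winreg.SetValueEx(key, "DumpCount", 0, winreg.REG_DWORD, 5)
    winreg.SetValueEx(key, "DumpType", 0, winreg.REG_DWORD, 2)   # 2 = full dump
print("LocalDumps enabled; dumps will land in", DUMP_DIR)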
__________________
Win10 Pro x64 b1903 MCE
i7 3770K@4.0Ghz 16Gb@2.18Ghz EVGA GTX 1080 Ti SC2 11Gb@2GHz 436.48 RGB Full 8bits
MPC-BE/LAV/MadVR/jRiver/MyMovies V5.25
Denon X8500H>HD Fury Maestro>JVC RS2000
Manni is offline   Reply With Quote
Old 11th March 2016, 02:39   #36811  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,716
Quote:
Originally Posted by huhn View Post
And that power state is the highest possible, so it can help here.
Unfortunately, you can't do anything if the driver refuses to change the p-state due to a bug that makes it stuck.
aufkrawall is offline   Reply With Quote
Old 11th March 2016, 02:46   #36812  |  Link
RyuzakiL
Registered User
 
Join Date: May 2015
Posts: 36
Quote:
Originally Posted by Manni View Post
I quickly tested Crimson 16.3 on 8.1 x64 tonight and it seems to be working fine (see my sig for my detailed config).

The white crush in 3D seems to be improved; it now resolves up to 231-232, so it's not too bad, but still not as good as with 14.12.

My only issue is that I get a crash of MPC-BE almost every time I launch a movie for the first time; the second time it works fine. This appeared relatively recently, but I haven't been able to trace which version of madVR or MPC-BE introduced it.

It could be a Crimson issue (though not 16.3 specific, as I had it earlier). I still have to try reverting to 14.12 to see if it goes away, but since that would mean losing 3D support with LAV/madVR, which I really like, I'd rather find a solution. There is no error message or crash dump, so debugging isn't easy.

Otherwise all seems fine.
Yep, I also updated my drivers to the latest AMD Crimson 16.3, coming from the last AMD CCC under Win 8.1, and I'm surprised by the performance it brings in my games (Shadow of Mordor).

BUT AMD Crimson enables Crossfire when it detects madVR (wasting power on the 2nd GPU), unlike the old AMD CCC, which kept using a single GPU only.

So, that being said, it's another round of compromises, but this time I might stick with the latest AMD Crimson since I get higher frame rates in my games.

I wonder how it behaves under Win 10. I've heard scary stories about that spying OS, and the last time I used it with madVR it didn't like fullscreen exclusive mode, leading to dropped frames and CTDs.
RyuzakiL is offline   Reply With Quote
Old 11th March 2016, 04:30   #36813  |  Link
RyuzakiL
Registered User
 
Join Date: May 2015
Posts: 36
Disable ITC and Enable GPU Scaling in AMD Crimson

So I stumbled upon these two settings in AMD Crimson. From what I've read, disabling ITC processing and enabling GPU scaling bypasses the HDTV's display processor entirely, eliminating redundant conversions on the TV side.

In other words, the GPU itself does all the video processing instead of the HDTV; supposedly this also gives the fastest refresh and the best FPS in games, since the display processor is bypassed.
RyuzakiL is offline   Reply With Quote
Old 11th March 2016, 06:48   #36814  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,463
Quote:
Originally Posted by RyuzakiL View Post
disabling ITC processing and enabling GPU scaling bypasses the HDTV's display processor entirely, eliminating redundant conversions
never seen any diff with mVR using 13.12 in 8bit
leeperry is offline   Reply With Quote
Old 11th March 2016, 08:48   #36815  |  Link
Uoppi
Registered User
 
Join Date: Oct 2015
Posts: 99
Quote:
Originally Posted by huhn View Post

And Nvidia deinterlacing was kind of "broken" the last time I checked, with issues with telecine/hybrid content.
Do you mean cadence detection? I see occasional cadence breaks with animated 3:2 content (Ren & Stimpy) but with movie content (Cronenberg's Crash), IVTC seems to work just fine, i.e. looks like cadence is consistently detected.
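
For reference, the 3:2 (2:3) cadence that IVTC has to lock onto can be sketched like this; purely illustrative, not tied to any particular decoder:

Code:
# Illustrative sketch of 2:3 pulldown: four 24 fps film frames (A..D) are
# spread over ten 59.94i fields, so two of every five video frames mix two
# film frames. IVTC must detect this repeating pattern (the "cadence") to
# reassemble the original frames; a cadence break is where the pattern shifts.
film_frames = "ABCD"
pulldown = (2, 3, 2, 3)                      # fields emitted per film frame
fields = [f for frame, n in zip(film_frames, pulldown) for f in [frame] * n]
video_frames = [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]
print(fields)        # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(video_frames)  # ['AA', 'BB', 'BC', 'CD', 'DD'] -- 'BC' and 'CD' are the mixed frames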
Uoppi is offline   Reply With Quote
Old 11th March 2016, 09:02   #36816  |  Link
DragonQ
Registered User
 
Join Date: Mar 2007
Posts: 930
Quote:
Originally Posted by huhn View Post
AMD Crimson deinterlacing is simply broken; it is using a low quality bob. I haven't tested the newest beta yet.

And Nvidia deinterlacing was kind of "broken" the last time I checked, with issues with telecine/hybrid content. The high quality CUVID deinterlacing is not usable at all under Windows 10.
Yeah, I had to switch back to Catalyst because deinterlacing is completely buggered with Crimson. I couldn't even find a link to Catalyst on AMD's website, so I had to download an old beta version and then run the updater to bring it up to date. I'm amazed that a bug with such a simple fix (i.e. changing the default deinterlacing method to vector adaptive) hasn't been fixed yet.
__________________
HTPC Hardware: Intel Celeron G530; nVidia GT 430
HTPC Software: Windows 7; MediaPortal 1.19.0; Kodi DSPlayer 17.6; LAV Filters (DXVA2); MadVR
TV Setup: LG OLED55B7V; Onkyo TX-NR515; Minix U9-H
DragonQ is offline   Reply With Quote
Old 11th March 2016, 09:56   #36817  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,977
Quote:
Originally Posted by leeperry View Post
never seen any diff with mVR using 13.12 in 8bit
Drivers older than 14.something always output 10 bit if the device supports it (and I haven't seen a TV that doesn't support 12 bit input), regardless of the input bit depth. That's hopefully just adding some zero bits and nothing else, but you get an extra conversion step with older drivers.
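
To put numbers on the "adding some zero bits" part: padding 8-bit values with two zero bits is close to, but not exactly, a true 8-to-10-bit rescale. A quick sketch, not tied to any particular driver:

Code:
# Compare two 8-bit -> 10-bit expansions: zero-padding (value << 2) versus
# proper rescaling (round(value * 1023 / 255)). Zero-padding slightly darkens
# the top end (255 -> 1020 instead of 1023): an extra conversion step, but a
# small one.
for v in (0, 16, 128, 235, 255):
    padded = v << 2
    rescaled = round(v * 1023 / 255)
    print(f"8-bit {v:3d}: padded {padded:4d}  rescaled {rescaled:4d}  diff {rescaled - padded}")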
huhn is offline   Reply With Quote
Old 11th March 2016, 10:00   #36818  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,977
Quote:
Originally Posted by Uoppi View Post
Do you mean cadence detection? I see occasional cadence breaks with animated 3:2 content (Ren & Stimpy) but with movie content (Cronenberg's Crash), IVTC seems to work just fine, i.e. looks like cadence is consistently detected.
It was really bad when LAV Filters added w3fdif in the nightlies; I haven't checked it since. It could be OK now.
huhn is offline   Reply With Quote
Old 11th March 2016, 17:53   #36819  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
madshi, I don't know what algorithm you used to remap HDR content to SDR, but according to the ST.2084 standard, everything under 100 nits should be exactly the same as in SDR.
Do you think it is the right time to discuss light output, gamma curve remapping, color space remapping, etc.?
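
For reference, ST.2084 (PQ) is an absolute curve, so the signal level that corresponds to 100 nits can be computed directly from the published constants. The sketch below only illustrates the standard; it says nothing about what madVR does internally:

Code:
# Sketch of the SMPTE ST.2084 (PQ) inverse EOTF with its published constants,
# used here only to show where 100 nits sits on the PQ signal scale
# (about 0.508, i.e. roughly code value 520 in 10 bit).
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(nits):
    """Absolute luminance (cd/m^2) -> normalized PQ signal in [0, 1]."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

for nits in (0.1, 1, 100, 1000, 10000):
    e = pq_inverse_eotf(nits)
    print(f"{nits:>7} nits -> PQ signal {e:.3f} (10-bit ~{round(e * 1023)})")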
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
James Freeman is offline   Reply With Quote
Old 11th March 2016, 17:55   #36820  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,836
Quote:
Originally Posted by James Freeman View Post
madshi, I don't know what algorithm you used to remap HDR content to SDR, but according to the ST.2084 standard, everything under 100 nits should be exactly the same as in SDR.
I'm sure he has read the appropriate specifications.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote