Old 16th April 2014, 12:54   #26001  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by Asmodian View Post
Sadly I don't think finer grained neuron settings are possible. From the NNEDI3 docs
Oh, you nailed it!

BTW, using monostatic ED1 LL, low debanding, CC AR LL downscaling and J3AR upscaling for chroma & luma on my 1GHz 7850, I used to be capped at:
- 64-neuron NNEDI3 luma for <=24p >=1.85 720p@1080p
- 32-neuron NNEDI3 luma for >=25p 720p@1080p

Now with the new builds I can do 64-neuron NNEDI3 luma for <=25p 720p@1080p, but that's still a no-go for 29.97fps.
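In case anyone wonders why a couple of fps is all it takes to tip it over: a rough frame-budget sketch, assuming NNEDI3 cost scales roughly linearly with neuron count; the 38 ms chain time below is a made-up placeholder, not something measured on my card.

```python
# Per-frame time budget vs. a hypothetical render-chain time. Read the real
# chain time off madVR's Ctrl+J OSD; 38 ms is just a placeholder here.

def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame at the given frame rate."""
    return 1000.0 / fps

render_ms_64n = 38.0                 # hypothetical chain time with 64-neuron luma doubling
render_ms_32n = render_ms_64n * 0.6  # NNEDI3 is only part of the chain, so not a clean 2x saving

for fps in (23.976, 25.0, 29.97):
    budget = frame_budget_ms(fps)
    ok64 = "OK" if render_ms_64n < budget else "drops frames"
    ok32 = "OK" if render_ms_32n < budget else "drops frames"
    print(f"{fps:6.3f} fps: budget {budget:5.1f} ms | 64 neurons: {ok64} | 32 neurons: {ok32}")
```

With those placeholder numbers the 64-neuron chain fits inside the ~40-42 ms budget of 24/25p but not the ~33 ms budget of 29.97p, which matches the behaviour above.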
leeperry is offline   Reply With Quote
Old 16th April 2014, 13:44   #26002  |  Link
pie1394
Registered User
 
Join Date: May 2009
Posts: 212
Tested with Core i5-3570K + Z77 + DDR3-2400 8GB + HD7970 (PCIe Gen3 x16, Catalyst 13.12) on Win7 x64 SP1. Both test build #1 and test build #2 allow more stressful NNEDI3 settings compared to 0.87.9.

Yet test build 2 has noticeably less impact on a concurrent GPU deinterlacing job's performance than test build 1.

Here are the top playback settings for the test builds with the Debanding + Angle_detection and No_Dithering options on the HD7970:

[720x480i60 to 1920x1080p60]
Chroma scaling: NNEDI3_32 (was NNEDI3_16)
Image Doubling: Luma with NNEDI3_32, Chroma with NNEDI3_32 (Chroma was NNEDI3_16)
Image Up-scaling: Jinc3_AR (was Bi-cubic 100 AR)

[1280x720p24 to 1920x1080p24]
Chroma scaling: NNEDI3_32 (was NNEDI3_16 with Luma NNEDI3 doubling)
Image Doubling: Luma with NNEDI3_64 (was NNEDI3_32 at max)
Image Up-scaling: Jinc3_AR
Image Down-scaling: Spline3_AR

[1440x1080i60 to 1920x1080p60]
Chroma scaling: NNEDI3_32 (was NNEDI3_16)
Image Doubling: Luma with NNEDI3_32 (was impossible)
Image Up-scaling: Jinc3_AR
Image Down-scaling: Spline3_AR

[1920x1080p24 P10 to 1920x1080p24]
Chroma scaling: NNEDI3_128 (was NNEDI3_64)

[1920x1080i60 to 1920x1080p60]
Chroma scaling: NNEDI3_32 (was NNEDI3_16)
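For a rough sense of why the 60 fps targets top out at lower neuron counts than the 24p ones, here is a small arithmetic sketch of the per-pixel time budgets for the modes above (illustrative only, not a benchmark):

```python
# Time budget per output pixel for each mode. The 60 fps targets leave about
# 2.5x less time per pixel than the 24p ones, which is broadly why the i60
# modes stop at NNEDI3_32 doubling while 720p24 can afford NNEDI3_64.

modes = [
    ("720x480i60   -> 1920x1080p60", 60.0),
    ("1280x720p24  -> 1920x1080p24", 24.0),
    ("1440x1080i60 -> 1920x1080p60", 60.0),
    ("1920x1080p24 -> 1920x1080p24", 24.0),
    ("1920x1080i60 -> 1920x1080p60", 60.0),
]
out_pixels = 1920 * 1080

for name, fps in modes:
    budget_ms = 1000.0 / fps
    ns_per_pixel = budget_ms * 1e6 / out_pixels
    print(f"{name}: {budget_ms:5.1f} ms/frame, {ns_per_pixel:4.1f} ns per output pixel")
```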

Last edited by pie1394; 17th April 2014 at 01:04. Reason: correct 1280x720p24 max setting on 0.87.9
pie1394 is offline   Reply With Quote
Old 16th April 2014, 13:59   #26003  |  Link
Anime Viewer
Troubleshooter
 
Anime Viewer's Avatar
 
Join Date: Feb 2014
Posts: 339
Quote:
Originally Posted by Asmodian View Post
Yes, but only with SLI on. SLI is much better with the test builds though still not as fast as without SLI.
It's weird that you would see a difference with SLI on or off; Nvidia systems shouldn't have the interop issue. Given that you saw an apparent improvement with the test builds, I tried them on an Optimus system using the Nvidia GPU for MPC and NNEDI3. It made no difference on the Optimus system. Nvidia users don't need to waste time trying this test build.
__________________
System specs: Sager NP9150 SE with i7-3630QM 2.40GHz, 16 GB RAM, 64-bit Windows 10 Pro, NVidia GTX 680M/Intel 4000 HD optimus dual GPU system. Video viewed on LG notebook screen and LG 3D passive TV.
Anime Viewer is offline   Reply With Quote
Old 16th April 2014, 14:21   #26004  |  Link
sexus
Registered User
 
sexus's Avatar
 
Join Date: Apr 2011
Posts: 198
Well, who knows, we may see some improvements coming our way for us Nvidia users; I sure would like to get more optimization for my Titan over here. I get some stuttery playback on some movies here and there, even on 720p content sometimes. I'm using NNEDI3 64 for chroma upscaling, which seems to be pretty hardcore on my Titan with image doubling, but with Jinc3 for chroma upscaling I don't get any occasional stuttering. Odd. My image doubling settings are NNEDI3 64 for both luma doubling at a 1.5 scaling factor and quadrupling at a 3.0 scaling factor, and NNEDI3 16 for both chroma doubling at 1.5 and quadrupling at 3.0.

Error diffusion is set to option 1 and smooth motion is set to "only if there would be motion judder without it". Debanding is enabled, medium to high. The only "trade quality for performance" option checked is "don't use linear light for dithering", which was preset, no idea why.

Last edited by sexus; 16th April 2014 at 14:25.
sexus is offline   Reply With Quote
Old 16th April 2014, 15:11   #26005  |  Link
jaju123
Registered User
 
Join Date: Apr 2012
Posts: 16
Hey Madshi, your reply about the AMD interop problem has been forwarded to the AMD driver team. Hopefully in the future we will see some improvements relating to this issue and maybe a complete resolution.
I'll post any updates in this thread as they come.
jaju123 is offline   Reply With Quote
Old 16th April 2014, 15:58   #26006  |  Link
seiyafan
Registered User
 
Join Date: Feb 2014
Posts: 162
Seeing that 32 neurons is the default, is 16 neurons still better than none? Or does it bring other issues? My card is not fast enough to do 32.
seiyafan is offline   Reply With Quote
Old 16th April 2014, 16:01   #26007  |  Link
cca
Anime Otaku
 
Join Date: Oct 2002
Location: Somewhere in Cyberspace...
Posts: 437
For me the difference between the new test builds can be summarized as follows: with build 1 I can do NNEDI3 at 32 neurons scaling 1280x720p@29.97fps to 1080p; with build 2 I cannot, it just drops frames. Build 1 is a huge improvement for my aging Radeon 5850.
__________________
AMD FX8350 on Gigabyte GA-970A-D3 / 8192 MB DDR3-1600 SDRAM / AMD R9 285 with Catalyst 1.5.9.1/ Asus Xonar D2X / Windows 10 pro 64bit
cca is offline   Reply With Quote
Old 16th April 2014, 16:52   #26008  |  Link
Procrastinating
Registered User
 
Procrastinating's Avatar
 
Join Date: Aug 2013
Posts: 71
Quote:
Originally Posted by seiyafan View Post
Seeing that 32 neurons is the default, is 16 neurons still better than none? Or does it bring other issues? My card is not fast enough to do 32.
NNEDI3 produces an entirely different set of artifacts compared to traditional upscalers, and increasing the neuron count reduces those artifacts. 16 neurons is generally considered comparable to Jinc 3 in terms of having an "equivalent degree" of distortion in the image, though NNEDI3's artifacts are often considered "nicer", particularly with anime. 64 neurons is where you really see the wonders of NNEDI3, rather than just a "nicer"-looking distorted image.
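To give a rough idea of what the neuron count actually buys: roughly speaking, NNEDI3 predicts each newly created pixel from a small window of existing pixels with a small one-hidden-layer network, and the neuron setting is the width of that hidden layer. A toy sketch with random placeholder weights (not NNEDI3's trained ones, and nothing like madVR's actual code):

```python
import numpy as np

# Toy illustration of why more neurons -> fewer artifacts but more GPU work:
# each missing pixel is predicted from a local window of known pixels by a
# one-hidden-layer network whose width is the "neurons" setting.

def predict_pixel(window, w1, b1, w2, b2):
    hidden = np.tanh(w1 @ window + b1)   # hidden layer of N neurons
    return float(w2 @ hidden + b2)       # single output: the new pixel value

rng = np.random.default_rng(0)
window = rng.random(8 * 4)               # e.g. an 8x4 neighbourhood, flattened

for neurons in (16, 32, 64, 128):
    w1 = rng.standard_normal((neurons, window.size))
    b1 = rng.standard_normal(neurons)
    w2 = rng.standard_normal(neurons)
    _ = predict_pixel(window, w1, b1, w2, 0.0)
    # Cost per pixel is dominated by the w1 @ window multiply, so it grows
    # roughly linearly with the neuron count -- hence 64/128 being so much
    # heavier than 16/32 on the GPU.
    macs = neurons * window.size + neurons
    print(f"{neurons:3d} neurons: ~{macs} multiply-adds per predicted pixel")
```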

Last edited by Procrastinating; 16th April 2014 at 16:55.
Procrastinating is offline   Reply With Quote
Old 16th April 2014, 18:21   #26009  |  Link
Farfie
Registered User
 
Farfie's Avatar
 
Join Date: Aug 2012
Posts: 51
Quote:
Originally Posted by seiyafan View Post
Seeing that 32 neurons is the default, is 16 neurons still better than none? Or does it bring other issues? My card is not fast enough to do 32.
For anime I'd say using NNEDI3 at any number of neurons is worth it. For live-action content I still like what it does, but it's more in the open. Just my opinion.
Farfie is offline   Reply With Quote
Old 16th April 2014, 18:45   #26010  |  Link
n3w813
Registered User
 
Join Date: Jan 2006
Posts: 80
@leeperry
Have you had a chance to test your Nvidia GTX 750ti OC yet? I'm thinking about replacing my current vid card with that one.
n3w813 is offline   Reply With Quote
Old 16th April 2014, 19:06   #26011  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Not yet; madshi caught me by surprise with yesterday's faster builds, and I'm still kinda reluctant to buy a graphics card with a 128-bit memory bus... Lucky me, my seller rarely checks his emails, and it would appear that the white dots I was getting were due to a malfunctioning flat HDMI cable (thrown in as a freebie with a TV). For some supernatural reason it would act up when the GPU reached 70°C and the dots would disappear a few minutes later... and that never happened on the Windows desktop.

hu1kamania was kind enough to post results using his overclocked 750 Ti; it's essentially on par with a 7850, without the interop lag lottery and saving ±70W in the process.
leeperry is offline   Reply With Quote
Old 16th April 2014, 22:52   #26012  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,406
Quote:
Originally Posted by Anime Viewer View Post
It's weird that you would see a difference with SLI on or off; Nvidia systems shouldn't have the interop issue. Given that you saw an apparent improvement with the test builds, I tried them on an Optimus system using the Nvidia GPU for MPC and NNEDI3. It made no difference on the Optimus system. Nvidia users don't need to waste time trying this test build.
Given the large increase in PCI-E activity with SLI on, I think it is safe to assume that something like AMD's interop issue is going on with SLI enabled, especially since an attempt to improve AMD's issue improved SLI performance so dramatically.

Optimus is not similar to SLI in any way.

I agree there is no need for single Nvidia GPU users to test these builds, but I would love to see someone else test SLI performance. (And CF users, are there no CF users anymore?)

I noticed a lot of Optimus-specific options when editing profiles with Nvidia Inspector; has anyone played with them to see if madVR's behavior can be improved on an Optimus system?
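For a rough sense of scale on that extra PCI-E traffic: a back-of-the-envelope sketch, assuming a 16-bit RGBA 1080p intermediate frame and nominal bus figures (illustrative numbers, not anything measured in madVR):

```python
# How much raw data a per-frame interop round trip would move vs. nominal
# PCIe bandwidth. Assumes a 16-bit-per-channel RGBA 1080p frame.

width, height = 1920, 1080
bytes_per_pixel = 4 * 2                        # RGBA, 16 bits per channel
frame_mb = width * height * bytes_per_pixel / 2**20

for fps in (24, 60):
    gb_per_s = frame_mb * fps * 2 / 1024       # *2: copy out and copy back
    print(f"{fps} fps: {frame_mb:.1f} MB/frame, ~{gb_per_s:.2f} GB/s round trip "
          f"(PCIe 2.0 x16 ~8 GB/s, 3.0 x16 ~16 GB/s nominal)")
```

Even at 60 fps that is a small fraction of the bus, so if SLI or the AMD path really is bottlenecked there, it looks more like synchronization stalls around the copies than raw bandwidth.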
Asmodian is offline   Reply With Quote
Old 16th April 2014, 23:07   #26013  |  Link
peplegal
Registered User
 
Join Date: Jan 2014
Posts: 6
Test version is not working!

Hi guys...

...for some esoteric reason, when I copy the files from the last test version (AMD interop) over my default installation, MPC-HC stops loading madVR (it starts using the LAV renderer instead).

I rename "madVRinteropTest2.ax" to "madVR.ax" and copy the ".ax", ".dll" and ".exe" files over the previous ones.

What am I doing wrong?

Any advice?

Thanks in advance.
peplegal is offline   Reply With Quote
Old 17th April 2014, 01:25   #26014  |  Link
Stereodude
Registered User
 
Join Date: Dec 2002
Location: Region 0
Posts: 1,436
Quote:
Originally Posted by madshi View Post
Can you please try to find out which exact madVR build introduced this problem? Also a debug log might help figuring out why the switch fails. Please don't switch back and forth in the debug log. Just let it fail, then stop, I don't need to see it working in the log, I only need to see the fail. Please enable the OSD (Ctrl+J) while creating the debug log, because otherwise the log will not contain all important information.
FWIW, the MPC-HC 1.7.4 update seems to fix the issue. Not sure why some versions of madVR behaved differently with the older versions of MPC-HC, though.

Do you still want the debug log?

Last edited by Stereodude; 17th April 2014 at 01:31.
Stereodude is offline   Reply With Quote
Old 17th April 2014, 02:19   #26015  |  Link
Procrastinating
Registered User
 
Procrastinating's Avatar
 
Join Date: Aug 2013
Posts: 71
Quote:
Originally Posted by peplegal View Post
MPC-HC stops loading MadVR (start using LAV render instead).
Uninstall everything, do a registry cleanup with CCleaner or whatever, reinstall everything, and give your user account full control of the madVR folder and files. The last step is probably the only real fix here, but the rest is always nice to do once in a while.
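If you'd rather not wipe everything first, it may be enough to re-register the filter after overwriting the files; a minimal sketch along these lines, run elevated (the paths are examples only, point them at wherever madVR and the test files actually live):

```python
# Copy the (already renamed) test-build files over an existing madVR install
# and re-register the COM filter so DirectShow picks up the new binary.
# Run from an elevated prompt; both paths below are placeholders.
import shutil
import subprocess
from pathlib import Path

test_dir = Path(r"C:\temp\madVRinteropTest2")   # where the test files were unpacked
install_dir = Path(r"C:\madVR")                 # wherever madVR.ax currently lives

for src in (*test_dir.glob("*.ax"), *test_dir.glob("*.dll"), *test_dir.glob("*.exe")):
    shutil.copy2(src, install_dir / src.name)   # overwrite the installed copies

subprocess.run(["regsvr32", "/s", str(install_dir / "madVR.ax")], check=True)
```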

Last edited by Procrastinating; 17th April 2014 at 02:21.
Procrastinating is offline   Reply With Quote
Old 17th April 2014, 05:56   #26016  |  Link
Osjur
Registered User
 
Osjur's Avatar
 
Join Date: Oct 2010
Posts: 5
Hi all.

I have been testing build 1 with an overclocked 290X, and I think these are the best settings so far for me without dropped frames:

1920x1080 10bit @ 23.97fps to 2560x1600
Debanding on, Error Diffusion 1 dithering, all trade quality settings unticked and Smooth Motion on

Chroma & Image upscaling: Jinc 3 taps + ar

NNEDI3 Luma doubling: 64 neurons
NNEDI3 Luma quadrupling: 64 neurons

Way faster compared to the normal build, where I was dropping frames like no other.

In which order should you enable those NNEDI3 algorithms?
I'm guessing Luma 2x, 4x, Chroma 2x, 4x, Chroma upscaling
Osjur is offline   Reply With Quote
Old 17th April 2014, 09:26   #26017  |  Link
QBhd
QB the Slayer
 
QBhd's Avatar
 
Join Date: Feb 2011
Location: Toronto
Posts: 697
Quote:
Originally Posted by Osjur View Post
Hi all.

I have been testing build 1 with an overclocked 290X, and I think these are the best settings so far for me without dropped frames:

1920x1080 10bit @ 23.97fps to 2560x1600
Debanding on, Error Diffusion 1 dithering, all trade quality settings unticked and Smooth Motion on

Chroma & Image upscaling: Jinc 3 taps + ar

NNEDI3 Luma doubling: 64 neurons
NNEDI3 Luma quadrupling: 64 neurons

Way faster compared to the normal build, where I was dropping frames like no other.

In which order should you enable those NNEDI3 algorithms?
I'm guessing Luma 2x, 4x, Chroma 2x, 4x, Chroma upscaling
Quadrupling for that content seems to me like a complete waste of resources... And personally I want Chroma Upscaling before Chroma Doubling.

Also, you did not mention which image downscaling you use.
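To put numbers on the quadrupling point: a quick sketch of the scale factors involved for 1080p content on a 2560x1600 screen (illustrative arithmetic only):

```python
# Doubling 1920x1080 already overshoots a 2560x1600 target, so a quadrupled
# image only gets downscaled even harder afterwards -- wasted GPU work.

src_w, src_h = 1920, 1080
dst_w, dst_h = 2560, 1600

print(f"needed scale factor: {dst_w / src_w:.2f}x horizontal, {dst_h / src_h:.2f}x vertical")

for name, factor in (("doubling (2x)", 2), ("quadrupling (4x)", 4)):
    up_w, up_h = src_w * factor, src_h * factor
    print(f"{name}: intermediate {up_w}x{up_h}, then downscaled "
          f"{up_w / dst_w:.2f}x / {up_h / dst_h:.2f}x to fit {dst_w}x{dst_h}")
```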

QB
__________________
QBhd is offline   Reply With Quote
Old 17th April 2014, 10:49   #26018  |  Link
Qaq
AV heretic
 
Join Date: Nov 2009
Posts: 422
Test 1 works faster for me = better. Win7, 7750, PCIe 2.0 x16.
Qaq is offline   Reply With Quote
Old 17th April 2014, 11:07   #26019  |  Link
Osjur
Registered User
 
Osjur's Avatar
 
Join Date: Oct 2010
Posts: 5
Quote:
Originally Posted by QBhd View Post
Quadrupling for that content seems to me like a complete waste of resources... And personally I want Chroma Upscaling before Chroma Doubling.

Also you did not mention what Image downscaling you use

QB
CRARLL for downscaling

Just testing how much my card can take before choking.

Still trying to figure out the best compromise between image quality and speed.
Osjur is offline   Reply With Quote
Old 17th April 2014, 13:06   #26020  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
There seem to be just as many people who prefer one build as the other; maybe madshi will come up with a version combining the best of both builds? Or are we gonna end up with a sub-option in madVR's config panel just to work around AMD's sloppy drivers?

I'm still surprised that they haven't released WHQL drivers in over 4 months now; if we're lucky they'll have the kindness to release some by the end of the year. I'm not installing beta drivers, especially when I see that the latest ones might brick R9 290X Lightning boards due to bogus fan speeds under load... that's what I call beta.
leeperry is offline   Reply With Quote