Old 15th June 2015, 11:25   #31081  |  Link
kalston
Registered User
 
Join Date: May 2011
Posts: 164
Does the fact that overlay mode reports 24fps in Fraps while regular FSE reports 144 (or whatever my screen refresh rate is) matter in any way? I'm curious because I've started using Smooth Motion (as I've heard it's great with high refresh rates) and I've noticed that it doubles the reported framerate when using overlay mode (so 24fps becomes 48, 60 becomes 120 etc.), while obviously with regular FSE it just shows a number equal to my refresh rate no matter what.

(overlay feels just as smooth and reliable as FSE for me so I kinda doubt it matters but I'd rather make sure I'm not doing anything wrong )
kalston is offline   Reply With Quote
Old 15th June 2015, 13:01   #31082  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,959
Quote:
Originally Posted by aufkrawall View Post
That's great! So now AMD users have a good alternative to NNEDI3, combined with SuperRes.
AMD was/is still better at NNEDI3 than NVIDIA for the same price, even with the interop copyback.

But it's not a real issue for either AMD or NVIDIA; they both aim at gaming performance, not OpenCL.
huhn is offline   Reply With Quote
Old 15th June 2015, 13:04   #31083  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,959
Quote:
Originally Posted by kalston View Post
Does the fact that overlay mode reports 24fps in Fraps while regular FSE reports 144 (or whatever my screen refresh rate is) matter in any way? I'm curious because I've started using Smooth Motion (as I've heard it's great with high refresh rates) and I've noticed that it doubles the reported framerate when using overlay mode (so 24fps becomes 48, 60 becomes 120 etc.), while obviously with regular FSE it just shows a number equal to my refresh rate no matter what.

(overlay feels just as smooth and reliable as FSE for me so I kinda doubt it matters but I'd rather make sure I'm not doing anything wrong )
This has to do with how frames are presented. You can ignore your Fraps results. The new FSE path just draws more repeated frames, while overlay leaves them out and presents "nothing", which comes down to the same result.
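To illustrate the counting difference (a rough sketch of the idea only, not madVR's actual presentation code; the 144 Hz / 24 fps figures are just the numbers from the question):

Code:
# Why an fps counter that counts Present() calls shows the refresh rate in
# FSE but roughly the source frame rate (or 2x with Smooth Motion) in overlay.
REFRESH_HZ = 144      # display refresh rate (example value from the question)
CONTENT_FPS = 24      # source frame rate

# FSE-style path: something is presented on every vsync, repeating the last
# frame until a new one is due -> the counter sees the refresh rate.
fse_presents_per_sec = REFRESH_HZ                      # 144

# Overlay-style path: only genuinely new frames are presented; on the other
# vsyncs the previous frame simply stays on screen -> the counter sees ~24.
overlay_presents_per_sec = CONTENT_FPS                 # 24

# With Smooth Motion, frames whose display time straddles a vsync get blended,
# so the number of distinct presented images roughly doubles at non-exact
# frame rate / refresh rate ratios (24 -> ~48, 60 -> ~120).
overlay_with_smooth_motion = 2 * CONTENT_FPS           # ~48
print(fse_presents_per_sec, overlay_presents_per_sec, overlay_with_smooth_motion)

Either way the same images stay on screen for the same amount of time, which is why it looks identical.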
huhn is offline   Reply With Quote
Old 15th June 2015, 13:22   #31084  |  Link
kalston
Registered User
 
Join Date: May 2011
Posts: 164
Good to know. I really love overlay mode as it combines windowed mode's advantages with FSE's performance. My monitor won't accept 10-bit input, so it appears there is really no point bothering with FSE on my machine.
kalston is offline   Reply With Quote
Old 15th June 2015, 13:26   #31085  |  Link
RyuzakiL
Registered User
 
Join Date: May 2015
Posts: 36
madVR 88.12 + AMD Crossfire Workaround

First of all, I think this is the best version madVR has had so far in terms of handling queues, especially when using DX11 FSE 10-bit mode. (And display mode switching + FSE was insanely fast this time: around 3 seconds of waiting compared to previous versions, where you had to wait 10-15 seconds at best.)

I'm running Win 8.1 64-bit with 2x HD 7850 in Crossfire + an FX-8320 @ 4.3 GHz.

I'm using a Samsung 40-inch LED TV as my monitor. All AMD CCC video settings are turned off except deinterlacing and ITC Processing, and the pixel format is RGB 4:4:4.

I followed the madVR setup guide found here: https://imouto.my/madvr/
but made some modifications: I dropped NNEDI3 and use Super-XBR instead, since it seems lighter on AMD cards than NNEDI3 + InteropHack enabled.

I'm also using the SuperRes filter on chroma upscaling; I know the SuperRes filter's default settings were tuned for NNEDI3, but picture quality doesn't seem to be affected much.

I cannot discern any big difference between Super-XBR and NNEDI3, so performance-wise I selected Super-XBR, since my HD 7850 likes it better, hehe.

Workaround for AMD Crossfire Users >>

The stupid AMD Catalyst always mistakes madVR for a game, so Crossfire is always turned on whenever it detects madVR.

I found a simple workaround to stop AMD Catalyst from using Crossfire when it detects madVR. Here's the procedure:

Just open AMD CCC, head to the "My Digital Flat-Panels" tab, check "Enable GPU up-scaling" and of course put the dot on "scale image to full panel size", and voila! No more Crossfire with madVR.

Advantages:

1. No more wasted power due to Crossfire (we already know madVR does not benefit from Crossfire).

2. Performance may even be somewhat better since Crossfire is not enabled (maybe because there is no more mirroring of data between the GPUs).

Disadvantages:

1. madVR's display mode switching will no longer work. I don't know why, but it seems this workaround bypasses madVR's display mode switching.

2. It may interfere with madVR's handling of things in other ways; in particular, when using DX11 FSE 10-bit the queues aren't full with this workaround. So you are stuck using DX11 windowed fullscreen at 8-bit.

So in the end it's up to you.

If you want DX11 FSE 10-bit running smoothly, with Crossfire wasting energy, then do not use this workaround.

But if you want to save electricity and let the 2nd card enter powersave, then use this workaround.

FYI, I cannot discern any difference at all between DX11 FSE 10-bit and DX11 WF (windowed fullscreen) 8-bit. That's why I'm using this setup.


Any corrections, objections and violent reactions are welcome.

P.S. You need to use madVR's Smooth Motion since you are stuck at 60 Hz when using this workaround.
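(To illustrate why Smooth Motion matters here: 60 / 24 = 2.5 is not an integer, so plain frame repetition has to alternate 3 and 2 repeats per source frame, i.e. judder. A simplified sketch of the blending idea only, not madVR's actual algorithm:)

Code:
# 24 fps content on a 60 Hz display: each source frame should cover 2.5
# vsyncs. Blending adjacent frames at the "fractional" vsyncs avoids the
# 3:2 repeat pattern.
refresh_hz, fps = 60.0, 24.0
vsyncs_per_frame = refresh_hz / fps              # 2.5

for vsync in range(6):
    pos = vsync / vsyncs_per_frame               # position in source-frame time
    a = int(pos)                                 # earlier source frame index
    w = pos - a                                  # weight of the following frame
    if w == 0:
        print(f"vsync {vsync}: show frame {a}")
    else:
        print(f"vsync {vsync}: blend {1 - w:.1f}*frame{a} + {w:.1f}*frame{a + 1}")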

Last edited by RyuzakiL; 15th June 2015 at 13:30.
RyuzakiL is offline   Reply With Quote
Old 15th June 2015, 13:59   #31086  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 497
Quote:
Originally Posted by madshi View Post
Currently we have about 2 million options in the upscaling refinement page. I want most of them gone. So is it really too fast to remove an option when after a week's worth of testing *everybody* had the same opinion about that option?
I completely understand your developer point of view; I would probably do the same, just so that I could concentrate on the more important points on my checklist. I think you just did too much at the same time when you introduced so many options in one of the last updates; that's why people get confused, try out 2 million different settings, and never provide meaningful feedback for just one option. I never once had the feeling you did this on purpose, but I think you can understand that people like me, who also have limited time for these tests, need at least 3-4 weeks to get familiar with the new settings and to concentrate on one specific test case. Otherwise you end up with a chaotic mix of feedback, which doesn't really help anyone.

Quote:
Originally Posted by madshi View Post
Fair enough. That also explains your preference for lower sharpness settings, though. When applying sharpening before upscaling you always need lower sharpening strength compared to sharpening after upscaling.
Yes, and it's important that others not just turn up the knobs but also see the negatives as well as the positives; that's one of the very reasons there need to be example shots. Most of the feedback I have read about FineSharp was not screenshot based at all; basically everyone just says "I like XX better than YY", but very rarely did they take the time to actually show it to everyone else. When we did the dithering tests, there was great communication, screenshots and lots of example shots where you could very clearly see the positives and negatives, and you also reacted very quickly to improve them. With FineSharp this was not the case, since no one other than TheLion, you and me even provided screenshots. That's 3 people out of maybe 20; the rest basically just said "I like A better than B" but could not tell us why. If you enable picture processing on a modern LCD, you have to be very careful, yet the majority will simply tell you that it's great for picture quality. And we know that such quick judgements are rarely about accuracy; they come from very quick first impressions.
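To make the quoted point about sharpening strength before vs. after upscaling concrete, here is a toy 1D sketch (illustration only, a plain unsharp mask and linear upscaler, not FineSharp's or madVR's code): the same strength produces an overshoot halo roughly twice as wide in output pixels when sharpening is applied before a 2x upscale.

Code:
import numpy as np

def unsharp(x, strength):
    # simple 1D unsharp mask: boost the difference to a small blur
    blur = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return x + strength * (x - blur)

def upscale2x(x):
    # plain linear 2x upscale
    return np.interp(np.arange(2 * len(x)) / 2.0, np.arange(len(x)), x)

edge = np.array([0.0] * 8 + [1.0] * 8)        # a hard edge

a = upscale2x(unsharp(edge, 1.0))             # sharpen first, then upscale
b = unsharp(upscale2x(edge), 1.0)             # upscale first, then sharpen

# Count output samples overshooting 1.0 (ignoring the array borders):
print((a[4:-4] > 1.0).sum(), (b[4:-4] > 1.0).sum())   # 2 vs 1 -> wider halo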

If you show the average Joe two pictures on two of the very same monitors at the same time and you maximize every picture processing setting on one monitor, the majority will fall for it and pick the one with the heavy processing, not the more accurate one. I, however, specifically look for an accurate representation; that's why I also bought an Eizo CG, because I want to see every detail in the source. And I don't think that picking an algorithm that looks totally processed (like FineSharp LL does) makes any sense at all when you can have FineSharp without LL. The negatives quickly add up and you end up with heavy ringing that destroys every picture.

One of the main reasons why a lot of DVDs look so bad is not their low native resolution but the excessive ringing that was applied in post. They are almost completely unwatchable when you upscale them to HD or 4K. FineSharp with the current LL will amplify that even more, and that doesn't make any sense when you can just use FineSharp without LL and avoid these negatives.

Quote:
Originally Posted by madshi View Post
I wouldn't recommend 0.100, either. And maybe I wouldn't go very high in the image enhancement settings. But I think in upscaling refinement higher thinning values may be useful.
At least in image enhancement I would not go any higher, but that's up for discussion. If someone shows some benefits of going a lot higher and proves that it does no harm, then there should not be anything against it.

Quote:
Originally Posted by madshi View Post
It seems several other users are having a different subjective impression, though. Things like this are hard to judge. I can't value one user's subjective impression higher than those of several others.
As long as you still value screenshots more, and an alarm still rings when you look at them, I see no harm in that. But the majority is not a good measure of accuracy, as I already explained above.

Quote:
Originally Posted by madshi View Post
From what I can see, LL on/off both produce different kinds of artifacts. The key question is which kind of artifacts are worse. I know your opinion about it. I'm not sure myself.

I'm wondering whether maybe I should try a FineSharp LL build using the BT.709 gamma curve instead of a 2.2 power curve. Maybe that could give us the best of both worlds? Will have to wait for next weekend, though.
If you can get rid of the heavy ringing, I would not be against LL. But judging by results like these, this heavy ringing is exactly what made you create your AR algorithm for the upscalers in the first place. With FineSharp LL you basically undo your own work by adding heavy ringing again. I just cannot see how that's a good thing, especially not when you're watching cartoons or anime or other content, where this will look totally processed and completely unnatural. I have not found a sample that doesn't exhibit the heavy ringing. If you zoom into your source (PotPlayer can do that natively, so madVR does all the rendering here), it's hard to miss; it's everywhere.

And as I already explained, I can also see that the diagonal lines get brightened up a bit, but that's because of the compression artefacts that get amplified by higher strength values. The squirrel, however, is completely unaffected by this, without all of the heavy ringing. And for me, real images are always more important than diagonal lines. In real images, the tiny amount of brightening is not noticeable at all. At least not in the shots I saw, including the ones that you kindly provided.

I am pretty sure that 6233638 and cyberbeing would also love to have a say in this, but sadly both seem to be too time constrained or just not around atm. So please at least leave the LL option in place, so others are still able to test with before/after results.
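For reference on the BT.709 vs. 2.2 question in madshi's last quote: the two curves mostly differ near black, where BT.709 has a linear toe. This is just the standard definition of the two transfer functions; how madVR would actually apply either inside FineSharp LL is of course up to madshi.

Code:
# Rec. 709 OETF (linear light -> encoded value) vs. a pure 2.2 power curve.
def bt709_oetf(l):
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def gamma22_encode(l):
    return l ** (1.0 / 2.2)

for l in (0.001, 0.01, 0.1, 0.5, 1.0):
    print(f"L={l:<5}  BT.709={bt709_oetf(l):.3f}  gamma 2.2={gamma22_encode(l):.3f}")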

Last edited by iSunrise; 15th June 2015 at 14:11.
iSunrise is offline   Reply With Quote
Old 15th June 2015, 14:03   #31087  |  Link
Meulen92
Registered User
 
Join Date: May 2014
Posts: 9
Quote:
Originally Posted by RyuzakiL View Post
Workaround for AMD Crossfire Users >>

The stupid AMD Catalyst always mistakes madVR for a game, so Crossfire is always turned on whenever it detects madVR.

I found a simple workaround to stop AMD Catalyst from using Crossfire when it detects madVR. Here's the procedure:

Just open AMD CCC, head to the "My Digital Flat-Panels" tab, check "Enable GPU up-scaling" and of course put the dot on "scale image to full panel size", and voila! No more Crossfire with madVR.

Advantages:

1. No more wasted power due to Crossfire (we already know madVR does not benefit from Crossfire).

2. Performance may even be somewhat better since Crossfire is not enabled (maybe because there is no more mirroring of data between the GPUs).

Disadvantages:

1. madVR's display mode switching will no longer work. I don't know why, but it seems this workaround bypasses madVR's display mode switching.

2. It may interfere with madVR's handling of things in other ways; in particular, when using DX11 FSE 10-bit the queues aren't full with this workaround. So you are stuck using DX11 windowed fullscreen at 8-bit.

So in the end it's up to you.

If you want DX11 FSE 10-bit running smoothly, with Crossfire wasting energy, then do not use this workaround.

But if you want to save electricity and let the 2nd card enter powersave, then use this workaround.

FYI, I cannot discern any difference at all between DX11 FSE 10-bit and DX11 WF (windowed fullscreen) 8-bit. That's why I'm using this setup.


Any corrections, objections and violent reactions are welcome.

P.S. You need to use madVR's Smooth Motion since you are stuck at 60 Hz when using this workaround.
AFAIK you can just create a custom game profile in CCC for madVR, then scroll all the way down and select Disabled as the Crossfire mode. Enjoy regular madVR use without Crossfire and without restricting your options.

Last edited by Meulen92; 15th June 2015 at 14:22.
Meulen92 is offline   Reply With Quote
Old 15th June 2015, 14:15   #31088  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,716
It's definitely LL that makes contours look too thick and dark with FS.
not LL: [screenshot]

LL: [screenshot]
Sample (1 frame):
http://www52.zippyshare.com/v/lTUNZizZ/file.html

So I vote to make non-LL the new default option. In fact, I don't see any reason for having an LL option at all if it distorts images as shown.

LL is devilish. Always, it seems.

As for thinning: with low values I don't see any effect (at least none positive). Combined with SuperRes it can even look worse, because thinning already seems to introduce tiny bits of aliasing which then get stronger. With higher values, this aliasing also becomes visible without SuperRes. So I don't see any point in this parameter; I vote to set it to 0 by default and probably even remove the option.

Last edited by aufkrawall; 15th June 2015 at 14:23.
aufkrawall is offline   Reply With Quote
Old 15th June 2015, 14:24   #31089  |  Link
RyuzakiL
Registered User
 
Join Date: May 2015
Posts: 36
Quote:
Originally Posted by Meulen92 View Post
AFAIK you could just create a custom game profile in CCC for Madvr, then scroll all the way down and select Disabled as Crossfire mode. Enjoy regular MadVR use without crossfire and without restricting your options.

Thanks! Now that is much simpler. I hope this kind of configuration/procedure gets posted on the 1st page, so new users won't have to ask again or search their eyeballs out.

Just my two cents.
RyuzakiL is offline   Reply With Quote
Old 15th June 2015, 14:28   #31090  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,832
Quote:
Originally Posted by aufkrawall View Post
So I vote to make non-LL the new default option. In fact, I don't see any reason for having an LL option at all if it distorts images as shown.

LL is devilish. Always, it seems.
That is just false. Both LL on and LL off will cause problems with certain content.
How do you think we arrived at the decision for LL on in the first place (and incidentally, with no one objecting)? Because someone thought LL on looked better? Exactly!

Quote:
Originally Posted by RyuzakiL View Post
Thanks! Now that is much simpler. I hope this kind of configuration/procedure gets posted on the 1st page, so new users won't have to ask again or search their eyeballs out.
If you run a CrossFire or SLI setup, you had better know how to set up a profile to disable it, since it's not going to work with everything.
Because if you don't, you're in for a world of hurt.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 15th June 2015, 14:37   #31091  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,716
Quote:
Originally Posted by huhn View Post
AMD was/is still better at NNEDI3 than NVIDIA for the same price, even with the interop copyback.

But it's not a real issue for either AMD or NVIDIA; they both aim at gaming performance, not OpenCL.
I was not able to use NNEDI3 at all with a Hawaii GPU, while it was never an issue on GK110 (PCIe 2.0). With a GTX 980, NNEDI3 with 64 neurons can even be used for 1080p24 (or even p30) -> WQHD, although it is only rated at 5 TFLOPS while Hawaii XT is at 5.6 TFLOPS. I have never read that Hawaii can do 1080p24 -> WQHD with 64 neurons.

Quote:
Originally Posted by nevcairiel View Post
That is just false. Both LL on and LL off will cause problems with certain content.
How do you think we arrived at the decision for LL on in the first place (and incidentally, with no one objecting)? Because someone thought LL on looked better? Exactly!
"Better" is not objective. LL scaling seems to always manipulate brightness in a way that leads to an inaccurate result compared to the source, at least from what I've seen.
I don't know if it's directly comparable, but I'll post an example regarding C-R DS in the next few days (it could be a bit time consuming to reproduce) which shows this clearly too.

Even if LL can look "better" in some cases, it is hardly worth it when it totally fails in some other cases (while gamma-corrected hardly ever seems to completely fail).
aufkrawall is offline   Reply With Quote
Old 15th June 2015, 14:40   #31092  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,225
Madshi, are there any knobs to turn for super-xbr with regard to sharpness, outside of SuperRes?
ryrynz is offline   Reply With Quote
Old 15th June 2015, 14:43   #31093  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,832
Quote:
Originally Posted by aufkrawall View Post
"Better" is not objective. LL scaling seems to always manipulate brightness in a way that leads to an inaccurate result compared to the source, at least from what I've seen.
I don't know if it's directly comparable, but I'll post an example regarding C-R DS in the next few days (it could be a bit time consuming to reproduce) which shows this clearly too.

Even if LL can look "better" in some cases, it is hardly worth it when it totally fails in some other cases (while gamma-corrected hardly ever seems to completely fail).
I could link you an example right now where not using LL downscaling distorts the brightness quite badly, too. It always depends a bit on the source; neither variant is perfect. In my experience, LL downscaling will usually look more natural on live-action content. On animation it may be another matter entirely.
Maybe the same categories apply to FS as well: LL may look better on live-action, but have problems with animation or other content with artificial hard borders (like zoomed macro images).
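A toy example of where the brightness difference between the two comes from: downscaling is mostly averaging, and averaging encoded (gamma-space) values is not the same as averaging light. This assumes a simple 2.2 display gamma purely for illustration.

Code:
# Average a pure black and a pure white pixel, as a downscaler effectively does.
gamma = 2.2
black, white = 0.0, 1.0                             # encoded (gamma-space) values

# Non-LL scaling averages the encoded values:
encoded_avg = (black + white) / 2                   # encoded 0.5
light_non_ll = encoded_avg ** gamma                 # what the display then emits

# LL scaling averages actual light:
light_ll = (black ** gamma + white ** gamma) / 2    # half of full light

print(light_non_ll, light_ll)                       # ~0.218 vs 0.5 of full brightness

Neither result is "right" for every kind of content, which is why both modes can look off on different sources.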
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 15th June 2015 at 14:46.
nevcairiel is offline   Reply With Quote
Old 15th June 2015, 14:43   #31094  |  Link
tobindac
Registered User
 
Join Date: May 2013
Posts: 115
For those of us valuing sharpness above all else, all other algorithms seem useless compared to super-xbr, based on the example pictures. Though to be honest, I don't seem to be able to reproduce such differences here on common sources. Do you use it for image upscaling or something? Because here I only see a chroma upscaling option for it, and I've always had a hard time seeing differences between chroma algorithms on 1080p sources.
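For context on why chroma differences are subtle on 1080p sources: 4:2:0 video stores the two chroma planes at half the luma resolution in each direction, so the renderer has to upscale them back to full resolution before display, and it's colour detail only, while most perceived sharpness sits in the luma. A minimal bilinear sketch of that step (madVR's actual options like super-xbr or NNEDI3 are far more sophisticated):

Code:
import numpy as np

def bilinear_2x(plane):
    # naive 2x bilinear upscale of one chroma plane
    h, w = plane.shape
    ys = np.clip((np.arange(2 * h) + 0.5) / 2 - 0.5, 0, h - 1)
    xs = np.clip((np.arange(2 * w) + 0.5) / 2 - 0.5, 0, w - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * plane[np.ix_(y0, x0)] +
            (1 - wy) * wx * plane[np.ix_(y0, x1)] +
            wy * (1 - wx) * plane[np.ix_(y1, x0)] +
            wy * wx * plane[np.ix_(y1, x1)])

chroma = np.random.rand(540, 960)     # one chroma plane of a 1080p 4:2:0 frame
print(bilinear_2x(chroma).shape)      # -> (1080, 1920), i.e. full 4:4:4 resolution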
tobindac is offline   Reply With Quote
Old 15th June 2015, 14:47   #31095  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,716
Quote:
Originally Posted by nevcairiel View Post
I could link you an example right now where not using LL downscaling distorts the brightness quite badly, too. It always depends a bit on the source; neither variant is perfect. LL downscaling will usually look more natural on live-action content. On animation it may be another matter entirely.
Ok. However, I myself have already stumbled over three (with FS now four) more or less "real world" examples in which LL was clearly worse.
Well, then I suppose it might be best if madshi leaves us the option to choose, like with scaling.
aufkrawall is offline   Reply With Quote
Old 15th June 2015, 15:36   #31096  |  Link
ThurstonX
Registered User
 
Join Date: Mar 2006
Posts: 58
Quote:
Originally Posted by tobindac View Post
For those of us valuing sharpness above all else, all other algorithms seem useless compared to super-xbr, based on the example pictures. Though to be honest, I don't seem to be able to reproduce such differences here on common sources. Do you use it for image upscaling or something? Because here I only see a chroma upscaling option for it, and I've always had a hard time seeing differences between chroma algorithms on 1080p sources.
I had that problem last night when I finally got a few minutes to test. S-XBR is in the Image Doubling section, in the drop-down menus for Luma and Chroma (I like to Capitalize certain Words, cuz I ain't no e.e. cummings ;-)). It's true that S-XBR is in the Chroma upscaling section, but not the Image upscaling section. I think that's where the confusion comes in. madshi's post set me straight.

@madshi:
FWIW, I just opened the Settings to confirm the above, and sure enough "Double Chroma Resolution" is checked. That must be the new default setting, as I've never checked that option in the past. I think that maybe "Double Luma Resolution" is also getting checked as a new default (both set to "NNEDI3, 16 neurons"), as I'm pretty sure I didn't check double luma on this laptop (it can't handle it).

Last edited by ThurstonX; 15th June 2015 at 15:55.
ThurstonX is offline   Reply With Quote
Old 15th June 2015, 17:57   #31097  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,959
Quote:
Originally Posted by aufkrawall View Post
I was not able to use NNEDI3 at all with Hawaii GPU, while it was never an issue on GK110 (PCIe 2.0). With a GTX 980, NNEDI3 64 even can be used for 1080p24 (or even p30) -> WQHD, although it is only specified with 5TFLOPs while Hawaii XT is with 5.6TFLOPs. I have never read that Hawaii can do 1080p24 -> WQHD with 64 neurons.
Now compare the prices of these cards and you will see what I mean.

And my R9 270 outperforms my GTX 760 when it comes to madVR. Of course, my AMD card is in a much-needed PCIe 3.0 system, and I know some AMD systems can't do NNEDI3 at all, but that's rare; most people with the same cards have no issue.
My R9 270 can do 256 neurons for 480p23 to 1080p23 easily; my GTX 760 can't do more than 128 neurons in that case.
Maybe the copyback issue gets out of hand at resolutions higher than 1080p.
huhn is offline   Reply With Quote
Old 15th June 2015, 18:20   #31098  |  Link
Eyldebrandt
Registered User
 
Join Date: Oct 2012
Posts: 26
I know madshi wants reports about FS first, but it's more practical for me to give an overview of the latest features.

Finesharp
I used it years ago when Didée released it for AviSynth, after a Doom9 user asked him what the best sharpener for HD sources was.
So here we have an important point:
FineSharp is definitely not an option to consider for sources < 720p.
FS is a factory for aliasing, distortions and artifacts. It should only be used with very high-quality content. Linear light or not, it's too strong for low-quality sources; the result is always horrible.
At least in my opinion, and my opinion is the only thing I can provide here.

Thinning control seems to be the equivalent of the "xstr" setting in Didée's AviSynth script.
In other words, xstr/thinning is the setting you absolutely want to turn down the most, unless you're a fan of aliasing.

FineSharp is a very destructive sharpener, and to be effective, I think it will only be usable with Blu-ray or 1080p content.

And even on these types of sources, I think the strength should not exceed 1.2 in upscaling refinement, and no more than 0.7/0.8 in image enhancement.

Repair was set at 1.0, and to be honest, I don't know what to think about that. In fact, I have not managed to determine whether the repair function is equivalent to the "cstr" factor in the AviSynth script. I think it partly is, but not completely, because I suspect the madVR version differs from the original script in some ways.
If repair does have a strong connection with cstr, then 1.0 seems too high to me.

LumaSharpen

I discovered this sharpener in a previous build of madVR.
I think it's an adaptation of a SweetFX option, and I welcomed it with a certain mistrust, because I hate SweetFX :d
Anyway, I tried to give it a chance, and after some tests I have to say it can be useful, on very high-quality content, again.
With really moderate settings, and considering it's just an eye-candy toy, it adds some kind of pop effect which is not unpleasant.

As it acts on the luma, it brightens just some objects and pixels in a frame, and in some ways I like it.

Maybe a lead for future evolutions: adding an unsharp mask (like Unsharp HQ) to work with LumaSharpen would open a new path for those who want to build a very "poppy" picture. If madshi keeps LumaSharpen in madVR, and I'd like him to, this will be an option to seriously consider.

Another opinion: if you're upscaling the content, you should definitely not turn on a sharpener in image enhancement, whatever it is.

SuperRes
I'm a great fan of the method; I'm sold on the principle alone.
But, as Shiandow is currently working on improving it, I don't think it's the right time to form a definitive opinion about it.

Super-xbr

As I play almost only 1080p content on a 3440x1440 monitor and a 4K VP, and as I have the rigs to drive anything, I was a faithful user of NNEDI3 for chroma upscaling and resolution doubling.
But super-xbr has planted a doubt in me.
In some respects super-xbr outclasses NNEDI3, namely aliasing control, clarity and natural look. But it's beaten by NNEDI3 on cleanliness, ringing control and precision.
This holds for almost all types of content.

Moreover, super-xbr is incredibly less demanding for a result which is different but fairly comparable in quality to NNEDI3, even at 128 or 256 neurons.

And more than that, it seems to me that super-xbr produces fantastic results in chroma upsampling.

To be more precise, here are my different settings, in case they help with anything.

720p content on 1080p TV.
- Custom resolution of 2560x1440 in the Nvidia Control Panel (no DSR)
- Chroma upscaling : super-xbr (without SuperRes)
- Luma/chroma doubling : super-xbr
- Upscaling refinement : Super Res (NNEDI3 defaults, 4 passes)

1080p content on 1080p TV
- Custom resolution of 2560x1440 in the Nvidia Control Panel (no DSR)
- Chroma upscaling : super-xbr (without SuperRes)
- Luma/chroma doubling : super-xbr
- Upscaling refinement : Super Res (NNEDI3 defaults, 4 passes), FS (strength 0.4, thinning 0.007), LS (strength : 0.25, clamp : 0.045, radius 0.3)
- image downscaling : C-R AR LL

1080p content on 3440x1440 monitor
- Chroma upscaling : super-xbr (without SuperRes)
- Luma/chroma doubling : super-xbr
- Upscaling refinement : Super Res (NNEDI3 defaults, 4 passes), FS (strength 0.7, thinning 0.012), LS (strength : 0.35, clamp : 0.45, radius 0.4).
- image downscaling : C-R AR LL

1080p content on 4K VP
- Chroma upscaling : super-xbr (without SuperRes)
- Luma/chroma doubling : NNEDI 64
- Upscaling refinement : Super Res (NNEDI3 defaults, 4 passes), FS (strength 0.9, thinning 0.015), LS (strength : 0.45, clamp : 0.45, radius 0.6).

Edit: for all settings, I refine the image after every ~2x upscaling step and I apply SuperRes first.
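As a quick arithmetic sanity check on the setups above (just the scale factors involved; not a statement about how madVR chains its scalers internally):

Code:
import math

# How far each listed setup has to upscale, and how many ~2x doubling passes
# (super-xbr / NNEDI3 doubling) that implies before the final fit/downscale.
cases = {
    "720p  -> 2560x1440": (1280, 720, 2560, 1440),
    "1080p -> 2560x1440": (1920, 1080, 2560, 1440),
    "1080p -> 3440x1440": (1920, 1080, 3440, 1440),
    "1080p -> 3840x2160": (1920, 1080, 3840, 2160),
}
for name, (sw, sh, dw, dh) in cases.items():
    factor = max(dw / sw, dh / sh)
    doublings = max(0, math.ceil(math.log2(factor)))
    print(f"{name}: factor {factor:.2f}x -> {doublings} doubling step(s)")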

Last edited by Eyldebrandt; 15th June 2015 at 18:36.
Eyldebrandt is offline   Reply With Quote
Old 15th June 2015, 18:20   #31099  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,716
Quote:
Originally Posted by huhn View Post
Now compare the prices of these cards and you will see what I mean.

And my R9 270 outperforms my GTX 760 when it comes to madVR. Of course, my AMD card is in a much-needed PCIe 3.0 system, and I know some AMD systems can't do NNEDI3 at all, but that's rare; most people with the same cards have no issue.
My R9 270 can do 256 neurons for 480p23 to 1080p23 easily; my GTX 760 can't do more than 128 neurons in that case.
Maybe the copyback issue gets out of hand at resolutions higher than 1080p.
No question, Radeons perform well in madVR apart from NNEDI3. But we were talking about NNEDI3, weren't we?
And yes, small Kepler GPUs can be awfully slow with compute and often also lack shader power in general compared to GCN Radeons.
But Kepler is end of life; it has been almost completely replaced by Maxwell GPUs, from low end (which is less low end than some years ago) to high end. And Maxwell is often much faster at compute than Kepler (in some cases even faster than GCN, which is a highly compute-optimized architecture with many crossbars etc.).
The pricing is high, but an overclocked 970 gives a lot of bang for the buck in madVR, and can be close to completely silent.
aufkrawall is offline   Reply With Quote
Old 15th June 2015, 18:35   #31100  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,959
Quote:
Originally Posted by aufkrawall View Post
No question, Radeons perform well in madVR apart from NNEDI3. But we were talking about NNEDI3, weren't we?
But my much cheaper R9 270 outperforms my 760 in terms of NNEDI3, and by a lot.
Quote:
And yes, small Kepler GPUs can be awfully slow with compute and often also lack shader power in general compared to GCN Radeons.
But Kepler is end of life; it has been almost completely replaced by Maxwell GPUs, from low end (which is less low end than some years ago) to high end. And Maxwell is often much faster at compute than Kepler (in some cases even faster than GCN, which is a highly compute-optimized architecture with many crossbars etc.).
The pricing is high, but an overclocked 970 gives a lot of bang for the buck in madVR, and can be close to completely silent.
I doubt that the GTX 960, which is about 5% faster than my GTX 760, can beat my R9 270 in terms of NNEDI3 and general processing power when used for madVR.
But I will find that out for myself soon enough; I need a GTX 960 soon anyway.
huhn is offline   Reply With Quote