Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 15th May 2015, 14:02   #30041  |  Link
Razoola
Registered User
 
Join Date: May 2007
Posts: 454
Quote:
Originally Posted by James Freeman View Post
In FSE mode the OSD will display the following:

In D3D11;
fullscreen exclusive mode, 8 bit
fullscreen exclusive mode, 10 bit

In D3D9;
fullscreen exclusive mode (new path)
fullscreen exclusive mode (old path)


madshi,
In D3D9 mode, when I switch "present several frames in advance" OFF in "exclusive mode settings", it also affects the windowed mode, meaning windowed or FSE it is always (old path).
Yes, that is the behaviour I'm seeing here, so I guess it's OK. Maybe if DX11 and DX9 text were added to those messages it could save potential confusion going forward.
Razoola is offline   Reply With Quote
Old 15th May 2015, 14:08   #30042  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by Dogway View Post
I see, so that post of yours about chroma doubling was in relation to an intermediate build which was working things out differently. No problem then if both use NNEDI.
Correct.

Quote:
Originally Posted by Dogway View Post
In AviSynth every color conversion is avoided as much as possible due to quality loss, even at 32-bit precision on 16-bit pipelines with a YV24 intermediary as in my example (yes, this is possible in AviSynth, so unless I missed something -- color spaces? -- it's an apples-to-apples comparison).

In that regard, every *real world* conversion leads to quantization errors, which translate to blurring. Just run a colored-grain test and it will show all the shortcomings of your current process.
I'll say it again: you cannot use AviSynth as proof for anything related to madVR. AviSynth has historically been limited to 8 bit. I know there are some hacks you can use to force a higher bitdepth with some filters, if those filters have special support for that, but I wouldn't bet on every part of the processing pipeline, from start to finish, really making full use of the highest bitdepth.

Look at the numbers/math from my previous reply and comment on that. That's much better ground for discussion than running AviSynth tests with weird bitdepth hacks.

Quote:
Originally Posted by Dogway View Post
you also need to use proper wide gamut color spaces to hold the float32 textures otherwise some colors are going to be clipped.
Well, madVR's pipeline is running in video levels and not in PC levels, so there's sufficient foot and head room for all channels to not clip anything important.
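To make the head/foot room argument concrete, here is a small Python sketch (my own illustration, not madVR's internals), assuming standard limited-range ("video levels") encoding, which maps 0..1 onto codes 16..235 at 8 bit. Values that stray slightly outside 0..1 still land on valid codes instead of clipping:

```python
def encode_limited(v, bits=8):
    # Limited range: 0.0 -> code 16, 1.0 -> code 235 (scaled up for higher bitdepths)
    lo, hi = 16 << (bits - 8), 235 << (bits - 8)
    return round(lo + v * (hi - lo))

def decode_limited(code, bits=8):
    lo, hi = 16 << (bits - 8), 235 << (bits - 8)
    return (code - lo) / (hi - lo)

# A slightly out-of-range value (e.g. produced by a sharpening filter)
# survives the trip through the head room above code 235's nominal white:
over = 1.05
code = encode_limited(over)       # 246, still a valid 8-bit code
under_code = encode_limited(-0.05)  # 5, uses the foot room below 16
```

So nothing important is lost as long as intermediate values stay within the foot/head room, which is the point being made above.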

Quote:
Originally Posted by Dogway View Post
But again I ask, what steps in the process forces you to go to RGB that justifies these color conversions?
There's RGB, YCbCr, R'G'B' and Y'CbCr. For best scientific scaling you'd need to scale in RGB or YCbCr. And that's what you can do with madVR by enabling the "scale in linear light" option. However, for upscaling this has proven to not work as well as expected. E.g. ringing artifacts get *much* stronger that way. So I'm using R'G'B' scaling instead. There are several reasons for that. One is that R'G'B' is probably a more perceptually uniform colorspace than Y'CbCr, which means that linear scaling algorithms might produce slightly superior results in R'G'B' compared to Y'CbCr. Another is that madVR supports all kinds of input formats. E.g. DCI sources are encoded in R'G'B'. Or PC game recordings, or other stuff. madVR has to handle both Y'CbCr and R'G'B' sources. And it's simply easier to convert them all to the same format, so that I don't have to implement multiple parallel but totally different processing pipelines.
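The difference between scaling in gamma-encoded R'G'B' and in linear-light RGB can be shown with a toy example (my sketch, assuming a pure 2.2 power gamma, which simplifies real transfer curves): averaging a black and a white pixel, as a box downscaler would, gives visibly different results in the two domains.

```python
gamma = 2.2  # assumed pure power law for illustration

def to_linear(v):
    return v ** gamma

def to_gamma(v):
    return v ** (1.0 / gamma)

black, white = 0.0, 1.0

# Averaging directly in the gamma-encoded domain (R'G'B' scaling):
gamma_avg = (black + white) / 2.0                              # 0.5

# Averaging in linear light (RGB scaling): decode, average, re-encode.
linear_avg = to_gamma((to_linear(black) + to_linear(white)) / 2.0)  # ~0.73
```

The linear-light result is the physically correct average brightness, which is why it is "scientifically" preferable; the gamma-domain result is darker, but in practice produces less ringing when upscaling, as described above.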

Anyway, as I said before, the R'G'B' <> Y'CbCr conversions are nearly lossless, due to the high bitdepth I'm using for processing and storage, and due to having enough head room and foot room for out-of-gamut colors. So there's no worry.

Anyway, if you need proof, the FineSharp HLSL shader package contains RGB <-> YCbCr conversion routines:

http://www.mediafire.com/download/yi...oYuv_ToRGB.zip

You can use them in MPC-HC to let madVR convert between R'G'B' and Y'CbCr, and you can repeat that process e.g. 10 times in a row. Do that with any test video you like, then compare the output with the custom shaders enabled/disabled. I believe you'll not be able to see any difference. Please uncheck all the madVR "trade quality for performance" options in madVR for this test, though.
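The same round-trip experiment can be approximated in a few lines of NumPy (a sketch of the idea only -- plain full-range BT.709 math with a simple quantizer, not the actual shader code from the package above): repeat the R'G'B' <-> Y'CbCr conversion ten times, quantizing the intermediate to a given bit depth, and compare the accumulated error at 8 bit versus 16 bit.

```python
import numpy as np

# BT.709 luma coefficients (assumed for this sketch)
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))          # in [-0.5, 0.5]
    cr = (r - y) / (2.0 * (1.0 - KR))
    return np.stack([y, cb + 0.5, cr + 0.5], axis=-1)  # shift chroma to [0, 1]

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 0.5, ycc[..., 2] - 0.5
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return np.stack([r, g, b], axis=-1)

def roundtrip(rgb, bits, n=10):
    # n conversions there and back, storing the intermediate at `bits` depth
    scale = (1 << bits) - 1
    img = rgb
    for _ in range(n):
        img = ycbcr_to_rgb(np.round(rgb_to_ycbcr(img) * scale) / scale)
    return img

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))
err8 = np.abs(roundtrip(rgb, 8) - rgb).max()
err16 = np.abs(roundtrip(rgb, 16) - rgb).max()
```

At 16-bit storage the worst-case error after ten round trips stays far below one 8-bit output step, i.e. invisible after final dithering, which matches the "nearly lossless" claim; at 8-bit storage the error is orders of magnitude larger.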
madshi is offline   Reply With Quote
Old 15th May 2015, 14:08   #30043  |  Link
Razoola
Registered User
 
Join Date: May 2007
Posts: 454
@madshi, I see your point about the OSD getting wide if DX11 or DX9 is added to the OSD's 'fullscreen exclusive mode' line. Have you considered removing 'fullscreen' from that line, given that it is clearly visible from the screen size anyway? In effect, replace 'fullscreen exclusive mode, 8 bit' with 'DX11 exclusive mode, 8 bit', then do the same with the DX9 modes.
Razoola is offline   Reply With Quote
Old 15th May 2015, 14:15   #30044  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Ya, makes sense, I guess...
madshi is offline   Reply With Quote
Old 15th May 2015, 14:17   #30045  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,822
Quote:
Originally Posted by Razoola View Post
@madshi, I see your point about the OSD getting wide if DX11 or DX9 is added to the OSD's 'fullscreen exclusive mode' line. Have you considered removing 'fullscreen' from that line, given that it is clearly visible from the screen size anyway? In effect, replace 'fullscreen exclusive mode, 8 bit' with 'DX11 exclusive mode, 8 bit', then do the same with the DX9 modes.
I would prefer something like FSE D3D11 10 bit.
huhn is offline   Reply With Quote
Old 15th May 2015, 14:25   #30046  |  Link
Barnahadnagy
Registered User
 
Join Date: Apr 2014
Posts: 13
Quote:
Originally Posted by madshi View Post
Anyway, if you need proof, the FineSharp HLSL shader package contains RGB <-> YCbCr conversion routines
I wanted to write a little test application to prove this precision is more than enough, but with this I just tested it a bit. Surprisingly, if I zoom in to 2500% I can see very faint differences; I didn't expect to see anything at all. Then again, I put in about 30 conversions... Anyway, if this doesn't prove that the conversions are high enough precision, I don't know what does.

Regarding the OSD: I quite like "FSE D3D11 10 bit".
Barnahadnagy is offline   Reply With Quote
Old 15th May 2015, 14:51   #30047  |  Link
Anime Viewer
Troubleshooter
 
Anime Viewer's Avatar
 
Join Date: Feb 2014
Posts: 333
extra artifacts in madVR comparison?

Quote:
Originally Posted by madshi View Post
Thanks for the feedback! Seems there'll be another Shiandow deband iteration coming. <sigh> Of course it's all good, the algorithm getting better all the time. But I hope you users/testers won't be bored at some point, retesting revised algos all the time?
I think there is something strange at play in madVR resulting in artifacts. Starting by setting both programs to defaults, then configuring both for 10-bit output, the same upscalers, DX11, Shiandow's debanding (with identical settings), SuperChromaRes, SuperRes (with identical NEDI settings in both), and ordered dithering, I am seeing more artifacts in madVR than in MPDN in the same video scenes. I've included a screenshot below.
Anyone have theories as to what may be causing the additional artifacts in madVR?

http://s17.postimg.org/lznm6e7j1/madvr_vs_mpdn.jpg
In particular I notice the additional artifacts around the Japanese characters, the edge and teeth of the creature, and the floating green debris. While it's not a huge overall difference, it is noticeable to me.
__________________
System specs: Sager NP9150 SE with i7-3630QM 2.40GHz, 16 GB RAM, 64-bit Windows 10 Pro, NVidia GTX 680M/Intel 4000 HD optimus dual GPU system. Video viewed on LG notebook screen and LG 3D passive TV.
Anime Viewer is offline   Reply With Quote
Old 15th May 2015, 14:54   #30048  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 1,009
Quote:
Originally Posted by madshi View Post
For best scientific scaling you'd need to scale in RGB or YCbCr. And that's what you can do with madVR by enabling the "scale in linear light" option.
Yes, that's true; in AviSynth we compress and uncompress gamma in RGB as well. I forgot that bit. It's also prone to ringing or clipping in blacks, so it makes sense, thanks for explaining. In that regard it's understandable to default to RGB.

I will test with the HLSL shaders; it's possible that AddGrainC is not deterministic, so I can have a better look with the real thing instead. I will be calibrating the panel soon and creating a profile for madVR, so I still have to learn how the whole thing works.
Dogway is offline   Reply With Quote
Old 15th May 2015, 15:26   #30049  |  Link
tobindac
Registered User
 
Join Date: May 2013
Posts: 115
It's probably normal to see detailing artifacts when you pump up the sharpening algorithms. Their job is to expose small deviations and make them clear. Did you expect them to somehow know which small deviations you didn't want exposed?

If your source isn't too damaged, and you only care about your everyday normal viewing experience, I would start with no filters at all and then add them only when needed.

Last edited by tobindac; 15th May 2015 at 15:33.
tobindac is offline   Reply With Quote
Old 15th May 2015, 15:32   #30050  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by Anime Viewer View Post
I think there is something strange at play in madVR resulting in artifacts. Starting by setting both programs to defaults, then configuring both for 10-bit output, the same upscalers, DX11, Shiandow's debanding (with identical settings), SuperChromaRes, SuperRes (with identical NEDI settings in both), and ordered dithering, I am seeing more artifacts in madVR than in MPDN in the same video scenes. I've included a screenshot below.
Anyone have theories as to what may be causing the additional artifacts in madVR?
For screenshot comparisons it's MUCH better to use two separate images, so you can put each in a browser tab and switch back and forth. Doing that with your images shows that they're not the same frame, which is already a big "no no" for screenshot comparisons. Ignoring that, the madVR image is much, much sharper than the other image. Of course if you blur the whole image you also blur the artifacts, so the result is not surprising. I'm not sure why the other image is so much softer; it seems the settings are not perfectly matched, for some reason.

Quote:
Originally Posted by Dogway View Post
I will test with the HLSL shader, it's possible that addgrainC is not deterministic so I can have a better look with the real thing instead.
Let me know what you find!
madshi is offline   Reply With Quote
Old 15th May 2015, 15:45   #30051  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,713
@Shiandow:
How can I use it in madVR?
It crashes when the shader files are copied into the specific folder.

Last edited by aufkrawall; 15th May 2015 at 16:05.
aufkrawall is offline   Reply With Quote
Old 15th May 2015, 15:46   #30052  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,463
Quote:
Originally Posted by madshi View Post
Actually, no. The "octuple" option had nothing to do with that. You can't uncheck "double/quad chroma resolution" if you have NEDI selected because NEDI handles luma+chroma at the same time.
I'm quite sure my problem was real with 0.87.6 using NEDI, but then again I was tired, so maybe I missed the extra N and 3 at the end while doing A/B comparisons between the two before you released this build... Long story short, 0.87.7 works exactly as intended, TYVM.

If anything's confusing, it's that the two algorithms have almost exactly the same name, when one is a GPU hog with an artificial look and the other feeds RGB, so it also doubles chroma at the same time.

I really didn't like the artificial look of NNEDI3's chroma upscaling/doubling and far preferred Jinc3AR, so I'm not sure I'm too happy with NEDI's chroma doubling being mandatory, but it can't be disabled anyway and the PQ is fantastic, so whatever.

I take it that NEDI chroma upscaling isn't possible?

Quote:
Originally Posted by madshi View Post
Isn't "always - if upscaling is needed" exactly the same as 1.01/2.01? I'm confused.
I was hoping this would mean ">1.0" for both, so you could go nuts and force 4x/8x to get supersampling.

I would personally prefer if "always - if upscaling is needed" really meant what it says, as in "always - if >1.0 upscaling is needed", even for 4x/8x, and that the next step was ">1.01" for 2x and ">2.01" for 4x/8x.

Quote:
Originally Posted by Asmodian View Post
AMD still has interop issues which slow them down so I don't think AMD is a better option for NNEDI3 image doubling
Oh yah, NEDI doesn't use OpenCL and I far prefer it in combination with SuperRes over NNEDI3. Error Diffusion runs on DirectCompute, and apparently the interop lag is only related to OpenCL, so does that mean I could finally update from the 13.12 drivers? Has anyone tried that on W7 SP1?

AMD will more than likely never fix this issue and might have (far) more efficient GPUs out in a few months, so I'd rather not depend on OpenCL anymore.

Quote:
Originally Posted by iSunrise View Post
Think about comparisons when you change settings. Without madVR updating the image, we would not have been able to do comparisons within the player. This is intentional and very useful.
Pausing mVR is a total no-can-do on my rig (HD7850/W7SP1/13.12), as the GPU load goes completely nuts. It's pretty annoying tbh, as I always have to stop playback instead of pausing. Pausing is a privilege I'm not entitled to, it would appear.
leeperry is offline   Reply With Quote
Old 15th May 2015, 15:55   #30053  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by leeperry View Post
I really didn't like the artificial look of NNEDI3's chroma upscaling/doubling and far preferred Jinc3AR, so I'm not sure I'm too happy with NEDI's chroma doubling being mandatory, but it can't be disabled anyway and the PQ is fantastic, so whatever.
That confuses me. If NNEDI3 is artificial looking then NEDI is even more so, IMHO. And the effect is more pronounced for luma than for chroma.

Quote:
Originally Posted by leeperry View Post
I take it that NEDI chroma upscaling isn't possible?
It's possible, but extra work, and I hope that SuperChromaRes will be the better option. But it's too early to talk about that, because we're still in the Deband feedback phase.

Quote:
Originally Posted by leeperry View Post
Pausing mVR is a total no-can-do on my rig (HD7850/W7SP1/13.12), as the GPU load goes completely nuts. It's pretty annoying tbh, as I always have to stop playback instead of pausing.
That's due to PotPlayer wanting to draw animations and stuff during paused playback. It's not really madVR's fault, but if you turn the PotPlayer madVR-specific OSD off, this problem should go away for now. Of course then you'll probably lose the FSE-compatible OSD as well. The high GPU load in the paused state with PotPlayer will probably be fixed in a future build, but it has low priority for now.
madshi is offline   Reply With Quote
Old 15th May 2015, 16:14   #30054  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Anything special to look for besides how effective the debanding filter is?
I guess a lot of people (including me) don't have a clue what to look for; that's why this poll is slow, to say the least.

I guess I'll start with a super-compressed video where banding is absolutely everywhere.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.

Last edited by James Freeman; 15th May 2015 at 16:24.
James Freeman is offline   Reply With Quote
Old 15th May 2015, 16:17   #30055  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,463
Quote:
Originally Posted by madshi View Post
That confuses me. If NNEDI3 is artificial looking then NEDI is even more so, IMHO. And the effect is more pronounced for luma than for chroma.
Well, I think what I really hate is those "fractal artifacts" with NNEDI3 that you mentioned several times:
Quote:
Their NEDI first pass (before running the Super-Res passes) looks very clean, almost without any directional/fractal artifacts.
I realize that you were talking about SmartEdge 2, and maybe it's actually SuperRes that impresses me, but at the end of the day I prefer NEDI+SR over NNEDI3; the picture looks more natural to me and it's not a GPU hog, so that kills all the birds at once for me.

Quote:
Originally Posted by madshi View Post
That's due to PotPlayer wanting to draw animations and stuff during paused playback. It's madVR's fault, really, but if you turn the PotPlayer madVR specific OSD off, this problem should go away for now. Of course then you'll probably lose the FSE compatible OSD, as well. The high GPU load in paused state with PotPlayer will probably be fixed in a future build, but it has low priority for now.
Well, its fully skinnable D3D GUI that doesn't break mVR's FSE is part of the package, and I really don't like the constant "windowed/exclusive" nagging in the top-left corner when disabling it.

I also wish pausing in mVR would kick in some sort of screensaver to avoid leaving a fixed image up for too long (a crucial issue on plasma, and on OLED as well FWIR). When I want to pause, I have to close PotP and switch to a TV channel. Oh well, the PQ is totally worth the hassle, so no biggie.

Quote:
Originally Posted by madshi View Post
we're still in the Deband feedback phase.
Sorry, I can't help with that due to my colorblindness.
leeperry is offline   Reply With Quote
Old 15th May 2015, 16:26   #30056  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,713
NEDI introduces aliasing; NNEDI3 doesn't. That's why NNEDI3 looks by far the best with cartoon content at extreme scaling factors. To me it looks the most natural, and I can't tell whether artifacts come from the source or from the scaling itself. All I know is that it has the fewest artifacts and is much sharper than Jinc3.
At least for luma it's better, imho.
aufkrawall is offline   Reply With Quote
Old 15th May 2015, 16:40   #30057  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by James Freeman View Post
Anything special to look for besides how effective the debanding filter is?
I guess a lot of people (including me) don't have a clue what to look for; that's why this poll is slow, to say the least.

I guess I'll start with a super-compressed video where banding is absolutely everywhere.
The trick is that a good deband algo on the one hand has to remove as much banding as possible, but on the other hand should keep all real image detail intact. This is hard because sometimes the difference between banding and image detail is almost impossible for a dumb computer algorithm to tell. So what you need to look for is a video source which has visible banding, but which also has low-contrast detail. Debanding doesn't have a problem with high-contrast detail. But if there's detail which is only very weakly visible, then debanding is in danger of removing it. E.g. imagine a faint real image detail pattern in some shadow area. This pattern might consist of gray tones which are just 1-2 RGB values apart from each other. This is a situation where debanding could accidentally remove this faint image detail.

The perfect deband algorithm would remove all the banding, and at the same time keep all the real image detail. IMHO that's probably impossible to do for a non-human-intelligence algorithm. So the best compromise is needed.
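As a toy illustration of that trade-off (my own sketch, not madVR's default or Shiandow's actual algorithm), a threshold-based debander smooths a sample toward its neighbourhood average only when the deviation is small. This flattens banding steps and keeps hard edges, but it would equally flatten real detail that sits below the threshold:

```python
import numpy as np

def deband_1d(signal, radius=4, threshold=3.0):
    """Smooth samples that deviate from their local average by less
    than `threshold`; larger deviations are treated as real edges."""
    out = signal.astype(float).copy()
    for i in range(radius, len(signal) - radius):
        avg = signal[i - radius:i + radius + 1].mean()
        if abs(signal[i] - avg) < threshold:
            out[i] = avg          # classified as banding -> smoothed
    return out

# A banded shadow gradient (steps of 1) next to a hard edge (step of 60):
sig = np.array([10]*6 + [11]*6 + [12]*6 + [72]*6, dtype=float)
res = deband_1d(sig)
```

The steps of 1 get flattened into a smooth ramp while the step of 60 survives untouched; faint real detail that is only 1-2 values strong would be smoothed away exactly like the banding, which is the dilemma described above.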

Quote:
Originally Posted by leeperry View Post
Well, I think what I really hate is those "fractal artifacts" with NNEDI3 that you mentioned several times:

I realize that you were talking about SmartEdge 2, and maybe it's actually SuperRes that impresses me, but at the end of the day I prefer NEDI+SR over NNEDI3; the picture looks more natural to me and it's not a GPU hog, so that kills all the birds at once for me.
NEDI has much stronger fractal artifacts than NNEDI3. However, SuperRes really helps there, so a combination of NEDI+SuperRes *may* be able to compete with NNEDI3. But you can combine NNEDI3 with SuperRes, too! So there are a lot of things to try out to find the best quality. As I said, though, it's too early to discuss that. We'll get to the SuperRes discussion sooner or later, but we're not there yet.

Quote:
Originally Posted by aufkrawall View Post
NEDI introduces aliasing; NNEDI3 doesn't. That's why NNEDI3 looks by far the best with cartoon content at extreme scaling factors. To me it looks the most natural, and I can't tell whether artifacts come from the source or from the scaling itself. All I know is that it has the fewest artifacts and is much sharper than Jinc3.
At least for luma it's better, imho.
Agreed. However, SuperRes has the potential to change things. Without SuperRes I would not have added NEDI. Only the combination makes sense, IMHO. But as mentioned before, I'd like to concentrate on debanding for now and move SuperRes/NEDI etc discussion to a later time.

-------

The new Shiandow Deband version will probably be available in madVR on Sunday. It appears to be another step up from the previous build, so I'll be quite interested in hearing your opinions.
madshi is offline   Reply With Quote
Old 15th May 2015, 17:13   #30058  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
I found a thoroughly compressed video with lots of night-sky content, banding, blocking and whatnot, and did some testing.

Well, my conclusion is the same as before: both the default and Shiandow's do a fine job.
I would be happy with either one.

Both remove everything from the smallest banding to the biggest macroblocking without apparent "detail" loss.
If the video is so compressed that the detail has become banding or blocking, I don't think we should call it "detail", because there is none, nor should this "detail" somehow be restored.
You can't restore something that looks like an ugly gradient into a cloud, or a square block into a dog, can you?
Basically, what I'm saying is that if the video is so compressed that there is visible blocking and banding, the 1-3 RGB steps of detail that create the black cat in the shadows cannot be restored by any algorithm.
IMO, what a good debanding algo should do is remove the banding without affecting the clearly visible detail; the macroblocked cat is not there anymore no matter what you do.

If you ask me specifically about the edges of clear objects near the already-debanded gradient (what I would consider actual detail loss), I would say they look just as good as without debanding, on all settings.
Maybe I'm just not observant enough, but to me it's all good; I am satisfied with either the default or Shiandow's.

That's my take.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
James Freeman is offline   Reply With Quote
Old 15th May 2015, 17:14   #30059  |  Link
daert
Registered User
 
Join Date: May 2015
Posts: 16
Quote:
Originally Posted by madshi View Post
2) Or you have the number of video frames which shall be presented in advance set to 16. I've just found out after releasing v0.88.7 that 15 is the max I can do; using 16 means the D3D11 device creation fails. So lower this option to 14 (15 isn't currently supported) and D3D11 should start working again.
That's it! It works with 14 frames presented in advance. Thanks!

Quote:
Originally Posted by madshi View Post
So your blurred mode is known to Windows as 23p and your "good" mode is known to Windows as 24p? Which modes did you add to the madVR display mode changing list?
Yes, that's correct, but in the madVR display modes I have to set 1080p23 even though the "good" mode is 24p. If I set 1080p24 it creates a brand-new resolution at exactly 24.000 Hz, which also has a blurred image like the original "bad" 23p.
In D3D9 (windowed and FSE) and D3D11 (windowed only) madVR treats the 1080p23 mode as the "good" one; only in D3D11 FSE does it switch to the "bad" one.
This switch occurs even if I disable the display modes and set the "good" mode through the NVIDIA control panel.
daert is offline   Reply With Quote
Old 15th May 2015, 17:34   #30060  |  Link
pie1394
Registered User
 
Join Date: May 2009
Posts: 212
Quote:
Originally Posted by XRyche View Post
I actually saw your comment and tried that before I posted last night. Thank you though.
Even with the default settings, it still shows FSE 8-bit mode until the monitor's native bit depth is changed from 8-bit to 10-bit. After that change, FSE 10-bit mode works fine here with madVR 0.88.7.

- Core i5-3550K, Z87, DDR3-2400 8GB
- Win7x64 SP1
- HD7970/3GB + Catalyst Omega 14.12
- Sony 65HX920 @ 1920x1080p23~p60 (always) 10-bit signal modes.
pie1394 is offline   Reply With Quote