Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.

 

Old 4th December 2012, 03:46   #15941  |  Link
iamhacked
Registered User
 
Join Date: Jun 2012
Posts: 5
Quote:
Originally Posted by 6233638 View Post
From looking at those numbers, I think that it would probably be best to use Bicubic 75 Chroma, which can be a significant visual improvement over Bilinear in many cases (often a close visual match to Lanczos or Jinc) for a minimal performance hit compared to Bilinear.

While I know that some people prefer softer Chroma settings, Bicubic 75 still seems to be the best balance between ringing/aliasing and maintaining the correct brightness/saturation for Chroma with a minimal performance hit. I wouldn't use it for Luma, but I've found almost nothing where it has caused problems for Chroma.


It may not be my personal preference, but I would then probably suggest using either Lanczos 3 or Spline 3 (probably Spline due to the reduced ringing) without the AR filter as the default for Luma upscaling.

The rendering times I posted are for comparative purposes from upscaling Blu-ray beyond 1080p, which is not indicative of real-world use (typically DVD or 720p up to 1080p) so I haven't run into any performance issues with playback when testing those settings yet. (almost 50ms would imply dropping frames)

If you're really wanting to reduce rendering times for your default, I would probably go with some level of SoftCubic for the Luma upscaling, but I know that's too soft for a lot of people. It does manage to avoid aliasing and ringing very well at higher levels though. The other Bicubic variants probably have too much ringing or aliasing to be suitable for the default Luma algorithm.



And now that I've overclocked the card (1GHz GPU/720MHz RAM) I seem to be able to play Blu-ray downscaled in linear light using Catmull-Rom AR without any problems, though it is probably still too taxing on some systems.

I still need to do a good performance and image quality test for downscaling algorithms. It's really not a priority for me though, as I use a 1080p native display, and the only time I downsample is when I have something playing at the side of a web browser, which is never a film, and therefore not particularly important to me as far as image quality is concerned.
Nice point. I have a Dimension E521 (AMD Athlon™ 64 X2, 1GB RAM, Nvidia GeForce 6150LE) and using Bicubic 50 for chroma upscaling does look better than Bilinear. However, any number above 50 will cause 720p video to stutter (using LAV filters). Using Bicubic for luma/image upscaling will cause video to stutter as well. Bilinear seems to be the only option for luma upscaling to me.
iamhacked is offline   Reply With Quote
Old 4th December 2012, 03:47   #15942  |  Link
ajp_anton
Registered User
 
ajp_anton's Avatar
 
Join Date: Aug 2006
Location: Stockholm/Helsinki
Posts: 782
Quote:
Originally Posted by 6233638 View Post
I wish I had access to an Intel HD 4000 to test performance with, because that seems like a reasonable card to use as the "minimum spec" for the defaults.
HD 3000 at 1700MHz, is that close enough to HD 4000?
1920x1080 -> 2560x1440
Default (lanc3 AR / bilinear) renders at around 41ms.

Last edited by ajp_anton; 4th December 2012 at 03:51.
ajp_anton is offline   Reply With Quote
Old 4th December 2012, 04:25   #15943  |  Link
6233638
Registered User
 
Join Date: Apr 2009
Posts: 1,019
Quote:
Originally Posted by ajp_anton View Post
HD 3000 at 1700MHz, is that close enough to HD 4000?
1920x1080 -> 2560x1440
Default (lanc3 AR / bilinear) renders at around 41ms.
Thanks. It seems like Lanczos 3 AR Luma with Bilinear Chroma is pushing it, and you're likely to experience frame drops (or only just avoid them).
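For reference, a render time is only safe when it fits within the source's frame interval. A rough sketch of the budgets involved (illustrative arithmetic only, not madVR code):

```python
# A renderer must finish each frame within the source frame interval,
# or frames get dropped.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

render_ms = 41.0  # the reported render time
for fps in (23.976, 25.0, 29.97):
    budget = frame_budget_ms(fps)
    verdict = "fits" if render_ms < budget else "drops frames"
    print(f"{fps:>6} fps: {budget:5.2f} ms budget -> {verdict}")
```

At 23.976 fps the budget is about 41.7 ms, which is why a 41 ms render time is right on the edge.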

I'd really need to have one myself to use the same material and settings to do a proper comparison/test though.

I would be interested in seeing what your numbers are for Bicubic 75 Chroma & Lanczos 3 without AR for Luma with the same material though. I really think that using Bilinear for Chroma is too much of a compromise, even if it would let you use Lanczos 3 AR.

In the test I posted above, those settings almost halved rendering times - as good as it can look, I think the anti-ringing filter is probably too demanding to have enabled as a default.

Quote:
Originally Posted by iamhacked View Post
Nice point. I have a Dimension E521 (AMD Athlon™ 64 X2, 1GB RAM, Nvidia GeForce 6150LE) and using Bicubic 50 for chroma upscaling does look better than Bilinear. However, any number above 50 will cause 720p video to stutter (using LAV filters). Using Bicubic for luma/image upscaling will cause video to stutter as well. Bilinear seems to be the only option for luma upscaling to me.
That's interesting, I was under the impression that all the Bicubic variations (Mitchell-Netravali, Catmull-Rom, Bicubic, and SoftCubic) were all the same algorithm with adjusted values, that wouldn't change the load they put on the GPU. I certainly don't see a meaningful change between them. (less than 0.5ms)
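They are indeed one family: the standard Mitchell-Netravali kernel takes two parameters (B, C), and the named variants are just different parameter choices with the same tap footprint. A sketch (the (B, C) values below are common conventions, assumed rather than taken from madVR's source):

```python
def cubic_kernel(x: float, B: float, C: float) -> float:
    # Mitchell-Netravali two-parameter cubic; same 4-tap footprint for
    # every (B, C) choice, so the GPU cost is identical across variants.
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C)*x**3 + (-18 + 12*B + 6*C)*x**2 + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C)*x**3 + (6*B + 30*C)*x**2
                + (-12*B - 48*C)*x + (8*B + 24*C)) / 6
    return 0.0

# Illustrative (B, C) choices; the mapping of madVR's "Bicubic 50/75"
# sliders to C is an assumption here:
VARIANTS = {
    "Mitchell-Netravali": (1/3, 1/3),
    "Catmull-Rom":        (0.0, 0.5),
    "Bicubic 75":         (0.0, 0.75),
    "SoftCubic":          (1.0, 0.0),   # B-spline: softest, no ringing
}
```

Only the weights change between variants, not the number of samples fetched, which matches the observation that they all perform the same.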



EDIT: And from doing some extra testing, at least with some scale factors (I don't know if it will change dynamically), Nvidia is basically using Bilinear scaling with the DXVA2 option. (results are slightly different) So it's even worse than I thought, considering that with madVR's Bilinear Luma scaling I was getting render times of about 3ms compared to the 50ms+ of DXVA2.

EDIT2: Actually, it's worse than that - if you use DXVA2 for Luma upscaling, Chroma upscaling is basically ignored. Unless DXVA2 is handled a lot better on AMD/Intel, I wonder if it should actually be removed.

It's maybe not identical to the Bilinear option in madVR, but the results are still equally bad. The DXVA2 results are a little smoother when compared to Bilinear in madVR (is chroma being filtered?), but you can see that there's almost no difference between Bilinear Chroma and Lanczos 8 AR when you use DXVA2 scaling.

Render times are considerably higher with DXVA2 scaling as well.

Last edited by 6233638; 4th December 2012 at 05:14.
6233638 is offline   Reply With Quote
Old 4th December 2012, 15:14   #15944  |  Link
ajp_anton
Registered User
 
ajp_anton's Avatar
 
Join Date: Aug 2006
Location: Stockholm/Helsinki
Posts: 782
Quote:
Originally Posted by 6233638 View Post
Thanks. It seems like Lanczos 3 AR Luma with Bilinear Chroma is pushing it, and you're likely to experience frame drops (or only just avoid them).

I'd really need to have one myself to use the same material and settings to do a proper comparison/test though.

I would be interested in seeing what your numbers are for Bicubic 75 Chroma & Lanczos 3 without AR for Luma with the same material though. I really think that using Bilinear for Chroma is too much of a compromise, even if it would let you use Lanczos 3 AR.
Lanc3 + BC75: 32ms
Remember I upscale 1080p to 1440p, which is not what most people do, so I'd say speedwise 41ms is just fine. And today it's 38ms. It seems to depend a little on what's in the actual image (which I find a little strange, unless it's because of the AR).

720p -> 1080p:
Lanc3 AR + bilinear: 28ms
Lanc3 + bicubic: 27ms

Last edited by ajp_anton; 4th December 2012 at 15:16.
ajp_anton is offline   Reply With Quote
Old 4th December 2012, 16:58   #15945  |  Link
mindbomb
Registered User
 
Join Date: Aug 2010
Posts: 578
I have an interesting problem. With hardware deinterlacing on my radeon, it doubles the frame rate, and thus, halves the movie frame interval, requiring even faster rendering than I can manage. Can I disable this in the registry somehow to keep the frame rate the same?
mindbomb is offline   Reply With Quote
Old 4th December 2012, 17:04   #15946  |  Link
DragonQ
Registered User
 
Join Date: Mar 2007
Posts: 930
If the material is interlaced, that's what's supposed to happen (50/60p after deinterlacing). If it's progressive material in an "interlaced wrapper" (e.g. films or dramas in a TV stream) then it should be detected as such and deinterlaced using "weave", reproducing the original progressive image (25/30p after deinterlacing).
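The "weave" case is trivial to sketch: if both fields came from the same progressive frame, interleaving their lines reconstructs that frame exactly (illustrative code, with fields represented as lists of rows):

```python
def weave(top_field, bottom_field):
    # Interleave field lines: the top field holds the even rows, the
    # bottom field the odd rows. Only correct when both fields were
    # sampled at the same instant; on true interlaced video this
    # produces combing artifacts instead.
    frame = []
    for top_row, bottom_row in zip(top_field, bottom_field):
        frame.append(top_row)
        frame.append(bottom_row)
    return frame
```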
__________________
HTPC Hardware: Intel Celeron G530; nVidia GT 430
HTPC Software: Windows 7; MediaPortal 1.19.0; Kodi DSPlayer 17.6; LAV Filters (DXVA2); MadVR
TV Setup: LG OLED55B7V; Onkyo TX-NR515; Minix U9-H
DragonQ is offline   Reply With Quote
Old 4th December 2012, 17:07   #15947  |  Link
MSL_DK
Registered User
 
Join Date: Nov 2011
Location: Denmark
Posts: 137
This question has probably been asked before ... which scaling algorithms are recommended for Blu-ray? No bad rips, only original Blu-ray.
MSL_DK is offline   Reply With Quote
Old 4th December 2012, 18:28   #15948  |  Link
MSL_DK
Registered User
 
Join Date: Nov 2011
Location: Denmark
Posts: 137
I do not know whether it's yCMS that fails or the way madVR handles yCMS ... but something's wrong

madvr pixel shader



madvr ycms

MSL_DK is offline   Reply With Quote
Old 4th December 2012, 18:50   #15949  |  Link
DarkSpace
Registered User
 
Join Date: Oct 2011
Posts: 204
Many thanks for this piece of work, it's really great! However, I'd like to report a possible bug:
When switching the screen refresh rate (manually, not using the automatic Refresh Rate Changer) to 120Hz on my Laptop's internal 1080p screen, madVR stops outputting a picture. The Media Player window still changes its size to the video resolution in Windowed Mode, but both Fullscreen Exclusive and Windowed Mode only show a black picture. Interestingly, the Debug OSD (already enabled from before switching) doesn't display in Windowed Mode, but in FSE, it briefly shows an additional line
Code:
Desktop Composition Rate: 120.000Hz [there are a few trailing zeros, but I couldn't count them]
before that line vanishes, but the interesting part is that it shows
Code:
display 0.000000Hz
It doesn't update during playback at all, and when the playback is paused, it only updates the Render Queue. I have included a Debug Log of a few seconds of video playing for both Windowed Mode and FSE. Also interesting is that while the screen stays blank except for the Debug OSD, the FSE seekbar does show up, albeit with a delay of a few seconds. Also, after a while, it seems MPC-HC just hangs, because audio also stops playing. In the case of the Debug logs, the audio stopped at around 25 seconds.
I should mention that EVR-CP works perfectly fine.
My Setup:
Code:
AMD Radeon HD 670M with up-to-date drivers
Windows 7 Ultimate
MPC-HC 1.6.5.6291
LAV Filters 0.54.1
ReClock 1.8.7.9
madVR 0.85.1
I did use the search and didn't find anything alike. Unfortunately, I don't have any other screens capable of 120Hz at all, so I can't test if it's maybe my drivers' fault.
DarkSpace is offline   Reply With Quote
Old 4th December 2012, 19:54   #15950  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,478
Quote:
Originally Posted by sneaker_ger View Post
Problem 2:
It seems to only be related to DXVA2 decoding, not the scaling. I created a log. (Opened video, seeked once, waited a few seconds, seeked again, waited a few seconds, seeked again, waited a few seconds, then closed the player)
Out-of-order frames/jerkiness on all three seeks. It looks a bit like this happens until the decoder has caught up after the seek. Not really sure.

log (mirror)
sample (mirror)

MPC-HC
Win 7 x64
HD 5850 (Cat 12.11 Beta 8)
Quote:
Originally Posted by madshi View Post
I've tested with NVidia 9400, LAV 54.1 and my latest madVR sources on win7 x64 and I can't reproduce any issues here. Seeking works just fine without any issues. Can you please double check with LAV 54.1, and with the next madVR build?
Catalyst 12.11 beta 11 seems to have fixed the issue.
sneaker_ger is offline   Reply With Quote
Old 4th December 2012, 19:57   #15951  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by secvensor View Post
image upscaling - select Jinc 4 or 8
go to
chroma upscaling - Jinc 3
go to
image downscaling - Spline 4
go to
image upscaling - the Jinc 4 or 8 selection is now disabled
Thanks, this will be fixed in v0.85.2.

Quote:
Originally Posted by petri234 View Post
Don't know if this has been mentioned yet, but DXVA decoding with default settings crashes MPC-HC for me about 50% of the time when viewing 23.976p material. This only happens when skipping forward or backward, however if the player does not crash, I can skip forward safely as many times as I please. I am using LAV filters and a laptop with ATI 4670.
Does this occur with all 23.976p movies for you, or just with some? Does it e.g. depend on the codec? Have you updated to the latest LAV version? I've not heard about such instability from anyone else yet, so I wonder why it only seems to affect you. Maybe it's just with some specific video files?

Quote:
Originally Posted by manma View Post
I'm not sure if I understand completely yet. Can increasing or decreasing queue sizes help reduce dropped frames? Ever since I set up yCMS color calibration with madVR, I've had a good amount of frame dropping, and I need to do something to make up for the performance loss. Is there anything I can do other than getting another GPU (not an option, as I'm on a laptop)?
You could try turning down the scaling algorithms. In the worst case you could try bilinear (ouch). Tuning the queue sizes might have minor effects, but nothing major, so I don't think it will help.

Quote:
Originally Posted by 6233638 View Post
I really don't think that having Bilinear as the default for Chroma is a good choice. As Chroma is only ever a 2x scaling operation, the performance impact of using a better scaling algorithm seems to be minimal in most cases.
Interesting. I haven't really measured performance, but it makes some sense. Chroma upsampling is cheaper because I hard coded it to do exactly 2x which allowed me to write more efficient shader code. Maybe using Bilinear as default for chroma upsampling isn't necessary. But then we come back to the very old discussion about which algorithm to use instead, and we never seem to be able to agree there...
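The efficiency point is that at a fixed 2x factor the filter phases are constant, so the shader's weights can be baked in. A hedged pure-Python sketch of a fixed-2x bilinear chroma upsample (sample siting simplified; madVR's actual shader code differs):

```python
def chroma_upsample_2x(plane):
    # Because the factor is exactly 2x, every output sample falls at a
    # source offset of either 0 or 0.5, so the bilinear weights reduce to
    # the constants 1.0 or 0.5/0.5 -- no per-pixel weight computation.
    h, w = len(plane), len(plane[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            y0, x0 = y // 2, x // 2
            y1 = min(y0 + 1, h - 1) if y % 2 else y0
            x1 = min(x0 + 1, w - 1) if x % 2 else x0
            out[y][x] = (plane[y0][x0] + plane[y0][x1]
                         + plane[y1][x0] + plane[y1][x1]) / 4.0
    return out
```

A general-purpose scaler has to compute (or look up) per-pixel weights for an arbitrary fractional phase, which is where the extra shader cost comes from.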

Quote:
Originally Posted by 6233638 View Post
While not related to selecting good defaults, in my testing, I also happened to come across a use-case for Jinc 8. There are some images I came across where using Jinc 8 AR for Chroma was the only option that looked really good, being the only choice that avoided aliasing and maintained the correct brightness/saturation for Chroma. This is not an endorsement of using it, as there are most likely problems using it with other images (as there tends to be with higher-tap filters) but I thought it was interesting.

The only other algorithm that looked decent with this image was SoftCubic 100, which was too dull/desaturated, but avoided most of the aliasing.
Yeah, my preference for using SoftCubic 100 is exactly that it avoids aliasing quite nicely with problematic sources. I've recently found a broadcast where a guy was wearing a black/red patterned jacket and it looked so much less bad with SoftCubic 100 compared to e.g. Bicubic or Lanczos.

Quote:
Originally Posted by 6233638 View Post
Now I don't suggest that SoftCubic 80 be the default for Luma at all, but I worry that Lanczos 3 AR will be too taxing for a lot of hardware.
I really hoped that going from Lanczos 4 (the original default) to Lanczos 3 would allow me to activate AR by default, but maybe you're right and it's too taxing. However, from HD 4000 reports it seems that Lanczos 3 AR is not really a problem. So the big question is which hardware to use as the reference.

Quote:
Originally Posted by 6233638 View Post
From looking at those numbers, I think that it would probably be best to use Bicubic 75 Chroma, which can be a significant visual improvement over Bilinear in many cases (often a close visual match to Lanczos or Jinc) for a minimal performance hit compared to Bilinear.
I'm still not sure which chroma algorithm to use. With some content I prefer SoftCubic, but in the meanwhile I've also run into real life content where a sharper chroma algorithm looks noticeably better.

Quote:
Originally Posted by 6233638 View Post
And now that I've overclocked the card (1GHz GPU/720MHz RAM) I seem to be able to play Blu-ray downscaled in linear light using Catmull-Rom AR without any problems, though it is probably still too taxing on some systems.
Interesting. So Catmull-Rom AR downscaling in linear light is less taxing than Lanczos 3 AR upscaling?
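For anyone wondering what "linear light" adds here: the extra work is a transfer-function round trip around the actual filtering (a pure gamma 2.2 curve is assumed below purely for illustration; real video uses BT.709/sRGB-like curves):

```python
def to_linear(v: float) -> float:
    # decode gamma-encoded value to linear light (assumed gamma 2.2)
    return v ** 2.2

def to_gamma(v: float) -> float:
    # re-encode linear light back to gamma
    return v ** (1 / 2.2)

def downscale_pair_linear_light(a: float, b: float) -> float:
    # Average in linear light, then re-encode; this is the physically
    # correct way to mix light intensities, and the two pow() calls per
    # sample are what cost the extra GPU math.
    return to_gamma((to_linear(a) + to_linear(b)) / 2)
```

Averaging black (0.0) and white (1.0) this way gives roughly 0.73 instead of the naive 0.5, which is why linear-light scaling visibly changes brightness around high-contrast edges.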

Quote:
Originally Posted by 6233638 View Post
That's interesting, I was under the impression that all the Bicubic variations (Mitchell-Netravali, Catmull-Rom, Bicubic, and SoftCubic) were all the same algorithm with adjusted values, that wouldn't change the load they put on the GPU.
All 2-tap algorithms should have exactly the same performance. And all 3-tap algorithms should have exactly the same performance. Etc...

Quote:
Originally Posted by 6233638 View Post
EDIT: And from doing some extra testing, at least with some scale factors (I don't know if it will change dynamically), Nvidia is basically using Bilinear scaling with the DXVA2 option. (results are slightly different) So it's even worse than I thought, considering that with madVR's Bilinear Luma scaling I was getting render times of about 3ms compared to the 50ms+ of DXVA2.

EDIT2: Actually, it's worse than that - if you use DXVA2 for Luma upscaling, Chroma upscaling is basically ignored. Unless DXVA2 is handled a lot better on AMD/Intel, I wonder if it should actually be removed.
madVR v0.85.2 will change from NV12 -> NV12 DXVA2 scaling to NV12 -> RGB DXVA2 scaling. Maybe NVidia will use a better algorithm then? I don't know, but a retest might make sense.

Quote:
Originally Posted by mindbomb View Post
I have an interesting problem. With hardware deinterlacing on my radeon, it doubles the frame rate, and thus, halves the movie frame interval, requiring even faster rendering than I can manage. Can I disable this in the registry somehow to keep the frame rate the same?
There are different types of content. If you have native video content, doubling the framerate is the only right thing to do. If you have native movie sources (telecined to 25i/30i) you can switch madVR into film mode (see keyboard shortcuts) and you'll get 24p/25p output instead of 50p/60p.

Some day in the future I hope to be able to automatically switch between DXVA deinterlacing and madVR's IVTC. But that won't come any time soon.
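The two modes described above boil down to simple rate arithmetic (values from this discussion; the cadence mapping below is a simplification and assumes standard pulldown patterns):

```python
def output_fps(field_rate_hz: float, film_mode: bool) -> float:
    # Video mode: one output frame per field, i.e. double-rate output.
    if not film_mode:
        return field_rate_hz              # 50i -> 50p, 60i -> 60p
    # Film mode (IVTC): undo the telecine cadence instead.
    if abs(field_rate_hz - 60.0) < 1.0:   # assumed 3:2 pulldown
        return 24.0
    return field_rate_hz / 2              # assumed 2:2 pulldown (50i -> 25p)
```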

Quote:
Originally Posted by DragonQ View Post
If the material is interlaced, that's what's supposed to happen (50/60p after deinterlacing). If it's progressive material in an "interlaced wrapper" (e.g. films or dramas in a TV stream) then it should be detected as such and deinterlaced using "weave", reproducing the original progressive image (25/30p after deinterlacing).
Unfortunately DXVA2 is not telling anybody which content type it detected during deinterlacing. At the same time it's not DXVA2 which decides the framerate, but the renderer. So madVR always outputs double framerate when deinterlacing with DXVA2.

Quote:
Originally Posted by MSL_DK View Post
I do not know whether it's yCMS that fails or the way madVR handles yCMS ... but something's wrong
Unfortunately I don't have access to the yCMS sources so I can't say what's wrong exactly or fix the problem myself. However, I would guess that inconsistencies in your measurements might be the problem.

Quote:
Originally Posted by DarkSpace View Post
Many thanks for this piece of work, it's really great! However, I'd like to report a possible bug:
When switching the screen refresh rate (manually, not using the automatic Refresh Rate Changer) to 120Hz on my Laptop's internal 1080p screen, madVR stops outputting a picture. The Media Player window still changes its size to the video resolution in Windowed Mode, but both Fullscreen Exclusive and Windowed Mode only show a black picture. Interestingly, the Debug OSD (already enabled from before switching) doesn't display in Windowed Mode, but in FSE, it briefly shows an additional line
Code:
Desktop Composition Rate: 120.000Hz [there are a few trailing zeros, but I couldn't count them]
before that line vanishes, but the interesting part is that it shows
Code:
display 0.000000Hz
It doesn't update during playback at all, and when the playback is paused, it only updates the Render Queue. I have included a Debug Log of a few seconds of video playing for both Windowed Mode and FSE. Also interesting is that while the screen stays blank except for the Debug OSD, the FSE seekbar does show up, albeit with a delay of a few seconds. Also, after a while, it seems MPC-HC just hangs, because audio also stops playing. In the case of the Debug logs, the audio stopped at around 25 seconds.
I should mention that EVR-CP works perfectly fine.
My Setup:
Code:
AMD Radeon HD 670M with up-to-date drivers
Windows 7 Ultimate
MPC-HC 1.6.5.6291
LAV Filters 0.54.1
ReClock 1.8.7.9
madVR 0.85.1
I did use the search and didn't find anything alike. Unfortunately, I don't have any other screens capable of 120Hz at all, so I can't test if it's maybe my drivers' fault.
There was one other user with 120Hz which reported the same problem many months ago. The problem is that madVR tries to find a pattern in the VSync scanline position readings (which are done in high frequency by madVR) and it seems that sometimes when using 120Hz madVR has a problem finding a reliable pattern. As a result madVR is unable to figure out the exact refresh rate and when exactly the VSync interrupts will occur in the future. This is needed for playback, though, because the whole madVR presentation logic is based on syncing the frames to the future VSync interrupts. I would really like to fix this problem, but it's going to be hard without having a 120Hz display for testing, and unfortunately all my displays top out at 60Hz at the moment...
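The underlying task can be sketched as fitting a line to timestamped VSync readings; if the readings are too noisy or irregular (as apparently happens at 120Hz on some hardware), both the estimated rate and the predicted future interrupt times fall apart. Illustrative code only, not madVR's actual detection logic:

```python
def estimate_refresh_hz(vsync_times_s):
    # Least-squares slope of timestamp vs. vsync index = refresh interval.
    n = len(vsync_times_s)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_t = sum(vsync_times_s) / n
    num = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, vsync_times_s))
    den = sum((x - mean_x) ** 2 for x in xs)
    interval = num / den
    return 1.0 / interval

def predict_next_vsync(vsync_times_s):
    # Playback needs the *future* interrupt times, not just the rate.
    return vsync_times_s[-1] + 1.0 / estimate_refresh_hz(vsync_times_s)
```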

Quote:
Originally Posted by DragonQ View Post
I'm sure I've mentioned this before but auto-detection of deinterlacing seems to be broken for some HDTV (UK) streams in MKVs (maybe other types of files too, dunno). The same streams in TS files deinterlace fine, and the MKVs deinterlace fine in EVR too. Using LAV Splitter/Video/Audio.

EDIT: On second thoughts, I think this is probably an issue in MKV Merge. My older MKVs work fine, newer ones don't. If it is, Mosu needs to know why the interlacing isn't detected properly so can you help with this please madshi?
The problem is the framerate in the MKV header. It says 50p. LAV Splitter reads that and passes it on to the decoder. The decoder passes that forward to madVR and that's why madVR decides not to deinterlace the video because madVR generally does not deinterlace 50p or 100i content.
madshi is offline   Reply With Quote
Old 4th December 2012, 19:59   #15952  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by sneaker_ger View Post
Catalyst 12.11 beta 11 seems to have fixed the issue.
Oh, that's cool!!
madshi is offline   Reply With Quote
Old 4th December 2012, 20:27   #15953  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
madVR v0.85.2 released

http://madshi.net/madVR.zip

Code:
* fixed: CoreAVC DXVA decoder didn't work (introduced in v0.85.1)
* fixed: ffdshow DXVA decoder didn't work
* fixed: when using DXVA2, sometimes BTB and WTW were lost
* fixed: thumbnail creation with MPC-HC sometimes didn't work
* fixed: Jinc option was sometimes incorrectly disabled
* fixed: 4:2:2/4:4:4 -> NV12 conversion routines used point sampling
* added dithering to 10/16bit -> NV12 conversion routines
* added SSE2 routine for P010/P016/P210/P216 -> NV12 conversion
* added options for custom display output levels
* added display specific color controls
* added volatile source color controls, with keyboard shortcuts
* brightness control now changes image gamma instead of white level
* contrast control now changes image contrast instead of black level
* custom shaders now run in PC levels (0-255) instead of TV levels
* added optimized DXVA copyback solution for NVidia and Intel GPUs
* optimized quality of DXVA2 NV12 -> HLSL NV12 conversion
* combined DXVA deinterlacing and DXVA scaling into one step
* modified DXVA scaling to now output RGB instead of NV12
* added color correction and auto-loading for new subtitle interface
* linear light processing might have gotten slightly faster
Some comments:

(1) madVR now supports 3 (!!) different color controls. I know, it's a bit confusing, but there is some sort of sense to it: There are now (a) per-display color controls available in the madVR "device manager", which allows you to maybe optimize display setup/calibration. Then there are (b) "source" color controls available which you can only modify by assigning keyboard shortcuts to them. These "source" color control settings are not stored, so they reset themselves to neutral values when you close the media player or load a new video. You can change these source color controls to correct badly encoded videos without having to worry about resetting the color controls afterwards again. Then there are (c) the color controls of the media player. These don't really make too much sense because they are neither resetting themselves, nor are they really per monitor/display. But madVR supports them, nevertheless, just to have a complete solution.

(2) The "brightness" and "contrast" color controls now do not change the black and white levels, anymore. Instead they modify the look of the image to appear brighter/darker or more/less contrasty, while still keeping peak black and white identical. If you want to modify black and white output levels, you can now use the new custom output levels option in the madVR device manager. Or you can fix bad movie encodings by using the new "source black level" and "source white level" controls, which you can only modify by assigning a keyboard shortcut, and which auto reset themselves.

(3) I've totally reworked the way madVR deals with DXVA2 output. This is very very complicated, so let me try to explain:

A. When using DXVA2 scaling (regardless of whether DXVA2 deinterlacing and/or decoding is used), madVR asks DXVA2 to convert the NV12 source to RGB. madVR then auto-analyzes what the GPU did and reverts the DXVA2 color conversion (RGB -> YCbCr 4:4:4), so that madVR can apply its own color conversion. This sounds bad? Yes, it is, but it seems to work nicely and it allows me to get rid of all the scaling problems like green lines with odd widths/heights which I had when asking DXVA2 to scale from NV12 -> NV12. It also means that when using DXVA2 scaling, chroma is now always upsampled by DXVA2 too, which saves more power because madVR can then totally skip chroma upsampling. ATI and NVidia support output to high bitdepth RGB, so the whole process works nicely with ATI and NVidia (at least on Windows 7). Unfortunately Intel currently only outputs 8bit RGB, so banding could be re-introduced there. But I'm working with egur on that; maybe we can manage to get this fixed in the Intel drivers.
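"Reverting" the conversion is possible because RGB to YCbCr is an invertible linear transform. A minimal full-range BT.709 sketch (coefficients assumed for illustration; madVR auto-detects what the GPU actually applied, including matrix and levels):

```python
KR, KB = 0.2126, 0.0722      # BT.709 luma coefficients
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    # forward transform (what madVR applies to "undo" the driver's work)
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))
    cr = (r - y) / (2.0 * (1.0 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # inverse transform (what the GPU's DXVA2 processor applied)
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b
```

Since the round trip is mathematically lossless (up to precision), converting back costs some GPU work but no quality, provided the driver's output has enough bit depth, which is exactly the Intel 8-bit caveat above.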

B. When not using DXVA2 scaling, but when using DXVA2 decoding and/or deinterlacing, madVR uses a different approach for NVidia and Intel GPUs compared to AMD GPUs. With AMD GPUs, madVR is able to access the DXVA2 NV12 output in lossless quality without needing to do fancy workarounds. With NVidia and Intel that's not possible, so with those I'm now using "copyback" to get lossless quality (only when your CPU supports SSE4.1, though). My copyback algorithm could/should be slightly more efficient than those used by external decoders, but I'm not sure by how much. Maybe someone can do a comparison test?

Generally I hope that colors are now always identical when using DXVA2 decoding, deinterlacing and scaling, compared to when not using any DXVA2 stuff at all. But I wasn't really able to test this in every filter & setting combination on every OS with every GPU, of course, so I'm relying on you guys to double check.

(4) Custom shaders should now produce 100% identical results to EVR/VMR9 (except for the broken "nightvision" shader script), except that madVR doesn't clip BTB/WTW, at the cost of probably slightly slower performance.

(5) madVR's 4:2:2/4:4:4 -> 4:2:0 and high-bitdepth -> 8bit conversion routines (when using DXVA2 deinterlacing and/or scaling) should now have highest possible quality with proper dithering + correct chroma handling. So banding should be noticeably reduced with e.g. 10bit encodes, when using DXVA2 scaling or deinterlacing.
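Why dithering matters for the high-bitdepth to 8-bit step: plain rounding maps whole bands of 10-bit codes to a single 8-bit value, while dithering preserves the average level. A sketch using simple random dither (madVR's actual dithering is more sophisticated):

```python
import random

def quantize_10_to_8(v10: int, dither: bool = True) -> int:
    v = v10 / 4.0                        # 1024 codes -> 256 codes
    if dither:
        v += random.random() - 0.5       # break up banding before rounding
    return min(255, max(0, int(round(v))))
```

Over many pixels a 10-bit value like 514 then averages out to about 128.5 in the 8-bit output, so the in-between level survives as a noise pattern instead of collapsing into a visible band.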
madshi is offline   Reply With Quote
Old 4th December 2012, 20:43   #15954  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,478
I get huge DXVA2 decoding issues with this one. Usually the video area will just be transparent upon opening a file. Also, the MPC-HC process does not exit correctly and stays up and running.
The funny thing is that the debug version works better than the normal version, so I cannot just upload a log. (Well, I could anyways, of course.)
sneaker_ger is offline   Reply With Quote
Old 4th December 2012, 20:54   #15955  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,791
Quote:
Originally Posted by madshi View Post
My copyback algorithm could/should be slightly more efficient than those used by external decoders, but I'm not sure by how much.
Care to elaborate why?
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
nevcairiel is offline   Reply With Quote
Old 4th December 2012, 21:10   #15956  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by sneaker_ger View Post
I get huge DXVA2 decoding issues with this one. Usually the video area will just be transparent upon opening a file. Also, the MPC-HC process does not exit correctly and stays up and running.
The funny thing is that the debug version works better than the normal version, so I cannot just upload a log. (Well, I could anyways, of course.)
Which GPU, which OS, which driver, which splitter, which decoder, which media player? Have you tried different decoders? Different splitters, different media players? Thanks!

Quote:
Originally Posted by nevcairiel View Post
Care to elaborate why?
Because external decoders have to first download the whole video frame from GPU RAM to CPU RAM, then put the data into an IMediaSample. Then the renderer has to upload it back to the GPU. So basically it's GPU -> CPU. Then CPU -> GPU.

madVR can internally do this better because I have direct access to both the DXVA surface and the target texture at the same time. So I can directly copy from one GPU surface to the other GPU surface. So basically I only have to do half of the work compared to an external copyback decoder (GPU -> GPU). However, the whole frame has to go twice over the PCIe bus with my solution, too, so I'm not sure how much more efficient my solution really is.
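A back-of-envelope model of the two paths (entirely illustrative numbers and cost model, just to frame the comparison test being asked for):

```python
FRAME_BYTES = 1920 * 1080 * 3 // 2       # one NV12 1080p frame (12 bpp)

def external_copyback_ms(pcie_gb_s=8.0, cpu_copy_gb_s=5.0):
    # External decoder: GPU -> CPU RAM, memcpy into an IMediaSample,
    # then CPU RAM -> GPU again.
    bus = FRAME_BYTES / (pcie_gb_s * 1e9) * 1000
    staging = FRAME_BYTES / (cpu_copy_gb_s * 1e9) * 1000
    return 2 * bus + staging

def internal_copyback_ms(pcie_gb_s=8.0):
    # madVR's path: direct surface-to-surface copy; the frame still
    # crosses the bus twice, but the CPU-side staging copy disappears.
    return 2 * FRAME_BYTES / (pcie_gb_s * 1e9) * 1000
```

Under this model the saving is exactly the staging copy, which matches the "half of the work, but still twice over PCIe" description.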
madshi is offline   Reply With Quote
Old 4th December 2012, 21:17   #15957  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,713
My issue with DXVA2 scaling and display mode switch is gone with .2.
aufkrawall is offline   Reply With Quote
Old 4th December 2012, 21:28   #15958  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,478
Quote:
Originally Posted by madshi View Post
Which GPU, which OS, which driver, which splitter, which decoder, which media player? Have you tried different decoders? Different splitters, different media players? Thanks!
HD 5850 (Cat 12.11 beta 11)
Win 7 x64
LAV Splitter 0.54.1
MPC-HC 1.6.5.6187
madVR 0.85.2 (settings reset to standard settings)

The decoder does seem to make a difference. In general LAV Video seems to be the most problematic, MPC's internal works best and Microsoft decoder is somewhere in between. Sometimes video doesn't start, sometimes seeking makes the player freeze. It's erratic and I have trouble narrowing it down further, but I will keep trying.
Now even LAV Video works sometimes.

Even if it does work, it takes about 0.5 seconds for the video to start. Does madVR do any checks at start up that could result in such a long delay?
sneaker_ger is offline   Reply With Quote
Old 4th December 2012, 21:43   #15959  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,713
Quote:
Originally Posted by sneaker_ger View Post
Even if it does work, it takes about 0.5 seconds for the video to start. Does madVR do any checks at start up that could result in such a long delay?
It doesn't take that long for me on Nvidia, not even with playback delay enabled.
aufkrawall is offline   Reply With Quote
Old 4th December 2012, 21:55   #15960  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,478
Now I cannot even reproduce the problem with LAV Video anymore. Maybe it was just the driver acting up again. I will keep an eye on the problem.
What still remains is the 0.5 seconds of transparent video area/delay until start.
sneaker_ger is offline   Reply With Quote