Old 14th October 2018, 13:34   #53201  |  Link
blaubart
Registered User
 
Join Date: Apr 2009
Posts: 42
@huhn ...so you also didn't read and understand what I'm saying - this may be the beginning of a wonderful friendship
Old 14th October 2018, 13:34   #53202  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,923
should be a driver bug.

madVR doesn't do anything differently whether you send it RGB, 4:2:0 or something like that.
Old 14th October 2018, 13:52   #53203  |  Link
Thunderbolt8
Registered User
 
Join Date: Sep 2006
Posts: 2,197
Quote:
Originally Posted by chros View Post
Madshi knows about this and is trying to do something about it.
ok, thanks.
__________________
Laptop Lenovo Legion 5 17IMH05: i5-10300H, 16 GB Ram, NVIDIA GTX 1650 Ti (+ Intel UHD 630), Windows 10 x64, madVR (x64), MPC-HC (x64), LAV Filter (x64), XySubfilter (x64) (K-lite codec pack)
Old 14th October 2018, 15:29   #53204  |  Link
zapatista
Registered User
 
Join Date: Nov 2016
Posts: 21
Quote:
Originally Posted by zapatista View Post
a general question for the video experts here:
is it possible with madvr to use the GPU (e.g. from a gtx 1060) to play mp4 video files instead of them being processed by the cpu?
my haswell i7 cpu seems to do most of the work, and i am hoping to offload it to the more modern GPU, which might be more efficient (fine for mp4 @ 2k, but the cpu maxes out on 4k mp4 files while the GPU is only @ 10% load). with HEVC files of the same resolution, the GPU does most of the work and the cpu stays @ 20%, which is much more cpu/gpu efficient (i know hevc is a much more efficient format, but i am hoping the gpu can be used more for mp4 decoding now too)
Quote:
Originally Posted by sneaker_ger View Post
madVR doesn't do video decoding by itself; other components need to handle that. Usually, players/decoders that allow HEVC hardware decoding also allow H.264/AVC hardware decoding (H.264/AVC is the most common codec in mp4 files, but there are others). For example, LAV Video/MPC-HC can do that.
thank you for the answer; i am not sure how to implement your advice however.
i have installed madvr, mpc-hc and LAV Filters 0.73.1, and have been using madvr for many years now on my older htpc setup (but i have no expertise in video technology and what exactly does what)

what settings should i look at to ensure my new GPU (gtx 1060) does the mp4 (h264/AVC) decoding, so i can lower the CPU load on my older haswell processor?

ps: my freshly re-installed mpc-hc and madvr settings are pretty much still at their defaults right now
Old 14th October 2018, 15:54   #53205  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,923
mpc-hc's default settings already use hardware decoding, via DXVA2.

try this guide: https://www.avsforum.com/forum/26-ho...tup-guide.html

it's old but still "accurate". the Video Decoding part is the one you should look into.

i recommend DXVA2 copy-back.
the hardware decoder doesn't really put strain on the GPU.
and just to make this clear: a file being "mp4" doesn't mean it can be hardware decoded.
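
one way to check what a given "mp4" actually contains is to ask ffprobe for the codec of the first video stream. a minimal sketch, assuming ffprobe from FFmpeg is on the PATH and Python 3.7+:

Code:
import subprocess

def video_codec(path):
    """Return the codec name of the first video stream, e.g. 'h264' or 'hevc'."""
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "v:0",               # first video stream only
         "-show_entries", "stream=codec_name",   # just the codec name
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(video_codec("movie.mp4"))  # 'h264' can be hardware decoded on a GTX 1060

if it reports something like 'mpeg4' (old ASP/DivX-era video in an mp4 container), the GPU decoder typically won't take it and LAV falls back to software decoding.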
Old 14th October 2018, 16:52   #53206  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
Quote:
Originally Posted by HDR View Post
I can't live without LG's "active HDR". It tries to mimic HDR10+/Dolby Vision by analyzing the whole scene and adjusting brightness and tone mapping per frame. I don't know if this is something madshi is working on or is even possible with madVR, but until/if this happens I'm sticking with LG's implementation.

It really helps with shadow detail in dark scenes. Oftentimes madVR is simply too dark compared to the LG with active HDR turned on.

You can use madVR for tone mapping and still turn on LG active HDR, but then you've got a kind of double tone-mapping situation, where the TV is trying to dynamically adjust brightness and highlights after madVR has already tone mapped. It doesn't quite look right IMO. Oftentimes it will just make the highlights even darker.
Both madVR and all of the current OLED TVs do dynamic tone mapping that is similar to HDR10+. The goal is to reduce brightness compression in any scene where this is possible by ignoring the single peak brightness variable and measuring each frame and each scene independently. The brightest scenes usually see no relief in compression, but dimmer scenes can be brighter.

madVR's implementation generally alters the information above reference white (100 nits), while the newest LG OLEDs (especially the 2018 models) appear to alter reference white as well as the brighter specular highlights. The LG OLEDs are known to be the most dynamic when it comes to changes in brightness compared to the solutions offered by Sony, Panasonic and Philips. This is probably a matter of taste. None of the OLEDs should be boosting the brightness of any values in an artificial way, just restoring the recorded brightness of each pixel whenever possible. PQ values are absolute in terms of display nits, and each pixel is meant to be displayed at a specific brightness. Tone mapping makes the image darker and, when done poorly, can cause a visible loss of detail as well as some loss of color saturation in bright image areas.

Last edited by Warner306; 14th October 2018 at 17:04.
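
For reference, the absolute mapping described above is the SMPTE ST 2084 (PQ) EOTF: a normalized code value E' in [0,1] corresponds to a fixed display luminance F_D in nits, independent of the display:

F_D = 10000 \cdot \left( \frac{\max\left(E'^{1/m_2} - c_1,\, 0\right)}{c_2 - c_3 \cdot E'^{1/m_2}} \right)^{1/m_1}

m_1 = \tfrac{2610}{16384}, \quad m_2 = \tfrac{2523}{4096} \cdot 128, \quad c_1 = \tfrac{3424}{4096}, \quad c_2 = \tfrac{2413}{4096} \cdot 32, \quad c_3 = \tfrac{2392}{4096} \cdot 32

This is why tone mapping is needed at all: the signal can encode pixels up to 10,000 nits, far beyond what any current display can reproduce.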
Old 14th October 2018, 16:58   #53207  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
Quote:
Originally Posted by blu3wh0 View Post
I thought madVR does do dynamic tone mapping to adjust brightness across the whole image, but this only seems to work in SDR. It should be able to do this, since it knows the peak brightness of each scene. However, for most of the range when outputting in HDR, brightness seems unchanged. Is this due to the static metadata that is sent for HDR? Are our TVs relying only on the mastering nits in the HDR metadata to scale brightness? Maybe if the metadata could be manipulated to send the scene's peak brightness as mastering metadata, it would change the brightness range dynamically. I don't know.
If you set the pixel shader target to 750 nits, only pixels above 750 nits will receive any tone mapping at all. This is how the current test builds work when "measure each frame's peak luminance" is checked. So the first 750 nits should be displayed 1:1 on your display (provided it is that bright), and anything brighter will be tone mapped. This means only very bright scenes should show a visible change.

Of course, this depends on how your display reacts to the input signal. It may take the 750-nit input and compress it instead of displaying it with clipping, which would be inaccurate and less bright than it should be.

Last edited by Warner306; 14th October 2018 at 17:01.
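
To illustrate that behaviour with a sketch (a hypothetical hard-knee curve in nits, not madVR's actual algorithm), everything below the knee passes through 1:1 and only the range above it is compressed into the display's remaining headroom:

Code:
def tone_map_pixel(nits, display_peak=750.0, frame_peak=4000.0, knee=0.9):
    """Map a pixel's luminance (nits) to a display with a limited peak.

    Hypothetical illustration, not madVR's actual curve: pixels below
    the knee are displayed 1:1; brighter pixels are compressed linearly.
    """
    k = knee * display_peak  # roll-off starts slightly below the display peak
    if frame_peak <= display_peak or nits <= k:
        return nits  # dim frames (or dim pixels) pass through unchanged
    # squeeze [k, frame_peak] into the remaining headroom [k, display_peak]
    t = (nits - k) / (frame_peak - k)
    return k + t * (display_peak - k)

print(tone_map_pixel(500.0))   # 500.0 -> displayed 1:1
print(tone_map_pixel(2000.0))  # ~704.9 -> rolled off below the display peak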
Old 14th October 2018, 17:35   #53208  |  Link
corporalgator
Registered User
 
Join Date: Jul 2008
Posts: 60
Quote:
Originally Posted by zapatista View Post
thank you for the answer; i am not sure how to implement your advice however.
i have installed madvr, mpc-hc and LAV Filters 0.73.1, and have been using madvr for many years now on my older htpc setup (but i have no expertise in video technology and what exactly does what)

what settings should i look at to ensure my new GPU (gtx 1060) does the mp4 (h264/AVC) decoding, so i can lower the CPU load on my older haswell processor?

ps: my freshly re-installed mpc-hc and madvr settings are pretty much still at their defaults right now
How much of a load are we talking about? If you stick to letting your CPU decode, that frees up more of your GPU for madVR rendering.
Old 14th October 2018, 17:38   #53209  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
That's not true. The hardware decoder is a separate part of the GPU. Hardware acceleration will actually improve performance (not in madVR, but in general) by freeing up the CPU without harming GPU performance.
Old 14th October 2018, 17:51   #53210  |  Link
Betroz
Is this for real?
 
Betroz's Avatar
 
Join Date: Mar 2016
Location: Norway
Posts: 168
Quote:
Originally Posted by Warner306 View Post
That's not true. The hardware decoder is a separate part of the GPU. Hardware acceleration will actually improve performance (not in madVR, but in general) by freeing up the CPU without harming GPU performance.
For people who have a GTX 1080 card, which hardware decoder do you recommend? CUVID or...?
__________________
My HTPC : i9 10900K | nVidia RTX 4070 Super | TV : Samsung 75Q9FN QLED
Old 14th October 2018, 17:54   #53211  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,127
DXVA2 copy-back if you want hardware deinterlacing and black bar detection. D3D11 Automatic (Native) for everyone else.
Old 14th October 2018, 21:12   #53212  |  Link
Manni
Registered User
 
Join Date: Jul 2014
Posts: 942
Quote:
Originally Posted by Warner306 View Post
DXVA2 copy-back if you want hardware deinterlacing and black bar detection. D3D11 Automatic (Native) for everyone else.
You can get black bar detection with D3D11 copyback (by selecting the GPU instead of leaving auto/native in LAV). That’s what I use and it works fine, although there is a performance loss with copyback compared to native.
__________________
Win11 Pro x64 b23H2
Ryzen 5950X@4.5Ghz 32Gb@3600 Zotac 3090 24Gb 551.33
madVR/LAV/jRiver/MyMovies/CMC
Denon X8500HA>HD Fury VRRoom>TCL 55C805K
Old 14th October 2018, 21:17   #53213  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,346
Quote:
Originally Posted by Manni View Post
You can get black bar detection with D3D11 copyback (by selecting the GPU instead of leaving auto/native in LAV). That’s what I use and it works fine, although there is a performance loss with copyback compared to native.
There is also a performance loss with D3D11 copyback compared to DXVA2 copyback.
D3D11 is really only recommended in native mode; its copy-back performance is lower, and it can impact the GPU more due to some constraints in the API (i.e. you cannot access D3D11 textures directly from software, probably so that GPUs can handle them more efficiently if they don't need to worry about that).
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 14th October 2018, 23:36   #53214  |  Link
Manni
Registered User
 
Join Date: Jul 2014
Posts: 942
Quote:
Originally Posted by nevcairiel View Post
There is also a performance loss with D3D11 copyback compared to DXVA2 copyback.
D3D11 is really only recommended in native mode; its copy-back performance is lower, and it can impact the GPU more due to some constraints in the API (i.e. you cannot access D3D11 textures directly from software, probably so that GPUs can handle them more efficiently if they don't need to worry about that).
Thanks, I'll try to go back to DXVA2 copyback then.

I hope Madshi will fully support D3D11 at some point; it would be nice for those of us who need black bar detection.
__________________
Win11 Pro x64 b23H2
Ryzen 5950X@4.5Ghz 32Gb@3600 Zotac 3090 24Gb 551.33
madVR/LAV/jRiver/MyMovies/CMC
Denon X8500HA>HD Fury VRRoom>TCL 55C805K

Last edited by Manni; 14th October 2018 at 23:41.
Old 15th October 2018, 00:31   #53215  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,923
the issue with black bar detection is that it runs on the CPU, not anything to do with d3d11. it is clearly possible to do it on the GPU, and that's planned too if i'm not mistaken. but i'm not 100% sure why you'd want it on the GPU; it's not like copyback hurts noticeably on a good GPU, and a GPU version of black bar detection is clearly not going to be free.

@nev
if i understand d3d11 copyback with a headless GPU correctly, then d3d11 decoding will only affect the decoding GPU, not the rendering GPU, because for the rendering GPU there is no real difference between this and software decoding, right?

or in short: d3d11 copyback from a headless GPU doesn't affect madVR at all?

and i would say d3d11 native should not be recommended until madshi supports deinterlacing with it.
Old 15th October 2018, 01:13   #53216  |  Link
HDR
Registered User
 
Join Date: Oct 2018
Posts: 17
Quote:
Originally Posted by nevcairiel View Post
Not really, it's just "fake".

Actual dynamic HDR has mastering engineers controlling the parameters of every scene, so that intentional brightness differences are fully maintained, which can give a movie much more depth.
Those automatic algorithms have no clue about the creative intent of a movie and just "unify" everything, which can take away a lot of depth.

For me, it's in a similar boat to extremely cranked-up brightness and color vibrance, or those fake HDR encodes you can find online made from consumer SDR sources. It makes for a good show-floor presentation, but it's far from the creative intent of a movie. If you are into that sort of thing, go nuts. But don't claim your subjective preference is the only valid way to watch anything, because that's just silly.
Considering there is no standard for tone mapping, you can't say that LG's method is better or worse than any other method; it's just different.

I prefer LG's active HDR and so do many others.

Do you actually own an OLED, and have you compared it with this on and off? You're acting like it turns the TV into vivid mode or something. It doesn't.

It subtly increases brightness in dark scenes to better show shadow detail, and subtly decreases brightness in bright scenes to restore highlight detail, similar to madVR's highlight recovery.

Anyway, we all have our preferences.
Old 15th October 2018, 01:37   #53217  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,923
but madVR follows the bt.2390 standard.

it will change colors that can't be shown by your display because it can't get bright enough.

if you set the target peak nits to the same value as the source file, or higher, it will not change colors.

and if your LG plays around with dark parts of the image, that is clearly no different from vivid mode, because the display is clearly capable of showing the dark details at the creator's intended brightness, so there is no need to change anything.
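
for reference, the bt.2390 EETF mentioned above works in the normalized PQ domain: values below a knee point pass through unchanged, and only the range above it is rolled off with a Hermite spline. with L_{max} the target display peak (as a normalized PQ value):

K_S = 1.5 \cdot L_{max} - 0.5, \qquad T = \frac{E_1 - K_S}{1 - K_S}

E_2 = \begin{cases} E_1 & 0 \le E_1 < K_S \\ (2T^3 - 3T^2 + 1)\, K_S + (T^3 - 2T^2 + T)(1 - K_S) + (-2T^3 + 3T^2)\, L_{max} & K_S \le E_1 \le 1 \end{cases}

note that for L_{max} = 1 (target peak at or above the source peak) the knee K_S reaches 1 and the curve degenerates to the identity, which is exactly the "it will not change colors" case above.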
Old 15th October 2018, 02:52   #53218  |  Link
HDR
Registered User
 
Join Date: Oct 2018
Posts: 17
https://www.avnirvana.com/threads/hd...r-schemes.588/

Quote:
Originally Posted by Robert Zohn
From what I am told "active" HDR10 works very much like the proposed HDR10+.
Quote:
Originally Posted by Robert Zohn
Samsung has proposed a change to the SMPTE ST.2086 base standard HDR10 by adding dynamic metadata, so it can perform much like Dolby Vision. Another very similar method measures each frame on the fly and tone maps to the display's brightness capability; this is commonly called "active HDR".

LG and Sony are doing "active HDR" with the current base standard HDR10 — Sony on all of the 2017 X1 Extreme processor TVs, like the A1E OLED, Z9D, X940E and X930E, while LG employs very similar active HDR10 processing on their SJ9500, C7, E7, G7 and W7 OLED TVs.

As I am more familiar with how Sony's active HDR10 operates, I'll keep my comments to Sony's 2017 X1 Extreme models. You might notice very little difference between Dolby Vision and HDR10 on Sony's X1E-equipped TVs. Sony HDR TVs don't use the static brightness metadata in HDR10 (MaxCLL, MaxFALL); they actually measure brightness frame by frame and generate dynamic metadata for HDR10 content.

MaxFALL stands for "Maximum Frame-Average Light Level" and corresponds to the highest frame-average brightness of any single frame in the entire content.

MaxCLL is the "Maximum Content Light Level", an additional piece of static HDR metadata that represents the brightest pixel of the entire content.

In part this explains why the standard 10% peak luminance window test pattern may measure a lower peak luminance than what the display is actually capable of delivering with real HDR content, where the specular highlights typically occupy less than 1% of the screen.

This HDR anomaly is most noticeable when the content is mastered at 4k nits. In Sony's new 2017 X1 Extreme processor TVs, the HDR algorithms are tuned to apply tone mapping (which reduces screen brightness and accuracy) when the brightness of the frame exceeds the TV set's capabilities; it is then applied to ensure HDR highlights are properly displayed.
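
To make those two definitions concrete, here is a minimal sketch, assuming numpy and frames already decoded to a per-pixel light level in nits (the maximum of the linear R, G, B components per pixel):

Code:
import numpy as np

def maxcll_maxfall(frames):
    """Compute the static HDR10 brightness metadata from decoded frames.

    frames: iterable of 2-D numpy arrays holding per-pixel light
    level in nits (max of the linear R, G, B components per pixel).
    """
    maxcll = 0.0   # brightest single pixel in the whole content
    maxfall = 0.0  # highest frame-average light level of any frame
    for frame in frames:
        maxcll = max(maxcll, float(frame.max()))
        maxfall = max(maxfall, float(frame.mean()))
    return maxcll, maxfall

A TV doing "active HDR" effectively recomputes the per-frame peak on the fly for every frame instead of trusting these two static content-wide values.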
Old 15th October 2018, 03:06   #53219  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,923
and now go and mark the part in that quote that states it makes darker parts brighter.

the only reason not to measure the peak brightness is performance.
this doesn't change the max nits added by DV and HDR10+.
Old 15th October 2018, 08:01   #53220  |  Link
Betroz
Is this for real?
 
Betroz's Avatar
 
Join Date: Mar 2016
Location: Norway
Posts: 168
Sorry if I'm being slow to understand this, but I need some clarification. To avoid black crush / a too-dark image, I must use the following settings:

- TV set to Black Level = Low
- Nvidia color settings: RGB, 8 bpc, Full
- madVR set to TV levels (16-235)

Is this correct? My LG C8 TV has only two options for Black Level in HDMI mode, High and Low. I have tried setting the TV to High and madVR to PC levels (0-255), but that gives me raised blacks / a lighter image instead. I'm confused, since I did not have this problem with my old Panasonic VT30 plasma, or at least I never noticed it. Btw, this is with SDR content.
__________________
My HTPC : i9 10900K | nVidia RTX 4070 Super | TV : Samsung 75Q9FN QLED
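
For reference, blacks only come out right when the range madVR sends matches the range the TV expects (on LG sets, Black Level Low expects limited/TV range, High expects full/PC range). Only one stage in the chain should expand limited-range video; the standard 8-bit limited-to-full expansion is:

Y_{full} = \frac{(Y_{limited} - 16) \cdot 255}{219}

If that expansion effectively happens twice, blacks get crushed; if it never happens, blacks look raised and washed out.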