30th March 2013, 01:33 | #18141 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,926
from madshi
Quote:
30th March 2013, 04:30 | #18142 | Link |
Registered User
Join Date: Dec 2012
Location: Neverland, Brazil
Posts: 169
That's exactly what I was asking, just in more depth. madshi says: "it's highly recommended that you disable the "trade quality" option for highest image quality", but "highest image quality" alone doesn't answer my question. Does it increase sharpness? Does it make playback less jittery? Does it make motion more fluid? I really didn't notice any difference, so I want to know what I can expect from it.
__________________
madVR scaling algorithms chart - based on performance x quality | KCP - A (cute) quality-oriented codec pack |
30th March 2013, 05:09 | #18143 | Link |
Registered User
Join Date: Jul 2008
Posts: 157
30th March 2013, 05:22 | #18144 | Link |
Registered User
Join Date: Jul 2008
Posts: 157
Quote:
This is while using the LAV software decoder. Now it can handle anything I throw at it, given that the hardest load is upscaling and deinterlacing 576i to 1080p. With 1080i sources I can even enable Jinc3 for image upscaling. Windowed mode, however, won't stop dropping frames no matter what I try. So perhaps a suggestion would be to have different scaling options depending on source resolution? I also wanted to thank madshi for including the performance options, saving me a good bit of money in the process.
30th March 2013, 08:35 | #18146 | Link |
Registered User
Join Date: Jul 2011
Posts: 10
Are there any benefits to using DXVA vs no hardware decoder at all (provided you have a CPU that can handle the processing on its own)?
I'm running an i7 930 @ 4.2 GHz, which handles video pretty well, but I also have a 7950 3 GB @ 1150 MHz/1700 MHz. When using DXVA there's a bit of a lag/loading delay when skipping around in a video, which isn't present when hardware decoding isn't used. So I'm just wondering whether there are any benefits to DXVA (or hardware decoding in general) when your CPU can handle the task on its own.
30th March 2013, 08:52 | #18147 | Link |
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,347
Hardware decoding is sometimes more power efficient, but all in all, if your CPU can handle it, then stick to CPU decoding; it's more flexible and less error-prone.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders |
30th March 2013, 09:13 | #18148 | Link |
AV heretic
Join Date: Nov 2009
Posts: 422
With DXVA I can get smoother playback. The funny thing is that it seems a bit tricky: for example, I watched a movie and had to enable forced subs, and after that I got the smoothest motion I've ever experienced on my TV. That never happened with SW decoding. I don't know if it's somehow related to subs, or pause/unpause, or something else... I wish I knew.
30th March 2013, 12:12 | #18149 | Link |
Registered User
Join Date: Jul 2010
Posts: 12
With any of the recent Intel 15.31.x driver releases (15.31.1.3006 to 15.31.2.3055), I have an issue with the info panels (volume, play, stop, etc.) being left onscreen in the black bar areas.
Disabling exclusive mode fixes the issue as does rolling back to any 15.28.x driver. I'm running a Clevo P150EM laptop with Intel HD 4000 and Nvidia 680M. If I can do anything to assist with troubleshooting let me know. |
30th March 2013, 23:09 | #18152 | Link |
Registered User
Join Date: Aug 2005
Posts: 231
I know, I was talking about Smooth Motion.
edit: Actually, it can be used in Custom mode, but replacing madVR in MC's folder with the newer version is required, and there are some problems with that. Last edited by Weirdo; 30th March 2013 at 23:20. |
31st March 2013, 04:15 | #18153 | Link |
Registered User
Join Date: Nov 2009
Posts: 327
I've been trying to figure out how to approach the color management settings in madVR. I've found a lot of contradictory advice in this thread and from Google and am at a loss as to how to calibrate my display and what settings to pick in madVR.
I have:
- 1x wide gamut LCD display (100% sRGB, ~98% AdobeRGB)
- 1x colorimeter w/ ambient light sensor

Based on what I've found, it seems like I need to use ArgyllCMS with TI3Parser to generate a 3DLUT. How should I set the following options?
- White point: 6500K?
- Tone curve: sRGB? BT.709? 2.2? 2.4?
- Ambient light correction: yes? no?
- (TI3Parser) Auto-calibration: yes? no?

Likewise, in madVR after I've obtained the 3DLUT:
- Disable GPU gamma: yes? no?
- Gamma processing: enabled? 2.4? 2.2? BT.709? Power?

Once I've set up my calibration, how can I evaluate it? Am I missing something, or am I going about this in completely the wrong way?
31st March 2013, 07:28 | #18154 | Link |
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
The reason for the contradictory advice is that there are multiple ways to go about this. Everybody has their own preferences, and certain displays may yield better results with certain methods. If you are picky, experiment. None of the available solutions for creating madVR 3DLUTs is anywhere near perfect at this point. YMMV.
White point: D65 (x 0.312713, y 0.329016).
- PC & TV: I explicitly set these values.

Tone curve: personal preference. First decide or experiment to figure out whether you want a power curve like 2.2, 2.35, or 2.4, or a "special" curve like BT.709 or sRGB.
- PC: I use BT.709.
- TV: I have a target I aim for, but ultimately it's whichever curve gives me D65 across the entire range by adjusting hardware controls alone, with a roughly known curve that works with the lighting conditions of that room.

Ambient light correction: yes if using BT.709, otherwise no.
- PC: I usually set this to 32 lux to roughly match the controlled D65 lighting I have in this room, which brings the BT.709 gamma to an average of 2.35 or 2.4.
- TV: I don't use ArgyllCMS at all. Calibration was already done as desired with hardware controls.

TI3Parser or yCMS: personal preference.
- PC & TV: I use yCMS, with white point and primaries only, as measured by ColorHCFR.

TI3Parser auto-calibration: personal preference, but in general it should only be considered if using a BT.709 or sRGB curve. If you have a custom ICC profile installed, this requires the GPU gamma ramp to be disabled for fullscreen exclusive mode, and using Overlay instead of Windowed.
- PC: I tried TI3Parser auto-calibration before, but always got worse results than having my GPU LUT handle gamma instead.

Gamma processing: personal preference.
- PC & TV: I've never used this, since I always calibrate my display to the gamma I desire.

How to verify: video test patterns loaded with madVR and measured via something like ColorHCFR in "DVD manual" mode.
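For anyone wondering what these 3DLUT files actually do once loaded: a 3DLUT is just a lattice of measured RGB correction values that the renderer interpolates between. Below is a minimal, hypothetical Python sketch of trilinear 3DLUT application; it is not madVR's actual code, and the identity LUT built here is purely for illustration (a real LUT from ArgyllCMS or yCMS would hold measured corrections instead).

```python
# Sketch: applying a 17x17x17 RGB 3DLUT with trilinear interpolation.
# lut[r][g][b] holds the (R, G, B) output triplet for that lattice point.

def make_identity_lut(size=17):
    """Build an identity lattice (output == input), for illustration only."""
    step = 1.0 / (size - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(size)]
             for g in range(size)]
            for r in range(size)]

def apply_3dlut(lut, rgb):
    """Trilinearly interpolate an RGB triplet (components in 0..1)."""
    size = len(lut)

    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    coords, fracs = [], []
    for c in rgb:
        pos = min(max(c, 0.0), 1.0) * (size - 1)
        i = min(int(pos), size - 2)   # lower lattice index of the cell
        coords.append(i)
        fracs.append(pos - i)         # fractional position within the cell
    r, g, b = coords
    fr, fg, fb = fracs
    # Interpolate along blue, then green, then red
    c00 = lerp(lut[r][g][b],         lut[r][g][b + 1],         fb)
    c01 = lerp(lut[r][g + 1][b],     lut[r][g + 1][b + 1],     fb)
    c10 = lerp(lut[r + 1][g][b],     lut[r + 1][g][b + 1],     fb)
    c11 = lerp(lut[r + 1][g + 1][b], lut[r + 1][g + 1][b + 1], fb)
    return lerp(lerp(c00, c01, fg), lerp(c10, c11, fg), fr)

lut = make_identity_lut()
out = apply_3dlut(lut, (0.25, 0.5, 0.75))
# An identity LUT should return the input (within float tolerance)
```

The same idea is why LUT resolution matters: between lattice points everything is linear, so a curve the display can't follow smoothly shows up as interpolation error.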
31st March 2013, 08:25 | #18155 | Link |
Registered User
Join Date: Jan 2009
Posts: 1,210
Why choose D65 over 6500K, or vice versa?

How do you determine the appropriate tone curve? Is it display dependent? Environment dependent? Or just personal preference without technical basis?

Is controlled lighting required to use ambient light correction with D65?

What is the best approach for TV calibration if there is already lots of flexibility in hardware controls?

If a 3DLUT is used, isn't it best to leave gamma ramps disabled in madVR regardless of whether an ICC profile is installed, since the calibration should be performed without any active ICC profiles in the first place?
31st March 2013, 09:46 | #18156 | Link |
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
Because D65 (~6504K | x0.312713 y0.329016) is expected for video, and when calibrating it's best to have a correct target.
6500K (x 0.312779, y 0.329183) is essentially a rough approximation of D65, but it isn't specifically used by anything. Quote:
Recommended viewing gamma is based on lighting conditions. For dim/dark viewing, a curve which averages anywhere from 2.35 to 2.6 is usually recommended on a technical basis. If you have a bright room, stick with a gamma around 2.2.

Using a 32 lux ambient-compensated BT.709 curve essentially lightens the darkest tones to ~1.9 gamma and darkens the bright tones to ~2.6 gamma, with mid-tones around ~2.4 gamma. I find this flatter, less punchy look, which favors shadow detail and reduces white crush, more natural and pleasing than a power curve. Occasionally, though, you may run across content optimized for viewing with a 2.2 power curve, where a BT.709 curve can look really bad in the dark tones.

If your display is able to render a particular curve smoothly and naturally, you'll usually get better results than by forcing it into a drastically different curve it doesn't handle well. At a certain point personal preference comes into play, and you may need to make a trade-off somewhere to have a subjectively pleasing viewing experience. Quote:
Quote:
Otherwise, if you have no idea what you're doing, consider paying for an ISF calibration and having someone else do it for you. Quote:
IMHO, if using a BT.709 curve you should just keep your ICC profile gamma ramp active, and not use yCMS grayscale measurements. Your GPU LUT does the best job of maintaining the accuracy of a custom gamma curve. The only times you should consider disabling the gamma ramp are:
A) if you calibrate to a power curve, are using yCMS grayscale measurement, and intend to use madVR's gamma adjustments; or
B) if you use nand's tools to merge the ArgyllCMS gamma ramp into a 3DLUT because your GPU gamma ramp was causing noticeable banding.

Last edited by cyberbeing; 31st March 2013 at 09:51.
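For anyone curious what the BT.709-versus-power-curve comparison above looks like numerically, here is a small illustrative Python sketch. The constants are the standard BT.709 OETF values (4.5, 0.018, 1.099, 0.099, 0.45); the "effective gamma" figure is just one way to compare curve shapes, and none of this is madVR's actual code.

```python
import math

# Compare decoding BT.709-encoded signal with the exact inverse of the
# BT.709 OETF against a plain 2.4 power curve, to show why the BT.709
# curve looks "flatter" and favors shadow detail.

def bt709_inverse_oetf(v):
    """Luminance for signal v in 0..1, using the inverse BT.709 OETF."""
    if v < 4.5 * 0.018:
        return v / 4.5
    return ((v + 0.099) / 1.099) ** (1 / 0.45)

def effective_gamma(v):
    """Exponent g such that L = v**g at this point of the curve."""
    return math.log(bt709_inverse_oetf(v)) / math.log(v)

for v in (0.1, 0.5, 0.9):
    print(f"signal {v}: effective gamma {effective_gamma(v):.2f}, "
          f"power-2.4 gives {v ** 2.4:.4f} vs BT.709 {bt709_inverse_oetf(v):.4f}")
```

Running it shows the effective gamma of a pure BT.709 decode rising from roughly 1.6 in the shadows toward 2.0 near white, i.e. lighter than a 2.4 power curve everywhere; the ambient-light compensation cyberbeing describes then shifts those numbers further toward the ~1.9/~2.4/~2.6 figures he quotes.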
31st March 2013, 10:29 | #18157 | Link |
Registered User
Join Date: Mar 2007
Posts: 934
My friend's TV has a gamma of ~1.4 and it isn't adjustable (there's no way of changing tint, and the colour/brightness/contrast controls don't alter it). Hard to get used to, haha.
__________________
TV Setup: LG OLED55B7V; Onkyo TX-NR515; ODroid N2+; CoreElec 9.2.7 |
31st March 2013, 19:45 | #18158 | Link |
Registered User
Join Date: Apr 2009
Posts: 1,019
Quote:
This image from Wikipedia has 6000K marked rather than 6500K, but you can see how 6000K could describe a point that is neutral or heavily tinted green/magenta: every chromaticity along an isotherm line shares the same CCT. The same goes for 6500K. This is why a CCT value alone is not useful as a calibration target. Quote:
It can be fixed with madVR and 3DLUTs, though (as long as it's not dynamic).
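Along the gray axis, "fixing" a display like that with a LUT amounts to a simple pre-correction: to land on a target gamma, the LUT encodes the inverse of the display's native response. A hypothetical Python sketch (real 3DLUTs also correct color, and a measured response is rarely a clean power curve, so this is only the idea, not a calibration recipe):

```python
# Sketch: correcting a display whose native response is L = v**1.4
# to a 2.2 target, the way a LUT would along the gray axis.

NATIVE, TARGET = 1.4, 2.2

def lut_entry(v):
    """Pre-corrected signal value stored in the LUT for input v in 0..1."""
    return v ** (TARGET / NATIVE)

def displayed(v):
    """Light output: LUT correction followed by the native display gamma."""
    return lut_entry(v) ** NATIVE

# displayed(v) now equals v**TARGET for any v in 0..1
```

The catch hinted at above is precision: pushing a display that far from its native curve stretches the shadow range, which is where banding from limited LUT/panel bit depth shows up first.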
31st March 2013, 20:15 | #18159 | Link |
QB the Slayer
Join Date: Feb 2011
Location: Toronto
Posts: 697
I noticed the EDID info for my TV states it has a gamma of 2.5. I was wondering: is that a 2.5 power curve or a BT.709-style curve with 2.5 gamma? It doesn't state which type it is. This is more of a curiosity than anything else, but since we seem to be talking about this now, I was hoping one of you guys could shed some light on it.
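For what it's worth, per the VESA EDID base block layout the gamma field is a single byte (offset 23, stored as gamma × 100 − 100), so it can only describe a plain power-curve exponent; there is no way for it to express a BT.709-style piecewise curve. A small decoding sketch (the EDID block here is fabricated for illustration, with everything but the gamma byte zeroed):

```python
# Decode the display gamma from a 128-byte EDID base block.
# Byte 23 (0x17) holds round(gamma * 100) - 100; 0xFF means the gamma
# is defined elsewhere (e.g. in an extension block).

def edid_gamma(edid):
    raw = edid[23]
    if raw == 0xFF:
        return None          # gamma not stored in the base block
    return (raw + 100) / 100.0

# Fabricated example: gamma byte 150 decodes to 2.5, matching the value
# reported above.
fake_edid = bytearray(128)
fake_edid[23] = 150
```

So a TV reporting "2.5" via EDID is claiming a 2.5 power exponent, though in practice the field is often just whatever the manufacturer burned in, not a measured value.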
QB
__________________
31st March 2013, 22:15 | #18160 | Link |
Registered User
Join Date: Jan 2008
Posts: 589
Quote:
There are a few things I don't quite understand in the document, however:

- You say "use 2.4 if you have high contrast, BT.1886 otherwise". Thing is, BT.1886 uses a gamma function with… 2.4 as the exponent. What's the difference? Is it because the BT.1886 function has "black compensation" (the "+ b" term)? As far as I'm aware, most calibration software (at the very least HCFR) displays black-compensated gamma, i.e. when calculating gamma, L' is used with L' = L - L(black). That would mean that, within such software, the "+ b" is already accounted for, and thus your target really is 2.4 as calculated by the software.

- Appendix 2 indicates that "This Recommendation does NOT change any signal parameters defined in Recommendation ITU-R BT.709". The way I understand it, this only makes sense assuming that BT.709 defines the optical → electrical transfer function, while BT.1886 defines the electrical → optical transfer function. However, having different transfer functions for "input" and "output" strikes me as odd, as it would mean that when capturing something with an ideal reference camera and then playing it back on an ideal reference monitor, the display would not be consistent with the reality of what was captured, because of the difference between the transfer functions used for capture and playback. What gives?

Last edited by e-t172; 31st March 2013 at 22:17.
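To make the "+ b" black compensation term concrete, here is a sketch of the BT.1886 reference EOTF as given in Annex 1 of the Recommendation (the 100 cd/m² white and 0.1 cd/m² black levels below are just example measurements, not mandated values):

```python
# ITU-R BT.1886 reference EOTF: L = a * max(v + b, 0) ** 2.4,
# where a and b are derived from the measured white (lw) and black (lb)
# luminances of the display.

GAMMA = 2.4

def bt1886(v, lw=100.0, lb=0.1):
    """Screen luminance in cd/m^2 for signal v in 0..1."""
    k = lw ** (1 / GAMMA) - lb ** (1 / GAMMA)
    a = k ** GAMMA                  # overall gain
    b = lb ** (1 / GAMMA) / k       # black-level lift: the "+ b" term
    return a * max(v + b, 0.0) ** GAMMA

# With a perfect black (lb = 0), b vanishes and the curve collapses to a
# pure 2.4 power curve scaled by lw -- which is exactly the question
# above: on a very-high-contrast display, "BT.1886" and "power 2.4" are
# effectively the same target.
```

Note that bt1886(0) returns lb and bt1886(1) returns lw by construction, so the "+ b" term is precisely what maps the bottom of the curve onto the display's real black instead of an unreachable zero.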