Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
21st February 2018, 14:37 | #49141 | Link | |
Registered User
Join Date: Jan 2008
Posts: 589
Quote:
As for "3DLUTs are not just slow for fun", well, it might be that programs like ArgyllCMS's collink command are slow because there is no perceived need to make them fast: people use them offline and very infrequently, so no one cares a great deal about how long they take. I suspect it would be possible to do it much faster with little loss in quality if one were to specifically optimize for that. (Does anyone know how long LittleCMS takes to generate a transform?) I don't think web browsers use ICC profiles for video, but they do support them (to some extent) for still images. (Although they support them in a rather limited way, because they don't use the monitor ICC profile - they always use sRGB as the destination. But that's neither here nor there.) Last edited by e-t172; 21st February 2018 at 14:47.
21st February 2018, 14:41 | #49142 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,903
First of all, madVR itself doesn't send any image out of the graphics card at all. It's a renderer: it just sends an image to the GPU driver, which has to deal with the rest.
So whether the image sent out of the GPU is 8-bit or 10-bit is purely a GPU driver thing. The 10-bit setting has to be set up for each connected device separately and is 8-bit by default, so that's most likely everything that's happening here. And the Oppo needs 10-bit input support in the first place; the specs of this thing are not really clear...
21st February 2018, 14:49 | #49143 | Link | |
Registered User
Join Date: Mar 2007
Location: London, UK
Posts: 576
Quote:
The Oppo definitely supports HDR10 on its HDMI input. It's used quite regularly for this and, indeed, I tested it myself using 4:2:2, when the Oppo reports the PC outputting 4K/60 4:2:2 12-bit, as expected. In RGB mode the Oppo reports 4K/60 RGB 8-bit, which again is good and as expected, given the limitations of HDMI 2.0. [S]What I don't understand is why, if MadVR is as removed from the display as suggested, MadVR reports in its HUD outputting 10-bit when connected direct (which seems to lead to the driver outputting an illegal video mode), yet MadVR reports outputting 8-bit via the Oppo and all remains "legal". It seems MadVR is more aware of the display chain than thought.[/S] Edit: wait a minute, I get what you are saying. Probably MadVR realises it has a different "display" connected when the Oppo is in the chain, and has defaulted to 8-bit. Makes sense. Last edited by Jong; 21st February 2018 at 14:55.
21st February 2018, 14:56 | #49144 | Link | ||
Registered User
Join Date: Oct 2012
Posts: 7,903
Quote:
Does PS have to color correct an image 60 times a second, or even more? Is the calculation PS uses done in a way that it can be applied to a totally different image at a reasonable speed? Quote:
A 3D LUT takes so long because that's the trade-off: you get fast, high-quality color correction at playback time by spending a huge amount of space and a huge amount of processing power on the creation of the LUT itself.
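The trade-off described above can be sketched in a few lines of Python (a toy illustration, not madVR's code; the transform, grid size, and nearest-point lookup are all invented for the example): the expensive color transform is evaluated once per grid point when the LUT is built, and the per-pixel work at playback time shrinks to a table lookup.

```python
# Hypothetical sketch: expensive one-off LUT creation vs. cheap per-pixel lookup.

def expensive_transform(r, g, b):
    """Stand-in for a costly gamut-mapping calculation (a pure 2.2 gamma here)."""
    return tuple(round(255 * (c / 255) ** 2.2) for c in (r, g, b))

def build_lut(size=17):
    """Evaluate the transform at size^3 grid points -- the slow, one-off step."""
    step = 255 / (size - 1)
    return {
        (ri, gi, bi): expensive_transform(ri * step, gi * step, bi * step)
        for ri in range(size) for gi in range(size) for bi in range(size)
    }

def apply_lut(lut, size, r, g, b):
    """Per-pixel work at playback time: snap to the nearest grid point.
    (A real renderer interpolates between the 8 surrounding points instead.)"""
    step = 255 / (size - 1)
    key = (round(r / step), round(g / step), round(b / step))
    return lut[key]
```

Building a 17³ grid costs 4,913 transform evaluations up front; after that, every pixel of every frame is a dictionary lookup regardless of how expensive the transform was.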
21st February 2018, 15:23 | #49145 | Link | ||
Registered User
Join Date: Jan 2008
Posts: 589
Quote:
(The reason it's so simple is that the color of a destination pixel is determined only by the color of the source pixel - there is no other input, which is the very reason you can use a LUT in the first place.) Quote:
Last edited by e-t172; 21st February 2018 at 15:38. |
21st February 2018, 15:28 | #49146 | Link |
Registered User
Join Date: May 2012
Posts: 447
I don't know about Photoshop, but like I said, Firefox does do this - it calculates a 3DLUT on the fly in a matter of milliseconds, then caches it (the system isn't perfect, incidentally - various APIs interact in ways that aren't ideal, so I'm not sure it can reuse the generated 3DLUT for all images with the same profile, but that's not a fundamental problem). Now I imagine Firefox cuts corners to do this, and they replaced LCMS with the in-house qcms because LCMS wasn't fast enough - but if madVR had to spend, say, half a second to generate a 3DLUT, then cached it for every video with a matching color space, I think that would be fine.
__________________
Test patterns: Grayscale yuv444p16le perceptually spaced gradient v2.1 (8-bit version), Multicolor yuv444p16le perceptually spaced gradient v2.1 (8-bit version) |
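As an aside, the caching idea discussed above is straightforward to sketch (hypothetical names and a made-up "transform"; this is not Firefox's or madVR's actual code): generate the 3DLUT once per source profile, then reuse it for everything tagged with the same profile.

```python
# Minimal sketch of 3DLUT caching keyed by source profile.
from functools import lru_cache

@lru_cache(maxsize=16)
def lut_for_profile(profile_id: str, grid_size: int = 17):
    """Pretend-expensive 3DLUT generation, keyed by (profile, grid size).
    Repeat calls with the same key return the cached table instantly."""
    # Invented per-profile behaviour, just so different keys give different LUTs.
    gamma = {"sRGB": 2.2, "gamma1.8": 1.8}.get(profile_id, 2.2)
    step = 1.0 / (grid_size - 1)
    return tuple(
        ((i * step) ** gamma, (j * step) ** gamma, (k * step) ** gamma)
        for i in range(grid_size)
        for j in range(grid_size)
        for k in range(grid_size)
    )
```

The second call with the same profile is a cache hit, so only the first video (or image) with a given profile pays the generation cost.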
21st February 2018, 15:43 | #49147 | Link | |
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Photoshop, for example, isn't going to care if displaying a single image takes 100ms of color processing; it's not in any area a user is going to notice. Video playback does care, hence entirely different requirements.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders |
21st February 2018, 15:47 | #49148 | Link | ||
Registered User
Join Date: Oct 2012
Posts: 7,903
Quote:
Quote:
21st February 2018, 16:04 | #49149 | Link | |||
Registered User
Join Date: Jan 2008
Posts: 589
Quote:
You don't even need to use a fancy upscaling algorithm for that - no one is going to notice the difference. (Keep in mind that the color response of any reasonable monitor is at least somewhat linear, so basic interpolation is highly likely to land very close to the correct point. In fact, you really don't want to get too fancy, because a response that's not smooth will result in banding artefacts - which is why the ArgyllCMS docs warn you against generating a transform that tries to be too precise.) Quote:
Quote:
In any case, yes, I'm saying that 3DLUT interpolation can, and should, be done in real time. That's completely trivial and can be done extremely quickly. (Just like video upscaling, except you can get away with very basic interpolation.) Interpolating a large 3DLUT from a small one is neither hard nor expensive. It's generating the initial transform that's the hardest part; everything else after that is peanuts. Last edited by e-t172; 21st February 2018 at 16:16.
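The "very basic interpolation" referred to here is typically trilinear: each looked-up color is a weighted blend of the 8 surrounding grid points. A rough, self-contained sketch (grid layout and sizes are illustrative, not collink's or madVR's actual formats):

```python
# Trilinear lookup in a small 3DLUT, and upsampling it to a larger grid.

def trilinear(lut, size, r, g, b):
    """Look up (r, g, b) in [0, 1]^3 by blending the 8 surrounding grid points."""
    def locate(x):
        pos = min(x, 1.0) * (size - 1)
        i0 = min(int(pos), size - 2)
        return i0, pos - i0                       # lower index, fractional part
    (ri, rf), (gi, gf), (bi, bf) = locate(r), locate(g), locate(b)
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))      # weight of this corner
                corner = lut[ri + dr][gi + dg][bi + db]
                for c in range(3):
                    out[c] += w * corner[c]
    return tuple(out)

def upsample(lut, size, new_size):
    """Build a new_size^3 LUT by sampling the small one -- cheap compared to
    recomputing the color transform at every grid point."""
    s = 1.0 / (new_size - 1)
    return [[[trilinear(lut, size, i * s, j * s, k * s)
              for k in range(new_size)]
             for j in range(new_size)]
            for i in range(new_size)]
```

As the post says, this is the easy half of the problem: the weights are a handful of multiplies per pixel, which is why GPUs do this kind of lookup in hardware.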
21st February 2018, 16:16 | #49150 | Link |
Registered User
Join Date: May 2012
Posts: 447
I think what huhn is referring to here is the processing in collink - collink generates a 3DLUT of a lower resolution (determined by the quality setting) then interpolates it to 256x256x256 to produce a file compatible with madVR. But you can override the resolution using -r256 to make it produce a 256³ 3DLUT directly without interpolation (which obviously takes a while).
__________________
Test patterns: Grayscale yuv444p16le perceptually spaced gradient v2.1 (8-bit version), Multicolor yuv444p16le perceptually spaced gradient v2.1 (8-bit version) |
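For a sense of scale, the cost gap between the default small grid and a direct -r256 run is easy to compute (byte sizes below assume 3 channels at 16 bits per entry, which may not match collink's internal formats): 256³ has roughly 467 times as many grid points as 33³, and each point costs one full transform evaluation.

```python
# Back-of-the-envelope grid sizes for 3DLUTs of various resolutions.

def lut_stats(size, bytes_per_entry=3 * 2):
    """Return (grid points, bytes) for a size^3 LUT at 3 x 16-bit per entry."""
    points = size ** 3
    return points, points * bytes_per_entry

for size in (17, 33, 65, 256):
    points, nbytes = lut_stats(size)
    print(f"{size:>3}^3 grid: {points:>10,} points, {nbytes / 2**20:6.2f} MiB")
```

A 256³ table works out to 16,777,216 points (96 MiB at 6 bytes each), versus 35,937 points for 33³, which is why generating the full-resolution table directly "obviously takes a while".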
21st February 2018, 16:24 | #49151 | Link |
Registered User
Join Date: Jan 2008
Posts: 589
Ah, okay. But the ArgyllCMS docs explicitly state that you should not do that. I can't find where the exact rationale is explained but, IIRC, you don't want to generate too many points because there is a point of diminishing returns where you're basically just optimizing for measurement error and are more likely to create banding and other aberrant behavior than to actually improve color accuracy. This leads to the counter-intuitive result that interpolating provides better results than trying to achieve maximum precision, because the resulting transform is smoother.
21st February 2018, 16:42 | #49152 | Link |
Registered User
Join Date: May 2012
Posts: 447
|
Sure, I mean regardless of the fact that it's possible not to use interpolation, I don't think it really matters since we aren't pushing for madVR to do that. I suppose there is some value in noting that collink is slow despite not producing a 256³ 3DLUT directly, but I think that's just because it does everything to a very high standard. Whatever the reason, there are counterexamples showing that the process doesn't have to be that slow (and there's no reason not to cache the result).
__________________
Test patterns: Grayscale yuv444p16le perceptually spaced gradient v2.1 (8-bit version), Multicolor yuv444p16le perceptually spaced gradient v2.1 (8-bit version) |
21st February 2018, 16:48 | #49153 | Link | |
Registered User
Join Date: Jan 2016
Posts: 24
Quote:
I am mainly watching 1080p movies. Should I drop chroma upscaling to NGU AA medium, and change downscaling to SSIM 2D? Also, under "if any more upscaling/downscaling is needed", I have it set to "let madVR decide". Should I change these to something else? Also, under image upscaling, should I leave the quadrupling settings at "let madVR decide", as well as chroma set to normal? Lastly, what do you mean by "if I want to stock AR on top"? What setting is that for, and what should it be changed to? Thanks
21st February 2018, 17:33 | #49154 | Link | ||||||
Registered User
Join Date: Nov 2010
Location: Stuttgart, Germany
Posts: 17
Quote:
But there's a difference between just creating a transform (linking two existing ICC profiles, for example) and inverting the "natural" profile (which for display devices means inverting the device RGB -> CIE values mapping), doing complex gamut mapping and appearance modeling (what Argyll's collink does when used with the -G inverse forward lookup gamut mapping option). The latter is what takes up most of the time; transform creation (linking) alone is relatively trivial. You could also put all that complexity into the input and/or output profile creation instead of into the link creation (a less self-contained approach, though). Quote:
That functionality has been around for well over half a decade now (not sure if it's a feature of EVR or MPC-HC). Quote:
Quote:
Quote:
Quote:
__________________
DisplayCAL - Graphical front-end for Argyll CMS display calibration and characterization Last edited by fhoech; 21st February 2018 at 18:26. Reason: Minor typo |
21st February 2018, 17:45 | #49155 | Link |
Registered User
Join Date: May 2012
Posts: 447
Hmm, I don't recall. I do know they designed qcms to be fast... but they wrote the whole thing in C and gave it an upstream repository, which has resulted in it becoming basically unmaintained. The library works, but I wouldn't call it their finest moment. I wonder if anyone has considered rewriting it in Rust...
__________________
Test patterns: Grayscale yuv444p16le perceptually spaced gradient v2.1 (8-bit version), Multicolor yuv444p16le perceptually spaced gradient v2.1 (8-bit version) |
21st February 2018, 17:56 | #49156 | Link |
Registered User
Join Date: Jan 2008
Posts: 589
|
In the case of madVR you could be even smarter and, on top of caching, also compute the transform asynchronously. The video starts playing with slightly wrong colors for maybe a few seconds while the transform is being computed, and as soon as it's done, the transform is swapped in and color correction is active for the rest of the playback session.
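The asynchronous approach suggested here can be sketched as follows (a hypothetical structure, not madVR's code; the sleep stands in for slow 3DLUT generation and the "correction" is a dummy): playback begins immediately with a pass-through transform, and a background thread swaps in the real one once it is ready.

```python
# Sketch: start rendering with an identity transform, swap in the computed
# color correction asynchronously once generation finishes.
import threading
import time

class ColorPipeline:
    def __init__(self):
        self._transform = lambda pixel: pixel            # identity until ready
        self._lock = threading.Lock()
        threading.Thread(target=self._build_lut, daemon=True).start()

    def _build_lut(self):
        time.sleep(0.05)                                 # stand-in for slow 3DLUT generation
        corrected = lambda p: tuple(min(255, c + 1) for c in p)  # dummy correction
        with self._lock:
            self._transform = corrected                  # swap-in under the lock

    def render(self, pixel):
        """Called per frame: uses whichever transform is currently installed."""
        with self._lock:
            return self._transform(pixel)
```

Frames rendered during the first few milliseconds pass through uncorrected; everything after the swap uses the computed transform, exactly the "slightly wrong colors for a few seconds" behavior described above.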
21st February 2018, 18:04 | #49157 | Link | |
Registered User
Join Date: Nov 2010
Location: Stuttgart, Germany
Posts: 17
Quote:
madVR can load eeColor 3D LUT files, which are 65^3 by design.
__________________
DisplayCAL - Graphical front-end for Argyll CMS display calibration and characterization |
21st February 2018, 18:14 | #49158 | Link | |
Registered User
Join Date: Nov 2010
Location: Stuttgart, Germany
Posts: 17
|
I have a feeling they achieved speed mostly by leaving out all the parts of a CMM that deal with complex transforms (i.e., cLUT profiles). Naturally, they could be fast when they didn't even do color management.
Quote:
__________________
DisplayCAL - Graphical front-end for Argyll CMS display calibration and characterization Last edited by fhoech; 21st February 2018 at 18:21. |
21st February 2018, 18:32 | #49159 | Link |
Registered User
Join Date: Jan 2008
Posts: 589
|
I meant that in case someone wants to go through the full gamut mapping process (i.e. the equivalent of collink -G) on the fly. But as you said, it is debatable whether that's really that useful in the first place.
21st February 2018, 18:33 | #49160 | Link |
Kid for Today
Join Date: Aug 2004
Posts: 3,477
|
If all you need is gamut mapping, this script works like a charm in mVR: http://www.avsforum.com/forum/26-hom...lly-works.html
Feel free to use a Windows LUT on top if need be. IME, on test patterns it outputs identical results to 3DLUTs with a perfectly calibrated TV using CMUNDIS + Color.HCFR. You can set up automatic PotPlayer profiles using different gamut mappings based on resolution and/or framerate; works like a champ, and you can roll them with a mouse click too.