Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
7th April 2018, 15:02 | #50121 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,926
|
Quote:
And the real trick of TVs is that they can do both at the same time by displaying at 120 Hz. The transition has to be smooth, and that's important so the switch between movie and CM is not a judder party. So the real trick is not to switch between modes, it is to stay in one. And that's most likely the reason 60 Hz Sonys can't do it even though they can display 24p correctly.
Quote:
And broadcast is far from bit-identical, and a simple comparison is not enough.
Quote:
Quote:
Quote:
Quote:
TVs mostly display broadcast content, which is the opposite of high quality.
Quote:
Why would you write an algorithm for that? So yes, you can do it faster by blindly following a cadence pattern, dropping/repeating frames as you please and ignoring transitions. Very limited use case at best.
Quote:
The calculations for zones, overdrive and so much other stuff need to be made, and can be totally screwed up if you change the frame that needs to be displayed.
Quote:
|
7th April 2018, 15:04 | #50122 | Link |
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,348
|
You can, however, safely assume that the cadence is not going to change all the time, so you can just analyze the cadence and act on it once you know it - without any buffering. Sure, it may result in the cadence processing taking a second or so to turn on, but that's not a problem in real-world usage - because it's just not changing repeatedly.
No extra latency required. Buffering requires extra hardware, which costs money, so it's easy to imagine that it's best avoided.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders |
7th April 2018, 15:07 | #50123 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,926
|
Quote:
You assume the pattern doesn't change, ever. And this adds at least 8 ms delay - that is the pure minimum. Because the 3:2 part has a frame that needs to be shown for ~48 ms, and within those 48 ms you don't even have the frame you need to display after ~42 ms - you have to wait for the frame with the ~33 ms length. |
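For what it's worth, the timing argument checks out with exact 60 Hz numbers (a quick sketch of my own; the 48/42 ms figures above appear to be rounded):

```python
# Sketch of the timing argument, using exact 60 Hz refresh periods.
REFRESH_MS = 1000 / 60          # ~16.67 ms per 60 Hz refresh
FILM_FRAME_MS = 1000 / 24       # ~41.67 ms per 24p film frame

# In a 3:2 cadence, film frames alternate between 3 and 2 refreshes:
durations = [3 * REFRESH_MS, 2 * REFRESH_MS]   # ~50 ms and ~33.33 ms

# A TV re-displaying at native 24p wants a new frame every ~41.67 ms,
# but during the "3" phase the next distinct frame only arrives over
# the 60 Hz input after ~50 ms - so it must wait at least:
extra_wait_ms = durations[0] - FILM_FRAME_MS   # ~8.33 ms
print(round(extra_wait_ms, 2))  # 8.33
```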
|
7th April 2018, 15:08 | #50124 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,926
|
Quote:
And you have the buffer for frame interpolation anyway - the hardware is present. |
|
7th April 2018, 15:55 | #50125 | Link |
Registered User
Join Date: Jan 2008
Posts: 589
|
Sure, that means that it might take some time for the TV to switch into the proper cadence. Let's say that it takes 6 patterns for it to detect a cadence, i.e. 30 frames. That means it will take around 0.5 sec for the TV to detect a cadence change. That's quite acceptable for most use cases IMHO. People don't spend their time switching between 24p and other types of content constantly, and if they do, they probably don't care about it being silky smooth around transitions.
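The "6 patterns ≈ 30 frames" lock-in heuristic could look something like this - a toy sketch of my own, not anything a real TV is known to run, assuming a per-frame `dup_flags` signal (True when a 60 Hz input frame repeats the previous one) coming from some difference metric:

```python
from collections import deque

def detect_cadence(dup_flags, pattern=(False, True, True, False, True),
                   repeats_needed=6):
    """Hypothetical 3:2 cadence detector.

    With 3:2 pulldown of 24p over 60 Hz, the frame-repeat flags cycle
    through a fixed 5-frame pattern: new, dup, dup, new, dup. Once the
    pattern has held for `repeats_needed` cycles (6 * 5 = 30 frames,
    i.e. ~0.5 s), we consider the cadence locked. Returns the index of
    the frame at which lock happens, or None if it never does.
    """
    window = deque(maxlen=len(pattern) * repeats_needed)
    for i, flag in enumerate(dup_flags):
        window.append(flag)
        if len(window) == window.maxlen:
            # Locked if the whole window matches the pattern at some phase.
            for phase in range(len(pattern)):
                if all(window[j] == pattern[(j + phase) % len(pattern)]
                       for j in range(len(window))):
                    return i
    return None
```

With a clean 3:2 stream, lock happens on frame 29 - matching the ~0.5 s figure above; content that never settles into the cadence simply never locks.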
Quote:
Nowadays I would expect plenty of non-broadcast sources to output 24p over 60 Hz simply because they don't know better or can't be bothered to do better. PCs of course, but also dongles, phones/tablets, streaming apps, etc. My guess is that's why modern TVs pay extra attention to it and try to convert it to proper 24p before display. I don't think they're doing it solely because of broadcast sources. In fact, if you look at the Rtings results you'll notice that most pre-2017 TVs can't do this trick - it's only very recent TVs that are capable of doing this on-the-fly decimation thing. Which is why I find this phenomenon especially interesting and worth discussing now.
Quote:
Quote:
Not really, no. The worst I've seen is a TV broadcast constantly switching between soft telecine and hard telecine (which is bonkers), but even that can be handled without buffering by making the algorithm a bit more clever. And even then I've only seen this kind of craziness with 30i (because interlacing makes everything more fun!). 60p is usually broadcast as a perfectly clean 3:2 pattern. And again, this is not about broadcast.

Because of engineering tradeoffs? Because algorithms that use larger buffers require more memory and are often more complicated? Because TV manufacturers are aware that people use their devices for things other than video playback?

I think that assumption is becoming weaker and weaker. This is 2018 - it's all about Netflix and friends now.
Quote:
I wouldn't. I would just compute some metric for the difference.
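Such a metric can be very simple - a minimal sketch, assuming 8-bit luma planes as NumPy arrays (the threshold value is made up and would need tuning against real compression noise):

```python
import numpy as np

def frame_difference(prev, curr):
    """Mean absolute difference between two frames (e.g. luma planes).

    A tiny value means `curr` is almost certainly a repeated frame.
    No bit-exactness is required, which matters for lossy broadcast.
    """
    # Widen to int16 so the subtraction of uint8 values can't wrap around.
    diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16))
    return float(diff.mean())

def is_duplicate(prev, curr, threshold=1.0):
    # Placeholder threshold; a real device would tune it (and likely
    # normalize by overall motion) to survive compression artifacts.
    return frame_difference(prev, curr) < threshold
```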
Quote:
Quote:
No, we're not assuming that. We're assuming that the user is okay with the cadence being slightly wrong for a short period of time (<1 second) right after a cadence change occurs.
Quote:
I think you're exaggerating things. Content type might change every few minutes. It certainly won't change every few seconds.
Quote:
huhn: I really think this discussion is going nowhere and I'm not sure we'll be able to convince each other unless we start reverse engineering TV processing pipeline internals (which would be, well, hard). I don't even understand why we're having this discussion, because Rtings clearly demonstrated that at least a dozen 2017 TVs from a variety of manufacturers are, in fact, capable of decimating 24p@60Hz on the fly and these TVs don't have more input lag than other models (you can add "Input Lag" columns to the table if you're not convinced). It's not a question of "if they can" - there is a mountain of evidence that they, in fact, can. My original question was "how can we exploit this to simplify or improve our playback systems", which I think is the more relevant question here. Can we get back to that please? Last edited by e-t172; 7th April 2018 at 16:03. |
7th April 2018, 16:25 | #50126 | Link |
Registered User
Join Date: Oct 2012
Posts: 7,926
|
Quote:
Outside of gaming mode they have something like 40-160 ms input lag - exactly my point. And to top it off, they even show you that you have to use settings from interpolation (I'm not saying you are getting a soap opera on your screen). I'll say it out loud now: this feature comes from frame interpolation, because it is needed for it. They got it for free. The reasons this is hard to use for an HTPC user are - here I repeat myself - high input lag and no PC mode. And here are some new ones: it will die as soon as you move your mouse, there is usually loss of chroma resolution, and lots of other processing quirks.
Quote:
|
7th April 2018, 17:05 | #50127 | Link |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
|
I know my 2017 LG OLED (touted as low input lag) only has low input lag in PC or Game mode, and in that mode it switches off most processing. It even does worse tone mapping in PC HDR modes, though I think this might be due to not wanting to over-saturate sRGB games rather than not having enough time.
Measuring input lag in game or PC mode but then using other modes to test things like 24p in 60Hz can give misleading impressions. In its other modes my TV's input lag is very high, the mouse cursor feels connected by long rubber bands. I haven't measured it but it is too high to use it as a monitor.
__________________
madVR options explained |
7th April 2018, 17:34 | #50128 | Link |
Registered User
Join Date: Jan 2008
Posts: 589
|
Quote:
My point was not really that TVs can do this with no input lag, just that the non-gaming-mode input lag is similar between TVs that have this "24p@60Hz" feature and those that do not. I should have been clearer about this.
Quote:
"You will only get true 24p from 60 Hz with these TVs if you use them outside low-input-lag mode. Therefore you cannot use that approach and have low input lag at the same time." I agree with that statement. I will add, however, that this only matters if you care about input lag. (That said, I have not seen actual evidence that TVs are unable to recover the 24p stream when running in low-input-lag mode. I agree that it makes sense though, so I'm happy to accept this assumption.) Can we move past this now?
Quote:
Quote:
Quote:
I think moving a very small blob of pixels like the mouse is unlikely to confuse the decimation process. Also, the issue will disappear as soon as the mouse stops moving. But I get your point - any use of the UI can cause this problem. On the other hand, you wouldn't want to mess with the UI when playing a movie anyway - at native 24 Hz that would be painful too. The only scenarios where the UI is perfectly usable during playback are when running at 48/72/120/etc. Hz or when using Smooth Motion.
Quote:
I think we're going somewhere |
7th April 2018, 17:42 | #50129 | Link |
Registered User
Join Date: Mar 2016
Posts: 27
|
madVR - high quality video renderer (GPU assisted)
Do you guys think it is worth investing in a CPU that can (software) decode HEVC 4K in order to leave some room for madVR GPU processing? I tested software decoding with my Threadripper and it handled it very easily, so I think a Coffee Lake i5 or i7 could do software decoding without costing a huge pile of money. Probably better to get a faster GPU, but I already have a GTX 1080, so not much room in that regard (until the GTX 11xx, maybe). Also, I'm a big fan of software decoding as I feel it provides a smoother experience.
|
7th April 2018, 18:14 | #50130 | Link |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Quote:
Quote:
The reason for all this is that without FSE everything runs through DWM (desktop window manager), and DWM is limited to 8bit. However, if you turn the OS HDR switch on, suddenly DWM runs in 10bit. Whether or not you switch the GPU control panel to 12bit doesn't make any difference here. The limitation to 8bit is in DWM, switching GPU control panel options doesn't help with that. That said, dithered 8bit should work very well, even for HDR.
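To illustrate why dithered 8-bit holds up, here is a toy random-dither sketch (my own illustration - madVR's actual dithering uses more sophisticated methods such as error diffusion): truncating 10-bit to 8-bit creates banding, while adding sub-LSB noise before rounding preserves the average level at the cost of fine grain.

```python
import numpy as np

rng = np.random.default_rng(0)

def dither_10_to_8(plane10):
    """Reduce a 10-bit plane to 8 bits with simple random dithering.

    Plain truncation (>> 2) quantizes 4 input codes to 1 output code,
    which shows up as banding on shallow gradients. Adding uniform
    [0, 1) noise before flooring makes each pixel land on the two
    neighboring 8-bit codes with the right probabilities, so the
    *average* level is preserved.
    """
    noise = rng.random(plane10.shape)
    out = np.floor(plane10.astype(np.float64) / 4.0 + noise)
    return np.clip(out, 0, 255).astype(np.uint8)
```

For example, a flat 10-bit level of 514 (= 128.5 in 8-bit terms) truncates to a constant 128, but dithers to a roughly 50/50 mix of 128 and 129 whose mean stays ~128.5.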
IMHO the OS HDR switch is pure evil, and should be avoided at all cost, until Microsoft gets off their high horse and finally learns how to do things right. But that's just my personal opinion, of course. Quote:
I changed some "let madVR decide" settings for better performance, but the algos themselves shouldn't have changed their speed. If you can use NGU Medium now instead of Low, that's probably an improvement in the GPU drivers, I would guess...
Quote:
Quote:
Quote:
Quote:
Anyway...
Quote:
Quote:
Decoding on the GPU is usually done on a dedicated hardware circuit, so it shouldn't slow down pixel shader processing at all. The only problem with hardware decoding atm is that DXVA native decoding has all sorts of technical limitations, and D3D11 native decoding is still not in great shape in madVR. But hopefully D3D11 native decoding will improve in a future madVR build. |
7th April 2018, 18:47 | #50131 | Link |
Registered User
Join Date: Dec 2014
Posts: 1,127
|
Quote:
I was talking to one user who has HDR passthrough set like this with an AMD card: madVR (10-bit, HDR passthrough) -> GPU (10-bit) -> projector. His projector reports it is receiving 10-bit, RGB, HDR. If he changes the GPU to 12-bit, it reports 12-bit, RGB, HDR. So what is going wrong with this signal chain? He doesn't appear to be having any issues with banding, either. Also, is it now possible to output 8-bit HDR passthrough with AMD cards, or are they still forced to use a complete 10-bit pipeline to get the HDR signal to the display? I find I am always providing advice on two other forums, so it would be good to get this clear. Problems with FSE make windowed mode necessary for some users.
__________________
HOW TO - Set up madVR for Kodi DSPlayer & External Media Players |
|
7th April 2018, 19:09 | #50132 | Link |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Quote:
So yes, 10bit output is supposed to be working with Windows 10 in windowed mode, regardless of whether the OS HDR switch is on or off. However, it only works if madVR is in fullscreen mode, because then the OS switches the GPU driver into "direct scanout" mode, which bypasses DWM. Thanks to that, the DWM 8bit limitation is no longer an issue. This "direct scanout" mode is currently only supported by Nvidia and AMD GPU drivers, though, but not by Intel GPUs, AFAIK. |
|
7th April 2018, 20:35 | #50133 | Link |
Registered User
Join Date: Sep 2016
Posts: 89
|
It seems there are some bugs in madVR 0.92.11 and 0.92.12.
Ranpha downgraded madVR in his latest LAV Filters Megamix setup. https://www.videohelp.com/software/L...comments#13925 |
7th April 2018, 20:53 | #50134 | Link | |
Soul Seeker
Join Date: Sep 2013
Posts: 716
|
Quote:
|
|
7th April 2018, 21:12 | #50136 | Link |
Registered User
Join Date: Dec 2014
Posts: 1,127
|
Quote:
The only AMD user I've spoken to says 10-bit HDR passthrough seems to be fine without any banding, but I think he needs to run some tests to be certain.
__________________
HOW TO - Set up madVR for Kodi DSPlayer & External Media Players Last edited by Warner306; 7th April 2018 at 22:03. |
|
7th April 2018, 21:57 | #50137 | Link |
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
|
Quote:
I do see the rationale of being more willing to use direct 4x compared to doubling twice. However, the performance difference is still significant enough that "if any upscaling required" is not a great default. On my GPU (a Titan XP) the time difference between direct 4x and 2x is similar to that between doubling twice and direct 4x. It is easy to tune a profile that works for doubling 1080p and quadrupling 720p, but then doesn't work for quadrupling slightly cropped 1080p. SSIM 1D100 downscaling exacerbates this issue. Maybe set "double again" to 3.0x for "let madVR decide" to maintain the delta? Not that consistency is necessarily a bad thing.
Quote:
However, hardware decoding adds almost nothing to the GPU's workload, and the power/heat difference is significant. Also, both startup and seeking are more responsive when using hardware decoding, even without "delay playback start until render queue is full". On modern GPUs, like Nvidia's 10 series, 10-bit HEVC has pure hardware decoding - dedicated silicon for everything required - so the shaders madVR uses are idle. The only resources shared are the PCI bus and some GPU memory bandwidth, both of which are usually not the bottleneck for madVR. Checking just now, it seems my rendering times are identical between the two.
Quote:
I should also mention that the OSD reports it is using my SDR BT.2020 3DLUT when using "passthrough HDR content to the display" but it is not actually doing so (changing the 3DLUT does not change the output).
__________________
madVR options explained Last edited by Asmodian; 7th April 2018 at 22:19. |
8th April 2018, 09:56 | #50138 | Link |
Registered User
Join Date: Jul 2016
Posts: 52
|
I noticed it yesterday too, watching the film ALLIED: in the opening desert scene, looking at the blue sky, there is banding even when outputting at 12-bit. I use an AMD RX480. Something else: whenever subtitles appear, the image slows down and the two rendering times rise in value. Has this ever happened to anyone?
|
8th April 2018, 10:03 | #50139 | Link |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Quote:
Quote:
So as far as I can tell, it works perfectly on my PC. This is with 390.65 drivers. Maybe there's a bug in newer drivers? I don't know. P.S.: I've only tested this with my SDR test pattern. Maybe things are different in HDR passthrough mode? I don't really know how to test it there, though, because banding problems are easy to see with my test pattern, but harder to see with true HDR content.
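For anyone wanting to reproduce this kind of check, a shallow gray ramp is the classic way to make banding pop. Here is a hypothetical stand-in (my own sketch, not the actual test pattern referenced above): a 16-level ramp across the full width, where each quantization step becomes a visible vertical band if dithering is broken anywhere in the chain.

```python
import numpy as np

def gradient_pattern(width=1920, height=1080, lo=64, hi=80):
    """Shallow 8-bit grayscale ramp for spotting banding by eye.

    Spanning only 16 gray levels over the full width means each level
    occupies a wide (~120 px) band; any hard vertical step after the
    render path indicates lost precision / missing dithering.
    """
    ramp = np.round(np.linspace(lo, hi, width)).astype(np.uint8)
    return np.tile(ramp, (height, 1))
```

Save the array as a grayscale image, play it through the renderer, and compare against the same ramp displayed directly.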
Quote:
Could be anything, could be hard coded into the movie. Try setting madVR to 8bit, does the banding go away? |
8th April 2018, 10:20 | #50140 | Link |
Registered User
Join Date: Jul 2016
Posts: 52
|
Actually, I've only tried 12-bit; today I will try 10-bit and 8-bit. Madshi, do you have a solution to my subtitle problem? In practice, when forced subtitles appear in a scene, or when I use subtitles normally, the image slows down and the two rendering times increase from 30 ms up to 110 ms; when the subtitles disappear, everything returns to normal. My player is JRiver.
Last edited by stefanelli73; 8th April 2018 at 10:22. |
Tags |
direct compute, dithering, error diffusion, madvr, ngu, nnedi3, quality, renderer, scaling, uhd upscaling, upsampling |
|
|