Go Back   Doom9's Forum > Hardware & Software > Software players

Old 31st October 2018, 22:24   #53501  |  Link
SirSwede
Registered User
 
Join Date: Nov 2017
Posts: 67
Quote:
Originally Posted by Warner306 View Post
Try these:

4K UHD:
  • Chroma: NGU Anti-Alias (medium)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Jinc + AR
  • Image doubling: Off
  • Upscaling refinement: Off
  • Artifact removal - Debanding: Off
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

1080p:
  • Chroma: NGU Anti-Alias (low)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Sharp
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Sharp (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + LL + AR)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: medium/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

DVD:
  • Chroma: NGU Anti-Alias (low)
  • Downscaling: SSIM 1D 100% + LL + AR
  • Image upscaling: Off
  • Image doubling: NGU Anti-Alias
  • <-- Luma doubling: high
  • <-- Luma quadrupling: let madVR decide (direct quadruple - NGU Anti-Alias (high))
  • <-- Chroma: let madVR decide (Bicubic60 + AR)
  • <-- Doubling: let madVR decide (scaling factor 1.2x (or bigger))
  • <-- Quadrupling: let madVR decide (scaling factor 2.4x (or bigger))
  • <-- Upscaling algo: let madVR decide (Bicubic60 + AR)
  • <-- Downscaling algo: let madVR decide (Bicubic150 + LL + AR)
  • Upscaling refinement: Off
  • Artifact removal - Debanding: medium/medium
  • Artifact removal - Deringing: Off
  • Artifact removal - Deblocking: Off
  • Artifact removal - Denoising: Off
  • Image enhancements: Off
  • Dithering: Error Diffusion 2

You may not agree with the use of debanding in those profiles. If you want a sharper image, try adding some image enhancements or upscaling refinement.

There is more information in the link in my signature if you are interested. The choice of what to use is mostly up to you.
Wow! Great recommendation!

Would you, or anyone else, be able to do the same for me for 540p and 720p material on a calibrated Panasonic TX-55FZ800E 4K OLED with a Zotac GTX 1050 Ti 4GB OC Edition card? I'd be eternally grateful!

(I don't like my image that sharp, since I will be adding Sharpen Complex 2 in MPC-HC (post-processing) anyway.)


Last edited by SirSwede; 31st October 2018 at 22:27.
SirSwede is offline   Reply With Quote
Old 31st October 2018, 22:47   #53502  |  Link
Alexkral
Registered User
 
Join Date: Oct 2018
Posts: 101
Quote:
Originally Posted by huhn View Post
there is a shader version of nnedi3.
You mean Shiandow's? It's NEDI, not NNEDI3, and anyway it doesn't seem very useful.
Alexkral is offline   Reply With Quote
Old 31st October 2018, 22:48   #53503  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 5,880
I already wondered ... Neural Networks in a GPU implementation?! Want!
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 31st October 2018, 22:54   #53504  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,758
Quote:
Originally Posted by LigH View Post
I already wondered ... Neural Networks in a GPU implementation?! Want!
Wut? Most neural networks rely on GPUs to run at any speed, because GPUs are actually pretty good at that: the work can be parallelized really well. Not to mention even newer stuff with specialized hardware just for that task, i.e. Tensor Cores.

We had NNEDI3 in madVR for a while, a neural-network-based scaler. But it was removed because it was ultimately decided that the added complexity in madVR was not worth it (since it was the only component to use OpenCL, IIRC), and other algorithms could flat-out replace it at higher speed and quality.

Neural networks can be really powerful, and who knows, some day one might return to madVR, but the real challenge with neural networks is training them. You need absolutely massive compute power to do it. All those fancy NVIDIA demos you see for image processing are trained on supercomputers, large clusters with hundreds if not thousands of NVIDIA GPUs. Of course, no mere mortal has that sort of resources to train a network.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 31st October 2018 at 23:04.
nevcairiel is offline   Reply With Quote
Old 31st October 2018, 23:00   #53505  |  Link
nussman
Registered User
 
Join Date: Nov 2010
Posts: 210
Quote:
Originally Posted by LigH View Post
I already wondered ... Neural Networks in a GPU implementation?! Want!
What are you talking about? Of course madVR's NNEDI3 ran on the GPU ...
nussman is offline   Reply With Quote
Old 31st October 2018, 23:02   #53506  |  Link
videoh
Registered User
 
Join Date: Jul 2014
Posts: 758
Quote:
Originally Posted by LigH View Post
I already wondered ... Neural Networks in a GPU implementation?! Want!
https://developer.nvidia.com/cudnn
videoh is offline   Reply With Quote
Old 1st November 2018, 02:24   #53507  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 5,719
https://forum.doom9.org/showpost.php...postcount=3202

Yes, in a shader. It wasn't slower either, if I remember correctly.
huhn is offline   Reply With Quote
Old 1st November 2018, 02:57   #53508  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,168
Madshi will have a new release out soon with some big improvements to HDR tone-mapping.
ryrynz is offline   Reply With Quote
Old 1st November 2018, 03:26   #53509  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,102
Quote:
Originally Posted by SirSwede View Post
Wow! Great recommendation!

Would you, or anyone else, be able to do the same for me for 540p and 720p material on a calibrated Panasonic TX-55FZ800E 4K OLED with a Zotac GTX 1050 Ti 4GB OC Edition card? I'd be eternally grateful!

(I don't like my image that sharp, since I will be adding Sharpen Complex 2 in MPC-HC (post-processing) anyway.)

You could just replace every NGU Sharp entry with NGU Anti-Alias.
Warner306 is offline   Reply With Quote
Old 1st November 2018, 03:33   #53510  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,102
Quote:
Originally Posted by nevcairiel View Post
Wut? Most neural networks rely on GPUs to run at any speed, because GPUs are actually pretty good at that: the work can be parallelized really well. Not to mention even newer stuff with specialized hardware just for that task, i.e. Tensor Cores.

We had NNEDI3 in madVR for a while, a neural-network-based scaler. But it was removed because it was ultimately decided that the added complexity in madVR was not worth it (since it was the only component to use OpenCL, IIRC), and other algorithms could flat-out replace it at higher speed and quality.

Neural networks can be really powerful, and who knows, some day one might return to madVR, but the real challenge with neural networks is training them. You need absolutely massive compute power to do it. All those fancy NVIDIA demos you see for image processing are trained on supercomputers, large clusters with hundreds if not thousands of NVIDIA GPUs. Of course, no mere mortal has that sort of resources to train a network.
It was mentioned a while ago that NGU is based on neural networks. madshi posted at some point that the training was done by comparing downscaled images to the original. Maybe he has an evil lair with a bunch of supercomputers/GPUs to do this type of training?

If you look at screenshots, you can see that NGU picks up details like eyelashes that are missed by Lanczos and super-xbr. NGU very high pushes the GPU so hard that hundreds of thousands of repeated calculations must be taking place. NGU Anti-Alias is also very similar to NNEDI3, and most seem to prefer it.

Personally, I think all images should eventually be scaled by neural networks once Tensor Core technology trickles down. If, like me, you frequently compare Kodi VideoPlayer's Lanczos3 with NGU Sharp at very high, you'll see they don't produce the same image. Lanczos creates some artifacts, but it also smears a lot of valid detail. All of those eyelashes add up when the image is large.

With that said, I don't think NGU Sharp was trained well with poor material. I have seen some content look like an oil painting that kind of made me drunk while watching it.
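[Editor's note: to make the "smeared detail" point concrete, here is a toy 1-D sketch in pure Python. These are generic interpolators, not Lanczos or NGU; the signal and method names are purely illustrative.]

```python
def upscale_repeat(lo):
    """Nearest-neighbour doubling: each sample is just repeated."""
    return [v for v in lo for _ in (0, 1)]

def upscale_linear(lo):
    """Linear-interpolation doubling: new samples are midpoints."""
    out = []
    for i, v in enumerate(lo):
        out.append(v)
        out.append((v + lo[i + 1]) / 2.0 if i + 1 < len(lo) else v)
    return out

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

original = [0, 1, 2, 3, 4, 5, 6, 7]      # a smooth ramp (fine detail)
lo = original[::2]                        # crude 2x downsample
print(mse(upscale_repeat(lo), original))  # 0.5   (the ramp detail is lost)
print(mse(upscale_linear(lo), original))  # 0.125 (most detail recovered)
```

A real comparison would of course use 2-D images and perceptual metrics rather than MSE, but the principle is the same: different interpolators lose different amounts of genuine detail.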
Warner306 is offline   Reply With Quote
Old 1st November 2018, 03:45   #53511  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,102
Quote:
Originally Posted by ryrynz View Post
Madshi will have a new release out soon with some big improvements to HDR tone-mapping.
The level of nerdiness with the tone mapping is becoming impressive. Even though most projector owners can't see past 100 nits, they are now analyzing real-time graphs of brightness histograms and preparing weighted averages of HDR content based on the measurement files produced by LAV Filters and madVR.

There are professional TV and Blu-ray reviewers out there who would love to have this information. I have seen mistakes in reviews where a reviewer erroneously assumed one display was no brighter than another while comparing content that doesn't reach past the peak brightness of either display. This data also makes it possible to cherry-pick scenes as demo material. The only downside will be the number of frustrated people who want to understand how to use it all.

I like the picture produced by HDR -> SDR, but I can't help but notice the lack of visible steps when you lower the target nits and brightness excessively. But what can you do... other than buy a brighter display or a laser projector. HDR still has a long way to go to live up to the promises made by Dolby. It should have happened a long time ago, because grading content to 100 nits doesn't make much sense for almost all current displays.
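[Editor's note: for readers wondering what "target nits" does mechanically, here is a deliberately simple highlight roll-off in the spirit of HDR-to-SDR tone mapping. This is not madVR's algorithm; the knee placement and curve shape are illustrative assumptions.]

```python
def tonemap_nits(l_in, target_peak=400.0, source_peak=1000.0, knee=0.75):
    """Map scene luminance (nits) onto a display with a lower peak.

    Below the knee, luminance passes through unchanged; above it,
    highlights are smoothly compressed so that source_peak lands
    exactly on target_peak.
    """
    k = knee * target_peak                 # roll-off starts here
    if l_in <= k:
        return l_in                        # shadows/midtones untouched
    t = (l_in - k) / (source_peak - k)     # 0..1 across the highlights
    return k + (target_peak - k) * (2.0 * t / (t + 1.0))

print(tonemap_nits(150.0))   # 150.0 (unchanged, below the knee)
print(tonemap_nits(1000.0))  # 400.0 (source peak -> display peak)
```

Lowering target_peak squeezes a wider highlight range into fewer display nits, which is exactly the flattening of bright detail described in the post above.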
Warner306 is offline   Reply With Quote
Old 1st November 2018, 03:51   #53512  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,168
Quote:
Originally Posted by Warner306 View Post
The level of nerdiness with the tone mapping is becoming impressive.
Hopefully it raises the bar enough that display manufacturers take note and improve their own tone-mapping.

Quote:
Originally Posted by Warner306 View Post
Maybe he has an evil lair with a bunch of super computers/GPUs to do this type of training?
He implied to me that he used NNEDI3 as groundwork for NGU AA, since it has the same picture offset as NNEDI3 does. It's possible he got in touch with the elusive NNEDI3 author Tritical.

Last edited by ryrynz; 1st November 2018 at 04:26.
ryrynz is offline   Reply With Quote
Old 1st November 2018, 04:20   #53513  |  Link
Warner306
Registered User
 
Join Date: Dec 2014
Posts: 1,102
I’ve referenced a link that says NGU Sharp was trained with downscaled photos. Anti-Alias is, as you said, different.
Warner306 is offline   Reply With Quote
Old 1st November 2018, 08:02   #53514  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,758
Quote:
Originally Posted by Warner306 View Post
With that said, I don't think NGU Sharp was trained well with poor material. I have seen some content look like an oil painting that kind of made me drunk while watching it.
If it was truly trained on high-res originals and downscales of those, then that is entirely expected. With that kind of training, the filter doesn't learn to "upscale", it learns to un-do the downscale - which in theory sounds similar, but in practice can end up quite different.

Low-quality content does not qualify for that particular type of training, since even if it was downscaled once, those attributes were destroyed by over-compression, noise, or whatever makes it "low quality".

That's really the hard part with training (outside the computational requirements). You need to be careful how you train it, or you might bias the algorithm. If you only train on pristine downscales of high-quality, high-res images, then that's what it'll be good at, and only that.
But of course, where do you get a set of medium- to low-quality images and high-quality upscales of those to train an algorithm? Someone would have to upscale those in the first place. Or you downscale images and then artificially degrade them, but unless you do that very carefully, the algorithm might once again just learn to un-do your degradation, and not in a very generic sense.
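[Editor's note: the pair-generation idea described above can be sketched in a few lines of pure Python. This is obviously not how NGU was actually trained; the degradation model here is a plain 2x box downscale, chosen purely for illustration.]

```python
def box_downscale_2x(hi):
    """Average adjacent sample pairs: the 'known' degradation."""
    return [(hi[i] + hi[i + 1]) / 2.0 for i in range(0, len(hi) - 1, 2)]

def make_training_pair(hi):
    """(input, target) pair: the network learns to invert the downscale."""
    return box_downscale_2x(hi), hi

hi = [10, 12, 30, 32, 8, 8, 50, 54]   # pristine high-res signal
lo, target = make_training_pair(hi)
print(lo)                             # [11.0, 31.0, 8.0, 52.0]
# The caveat from the post: real low-quality sources were NOT produced by
# this clean downscale (compression artifacts, noise, etc.), so a filter
# trained only on pairs like these has never seen such inputs.
```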
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 1st November 2018 at 08:05.
nevcairiel is offline   Reply With Quote
Old 1st November 2018, 10:51   #53515  |  Link
madjock
Registered User
 
Join Date: May 2018
Posts: 221
Quote:
Originally Posted by Warner306 View Post
The level of nerdiness with the tone mapping is becoming impressive.
It is, but as someone who doesn't really want to check all their files before watching, I hope it gets dumbed down a lot to a user level.

This sounds selfish, I guess, but from an HDR-to-SDR point of view as a non-4K owner, I am unsure what all these extra settings achieve.

I guess it may be a case of using a certain rev and being happy with that.
madjock is offline   Reply With Quote
Old 1st November 2018, 11:04   #53516  |  Link
yok833
Registered User
 
Join Date: Aug 2012
Posts: 73
Hi guys,
With an LG OLED 2017, should I set 700 for the target peak nits or should I stick with 400?
I use the latest test build with the measurement tool, and the result is already amazing!!
yok833 is offline   Reply With Quote
Old 1st November 2018, 11:28   #53517  |  Link
kostik
Registered User
 
Join Date: Jul 2007
Posts: 128
Quote:
Originally Posted by yok833 View Post
Hi guys,
With an LG OLED 2017, should I set 700 for the target peak nits or should I stick with 400?
I use the latest test build with the measurement tool, and the result is already amazing!!
I have the same question but for LG OLED C8:

OLED C8:

HDR Real Scene Peak Brightness: 683 cd/m²
HDR Peak 2% Window: 944 cd/m²
HDR Peak 10% Window: 907 cd/m²
HDR Peak 25% Window: 517 cd/m²
HDR Peak 50% Window: 330 cd/m²
HDR Peak 100% Window: 161 cd/m²

HDR Sustained 2% Window: 895 cd/m²
HDR Sustained 10% Window: 872 cd/m²
HDR Sustained 25% Window: 498 cd/m²
HDR Sustained 50% Window: 317 cd/m²
HDR Sustained 100% Window: 155 cd/m²

HDR ABL: 0.106

OLED C7:

HDR Real Scene Peak Brightness: 718 cd/m²
HDR Peak 2% Window: 717 cd/m²
HDR Peak 10% Window: 733 cd/m²
HDR Peak 25% Window: 447 cd/m²
HDR Peak 50% Window: 313 cd/m²
HDR Peak 100% Window: 143 cd/m²

HDR Sustained 2% Window: 695 cd/m²
HDR Sustained 10% Window: 703 cd/m²
HDR Sustained 25% Window: 429 cd/m²
HDR Sustained 50% Window: 291 cd/m²
HDR Sustained 100% Window: 137 cd/m²
__________________
TV: LG OLED 65C8; PC: CPU: i7-7700K OC 5.1 GHz; GPU: Gigabyte GeForce® GTX 1080 WINDFORCE OC 8G; Memory: Corsair Vengeance RGB (2x8GB) DDR4 3100 MHz
kostik is offline   Reply With Quote
Old 1st November 2018, 12:12   #53518  |  Link
madjock
Registered User
 
Join Date: May 2018
Posts: 221
Regarding what to set the nits to, for the last two posts:

It seems to be subjective: it depends on what film you are watching and what brightness you like yourself. From what I have read, the brighter you make it, the more chance you have of losing details.

I think it will be another madVR setting that comes down to personal taste, to a point.
madjock is offline   Reply With Quote
Old 1st November 2018, 13:28   #53519  |  Link
el Filou
Registered User
 
el Filou's Avatar
 
Join Date: Oct 2016
Posts: 492
Quote:
Originally Posted by Warner306 View Post
With that said, I don't think NGU Sharp was trained well with poor material. I have seen some content look like an oil painting that kind of made me drunk while watching it.
Quote:
Originally Posted by nevcairiel View Post
Low-quality content does not qualify for that particular type of training, since even if it was downscaled once, those attributes were destroyed by over-compression, noise, or whatever makes it "low quality".
IMHO, for an equal display size, lower-definition content needs higher bits/pixel for quality to stay the same, but very often it doesn't get them.
Even YouTube, which (at least with VP9) uses higher bits/pixel at definitions lower than 1080, can't compensate for that (OTOH, their downscaling is horrible...).
Quote:
Originally Posted by madjock View Post
It is, but as someone who doesn't really want to check all their files before watching, I hope it gets dumbed down a lot to a user level.
Well, you're not forced to measure all your HDR videos before watching; you can still use the dynamic on-the-fly version while setting a brightness reaction time that suits you.
Quote:
Originally Posted by madjock View Post
This sounds selfish, I guess, but from an HDR-to-SDR point of view as a non-4K owner, I am unsure what all these extra settings achieve.
Whether your display is 4K or not doesn't matter; it's useful even on 1080p.
It avoids big/sudden brightness variations while at the same time still using the dynamic range of your display in an optimal way.
Think of it as the video version of what measuring your tracks to add ReplayGain metadata does for music, instead of using an on-the-fly loudness equalizer/limiter.
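[Editor's note: the ReplayGain analogy can be sketched in a few lines. This is illustrative only; madVR's measurement files store far richer data than a single number, and all names here are made up.]

```python
def measured_target(frame_peaks, percentile=0.99):
    """Two-pass style: scan the whole file once, pick one stable target."""
    s = sorted(frame_peaks)
    return s[min(int(percentile * len(s)), len(s) - 1)]

def dynamic_targets(frame_peaks, reaction=0.1):
    """On-the-fly style: the target chases the current frame peak."""
    target, out = frame_peaks[0], []
    for p in frame_peaks:
        target += reaction * (p - target)  # slower reaction = less pumping
        out.append(target)
    return out

peaks = [120, 130, 125, 900, 880, 140]   # per-frame peak nits (made up)
print(measured_target(peaks))            # one target for the whole file
print(dynamic_targets(peaks, 0.3))       # target lags behind scene changes
```

The trade-off mirrors the audio case: the one-pass "measured" value never pumps but needs a full scan up front, while the on-the-fly version works immediately but reacts after the fact.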
__________________
HTPC: Windows 10 1809, MediaPortal 1, LAV Filters, ReClock, madVR. DVB-C TV, Panasonic GT60, 6.0 speakers Denon 2310, Core 2 Duo E7400, GeForce 1050 Ti
el Filou is offline   Reply With Quote
Old 1st November 2018, 13:36   #53520  |  Link
SamuriHL
Registered User
 
SamuriHL's Avatar
 
Join Date: May 2004
Posts: 4,190
When a new version is actually released, it won't have all the options that the test builds have. Those options are there in the test builds to allow different things to be tested and a value found that everyone who is testing can live with. That value then becomes baked in. As for the measurement files, they can be generated by watching a film, so that the next time you watch it, it'll use the data to improve the quality. The idea isn't to make a whole bunch of options that people need to mess with. But figuring all that out is exactly why they're doing a lot of testing up front between each new build.

Sent from my Pixel XL using Tapatalk
__________________
HTPC: Windows 10, I9 9900k, RTX 2070 Founder's Edition, Pioneer Elite VSX-LX303, LG C8 65" OLED
SamuriHL is offline   Reply With Quote