Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
#1 | Link |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
madVR - high quality video renderer (GPU assisted)
http://madshi.net/madVR.zip
Code:
madVR v0.91.5
* added direct 4x NGU upscalers
Code:
madVR v0.91.0
* added new NGU (Next Generation Upscaling) algorithm
* added "soften edges" + "add grain" upscaling refinements (atm only for NGU)
* display properties can now also be profiled
You can download old madVR versions here. The "madTestPatternSource" filter (madVR test patterns) is available here. The XySubFilter subtitle renderer, using the "new subtitle interface", is available here.

If you want to test whether your display supports RGB in 4:2:0, 4:2:2 or 4:4:4, you can use this test image. Make sure you display it with 1:1 pixel mapping, otherwise it won't work.

features:
- high quality chroma upsampling
- high quality scaling (bicubic, mitchell, lanczos, spline etc)
- high quality YCbCr -> RGB conversion
- gamut & gamma correction for display calibration
- full 16bit processing queue
- final 16bit processing result is dithered down to RGB output bitdepth
- bypasses graphics card's video (damage) algorithms
- all work is done via GPU shaders (except madVR's IVTC atm)
- no shortcuts, highest quality has priority over anything else

requirements:
- graphics card with full D3D9 / PS3.0 hardware support
- at least 128MB of dedicated graphics card memory
- Windows XP or newer

known problems / limitations: None known.

bug tracker: http://madVR.bugs.madshi.net

firewall complaints: madVR tries to automatically find other PCs in your local network which are also running madVR. If any such PCs are found, you can remotely switch audio and subtitle tracks, jump to specific chapters, etc. For this network functionality to work, madVR obviously has to access the local network. So if your firewall complains, you know why. If you don't ever plan to use madVR's network functionality, you can safely tell your firewall to block any and all madVR network accesses. Please rest assured, though, that madVR does not upload your private data to a server or anything of that sort, so allowing madVR to access the LAN should not result in any privacy or security problems. 
----- MODERATOR Note: This thread has had a tendency to generate long off-topic discussions. madshi has stated that the thread is for madVR support/development and not for related general discussion. Please confine this thread to direct madVR-related support/development issues. If in doubt, start a new thread and put a simple link to it if needed. If you put madVR in the thread title, people will find it. The moderating policy being enforced, including deletion of off-topic posts and material, has been discussed with and approved by madshi. Thank you for your cooperation in keeping the thread manageable for madshi and his users. ----- Last edited by madshi; 8th January 2017 at 19:46. |
|
|
|
|
|
#2 | Link |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
FAQ:
A) FreeSync / G-SYNC

Games create a virtual world in which the player moves around, and for the best playing experience we want a very high frame rate and the lowest possible latency, without any tearing. So with FreeSync/G-SYNC the game simply renders as fast as it can and throws each rendered frame to the display immediately. This results in very smooth motion, low latency and very good playability.

Video rendering has completely different requirements. Video was recorded at a very specific frame interval, e.g. 23.976 frames per second. When doing video playback, unlike games, we don't actually render a virtual 3D world; we just send the recorded video frames to the display. Because we cannot re-render the video frames from a different 3D world view position, it makes no sense to send frames to the display as fast as we can render them; the movie would play like fast forward if we did that! For perfect motion smoothness, we want the display to show each video frame for *EXACTLY* the right amount of time, which is usually 1000 / 24 * 1.001 = 41.70833... milliseconds.

FreeSync/G-SYNC would help with video rendering only if they had an API which allowed madVR to specify which video frame should be displayed for how long. But this is not what FreeSync/G-SYNC were made for, so such an API probably doesn't exist (I'm not 100% sure about that, though). Video renderers do not want a rendered frame to be displayed immediately. Instead they want frames to be displayed at a specific point of time in the future, which is the opposite of what FreeSync/G-SYNC were made for.

If you believe that using FreeSync/G-SYNC would be beneficial for video playback, you might be able to convince me to implement support for it by fulfilling the following 2 requirements:

1) Show me an API which allows me to define at which time in the future a specific video frame gets displayed, and for how long exactly.
2) Donate a FreeSync/G-SYNC monitor to me, so that I can actually test a possible implementation. Developing blindly without test hardware doesn't make sense. Last edited by madshi; 27th November 2016 at 23:50. |
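The frame interval quoted above follows directly from the 24000/1001 NTSC film rate; a quick sketch of the arithmetic (plain Python, purely for illustration):

```python
from fractions import Fraction

# NTSC film rate: 24000/1001 fps, usually quoted as "23.976 fps"
fps = Fraction(24000, 1001)

# exact duration each frame should stay on screen, in milliseconds
frame_interval_ms = 1000 / fps

print(float(frame_interval_ms))  # ≈ 41.708333... ms
```

Any fixed-refresh display that cannot hit this interval exactly forces the renderer to repeat or drop frames occasionally, which is exactly the judder problem discussed above.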
|
|
|
|
|
#3 | Link |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
How to configure profile rules:
The madVR settings profiling logic is very flexible, but requires a bit of scripting for best effect. The script language is pretty easy: a script is expected to be a string of "if", "else if" and "else" statements. Every "if" (or "else if") statement consists of one or more value comparisons and selects one profile to be activated. Each value comparison must be placed in brackets. By using the logical operators "and" and "or" you can combine multiple comparisons into more complex decisions. Let's look at an example. The following script selects one of 4 profiles, depending on the source dimensions and the frame rate after deinterlacing. I think the script is pretty much self-explanatory: Code:
if (srcWidth <= 1050) and (srcHeight <= 768) and (deintFps < 31) "SD 24fps"
else if (srcWidth <= 1050) and (srcHeight <= 768) "SD 60fps"
else if (deintFps < 31) "HD 24fps"
else "HD 60fps"
Code:
if/else statements:      "if", "else if", "elseif", "elsif", "else"
logical operators:       "and", "or", "&&", "||"
equal check:             "==", "="
unequal check:           "!=", "<>", "#"
bigger/smaller check:    "<", ">", "<=", ">="
boolean "not" operator:  "not", "!"
Code:
srcWidth, srcHeight                          src width/height (cropping according to settings)
croppedSrcWidth, croppedSrcHeight            cropped src width/height
uncroppedSrcWidth, uncroppedSrcHeight        uncropped src width/height
AR, uncroppedAR, encodedAR                   cropped AR (aspect ratio), uncropped AR, encoded AR
targetWidth, targetHeight                    width/height after scaling (cropping according to settings)
croppedTargetWidth, croppedTargetHeight      width/height after scaling cropped source
uncroppedTargetWidth, uncroppedTargetHeight  width/height after scaling uncropped source
scalingFactor.x/y                            overall scaling factor
fps, deintFps, bitDepth                      source frame rate, frame rate after deinterlacing, bit depth
displayMode.x/y, refreshRate                 display mode information
runtime                                      movie runtime (in minutes)
Code:
4:2:0, 4:2:2, 4:4:4, RGB    which pixel format does the source have?
HDR                         is the video HDR?
srcInterlaced               is the source interlaced?
filmMode                    is film mode (IVTC) active?
MPEG2, VC-1, h264           which codec is the source encoded in?
fseMode, overlay, windowed  rendering mode
AMD, nVidia, Intel          which GPU manufacturer are we rendering on?
smoothMotion                is smooth motion FRC active?
variableAR                  does this video have variable ARs?
Code:
mediaPlayer                  media player exe file name
filePath, fileName, fileExt  e.g. "c:\movie.mkv", "movie.mkv", "mkv", wildcards supported
display                      name of the active display device
Code:
if ((not 4:2:0) or (AR = 16:9)) and (fileName = "*horribleSubs*.mkv") "Weird profile"
else "Normal profile"
Last edited by madshi; 2nd January 2016 at 10:12. |
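The 4-way example script maps directly onto ordinary if/else logic. A minimal Python sketch of the same decision tree (the variable names come from the script above; the function itself is hypothetical, just to make the control flow explicit):

```python
def select_profile(srcWidth, srcHeight, deintFps):
    """Mirrors the 4-way example profile script: SD vs HD, 24fps vs 60fps."""
    if srcWidth <= 1050 and srcHeight <= 768 and deintFps < 31:
        return "SD 24fps"
    elif srcWidth <= 1050 and srcHeight <= 768:
        return "SD 60fps"
    elif deintFps < 31:
        return "HD 24fps"
    else:
        return "HD 60fps"

# PAL DVD after deinterlacing vs. 1080p60 content
print(select_profile(720, 576, 25))       # → "SD 24fps"
print(select_profile(1920, 1080, 59.94))  # → "HD 60fps"
```

Note the ordering matters, exactly as in the madVR script: the first matching branch wins, so the most specific conditions must come first.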
|
|
|
|
|
#4 | Link |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
technical discussion:
I've seen many comments about HDMI 1.3 DeepColor being useless, about 8bit being enough (since even Blu-Ray is only 8bit to start with), about dithering not being worth the effort, etc. Is all of that true? It depends.

If a source device (e.g. a Blu-Ray player) decodes the YCbCr source data and then passes it to the TV/projector without any further processing, HDMI 1.3 DeepColor is mostly useless. Not totally, though, because the Blu-Ray data is YCbCr 4:2:0, which HDMI cannot transport (not even HDMI 1.4). HDMI can transport YCbCr 4:2:2 or 4:4:4, so the source device has to upsample the chroma information before it can send the data via HDMI. It can either upsample in only one direction (giving 4:2:2) or in both directions (giving 4:4:4). Now a really good chroma upsampling algorithm outputs a higher bitdepth than what you feed it, so the 8bit source suddenly becomes more than 8bit. Do you still think passing YCbCr in 8bit is good enough? Fortunately even HDMI 1.0 supports sending YCbCr in up to 12bit, as long as you use 4:2:2 and not 4:4:4. So no problem there.

But here comes the big problem: most good video processing algorithms produce a higher bitdepth than you feed them. So if you actually change the luma (brightness) information, or if you convert the YCbCr data to RGB, the original 8bit YCbCr 4:2:0 mutates into a higher-bitdepth data stream. Of course we can still transport that via HDMI 1.0-1.2, but we have to dumb it down to the maximum bitdepth HDMI 1.0-1.2 supports.

For us HTPC users it's even worse: the graphics cards do not offer any way for us developers to output untouched YCbCr data. Instead we have to use RGB. Granted, in ATI's control panel with some graphics cards and driver versions you can activate YCbCr output, *but* it's rather obvious that internally the data is converted to RGB first and then later back to YCbCr, which is usually not a good idea if you care about maximum image quality. 
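The "good upsampling produces a higher bitdepth" point is easy to see with a toy example: even plain linear interpolation between two 8-bit chroma samples lands between integer code values, so squeezing the result back into 8 bits already rounds something away. (Illustrative sketch only, not madVR's actual upsampling algorithm.)

```python
# Two neighbouring 8-bit chroma samples from a 4:2:0 source
a, b = 100, 101

# The interpolated sample that chroma upsampling to 4:2:2/4:4:4 has to create.
# 100.5 cannot be represented in 8 bits; it needs at least one extra bit.
upsampled = (a + b) / 2

# Forcing the result back into an 8-bit integer discards the half-step
# (note: Python's round() uses banker's rounding, so 100.5 -> 100)
rounded = round(upsampled)

print(upsampled, rounded)
```

Real resampling kernels (bicubic, lanczos, etc.) produce many such in-between values per pixel, which is why the intermediate precision grows well beyond 8 bits.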
So the only true choice for us HTPC users is to go RGB. But converting YCbCr to RGB increases bitdepth, and not only from 8bit to maybe 9bit or 10bit: YCbCr -> RGB conversion actually gives us floating point data! And not even HDMI 1.4 can transport that. So we have to convert the data down to some integer bitdepth, e.g. 16bit, 10bit or 8bit. The problem is that doing so violates our precious video data in some way: it loses precision.

And that is where dithering comes to the rescue. Dithering allows us to "simulate" a higher bitdepth than we really have. Using dithering means that we can go down to even 8bit without losing too much precision. However, dithering is not magic; it works by adding noise to the source. So the preserved precision comes at the cost of increased noise. Fortunately, thanks to film grain, we're not too sensitive to fine image noise. Furthermore, the amount of noise added by dithering is so low that the noise itself is not really visible. But the added precision *is* visible, at least in specific test patterns (see image comparisons above).

So does dithering help in real life situations? Does it help with normal movie watching? That is a good question. I can say for sure that in most movies, in most scenes, dithering will not make any visible difference. However, I believe that in some scenes in some movies there will be a noticeable difference. Test patterns may exaggerate, but they rarely lie. Furthermore, preserving the maximum possible precision of the original source data is for sure a good thing, so there's no good reason not to use dithering.

So what purpose/benefit does HDMI DeepColor have? It will allow us to lower (or even totally eliminate) the amount of dithering noise added, without losing any precision. So it's a good thing. But the benefit of DeepColor over 8bit RGB output with proper dithering will be rather small. Last edited by madshi; 23rd May 2010 at 09:25. |
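The precision-for-noise trade described above can be demonstrated in a few lines. This sketch uses plain random dithering (not madVR's actual dithering implementation) just to show that the *average* of dithered 8-bit pixels recovers a sub-8-bit level that straight rounding destroys:

```python
import random

random.seed(42)  # deterministic for the sake of the example

target = 100.3   # a "more than 8bit" level we want to preserve in 8bit output
n = 100_000      # pretend these are pixels of a flat patch

# plain rounding to 8 bits: every pixel becomes 100, the .3 is gone forever
rounded = round(target)

# random dither: add uniform noise in [-0.5, 0.5) before rounding,
# so ~30% of pixels land on 101 and ~70% on 100
dithered = [round(target + random.uniform(-0.5, 0.5)) for _ in range(n)]
average = sum(dithered) / n  # ≈ 100.3: the precision survives, encoded as noise

print(rounded, average)
```

The eye (helped by the display and by film grain) does the same spatial averaging, which is why a dithered 8-bit signal can look like a higher-bitdepth one.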
|
|
|
|
|
#5 | Link |
|
Excessively jovial fellow
Join Date: Jun 2004
Location: rude
Posts: 837
|
Looks like it uses PC levels without any option to expand TV->PC. Other than that, seems to work fine in my extremely brief test. Works fine as a custom renderer in zoomplayer. Quite slow in changing window size and going between fullscreen/windowed mode, but I guess that's to be expected.
(nice job, by the way) Last edited by TheFluff; 8th April 2009 at 19:33. |
|
|
|
|
|
#6 | Link | |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
Quote:
I'm still working on the description and documentation (see "reserved" posts). YCbCr -> RGB conversion is done through a 96MB 3dlut file. The 3dlut file shipping with madVR uses PC levels. You can, however, create your own file with any custom settings you want. Please check out this thread to learn more about this topic: http://forum.doom9.org/showthread.php?t=139389 |
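For what it's worth, the 96MB figure is consistent with a full-resolution 3D LUT indexed by all possible 8-bit input triples, with three 16-bit components per output entry. This layout is my assumption, not a documented madVR file format; the arithmetic is just a sanity check:

```python
# Assumed layout: one output colour per possible 8-bit YCbCr input triple
entries = 256 ** 3        # 16,777,216 lattice points

# Assumed: R, G, B stored at 16 bits (2 bytes) each
bytes_per_entry = 3 * 2

size_bytes = entries * bytes_per_entry
print(size_bytes / (1024 ** 2))  # → 96.0 (MiB)
```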
|
|
|
|
|
|
#7 | Link |
|
Kid for Today
Join Date: Aug 2004
Posts: 3,371
|
these test patterns look somewhat familiar
awesome project btw! do you plan on adding jitter correction (a la HR) and a means to synchronise to the VSYNC flip time? this is the real enemy: http://software.intel.com/en-us/arti...ynchronization 96MB seems quite a lot, why not only 5-6 frames? hopefully when it's mature, James won't mind adding support for it in Reclock
EDIT: oh cool, it reads LUTs from yesgrey's software
|
|
|
|
|
|
#9 | Link | |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
Quote:
Those 96MB are spent on the 3dlut file (for YCbCr -> RGB conversion, gamut & gamma correction), not on frame storage! I have an ATI card, so I can't test that. However, yesgrey has already tested CUDA + madVR and generally it works, although it seems that with a low-end graphics card, using both CUDA and madVR at the same time may be too much of a burden for the GPU. |
|
|
|
|
|
|
#11 | Link | |
|
Kid for Today
Join Date: Aug 2004
Posts: 3,371
|
Quote:
they must be doing it after the RGB32 mixer, behind the scenes. Use LSF in ffdshow, play a video in EVR and then in HR on a large screen... they don't look quite the same: the picture in EVR is more 3D-like, and its EE algorithm interferes with LSF, making the picture way oversharpened (VMR9 has the same problem, but less EE).
a D3D renderer that keeps the PQ untouched (like HR), uses yesgrey's LUT and syncs to the VSYNC flip time is one hell of a terrific idea
but on Vista, Aero forces VSYNC constantly... which makes HR jerky as hell... hopefully madshi will find a workaround for this, making it XP compatible too
Last edited by leeperry; 8th April 2009 at 22:19. |
|
|
|
|
|
|
#12 | Link | |
|
Registered Developer
Join Date: Sep 2006
Posts: 8,620
|
Quote:
In the "Jacques Pibarot" test the MPC-HC YV12 Shader is *almost* as good as madVR. But in the other chroma upsampling test madVR is still in a class of its own. |
|
|
|
|
|
|
#15 | Link |
|
Kid for Today
Join Date: Aug 2004
Posts: 3,371
|
Edge Enhancement : www.videophile.info/Guide_EE/Page_01.htm
|
|
|
|
|
|
#17 | Link |
|
Registered User
Join Date: Jun 2006
Posts: 133
|
Many thanks Madshi for this new creation!
However, I am not sure I have understood its added value compared to VMR9/EVR for typical usage. I mean, is there any real benefit to using this renderer if no resize is needed, e.g. for the playback of a 1080p movie on a 1080p screen using HDMI 1.0? On a side note, I have noticed that when a player that uses the renderer (in my case zoomplayer) is in the background, the CPU usage rises to 50%, even in pause mode. |
|
|
|
|
|
#18 | Link |
|
Registered User
Join Date: Sep 2004
Posts: 1,295
|
Here is the link for the new ReClock release that already supports madVR. I requested it from James (ReClock's current developer) and he was nice enough to add it immediately.
http://forum.slysoft.com/showthread.php?t=19931 |
|
|
|
|
|
#19 | Link |
|
Guest
Posts: n/a
|
I just knew madshi was cooking up something, but I suspected he might be the force behind the mysterious SlyPlayer.
This is really amazing work, madshi, thank you very much! It proves once again that when it comes to obsessing about quality, nothing beats German engineering. And I'm in total agreement: when we can get the best quality, why compromise? One question: does madVR actually support 10-bit data paths in GPU drivers? |
|
|
| Tags |
| direct compute, dithering, error diffusion, madvr, nnedi3, quality, renderer, scaling, upsampling |
|
|