Old 8th April 2009, 19:15   #1  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
madVR - high quality video renderer (GPU assisted)

http://madshi.net/madVR.zip

Code:
madVR v0.92.17

* modified/simplified HDR tone mapping settings page
* small HDR tone mapping saturation improvement
* OSD now also shows the measured luminance of the current frame (in addition to the average)
* fixed: render & present queues didn't always fill in Windows 10 build 1803
* fixed: using XySubFilter sometimes resulted in black screen / freeze
* fixed: using HDR "processing" resulted in dark and red-ish image
* fixed: using BT.601/709 gamma curve with HDR tone mapping produced gray-ish image
* fixed: settings dialog sometimes crashed on display mode / custom mode tab
Code:
madVR v0.92.15

* HDR: improved overall tone mapping quality
* HDR: added "color tweaks for fire & explosions" option
* HDR: "restore details in compressed highlights" renamed to "highlight recovery"
* HDR: improved "highlight recovery" algo, now uses/requires DirectCompute
* HDR: added trade quality option "compromise on tone & gamut mapping accuracy"
* HDR: maxCLL is now used (if valid)
* HDR: added "hdrVideoPeak" profile variable
* HDR: added (limited) support for HDR OSD bitmaps
* added "report BT.2020 to display" calibration option
* added true GPU mode info (color format, bitdepth & dynamic range) to OSD (Nvidia only)
* fixed: low latency mode could result in judder/stuttering
* fixed: OSD API sometimes drew stuff in the wrong position
* fixed: madHcNet32/64.dll produced DCI-P3 3DLUTs with incorrect header
* added undocumented "ShowHdrMode" empty file/folder option
Code:
madVR v0.92.13

* added 2 new RCA quality levels, with NGU fusion support
* if "activate RCA only if it comes for free" is active, NGU quality level isn't modified, anymore
* improved "preserve hue" tone mapping quality
* added "dynamic" mode for "luminance vs saturation reduction" option
* added "dumb" tone mapping algo, optionally with color correction
* added support for Arve custom tone mapping curves
* added "hdrOutput" information field
Code:
madVR v0.92.5

* added new algorithm "reduce compression artifacts"
* added new algorithm "reduce random noise"
* added file name tag and keyboard shortcut support for new algorithms
* switched to igv's AdaptiveSharpen variant
* custom mode optimization is now available for modes not created with madVR
* fixed: mode optimization didn't work for Nvidia (introduced in v0.92.4)
* fixed: ProgDVB: DXVA deinterlacing didn't activate after channel switch
* fixed: potential freeze when freeing madVR instance
* fixed: playback trouble after switching video files (introduced in v0.92)
* fixed: screenshot memory leak
Code:
madVR v0.92.0

* added new "display modes" -> "custom modes" settings tab
* added support for native D3D11 DXVA hardware decoding (needs nightly LAV)
* added support for outputting 10bit in fullscreen windowed mode (win10)
* added optimized "let madVR decide" HDR configuration option
* added support for AMD's private HDR switching API
* added workaround to make Nvidia's private HDR switching API work better
* added full-size EDID block reading (256 bytes instead of just 128)
* added extended EDID parsing
* improved frame drop/repeat estimates for xx/1.001 hz modes
* fixed: deinterlacing of P010 software decoded videos was broken
download links:

You can download old madVR versions here. The "madTestPatternSource" filter (madVR test patterns) is available here.

The XySubFilter subtitle renderer, using the "new subtitle interface", is available here.

If you want to test whether your display supports RGB in 4:2:0, 4:2:2 or 4:4:4, you can use this test image. Make sure you display it with 1:1 pixel mapping, otherwise it won't work.

3rd party tools (for HDR):

1) "madMeasureHDR Optimizer" created by Anna & Flo:
https://www.avsforum.com/forum/26-ho...rget-nits.html

2) "Simple Analyzer for madMeasureHDR" created by pandm1967 (use download link in pandm1967's signature):
https://www.avsforum.com/forum/26-ho...l#post57433788

3) "Simple AutoMeasure Tools for UHD clones" created by pandm1967 (use download link in pandm1967's signature):
https://www.avsforum.com/forum/24-di...l#post57434744

features:

- high quality chroma upsampling
- high quality scaling (bicubic, mitchell, lanczos, spline etc)
- high quality YCbCr -> RGB conversion
- gamut & gamma correction for display calibration
- full 16bit processing queue
- final 16bit processing result is dithered down to RGB output bitdepth
- bypasses graphics card's video (damage) algorithms
- all work is done via GPU shaders (except madVR's IVTC atm)
- no shortcuts, highest quality has priority over anything else

requirements:

- graphics card with full D3D9 / PS3.0 hardware support
- at least 128MB of dedicated graphics card memory
- Windows XP or newer

known problems / limitations:

None known.

bug tracker:

http://madVR.bugs.madshi.net

firewall complaints:

madVR tries to automatically find other PCs in your local network which are also running madVR. If any such PCs are found, you can remotely switch audio and subtitle tracks, jump to specific chapters, etc etc. For this network functionality to work, madVR tries to access the local network, obviously. So if your firewall complains, you know why. If you don't ever plan to use madVR's network functionality, you can safely tell your firewall to block any and all madVR network accesses. Please rest assured, though, that madVR does not upload your private data to a server or anything of that sort. So allowing madVR to access the LAN should not result in any privacy or security problems.

custom mode tutorial:

http://madvr.com/crt/CustomResTutorial.html

-----
MODERATOR Note: This thread has had a tendency to generate long off-topic discussions. madshi has stated that the thread is for madVR support/development and not for related general discussion. Please confine this thread to direct madVR-related support/development issues. If in doubt, start a new thread and put a simple link to it if needed. If you put madVR in the thread title, people will find it. The moderating policy being enforced, including deletion of off-topic posts and material, has been discussed with and approved by madshi. Thank you for your cooperation in keeping the thread manageable for madshi and his users.
-----

Last edited by madshi; 7th February 2019 at 20:09.
Old 8th April 2009, 19:16   #2  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
FAQ:

A) Which output format (RGB vs YCbCr, 0-255 vs 16-235) should I activate in my GPU control panel?

Windows internally always "thinks" in RGB 0-255. Windows considers black to be 0 and white to be 255. That applies to the desktop, applications, games and videos. Windows itself never really thinks in terms of YCbCr or 16-235. Windows does know that videos might be YCbCr or 16-235, but still, all rendering is always done at RGB 0-255. (The exception proves the rule.)

So if you switch your GPU control panel to RGB 0-255, the GPU receives RGB 0-255 from Windows, and sends RGB 0-255 to the TV. Consequently, the GPU doesn't have to do any colorspace (RGB -> YCbCr) or range (0-255 -> 16-235) conversions. This is the best setup, because the GPU won't damage our precious pixels.

If you switch your GPU control panel to RGB 16-235, the GPU receives RGB 0-255 from Windows, but you ask the GPU to send 16-235 to the TV. Consequently, the GPU has to stretch the pixel data behind Windows' back in such a way that a black pixel is no longer 0, but now 16. And a white pixel is no longer 255, but now 235. So the pixel data is condensed from 0-255 to 16-235, and all the values between 0-15 and 236-255 are basically unused. Some GPU drivers might do this in high bitdepth with dithering, which may produce acceptable results. But some GPU drivers definitely do this in 8bit without any dithering which will introduce lots of nasty banding artifacts into the image. As a result I cannot recommend this configuration.
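
To illustrate the 0-255 -> 16-235 compression, here's a minimal sketch of the arithmetic (plain Python, purely illustrative - this is not actual driver code):
Code:
# Squeeze a full range (0-255) 8bit value into limited range (16-235).
def full_to_limited(v):
    return round(16 + v * (235 - 16) / 255)

# 256 input levels get mapped onto only 220 output levels, so without
# dithering some neighbouring input values collapse into the same output
# value - that's where the banding comes from.
mapped = [full_to_limited(v) for v in range(256)]
print(len(set(mapped)))  # prints 220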

If you switch your GPU control panel to YCbCr, the GPU receives RGB from Windows, but you ask the GPU to send YCbCr to the TV. Consequently, the GPU has to convert the RGB pixels behind Windows' back to YCbCr. Some GPU drivers might do this in high bitdepth with dithering, which may produce acceptable results. But some GPU drivers definitely do this in 8bit without any dithering which will introduce lots of nasty banding artifacts into the image. Furthermore, there are various different RGB <-> YCbCr matrixes available. E.g. there's one each for BT.601, BT.709 and BT.2020. Now which of these will the GPU use for the conversion? And which will the TV use to convert back to RGB? If the GPU and the TV use different matrixes, color errors will be introduced. As a result I cannot recommend this configuration.
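
To give a rough idea of what a matrix mismatch does, here's a small sketch using the standard luma coefficients (BT.709: Kr=0.2126, Kb=0.0722; BT.601: Kr=0.299, Kb=0.114) - just an illustration, not madVR code:
Code:
# Encode a pixel with the BT.709 matrix, decode it with BT.601,
# and see how far the color drifts (values normalized to 0.0 - 1.0).
def rgb_to_ycbcr(r, g, b, kr, kb):
    y  = kr * r + (1 - kr - kb) * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

y, cb, cr = rgb_to_ycbcr(0.2, 0.6, 0.3, kr=0.2126, kb=0.0722)  # GPU encodes with BT.709
print(ycbcr_to_rgb(y, cb, cr, kr=0.299, kb=0.114))             # TV decodes with BT.601
# -> roughly (0.23, 0.66, 0.31) instead of (0.2, 0.6, 0.3): visible color errors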

Summed up: In order to get the best possible image quality, I strongly recommend setting your GPU control panel to RGB Full (0-255).

There's one problem with this approach: If your TV doesn't have an option to switch between 0-255 and 16-235, it may always expect black to be 16 (TVs usually default to 16-235 while computer monitors usually default to 0-255). But we've just told the GPU to output black at 0! That can't work, can it? Actually, it can, surprisingly - but only for video content. You can tell madVR to render to 16-235 instead of 0-255. This way madVR will make sure that black pixels get a pixel value of 16, but the GPU doesn't know about it, so the GPU can't screw image quality up for us. So if your TV absolutely requires black to arrive as 16, then still set your GPU control panel to RGB 0-255 and set madVR to 16-235. If your TV supports 0-255, then set everything (GPU control panel, TV and madVR) to 0-255.

Unfortunately, if you want applications and games to have correct black & white levels, too, all the above advice might not work out for you. If your TV doesn't support RGB 0-255, then somebody somewhere has to convert applications and games from 0-255 to 16-235, so your TV displays black & white correctly. madVR can only do this for videos; it can't magically convert applications and games for you. So in this situation you may have no other choice than to set your GPU control panel to RGB 16-235 or to YCbCr. But please be aware that you might get lower image quality this way, because the GPU will have to convert the pixels behind the back of both Windows and madVR, and GPU drivers often do this in inferior quality.

B) FreeSync / G-SYNC

Games create a virtual world in which the player moves around, and for the best playing experience, we want to achieve a very high frame rate and the lowest possible latency, without any tearing. As a result, with FreeSync/G-SYNC the game simply renders as fast as it can and then throws each rendered frame to the display immediately. This results in very smooth motion, low latency and very good playability.

Video rendering has completely different requirements. Video was recorded at a very specific frame rate, e.g. 23.976 frames per second. When doing video playback, unlike games, we don't actually render a virtual 3D world. Instead we just send the recorded video frames to the display. Because we cannot actually re-render the video frames from a different 3D world view position, it doesn't make sense to send frames to the display as fast as we can render them. The movie would play as if in fast forward if we did that! For perfect motion smoothness, we want the display to show each video frame for *EXACTLY* the right amount of time, which is usually 1000 / 24 * 1.001 ≈ 41.7083 milliseconds.
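
A quick back-of-the-envelope sketch of that timing, and of why a fixed 60Hz refresh rate can't show 23.976fps content evenly (illustration only):
Code:
frame_ms   = 1000 * 1.001 / 24   # ~41.7083 ms per video frame
refresh_ms = 1000 / 60           # ~16.6667 ms per display refresh
print(frame_ms / refresh_ms)     # ~2.5: each frame would need 2.5 refreshes,
                                 # so frames have to alternate between 2 and 3
                                 # refreshes (3:2 cadence) unless the display
                                 # itself runs at 23.976Hz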

FreeSync/G-SYNC would help with video rendering only if they had an API which allowed madVR to specify which video frame should be displayed for how long. But this is not what FreeSync/G-SYNC were made for, so such an API probably doesn't exist (I'm not 100% sure about that, though). Video renderers do not want a rendered frame to be displayed immediately. Instead they want the frames to be displayed at a specific point in time in the future, which is the opposite of what FreeSync/G-SYNC were made for.

If you believe that using FreeSync/G-SYNC would be beneficial for video playback, you might be able to convince me to implement support for that by fulfilling the following 2 requirements:

1) Show me an API which allows me to define at which time in the future a specific video frame gets displayed, and for how long exactly.
2) Donate a FreeSync/G-SYNC monitor to me, so that I can actually test a possible implementation. Developing blindly without test hardware doesn't make sense.

Last edited by madshi; 18th September 2017 at 17:10.
Old 8th April 2009, 19:16   #3  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
How to configure profile rules:

The madVR settings profiling logic is very flexible, but it also requires a bit of scripting for best effect. The script language is pretty easy: basically, a script is a sequence of "if", "else if" and "else" statements. Every "if" (or "else if") statement consists of one or more value comparisons and selects one profile to be activated. Each value comparison must be placed in brackets. By using the logical operators "and" or "or" you can combine multiple comparisons to create more complex decisions.

Let's look at an example. The following script selects one of 4 profiles, depending on the source dimensions and the frame rate after deinterlacing. I think the script is pretty much self-explanatory:
Code:
if      (srcWidth <= 1050) and (srcHeight <= 768) and (deintFps < 31) "SD 24fps"
else if (srcWidth <= 1050) and (srcHeight <= 768)                     "SD 60fps"
else if                                               (deintFps < 31) "HD 24fps"
else                                                                  "HD 60fps"
Supported keywords and operators:
Code:
if/else statements:     "if", "else if", "elseif", "elsif", "else"
logical operators:      "and", "or", "&&", "||"
equal check:            "==", "="
unequal check:          "!=", "<>", "#"
bigger/smaller check:   "<", ">", "<=", ">="
boolean "not" operator: "not", "!"
Supported numerical values:
Code:
srcWidth, srcHeight                              src width/height (cropping according to settings)
croppedSrcWidth, croppedSrcHeight                cropped   src width/height
uncroppedSrcWidth, uncroppedSrcHeight            uncropped src width/height
AR, uncroppedAR, encodedAR                       cropped AR (aspect ratio), uncropped AR, encoded AR
targetWidth, targetHeight                        width/height after scaling (cropping according to settings)
croppedTargetWidth, croppedTargetHeight          width/height after scaling cropped   source
uncroppedTargetWidth, uncroppedTargetHeight      width/height after scaling uncropped source
scalingFactor.x/y                                overall scaling factor
fps, deintFps, bitDepth                          source frame rate, framerate after deinterlacing, bitdepth
displayMode.x/y, refreshRate                     display mode information
runtime                                          movie runtime (in minutes)
hdrVideoPeak                                     video peak luminance (in Nits) -> maxCLL (if valid) or SMPTE 2086
Supported boolean values:
Code:
4:2:0, 4:2:2, 4:4:4, RGB     which pixel format does the source have?
HDR                          is the video HDR?
srcInterlaced                is the source interlaced?
filmMode                     is film mode (IVTC) active?
MPEG2, VC-1, h264            which codec is the source encoded in?
exclusive, overlay, windowed rendering mode
fullscreen                   is playback borderless fullscreen (can be windowed or exclusive)
AMD, nVidia, Intel           which GPU manufacturer are we rendering on?
smoothMotion                 is smooth motion FRC active?
variableAR                   does this video have variable ARs?
hdr                          is the video HDR?
battery                      is this a laptop running on battery?
Supported string values:
Code:
mediaPlayer                   media player exe file name
filePath, fileName, fileExt   e.g. "c:\movie.mkv", "movie.mkv", "mkv", wildcards supported
display                       name of the active display device
One more example to show how to use numerical, boolean and string values:
Code:
if ((not 4:2:0) or (AR = 16:9)) and (fileName = "*horribleSubs*.mkv") "Weird profile" else "Normal profile"
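One more, purely hypothetical example using the HDR related values (the nits threshold is made up - pick whatever suits your display):
Code:
if      (HDR) and (hdrVideoPeak <= 1000) "HDR low peak"
else if (HDR)                            "HDR high peak"
else if (deintFps < 31)                  "SDR 24fps"
else                                     "SDR 60fps"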

Last edited by madshi; 19th September 2018 at 13:28.
Old 8th April 2009, 19:17   #4  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
technical discussion:

I've seen many comments about HDMI 1.3 DeepColor being useless, about 8bit being enough (since even Blu-Ray is only 8bit to start with), about dithering not being worth the effort etc. Is all of that true?

It depends. If a source device (e.g. a Blu-Ray player) decodes the YCbCr source data and then passes it to the TV/projector without any further processing, HDMI 1.3 DeepColor is mostly useless. Not totally, though, because the Blu-Ray data is YCbCr 4:2:0 which HDMI cannot transport (not even HDMI 1.4). We can transport YCbCr 4:2:2 or 4:4:4 via HDMI, so the source device has to upsample the chroma information before it can send the data via HDMI. It can either upsample it in only one direction (then we get 4:2:2) or in both directions (then we get 4:4:4). Now a really good chroma upsampling algorithm outputs a higher bitdepth than what you feed it. So the 8bit source suddenly becomes more than 8bit. Do you still think passing YCbCr in 8bit is good enough? Fortunately even HDMI 1.0 supports sending YCbCr in up to 12bit, as long as you use 4:2:2 and not 4:4:4. So no problem.
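
A tiny sketch of why even a simple chroma upsampling step already needs more than 8bit (plain linear interpolation used purely as an illustration - real chroma upsampling algorithms are more sophisticated):
Code:
# Interpolating a new chroma sample halfway between two 8bit neighbours:
a, b = 100, 101
new_sample = (a + b) / 2   # 100.5 - doesn't fit into 8bit anymore
print(new_sample)          # rounding it back to 8bit throws away precision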

But here comes the big problem: Most good video processing algorithms produce a higher bitdepth than you feed them. So if you actually change the luma (brightness) information or if you even convert the YCbCr data to RGB, the original 8bit YCbCr 4:2:0 mutates into a higher bitdepth data stream. Of course we can still transport that via HDMI 1.0-1.2, but we will have to dumb it down to the maximum bitdepth HDMI 1.0-1.2 supports.

For us HTPC users it's even worse: The graphics cards do not offer any way for us developers to output untouched YCbCr data. Instead we have to use RGB. Ok, e.g. in ATI's control panel with some graphics cards and driver versions you can activate YCbCr output, *but* it's rather obvious that internally the data is converted to RGB first and then later back to YCbCr, which is usually not a good idea if you care about max image quality. So the only true choice for us HTPC users is to go RGB. But converting YCbCr to RGB increases bitdepth. Not only from 8bit to maybe 9bit or 10bit. Actually YCbCr -> RGB conversion gives us floating point data! And not even HDMI 1.4 can transport that. So we have to convert the data down to some integer bitdepth, e.g. 16bit or 10bit or 8bit. The problem is that doing so means that our precious video data is violated in some way. It loses precision. And that is where dithering comes to the rescue. Dithering allows us to "simulate" a higher bitdepth than we really have. Using dithering means that we can go down to even 8bit without losing too much precision. However, dithering is not magic: it works by adding noise to the source. So the preserved precision comes at the cost of increased noise. Fortunately, thanks to film grain, we're not too sensitive to fine image noise. Furthermore the amount of noise added by dithering is so low that the noise itself is not really visible. But the added precision *is* visible, at least in specific test patterns (see image comparisons above).
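
Here's a minimal sketch of the dithering principle (simple random dithering of a single value - madVR's actual dithering is considerably more refined):
Code:
import random

value = 100.3   # high precision value we have to store as 8bit (0-255)

# Plain rounding: every pixel becomes 100, the ".3" is simply lost.
# With dithering: each pixel randomly becomes 100 or 101, weighted such
# that the average over many pixels is still ~100.3.
dithered = [int(value + random.random()) for _ in range(100000)]
print(sum(dithered) / len(dithered))   # ~100.3 - precision preserved as noise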

So does dithering help in real life situations? Does it help with normal movie watching?

Well, that is a good question. I can say for sure that in most movies in most scenes dithering will not make any visible difference. However, I believe that in some scenes in some movies there will be a noticeable difference. Test patterns may exaggerate, but they rarely lie. Furthermore, preserving the maximum possible precision of the original source data is for sure a good thing, so there's not really any good reason to not use dithering.

So what purpose/benefit does HDMI DeepColor have? It will allow us to lower (or even totally eliminate) the amount of dithering noise added without losing any precision. So it's a good thing. But the benefit of DeepColor over using 8bit RGB output with proper dithering will be rather small.

Last edited by madshi; 23rd May 2010 at 09:25.
Old 8th April 2009, 19:26   #5  |  Link
TheFluff
Excessively jovial fellow
 
Join Date: Jun 2004
Location: rude
Posts: 1,100
Looks like it uses PC levels without any option to expand TV->PC. Other than that, seems to work fine in my extremely brief test. Works fine as a custom renderer in zoomplayer. Quite slow in changing window size and going between fullscreen/windowed mode, but I guess that's to be expected.

(nice job, by the way)

Last edited by TheFluff; 8th April 2009 at 19:33.
Old 8th April 2009, 19:31   #6  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by TheFluff View Post
Looks like it uses PC levels without any option to expand TV->PC. Other than that, seems to work fine in my extremely brief test. Works fine as a custom renderer in zoomplayer.
You're too quick!! I'm still working on the description and documentation (see "reserved" posts).

YCbCr -> RGB conversion is done through a 96MB 3dlut file. The 3dlut file shipping with madVR is using PC levels. You can however create your own file with any custom setting you want. Please check out this thread to learn more about this topic:

http://forum.doom9.org/showthread.php?t=139389
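
For those wondering where the 96MB come from: the size matches a lut with one entry per possible 8bit YCbCr input triplet and 3 x 16bit per entry (which is presumably how it's laid out):
Code:
print(256**3 * 3 * 2 / 2**20)   # 256x256x256 entries * 3 channels * 2 bytes = 96.0 MiB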
Old 8th April 2009, 20:04   #7  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,482
these test patterns look somewhat familiar

awesome project btw! do you plan on adding jitter correction(a la HR) and means to synchronise to the VSYNC fliptime ?

this is the real enemy : http://software.intel.com/en-us/arti...ynchronization

96MB seems quite a lot, why not only 5-6 frames?

hopefully when it'll be mature, James won't mind adding support for it in Reclock

EDIT: oh cool it reads LUT's from yesgrey's software
Old 8th April 2009, 20:08   #8  |  Link
Rectal Prolapse
Registered User
 
Join Date: Mar 2005
Posts: 433
madshi, have you tested this with neuron2's VC1/AVC CUDA decoder, or coreavc's CUDA? That can get you back some h/w acceleration.
Old 8th April 2009, 20:32   #9  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by leeperry View Post
do you plan on adding jitter correction(a la HR) and means to synchronise to the VSYNC fliptime ?
I'm not sure yet which exact method I'll use to get motion display smooth, but I do plan to make sure that at least with a 1:1 match between source frame rate and display refresh rate, motion display will be smooth. Such a feature is not implemented yet, though. After all, this is just a very first beta release to get some feedback from users with different graphics cards.

Quote:
Originally Posted by leeperry View Post
96MB seems quite a lot, why not only 5-6 frames?
Those 96MB are spent on the 3dlut file (for YCbCr -> RGB conversion, gamut & gamma correction), not on frame storage!

Quote:
Originally Posted by Rectal Prolapse View Post
madshi, have you tested this with neuron2's VC1/AVC CUDA decoder, or coreavc's CUDA? That can get you back some h/w acceleration.
I have an ATI card, so I can't test that. However, yesgrey has already tested CUDA + madVR and generally it works, although it seems that with a low-end graphics card using both CUDA + madVR at the same time may be too much of a burden for the GPU.
Old 8th April 2009, 21:21   #10  |  Link
Atak_Snajpera
RipBot264 author
 
Join Date: May 2006
Location: Poland
Posts: 7,886
@madshi
Could you also post screenshot from MPC-HC with enabled shader YV12 Chroma Upscaling (EVR Custom)
Old 8th April 2009, 22:02   #11  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,482
Quote:
Originally Posted by Atak_Snajpera View Post
@madshi
Could you also post screenshot from MPC-HC with enabled shader YV12 Chroma Upscaling (EVR Custom)
the biggest problem w/ EVR is the ugly EE it adds, yet it doesn't show on screenshots
they must be doing it after the RGB32 mixer, behind the scene. Use LSF in ffdshow, play a video in EVR then in HR on a large screen...they don't look quite similar, the picture in EVR is more 3D-like and its EE algorithm interferes w/ LSF, making the picture way oversharpened(VMR9 has the same problem, but less EE)

a D3D renderer that'd keep the PQ untouched(like HR), use yesgrey's LUT and sync to the VSYNC fliptime is one hell of a terrific idea

but on Vista, Aero forces the VSYNC constantly...which makes HR jerky to hell...hopefully madshi will find a workaround for this, making it XP compatible too

Last edited by leeperry; 8th April 2009 at 22:19.
Old 8th April 2009, 22:04   #12  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Atak_Snajpera View Post
Could you also post screenshot from MPC-HC with enabled shader YV12 Chroma Upscaling (EVR Custom)
That's a good one. I've updated the screenshots. You may need to press F5 to reload the images.

In the "Jacques Pibarot" test the MPC-HC YV12 Shader is *almost* as good as madVR. But in the other chroma upsampling test madVR is still in a class of its own.
Old 8th April 2009, 22:09   #13  |  Link
Atak_Snajpera
RipBot264 author
 
Join Date: May 2006
Location: Poland
Posts: 7,886
@leeperry
What does 'EE' stand for?
Old 8th April 2009, 22:13   #14  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Edge Enhancement, which often produces ringing (typically white ghost lines next to high contrast edges).
Old 8th April 2009, 22:14   #15  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,482
Quote:
Originally Posted by Atak_Snajpera View Post
@leeperry
What does 'EE' stand for?
Edge Enhancement : www.videophile.info/Guide_EE/Page_01.htm
Old 8th April 2009, 22:30   #16  |  Link
Mark_A_W
3 eyed CRT supporter
 
Join Date: Jan 2008
Location: Or-strayl-ya
Posts: 563
Thanks Madshi

It would be awesome if you integrated it with Reclock (sync to V-sync or something). Smoothness is the most important thing to me (and no tearing!).
Old 8th April 2009, 23:46   #17  |  Link
DeepBeepMeep
Registered User
 
Join Date: Jun 2006
Posts: 133
Many thanks Madshi for this new creation!

However, I am not sure I have understood its added value compared to VMR9/EVR for typical usage. I mean, is there any real benefit to using this renderer if no resize is needed for the playback of a 1080p movie on a 1080p screen using HDMI 1.0?

On a side note, I have noticed that when a player that uses the renderer (in my case zoomplayer) is in the background, the CPU usage rises to 50%, even in pause mode.
Old 9th April 2009, 00:01   #18  |  Link
yesgrey
Registered User
 
Join Date: Sep 2004
Posts: 1,295
Here is the link for the new reclock release that already supports madVR. I requested it from James (reclock's current developer) and he was nice enough to add it immediately.
http://forum.slysoft.com/showthread.php?t=19931
Old 9th April 2009, 00:29   #19  |  Link
honai
Guest
 
Posts: n/a
I just knew madshi was cooking up something, but I suspected he might be the force behind the mysterious SlyPlayer.

This is really amazing work, madshi, thank you very much! It proves once again that when it comes to obsessing about quality nothing beats German engineering. And I'm in total agreement, when we can get the best quality why compromise?

One question: Does madVR actually support 10-bit data paths in GPU drivers?
Old 9th April 2009, 00:51   #20  |  Link
jmone
Registered User
 
Join Date: Dec 2007
Posts: 652
Is there a list of what apps work with this renderer so far? I'm particularly interested if anyone has it working with:
- JR Media Center
- Arcsoft TMT V3

Thanks
Nathan