Old 1st June 2009, 16:42   #1270  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by pie1394 View Post
If the thread which controls the next displayed frame is triggered by the GPU's ISR each time the displayed buffer has been flipped, 3 video frame buffers are usually more than enough to provide smooth presentation.

Large Queue Depth at the decoder and video post-processing / rendering stages often yield bad user experience on trick play actions, especially for some stream contents which are not well muxed.
I don't believe queue depth will be an issue for trick play once madVR is a bit better optimized. For example, it would be easy enough to clear all queues instantly when trick play is initiated.
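To sketch the idea (hypothetical code, not madVR's actual implementation): as long as each queue is guarded by a lock, a seek can flush every queued frame in one cheap operation, so the queue depth itself doesn't slow trick play down.

```python
import threading
from collections import deque

class FrameQueue:
    """A tiny thread-safe frame queue that can be flushed instantly,
    e.g. when the user seeks (trick play). Illustrative only."""
    def __init__(self):
        self._frames = deque()
        self._lock = threading.Lock()

    def push(self, frame):
        with self._lock:
            self._frames.append(frame)

    def pop(self):
        with self._lock:
            return self._frames.popleft() if self._frames else None

    def flush(self):
        # drop all queued frames at once -- effectively instant,
        # regardless of how deep the queue is
        with self._lock:
            self._frames.clear()

    def __len__(self):
        with self._lock:
            return len(self._frames)
```
On a seek, the player would simply call `flush()` on the decode and render queues before feeding the new position.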

Quote:
Originally Posted by pie1394 View Post
The better design on a resource-limited embedded system often lets the demuxer control the sending sequence to the video/audio decoders, so that most filters in the decoding chain do not need ridiculously many buffers in the queue. (Usually 2~3 are more than enough, 0 for a processing chain run on the same thread.)
In the madVR logs I've seen that the highest priority thread in madVR sometimes doesn't get called for up to 2 frames in a row. That's why I chose a queue depth of 8 frames, just to be sure.

Quote:
Originally Posted by pie1394 View Post
Because different filter threads often have different CPU priorities, a too-large queue depth makes the situation worse. Some CPU-intensive threads could eat up most of the CPU time before other, lower-priority threads get CPU time again.
The rendering thread has a lower priority than the presentation thread, so I don't see why a large queue depth should be a problem. Anyway, it will be easy enough to reduce the queue depth later, if that should prove to be better (which I don't believe).
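One reason a deep queue need not starve anything: if the queues are bounded, a fast producer simply blocks once its queue is full and consumes no CPU until a slot frees up. A minimal sketch with Python's standard `queue.Queue` (again, not madVR's actual code):

```python
import queue
import threading

# A bounded queue gives natural backpressure: once the rendering thread
# has filled all 8 slots, its put() blocks and it burns no CPU until the
# presentation thread frees a slot by consuming a frame.
frames = queue.Queue(maxsize=8)
produced, presented = [], []

def render():                      # CPU-heavy producer
    for i in range(20):
        frames.put(i)              # blocks while the queue is full
        produced.append(i)

def present():                     # consumer, paced by (simulated) VSync
    for _ in range(20):
        presented.append(frames.get())
        frames.task_done()

t1 = threading.Thread(target=render)
t2 = threading.Thread(target=present)
t1.start(); t2.start()
t1.join(); t2.join()
```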

Quote:
Originally Posted by wayland View Post
mpc-hc's internal vc1 decoder produces images like this when playing vc1 in m2ts files
That's a bug in older MPC-HC versions. Please use a newer MPC-HC build.

Quote:
Originally Posted by STaRGaZeR View Post
If you feed it with RGB32, the GPU can't do any post processing AFAIK.
Then why does VMR9 not output the RGB32 values I feed to it on my PC?

Quote:
Originally Posted by Neeto View Post
I'm seeking guidance on the minimum ATI graphics card that "should" work with the stable madVR. I know there is a bit of "looking in the crystal ball" about this
Yes, it's more guessing than knowing right now.

Quote:
Originally Posted by Neeto View Post
I'd also like to clarify if the plan is to have de-interlacing, EE, de-noise, etc still work on the ATI cards once madVR is complete. I think the answer is yes, but....
I don't like to talk about future plans. But I can tell you that deinterlacing is definitely not planned right now. Noise reduction is also rather unlikely, because both deinterlacing and noise reduction can easily be done on the CPU before the images are sent to madVR.

Quote:
Originally Posted by cyberbeing View Post
With 0.10, what I have seen is that if the decode and render buffers/queues are not completely full, madVR has the potential to drop/delay frames.

[...]

you may want to look into dynamically limiting the maximum buffer/queue size to what the GPU can always keep full. I wouldn't be surprised if that fixed the delayed/dropped frame problem I have occasionally run into when the render queue drops to 7/8.
Increasing the queue size won't help, unless your queue goes lower than 3-4.

Quote:
Originally Posted by ajp_anton View Post
What kind of resize is most demanding, maximum upscale, maximum downscale, or some "odd" non-integer resizes?
Higher input resolutions are more demanding than lower res.
Higher output resolutions are more demanding than lower res.
Downscaling is more demanding than upscaling.

Quote:
Originally Posted by lunkens View Post
will there be a x64 version?
Quote:
Originally Posted by t3nk3n View Post
Is there plans for a x64 version?
Why don't you guys search this thread for "x64"?

Quote:
Originally Posted by Blight View Post
there is no reason to delay playback until the refresh rate is properly determined. At least it should be optional as most users would rather have playback start as soon as possible and take these few seconds to detect the refresh rate while the video is already playing.
madVR 0.10 is unpolished in many ways. Things like delays, crashes, trick play, etc. will get better in future versions.

Quote:
Originally Posted by ikarad View Post
There is a very big problem (I think it's a limitation of the beta progress) with madVR. With VMR9 or EVR, color profiles (.icm) work, but with madVR they don't (it's as if overlay is selected all the time).

I could never use madVR if this problem is not corrected, because by default my monitors are not calibrated and the default colors are bad.
I don't know whether .icm color profiles work or not. If they work in games, they should also work with madVR, because madVR basically behaves like a game.

However, I'd not be sad at all if the .icm color profiles didn't work because the plan is to use cr3dlut for complete display calibration.

Quote:
Originally Posted by leeperry View Post
but anyway, yeah my CRT is way off...so I use a CLUT in the graphic card to get it to D65/2.2, then I do gamut conversion on top of it
Why would you want to use CLUT + gamut conversion in 2 different steps? I'm not really an expert in this area, but according to my understanding in the end cr3dlut is supposed to do *all* calibration work in one step.
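The "one step" idea boils down to function composition: any chain of per-value transforms can be baked into a single lookup table by sampling the composed function once. A hypothetical sketch (the gamma tweak and gain here are made-up examples, not what cr3dlut actually computes):

```python
# Bake two calibration steps -- here an example 2.2 -> 2.4 gamma tweak
# and an example white-point gain -- into ONE lookup table. At playback
# time only a single LUT lookup per value remains.
N = 256

def gamma_fix(x):
    return x ** (2.4 / 2.2)

def gain(x):
    return min(1.0, x * 0.95)

# sample the composed function once, offline
lut = [gain(gamma_fix(i / (N - 1))) for i in range(N)]

def calibrated(i):
    # one step at playback time, however many steps went into the LUT
    return lut[i]
```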

Quote:
Originally Posted by leeperry View Post
cr3dlut works in 8bit, the graphic card's CLUT is 10 bit solid
Careful. Are you talking about input or output bitdepth? cr3dlut uses 8bit input and 16bit output! I doubt the CLUT can compete with that. Maybe the CLUT is eventually 10bit input / 10bit output, but I rather think it's 8bit input / 10bit output, so worse in every way compared to cr3dlut. The next question would be: does the CLUT round higher-bitdepth input data down to the CLUT's bitdepth, or does it interpolate? madVR does trilinear interpolation between the entries of the 8bit 3dlut. Interpolating an 8bit LUT is probably better than rounding to a 10bit LUT.
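To illustrate why interpolation beats rounding (a generic trilinear sketch, not madVR's shader code): with an identity LUT, trilinear interpolation reproduces *any* intermediate input exactly, whereas a LUT that merely rounds its input to the nearest entry can only return the stored grid values.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def sample_3dlut(lut, N, r, g, b):
    """Trilinear interpolation in an N x N x N LUT; r, g, b in [0, 1]."""
    def axis(v):
        x = v * (N - 1)
        i = min(int(x), N - 2)     # clamp so i+1 stays in range
        return i, x - i
    ri, rf = axis(r); gi, gf = axis(g); bi, bf = axis(b)
    at = lambda i, j, k: lut[i][j][k]
    # interpolate along blue, then green, then red
    c00 = lerp(at(ri,   gi,   bi), at(ri,   gi,   bi+1), bf)
    c01 = lerp(at(ri,   gi+1, bi), at(ri,   gi+1, bi+1), bf)
    c10 = lerp(at(ri+1, gi,   bi), at(ri+1, gi,   bi+1), bf)
    c11 = lerp(at(ri+1, gi+1, bi), at(ri+1, gi+1, bi+1), bf)
    return lerp(lerp(c00, c01, gf), lerp(c10, c11, gf), rf)

# identity LUT for the red channel: entry (i, j, k) stores i / (N - 1)
N = 5
lut = [[[i / (N - 1) for k in range(N)] for j in range(N)]
       for i in range(N)]
```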

Quote:
Originally Posted by tetsuo55 View Post
I just saw another thread about a panasonic TV that takes any signal and interpolates that to 600hz.
This TV is not really able to display 600 different full-bitdepth images per second; that's just another marketing trick. You know that plasmas have to use dithering to be able to produce subpixel color intensities other than "on" and "off", right? Panasonic's marketing conveniently includes these dithering steps in its calculation. This way they can talk of 600Hz.
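For the curious, this is what such subfield dithering amounts to in principle (a simplified sketch; real plasma driving schemes are more elaborate): each cell is only ever "on" or "off" per subfield, and an intermediate brightness is an on/off pattern whose temporal average hits the target.

```python
def subfield_pattern(target, subfields):
    """Binary on/off pattern whose average approximates `target` (0..1),
    using simple temporal error accumulation (sigma-delta style)."""
    acc, pattern = 0.0, []
    for _ in range(subfields):
        acc += target
        if acc >= 0.5:
            pattern.append(1)      # cell fires this subfield
            acc -= 1.0
        else:
            pattern.append(0)      # cell stays dark
    return pattern

# 60% brightness from 10 on/off subfields
p = subfield_pattern(0.6, 10)
avg = sum(p) / len(p)
```
Counting each of those on/off subfields as a "frame" is how 60Hz x 10 subfields becomes "600Hz" in the marketing material.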

Quote:
Originally Posted by cheetah111 View Post
the picture on my pc2 obtained from the filter combination: Haali Media Splitter, Ffdshow, madVR, although very nice, seems a little dark compared to what I see from the filter combination: Haali Media Splitter, CyberLink H.264/AVC decoder (PDVD8), vmr9 renderless.
Try switching madVR to video levels.
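A "slightly dark" (or washed-out) picture is the classic symptom of a levels mismatch: video content usually uses limited-range luma (16-235) while PCs use full range (0-255), and if one link in the chain assumes the wrong convention the image is expanded twice or not at all. The standard conversions, as a sketch:

```python
def video_to_pc(y):
    """Expand limited-range luma (16-235) to full range (0-255)."""
    return max(0, min(255, round((y - 16) * 255 / 219)))

def pc_to_video(y):
    """Compress full range (0-255) into limited range (16-235)."""
    return round(16 + y * 219 / 255)
```
If already-full-range data is compressed again by `pc_to_video`, black is lifted to 16 and white capped at 235, so the whole picture looks dimmer and flatter than it should.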

Quote:
Originally Posted by Casshern View Post
The 0.10 version is much slower on my 2600 Ati card.
madVR 0.10 is optimized to achieve smooth motion on capable graphics cards. It seems that the rendering approach used by madVR 0.10 does not play nice with slower/older graphics cards, unfortunately. I don't think perfect smooth playback will ever be possible on such slower/older GPUs *in windowed mode*. I expect, however, to achieve quite good (maybe perfect) results with fullscreen exclusive mode even on some older cards.

Quote:
Originally Posted by Thunderbolt8 View Post
how do delayed frames alter the viewing experience? with dropped frames, there is stuttering, but how do only delayed frames change the look of the movie?
Depends a bit on the refresh rate of your display. A "delayed frame" simply means that madVR didn't manage to display the frame on the VSync it was planning to display it. The lower your refresh rate, the bigger the hit on smoothness will be. If you have 1:1 between source framerate and display refresh rate, every delayed frame usually also results in a dropped frame. Delayed frames usually show as motion stutter, just like dropped frames.
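A toy simulation makes the 1:1 case concrete (this is an illustration of the scheduling argument, not madVR's presentation code): with one frame per vsync there is no spare vsync to absorb a late frame, so a delay almost inevitably turns into a drop.

```python
def simulate(ready, vsyncs):
    """Frame i is scheduled on vsync i (1:1 framerate : refresh rate).
    A frame that misses its vsync is shown on the next one; if a newer
    frame is ready by then too, the older frame is dropped."""
    shown, dropped, last = [], [], -1
    for t in range(vsyncs):
        due = [i for i in range(len(ready)) if ready[i] <= t and i > last]
        if due:
            dropped.extend(due[:-1])   # superseded frames are dropped
            last = due[-1]
        shown.append(last)             # repeat previous frame if none due
    return shown, dropped

# frame 3 becomes ready one vsync late (at t=4 instead of t=3)
shown, dropped = simulate([0, 1, 2, 4, 4, 5, 6, 7], 8)
```
Frame 2 gets repeated on vsync 3 (the visible stutter), and the delayed frame 3 is then dropped because frame 4 is already due.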

Quote:
Originally Posted by bur View Post
I saw the comparison screens in the first post and they looked quite impressive, but I certainly couldn't reproduce those.
As I've already explained several times, madVR is not expected to magically improve image quality by 200% in every single scene. What madVR is about is mainly:

(1) as mathematically accurate rendering as possible
(2) being independent of stupid driver behavior and driver bugs
(3) included display calibration

Some other renderers may come close to madVR's image quality in specific scenes, and then fail in others. In everyday scenes the difference between madVR and other renderers can be very small. But there are specific scenes where other renderers sometimes stumble. The comparison screens on the first page show such specific scenes. If you want to reproduce such big differences on your PC, try to find similar scenes: e.g. look for scenes with lots of red on a black background ("chroma upsampling"), or for scenes with smooth color gradients ("dithering"). Or look for Hypernova's posts in this thread; he's posted some nice real-life comparison screenshots where madVR produces visibly better results. Some of these image quality differences are harder to see in motion, while other differences are actually *easier* to see in motion. E.g. banding artifacts produced by not using proper dithering can be extremely annoying in motion.
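Why dithering kills banding, in a nutshell (generic illustration; madVR's actual dithering is done in the GPU shaders): adding half-an-LSB of noise before quantizing keeps the *local average* of the output equal to the true value, while plain rounding snaps a smooth gradient onto a few discrete bands.

```python
import random

def quantize(x, levels=16):
    """Plain rounding to `levels` output steps -- produces banding."""
    return round(x * (levels - 1)) / (levels - 1)

def quantize_dithered(x, levels=16):
    """Add +/- half-LSB noise before rounding, so that averaged over
    many pixels/frames the output matches the input exactly."""
    noise = (random.random() - 0.5) / (levels - 1)
    return quantize(min(1.0, max(0.0, x + noise)), levels)

random.seed(1)
x = 0.23
plain = quantize(x)  # snaps to the nearest band (0.2)
dith = sum(quantize_dithered(x) for _ in range(20000)) / 20000
```
The rounded value is off by 0.03 everywhere in that band, which the eye sees as a visible step in a gradient; the dithered average lands essentially on the true value.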

One very big annoyance factor I find with other renderers is that you never know exactly what you will get. Depending on OS, graphics card model, renderer, driver revision, connection type and even display you can get different results. E.g. I've found that sometimes simply asking PowerStrip to change refresh rate can result in the GPU switching between video <-> PC levels. Or some people seem to get good chroma upsampling quality from ATI cards, while other people don't and nobody knows why exactly. With madVR you don't have any such problems. You get 100% the same image quality on every graphics card and every OS because madVR simply doesn't leave the GPU any room for interpretation...