Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 31st May 2009, 00:37   #1261  |  Link
yesgrey
Registered User
 
Join Date: Sep 2004
Posts: 1,295
Quote:
Originally Posted by Mark_A_W View Post
I use the Unofficial 1080i IVTC version of Dscaler5 MPEG2 decoder to perform IVTC on TS files
Why don't you set the output to YV12 instead of YUY2? Is YV12 not supported in the unofficial DScaler5 version?
yesgrey is offline   Reply With Quote
Old 31st May 2009, 11:43   #1262  |  Link
DeepBeepMeep
Registered User
 
Join Date: Jun 2006
Posts: 133
Quote:
Originally Posted by yesgrey3 View Post
Why don't you set the output to YV12 instead of YUY2? Is YV12 not supported in the unofficial DScaler5 version?
Well, YUY2 output from the IVTC DScaler is supported for YV12 material, but it is less efficient: in that scenario it needs to create temporary copies of the intermediate YV12 frames used by the IVTC algorithm. I don't think the overhead is significant on today's CPUs, though.

Anyway, you are right: there is no point outputting YUY2 from DScaler when the input is YV12 material, since we might lose CPU time and quality for nothing.
DeepBeepMeep is offline   Reply With Quote
Old 31st May 2009, 12:40   #1263  |  Link
bur
Registered User
 
Join Date: Jul 2005
Posts: 103
I saw the comparison screens in the first post and they looked quite impressive, but I certainly couldn't reproduce those. See this screenshot:



The first is Nvidia VMR9, second Haali, third MadVR. I wasn't able to spot a difference even when comparing single pixels, and definitely not when watching a video. So what's the benefit of the different renderers? Or am I getting something wrong?


Shots were taken on a slightly upscaled video, VMR9 resizing set to Bicubic A=-1.

Last edited by bur; 31st May 2009 at 12:43.
bur is offline   Reply With Quote
Old 31st May 2009, 14:36   #1264  |  Link
Mark_A_W
3 eyed CRT supporter
 
Join Date: Jan 2008
Location: Or-strayl-ya
Posts: 563
Quote:
Originally Posted by yesgrey3 View Post
Why don't you set the output to YV12 instead of YUY2? Is YV12 not supported in the unofficial DScaler5 version?

Sorry, my mistake, I just checked how I have it setup.

I do have Dscaler outputting YV12.

I just have ffdshow after it for stuff like deinterlacing when needed.
Mark_A_W is offline   Reply With Quote
Old 31st May 2009, 16:59   #1265  |  Link
Egh
Registered User
 
Join Date: Jun 2005
Posts: 630
Quote:
Originally Posted by bur View Post
I saw the comparison screens in the first post and they looked quite impressive, but I certainly couldn't reproduce those. See this screenshot:

The first is Nvidia VMR9, second Haali, third MadVR. I wasn't able to spot a difference even when comparing single pixels, and definitely not when watching a video. So what's the benefit of the different renderers? Or am I getting something wrong?

Shots were taken on a slightly upscaled video, VMR9 resizing set to Bicubic A=-1.
Are you sure about that? Any more details? Like how much upscaling was done, and, most importantly, what settings did you use in Haali and mVR?

You can have lots of blur both in Haali and mVR, depending on the settings. A=-1 iirc is the sharpest bicubic setting in VMR9.
Egh is offline   Reply With Quote
Old 31st May 2009, 18:09   #1266  |  Link
Egh
Registered User
 
Join Date: Jun 2005
Posts: 630
Let's do a somewhat more methodologically sound comparison then, shall we?

Same source, same frame, all shots done via print screen.

DVD untouched source, famous touhou
Presented in original resolution + upscale to apply correct A/R (i.e. around 853 width instead of 720, plus normal chroma upscale).
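(For reference, the ~853 figure is just DVD geometry, assuming a 16:9 NTSC title: the disc stores 720x480 non-square pixels, and the playback width is the storage height times the display aspect ratio.)

```python
# Display width of an anamorphic NTSC DVD frame (assumes a 16:9 title).
# Storage is 720x480 with non-square pixels; at playback the frame is
# stretched so that width = storage height * display aspect ratio.
storage_w, storage_h = 720, 480
display_ar = 16 / 9
print(round(storage_h * display_ar))  # 853
```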

VMR9 3D with A=-1:

Haali with -0.8:

mVR with SoftCubic50 on both luma and chroma:

mVR with Lanc4 for luma and CMR for chroma:


Of course, since I used ffdshow for all of the snapshots, in all cases except mVR the initial (or whole) chroma upscale is performed by the decoder.

Softcubic's version is the softest one.
In terms of filesize in PNG,
VMR9 << Haali << Softcubic50 << Lanc4+CMR
Numbers don't lie ))

Last edited by Egh; 31st May 2009 at 18:12.
Egh is offline   Reply With Quote
Old 31st May 2009, 21:26   #1267  |  Link
bur
Registered User
 
Join Date: Jul 2005
Posts: 103
I used the default settings for Haali and MadVR (whatever those are, I just enabled them in MPC and there were no settings available - maybe that's what I did wrong?).

Other than that, the frames are still nearly 100% equal. I don't know what Egh means by "more methodologically sound", but of course I used the same frames for all three shots. Pictures were saved as BMP through MPC and later converted to PNG with Paint.Net.

I have to admit though there are some differences to be seen in Egh's shots. But still nowhere as drastic as in the thread's first post where it looks like MadVR was far superior to any other renderer.

And even though there are some differences if you compare the pictures posted by Egh on a per-pixel basis, I'm not sure you would notice them when watching the video, especially if it's not drawn animation but "real"-life footage. I doubt you'd really notice much between those four settings/renderers; VMR9 and MadVR with Lanczos4 look very much the same. It would make an interesting experiment: have someone else set up the video with different renderers and then watch them to compare quality.



PS: Where can I change the MadVR settings?

Last edited by bur; 31st May 2009 at 21:32.
bur is offline   Reply With Quote
Old 1st June 2009, 00:48   #1268  |  Link
Hypernova
Registered User
 
Join Date: Feb 2006
Posts: 293
Quote:
Originally Posted by bur View Post
I used the default settings for Haali and MadVR (whatever those are, I just enabled them in MPC and there were no settings available - maybe that's what I did wrong?).

Other than that, the frames are still nearly 100% equal. I don't know what Egh means by "more methodologically sound", but of course I used the same frames for all three shots. Pictures were saved as BMP through MPC and later converted to PNG with Paint.Net.

I have to admit though there are some differences to be seen in Egh's shots. But still nowhere as drastic as in the thread's first post where it looks like MadVR was far superior to any other renderer.

And even though there are some differences if you compare the pictures posted by Egh on a per-pixel basis, I'm not sure you would notice them when watching the video, especially if it's not drawn animation but "real"-life footage. I doubt you'd really notice much between those four settings/renderers; VMR9 and MadVR with Lanczos4 look very much the same. It would make an interesting experiment: have someone else set up the video with different renderers and then watch them to compare quality.



PS: Where can I change the MadVR settings?
Maybe you shouldn't worry too much about that? If you don't see the difference, that's ok. If you're happy with what you have, then that's good. You don't have to try to see the difference if it does not gain you anything right? For me, I'm still waiting for madVR to play smoothly and have subtitle support because I can see the difference while watching the video. If you want to know what I'm talking about, look at my posts around page 42-45.

You can change madVR settings by going to Play -> Filters -> madVR (in MPC-HC)
__________________
Spec: Intel Core i5-3570K, 8g ram, Intel HD4000, Samsung U28D590 4k monitor+1080p Projector, Windows 10.

Last edited by Hypernova; 1st June 2009 at 00:50.
Hypernova is offline   Reply With Quote
Old 1st June 2009, 11:42   #1269  |  Link
mr.duck
quack quack
 
Join Date: Apr 2009
Posts: 259
You can also change settings with a DirectShow filter manager.
mr.duck is offline   Reply With Quote
Old 1st June 2009, 16:42   #1270  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by pie1394 View Post
If the thread which controls the next displayed frame is triggered by the GPU's ISR each time the displayed buffer has been flipped, 3 video frame buffers are usually more than enough to provide smooth presentation.

Large Queue Depth at the decoder and video post-processing / rendering stages often yield bad user experience on trick play actions, especially for some stream contents which are not well muxed.
I don't believe queue depth will be an issue for trick play, once madVR is a bit better optimized. E.g. it would be easy enough to instantly clear all queues when trick play is initiated.

Quote:
Originally Posted by pie1394 View Post
The better design on resource-limited embedded systems often lets the demuxer control the sending sequence to the video/audio decoders, so that most filters in the decoding chain do not need ridiculously many buffers in the queue. (Usually 2~3 are more than enough, 0 for a processing chain run by the same thread)
In the madVR logs I've seen that the highest priority thread in madVR sometimes doesn't get called for up to 2 frames in a row. That's why I chose a queue depth of 8 frames, just to be sure.

Quote:
Originally Posted by pie1394 View Post
Due to the fact that different filter threads often have different CPU priorities, a too-large queue depth makes the situation worse. Some CPU-time-consuming threads could eat up most of the CPU time before other, lower-priority threads get CPU time again.
The rendering thread has a lower priority than the presentation thread, so I don't see why a large queue depth should be a problem. Anyway, it will be easy enough later to reduce queue depth, if that should prove to be better (which I don't believe).

Quote:
Originally Posted by wayland View Post
mpc-hc's internal vc1 decoder produces images like this when playing vc1 in m2ts files
That's a bug in older MPC-HC versions. Please use a newer MPC-HC build.

Quote:
Originally Posted by STaRGaZeR View Post
If you feed it with RGB32, the GPU can't do any post processing AFAIK.
Then why does VMR9 not output the RGB32 values I feed to it on my PC?

Quote:
Originally Posted by Neeto View Post
I'm seeking guidance on the minimum ATI graphics card that "should" work with the stable madVR. I know there is a bit of "looking in the crystal ball" about this
Yes, it's more guessing than knowing right now.

Quote:
Originally Posted by Neeto View Post
I'd also like to clarify if the plan is to have de-interlacing, EE, de-noise, etc. still work on the ATI cards once madVR is complete. I think the answer is yes, but....
I don't like to talk about future plans. But I can tell you that deinterlacing is definitely not planned right now. Noise reduction is also rather unlikely because both deinterlacing and noise reduction can easily be done via CPU before sending the images to madVR.

Quote:
Originally Posted by cyberbeing View Post
With 0.10, what I have seen is that if the decode and render buffers/queues are not completely full, madVR has the potential to drop/delay frames.

[...]

you may want to look into dynamically limiting the maximum buffer/queue size to what the GPU can always keep full. I wouldn't be surprised if that fixed the delayed/dropped frame problem I have occasionally run into when the render queue drops to 7/8.
Increasing the queue size won't help, unless your queue goes lower than 3-4.

Quote:
Originally Posted by ajp_anton View Post
What kind of resize is most demanding, maximum upscale, maximum downscale, or some "odd" non-integer resizes?
Higher input resolutions are more demanding than lower res.
Higher output resolutions are more demanding than lower res.
Downscaling is more demanding than upscaling.

Quote:
Originally Posted by lunkens View Post
will there be a x64 version?
Quote:
Originally Posted by t3nk3n View Post
Is there plans for a x64 version?
Why don't you guys search this thread for "x64"?

Quote:
Originally Posted by Blight View Post
there is no reason to delay playback until the refresh rate is properly determined. At least it should be optional as most users would rather have playback start as soon as possible and take these few seconds to detect the refresh rate while the video is already playing.
madVR 0.10 is unpolished in many ways. Things like delays, crashes, trick play etc will get better in future versions.

Quote:
Originally Posted by ikarad View Post
There is a very big problem (I think it's a limitation of the beta stage) with madVR. With VMR9 or EVR, color profiles (.icm) work, but with madVR color profiles don't work (it's as if overlay were selected all the time).

I could never use madVR if this problem is not corrected, because by default my monitor is not calibrated and the default colors are bad.
I don't know whether .icm color profiles work or don't work. If they work in games, they should also work in madVR, because madVR basically behaves like a game.

However, I'd not be sad at all if the .icm color profiles didn't work because the plan is to use cr3dlut for complete display calibration.

Quote:
Originally Posted by leeperry View Post
but anyway, yeah my CRT is way off...so I use a CLUT in the graphic card to get it to D65/2.2, then I do gamut conversion on top of it
Why would you want to use CLUT + gamut conversion in 2 different steps? I'm not really an expert in this area, but according to my understanding in the end cr3dlut is supposed to do *all* calibration work in one step.

Quote:
Originally Posted by leeperry View Post
cr3dlut works in 8bit, the graphic card's CLUT is 10 bit solid
Careful. Are you talking about input or output bitdepth? cr3dlut uses 8bit input and 16bit output! I doubt CLUT can compete with that. Maybe eventually CLUT will be 10bit input, 10bit output. But I rather think it's probably 8bit input, 10bit output. So worse in every way compared to cr3dlut. Also, the next question would be: does the CLUT round higher bitdepth input data down to the CLUT bitdepth? Or does it interpolate? madVR does trilinear interpolation within the 8bit 3D LUT. Interpolating an 8bit LUT is probably better than rounding to a 10bit LUT.
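For readers unfamiliar with the technique: trilinear interpolation in a 3D LUT blends the eight lattice corners surrounding the input color. A minimal CPU sketch (illustrative only; madVR runs this on the GPU, and the real lattice size and output bitdepth of cr3dlut differ):

```python
import numpy as np

def apply_3dlut(rgb, lut):
    """Trilinear lookup of one RGB triple (floats in [0,1]) in a 3D LUT.

    lut has shape (N, N, N, 3): a lattice of output colors. CPU sketch
    of the interpolation idea only, not madVR's actual implementation.
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo  # fractional position inside the lattice cell

    out = np.zeros(3)
    # Blend the 8 corners of the enclosing cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * lut[lo[0] + dx * (hi[0] - lo[0]),
                               lo[1] + dy * (hi[1] - lo[1]),
                               lo[2] + dz * (hi[2] - lo[2])]
    return out

# Identity LUT: output equals input, so interpolation is exact.
n = 9
axis = np.linspace(0, 1, n)
ident = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)
print(apply_3dlut([0.3, 0.6, 0.9], ident))  # ~[0.3 0.6 0.9]
```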

Quote:
Originally Posted by tetsuo55 View Post
I just saw another thread about a panasonic TV that takes any signal and interpolates that to 600hz.
This TV is not really able to display 600 different full bitdepth images per second. That's just another marketing trick. You know that plasmas have to use dithering to be able to produce subpixel color intensities other than "on" and "off", right? Panasonic marketing conveniently includes these dithering steps in their calculation. This way they can talk of 600Hz.

Quote:
Originally Posted by cheetah111 View Post
the picture on my pc2 obtained from the filter combination: Haali Media Splitter, Ffdshow, madVR, although very nice, seems a little dark compared to what I see from the filter combination: Haali Media Splitter, CyberLink H.264/AVC decoder (PDVD8), vmr9 renderless.
Try switching madVR to video levels.

Quote:
Originally Posted by Casshern View Post
The 0.10 version is much slower on my 2600 Ati card.
madVR 0.10 is optimized to achieve smooth motion on capable graphics cards. It seems that the rendering approach used by madVR 0.10 does not play nice with slower/older graphics cards, unfortunately. I don't think perfect smooth playback will ever be possible on such slower/older GPUs *in windowed mode*. I expect, however, to achieve quite good (maybe perfect) results with fullscreen exclusive mode even on some older cards.

Quote:
Originally Posted by Thunderbolt8 View Post
how do delayed frames alter the viewing experience? with dropped frames, there is stuttering, but how do only delayed frames change the look of the movie?
Depends a bit on the refresh rate of your display. A "delayed frame" simply means that madVR didn't manage to display the frame on the VSync it was planning to display it. The lower your refresh rate, the bigger the hit on smoothness will be. If you have 1:1 between source framerate and display refresh rate, every delayed frame usually also results in a dropped frame. Delayed frames usually show as motion stutter, just like dropped frames.
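The 1:1 case can be sketched numerically (hypothetical timings, not madVR internals): once one frame misses its VSync, every following frame is late too until a frame is dropped to resynchronize.

```python
import math

# Toy VSync model (hypothetical numbers, not madVR internals).
# 25 fps source on a 25 Hz display: frame i "owns" VSync slot i. A frame
# that isn't rendered before its slot slips to the next one, stealing
# the slot of the following frame.
refresh = 25.0
vsync = 1.0 / refresh
ready = [i * vsync for i in range(6)]   # times each frame is rendered
ready[2] += 0.010                       # frame 2 finishes 10 ms late

delayed = 0
taken = -1                              # last VSync slot already used
for i, t in enumerate(ready):
    slot = max(math.ceil(t / vsync - 1e-9), taken + 1)
    if slot > i:
        delayed += 1                    # missed its intended VSync
    taken = slot

# Every frame from the late one onward misses its slot, so in practice
# one frame must be dropped to get back in sync.
print(delayed)  # 4
```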

Quote:
Originally Posted by bur View Post
I saw the comparison screens in the first post and they looked quite impressive, but I certainly couldn't reproduce those.
As I've already explained several times, madVR is not expected to magically improve image quality by 200% in every single scene. What madVR is about is mainly:

(1) as mathematically accurate rendering as possible
(2) being independent of stupid driver behavior and driver bugs
(3) included display calibration

Some other renderers may come close to madVR image quality in specific scenes, and then fail in other scenes. In every day life scenes the difference between madVR and other renderers can be very small. But there are specific scenes where other renderers sometimes stumble. The comparison screens on the first page show such specific scenes. If you want to reproduce such big differences on your PC, try to find scenes that are similar. E.g. look for scenes with lots of red on black background ("chroma upsampling"). Or look for scenes with smooth color gradients ("dithering"). Or look for Hypernova's posts in this thread. He's posted some nice real life comparison screenshots where madVR produces visibly better results. Some of these image quality differences are harder to see in motion while other differences are actually *easier* to see in motion. E.g. banding artifacts produced by not using proper dithering can be extremely annoying in motion.
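The banding point is easy to demonstrate: quantize a smooth gradient to a handful of output levels with and without dithering (a generic random-dither sketch; madVR's actual dithering algorithm is not specified here).

```python
import random

# Banding demo: quantize a smooth 0..1 gradient to 4 output levels.
# Plain truncation produces flat bands with visible step edges; adding
# noise before rounding (generic random dither -- madVR's exact
# algorithm differs) trades the bands for fine noise while keeping the
# correct average level.
levels = 4
grad = [i / 255 for i in range(256)]

truncated = [round(v * (levels - 1)) / (levels - 1) for v in grad]

random.seed(0)
def dither(v):
    x = v * (levels - 1) + random.uniform(-0.5, 0.5)
    return min(max(round(x), 0), levels - 1) / (levels - 1)
dithered = [dither(v) for v in grad]

# Inside one band, truncation is perfectly flat (a step edge waits at
# each band border); the dithered signal still averages out to the
# underlying gradient.
print(set(truncated[96:112]))                 # a single flat value
print(abs(sum(dithered) / 256 - 0.5) < 0.05)  # average level preserved
```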

One very big annoyance factor I find with other renderers is that you never know exactly what you will get. Depending on OS, graphics card model, renderer, driver revision, connection type and even display you can get different results. E.g. I've found that sometimes simply asking PowerStrip to change refresh rate can result in the GPU switching between video <-> PC levels. Or some people seem to get good chroma upsampling quality from ATI cards, while other people don't and nobody knows why exactly. With madVR you don't have any such problems. You get 100% the same image quality on every graphics card and every OS because madVR simply doesn't leave the GPU any room for interpretation...
madshi is offline   Reply With Quote
Old 1st June 2009, 16:43   #1271  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Sorry guys, another weekend without a new madVR version. I'm quite busy these days. But don't be afraid, I don't plan to stop development in the long run!
madshi is offline   Reply With Quote
Old 1st June 2009, 18:00   #1272  |  Link
SpaceAgeHero
Blu-ray Fan
 
SpaceAgeHero's Avatar
 
Join Date: Jul 2008
Posts: 26
Quote:
Originally Posted by madshi View Post
Sorry guys, another weekend without a new madVR version. I'm quite busy these days. But don't be afraid, I don't plan to stop development in the long run!
Glad you're alive.
SpaceAgeHero is offline   Reply With Quote
Old 1st June 2009, 18:04   #1273  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by madshi View Post
I don't know whether .icm color profiles work or don't work. If they work in games, they should also work in madVR, because madVR basically behaves like a game.

Careful. Are you talking about input or output bitdepth? cr3dlut uses 8bit input and 16bit output! I doubt CLUT can compete with that. Maybe eventually CLUT will be 10bit input, 10bit output. But I rather think it's probably 8bit input, 10bit output. So worse in every way compared to cr3dlut. Also, the next question would be: does the CLUT round higher bitdepth input data down to the CLUT bitdepth? Or does it interpolate? madVR does trilinear interpolation within the 8bit 3D LUT. Interpolating an 8bit LUT is probably better than rounding to a 10bit LUT.
1) Simple ICM files do not work in games... they only work in color-managed apps (Firefox/Photoshop/etc).
The only way to get a LUT calibration that works all the time is w/ a more complicated ICC 4.0 file using the <LUT> tags, or w/ ARGYLLCMS.
ARGYLLCMS will play zillions of test patterns automatically (for 25 minutes in HQ mode), and in the end will make a 256-level 1D LUT for R/G/B that will automatically make the graphics card match perfect calibration (D65/2.2 mostly, and you can actually check the calibration through Color.HCFR afterwards... it's stellar! ΔE is usually 1 on the whole IRE scale )

if someday cr3dlut is able to import the .cal files from ARGYLLCMS, this might indeed become an all-in-one solution

2)well yeah, true...I know the CLUT works in 10bit on CRT, so mostly it's 8bit input > 10bit CLUT > 10bit output over VGA(ARGYLLCMS can measure the CLUT accuracy).

but DVI is 8bit anyway, so how do the 10bit get encoded to 8bit TMDS? no idea, but it's prolly ugly

anyway, I only use the graphic card's CLUT over VGA on my CRT, on DVI I leave the CLUT untouched.

Last edited by leeperry; 1st June 2009 at 18:13.
leeperry is offline   Reply With Quote
Old 1st June 2009, 18:24   #1274  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
Quote:
Originally Posted by cyberbeing View Post
With 0.10, what I have seen is that if the decode and render buffers/queues are not completely full, madVR has the potential to drop/delay frames.

[...]

you may want to look into dynamically limiting the maximum buffer/queue size to what the GPU can always keep full. I wouldn't be surprised if that fixed the delayed/dropped frame problem I have occasionally run into when the render queue drops to 7/8.
Increasing the queue size won't help, unless your queue goes lower than 3-4.
I was actually suggesting dynamically decreasing the maximum queue size (i.e. if the GPU is only able to keep the render queue full at 7/8 100% of the time, change the limit to 7/7, and the same idea for the decoding queue: if it can only be kept 100% of the time at 14/16, change the limit to 14/14). In other words, a high queue limit is stressing out some GPUs which are unable to keep up.

If you have another idea of why I always get dropped/delayed frames when the decoder and/or render queues drop to only 15/16 and/or 7/8 respectively, I'm open to suggestions about how it can be fixed.

Edit: Also, is there any chance for you to quickly make a stop-gap release (something like 0.10b) that only fixes the remembering of luma scaling settings? It's been two weeks now, and having to manually set my scaling settings for every single video I open is getting very annoying, very fast. I hope we don't have to wait another month or something just to see this minor thing fixed.

Last edited by cyberbeing; 1st June 2009 at 18:37.
cyberbeing is offline   Reply With Quote
Old 1st June 2009, 18:59   #1275  |  Link
honai
Guest
 
Posts: n/a
Well, it doesn't annoy everyone. I'm fine with it because it's my preferred setting anyway.
  Reply With Quote
Old 1st June 2009, 19:36   #1276  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by honai View Post
Well, it doesn't annoy everyone. I'm fine with it because it's my preferred setting anyway.
Blah, SoftCubic50 is excessively soft for luma upscaling and nigh unwatchable for me. At least the chroma setting sticks, because SoftCubic100 blurs away most of the chroma detail. You are very lucky if you enjoy such soft settings without crazy over-sharpening of your source, but I respect that everyone has their own tastes...

My preferred:
Upscaling: Spline36
Downscaling: Spline64
Chroma: Mitchell-Netravali

Last edited by cyberbeing; 1st June 2009 at 19:50.
cyberbeing is offline   Reply With Quote
Old 1st June 2009, 22:46   #1277  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by leeperry View Post
ARGYLLCMS will play zillions of test patterns automatically (for 25 minutes in HQ mode), and in the end will make a 256-level 1D LUT for R/G/B that will automatically make the graphics card match perfect calibration (D65/2.2 mostly, and you can actually check the calibration through Color.HCFR afterwards... it's stellar! ΔE is usually 1 on the whole IRE scale )
256 levels? So obviously the CLUT is only 8bit input -> 10bit output. Compared to cr3dlut which is 8bit input -> 16bit output.

But is 3x 1D LUT good enough? I think R/G/B influence each other. Why else would cr3dlut use one big 3D LUT instead of 3x 1D LUT?
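The crosstalk point can be illustrated with a toy gamut-style matrix (illustrative numbers only, not any real display's behavior): with channel mixing, the same red input produces different red outputs depending on green, which no per-channel 1D LUT can express.

```python
import numpy as np

# Illustrative only: a gamut-style 3x3 matrix mixes channels, which a
# per-channel (3x1D) LUT cannot express -- in a 1D LUT each output
# channel depends on one input channel alone.
M = np.array([[0.9,  0.1, 0.0],
              [0.05, 0.9, 0.05],
              [0.0,  0.1, 0.9]])

a = M @ np.array([1.0, 0.0, 0.0])   # red output depends on R...
b = M @ np.array([1.0, 1.0, 0.0])   # ...but changing G moves it too
print(a[0], b[0])  # 0.9 1.0 -> same R input, different R output
```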

Quote:
Originally Posted by leeperry View Post
if someday cr3dlut enables to import the .cal files from ARGYLLCMS, this might indeed become an all-in-one solution
Well, you have to talk to yesgrey3 about that...

Quote:
Originally Posted by cyberbeing View Post
I was actually suggesting dynamically decreasing the maximum queue size (i.e. if the GPU is only able to keep the render queue full at 7/8 100% of the time, change the limit to 7/7, and the same idea for the decoding queue: if it can only be kept 100% of the time at 14/16, change the limit to 14/14). In other words, a high queue limit is stressing out some GPUs which are unable to keep up.
No offense, but you don't really seem to understand how these queues work or why they sometimes go down and up again. Trust me, increasing or decreasing the queue depth would have exactly zero effect on stuttering in your case, as far as I can say from your descriptions about how your PC behaves. The key point you're missing is that once the queues have reached their max values, the CPU/GPU stress factor is exactly the same as if the queue was only 1 frame big! After all the queues are only slowly emptied. The CPU/GPU is only stressed in the very beginning of movie playback where the queues need to be filled up. Once the queues are full, stress is over.

If I decreased the queue to only 4 entries, you'd have your queue oscillating between 3/4 and 4/4. If I increased the queue to 16 entries, you'd have your queue oscillating between 15/16 and 16/16.
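The steady-state argument can be illustrated with a toy producer/consumer simulation (a hypothetical model, not madVR code): once the queue is full, the renderer only works as often as the presenter drains, regardless of depth.

```python
from collections import deque

def simulate(depth, steps=2000, warmup=1000):
    """Toy render queue: the presenter removes one frame per tick and
    the renderer refills whenever there is room. Returns frames
    rendered per tick after the queue has filled (hypothetical model,
    not madVR code)."""
    q = deque()
    rendered = 0
    for t in range(steps):
        while len(q) < depth:      # renderer tops the queue up
            q.append(t)
            if t >= warmup:
                rendered += 1
        q.popleft()                # presenter consumes one frame per VSync
    return rendered / (steps - warmup)

# Steady-state render rate is one frame per tick regardless of depth:
# a deeper queue only changes the initial fill, not the ongoing load.
print(simulate(4), simulate(16))   # 1.0 1.0
```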

Quote:
Originally Posted by cyberbeing View Post
Edit: Also is there any chance for you to quickly make a stop-gap release (something like 0.10b) that only fixes the remembering of luma scaling settings?
Even posting in the forum is something I shouldn't really do right now. I've too much "real" work on my hands.

Quote:
Originally Posted by cyberbeing View Post
SoftCubic100 blurs away most of the chroma detail.
Do you have a real-world sample where the loss in chroma detail is visible? Having such a sample here on my PC might help madVR development!
madshi is offline   Reply With Quote
Old 1st June 2009, 23:18   #1278  |  Link
leeperry
Kid for Today
 
Join Date: Aug 2004
Posts: 3,477
Quote:
Originally Posted by madshi View Post
256 levels? So obviously the CLUT is only 8bit input -> 10bit output. Compared to cr3dlut which is 8bit input -> 16bit output.

But is 3x 1D LUT good enough? I think R/G/B influence each other. Why else would cr3dlut use one big 3D LUT instead of 3x 1D LUT?
1)well cr3dlut will be forced to dither to 8bit anyway? RGB32 is 8bit(and so is TMDS over DVI/HDMI <1.3), only the CLUT is 10bit..

2)3x1D LUT is perfectly fine for calibration...we just want the curves to meet properly to reach perfect D65/2.2 levels...as I understand it a 3D LUT is only necessary for gamut transfer functions(which you cannot do w/ 1D LUT's) : http://www.color.org/ICC_Chiba_07-06..._DMP_Float.pdf

this PDF is simple, yet very instructive

the graphic card's CLUT does not support 3D LUT's, only 3x1D LUT's...they always make sure to do it as sloppily as possible as you know

a .cal file looks like this :

Code:
KEYWORD "RGB_I"
NUMBER_OF_FIELDS 4
BEGIN_DATA_FORMAT
RGB_I RGB_R RGB_G RGB_B 
END_DATA_FORMAT

NUMBER_OF_SETS 256
BEGIN_DATA
0.0000 0.083189 0.021229 0.023934 
3.9216e-003 0.087305 0.047484 0.028051 
7.8431e-003 0.091516 0.061404 0.032320 
0.011765 0.095825 0.071292 0.036751 
0.015686 0.10023 0.079941 0.041354 
0.019608 0.10475 0.088256 0.046137 
0.023529 0.10937 0.096318 0.051114 
0.027451 0.11409 0.10400 0.056294 
0.031373 0.11882 0.11121 0.061668 
0.035294 0.12350 0.11782 0.067159 
0.039216 0.12807 0.12387 0.072689 
0.043137 0.13247 0.12948 0.078181 
0.047059 0.13671 0.13473 0.083563 
0.050980 0.14080 0.13966 0.088789 
0.054902 0.14475 0.14437 0.093852 
0.058824 0.14860 0.14889 0.098760 
~
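The table above is plain CGATS-style text and trivial to parse; a minimal sketch of reading the data section (hypothetical helper name, assuming the simple layout shown above rather than the full CGATS grammar):

```python
def parse_cal(text):
    """Parse the BEGIN_DATA section of an ArgyllCMS .cal file into a
    list of (input, r, g, b) float tuples. Sketch only -- assumes the
    simple layout shown above, not the full CGATS grammar."""
    rows, in_data = [], False
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN_DATA":
            in_data = True
        elif line == "END_DATA":
            break
        elif in_data and line:
            rows.append(tuple(float(x) for x in line.split()))
    return rows

sample = """NUMBER_OF_SETS 2
BEGIN_DATA
0.0000 0.083189 0.021229 0.023934
3.9216e-003 0.087305 0.047484 0.028051
END_DATA"""
print(parse_cal(sample)[1][0])  # 0.0039216
```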
ARGYLLCMS's coder is very helpful, and would be more than happy to help any CMS to improve...so if yesgrey wants to talk w/ him, he can simply sign up on his mailing list or drop him an email.

I'm sure he could even output 3dlut files if we asked nicely

that's my stock CRT :



that's after ARGYLLCMS in HQ mode for 25 mins(in a pitch black room w/ the i1d2 stuck to the CRT) :


Last edited by leeperry; 1st June 2009 at 23:58.
leeperry is offline   Reply With Quote
Old 2nd June 2009, 00:02   #1279  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by leeperry View Post
1)well cr3dlut will be forced to dither to 8bit anyway? RGB32 is 8bit(and so is TMDS over DVI/HDMI <1.3), only the CLUT is 10bit..
cr3dlut does not dither down, madVR does. However, dithering down to 8bit should produce very good results. I don't think anyone could see a difference between dithered 8bit and 10bit. But anyway, sooner or later madVR is going to support Windows 7 higher RGB bitdepths. So if your GPU and display supports it, in the future you can enjoy full 16bit per color. But again: I don't expect that many people will be able to see a difference to dithered 8bit...

Quote:
Originally Posted by leeperry View Post
ARGYLLCMS's coder is very helpful, and would be more than happy to help any CMS to improve...so if yesgrey wants to talk w/ him, he can simply sign up on his mailing list or drop him an email.
Sounds like a good idea to me, but that's really up for yesgrey3 to decide.
madshi is offline   Reply With Quote
Old 2nd June 2009, 00:46   #1280  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by madshi View Post
No offense, but you don't really seem to understand how these queues work or why they sometimes go down and up again....
Sorry, I guess I don't. I apologize, since it really seems I'm misunderstanding something.

I was just assuming it was something like Haali Renderer's queue, where if you set it too high and you're decoding difficult material, the jitter will go through the roof and stay there. Reducing the Haali Renderer frame queue to something more reasonable will fix these situations when they happen.

Since you wrote madVR and you're saying it's not the case, I'll believe you. I just hope you are able to fix whatever you believe really is causing delayed/dropped frames with only small drops in the queues, since you seem to be implying in previous posts that small queue drops like these should not be a major issue, but they obviously are on my PC.

Quote:
Originally Posted by madshi View Post
Even posting in the forum is something I shouldn't really do right now. I've too much "real" work on my hands.
Understood. I'll continue to patiently wait then.

Quote:
Originally Posted by madshi View Post
Do you have a real-world sample where the loss in chroma detail is visible? Having such a sample here on my PC might help madVR development!
I believe it was from some anime DVD source, but it was a while ago, so I can't remember exactly which I tested it on. Some previously noisy chroma became nearly flat from the blurring when upscaled with chroma set to SoftCubic100. Mitchell-Netravali was the lowest common denominator I could find which didn't seem to overly blur the chroma, and isn't overly sharp either. I'll see if I can track down another source where it is extremely noticeable.

Last edited by cyberbeing; 2nd June 2009 at 00:50.
cyberbeing is offline   Reply With Quote