Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 5th March 2014, 05:33   #24261  |  Link
seiyafan
Registered User
 
Join Date: Feb 2014
Posts: 162
This is what I experienced:

[OSD screenshots]

Not sure why I was dropping frames with half of the rendering time.
seiyafan is offline   Reply With Quote
Old 5th March 2014, 06:50   #24262  |  Link
jkauff
Registered User
 
Join Date: Oct 2012
Location: Akron, OH
Posts: 491
Quote:
Originally Posted by Asmodian View Post
I prefer none for hardware decoding, software can handle almost anything while the various hardware options all have their different issues. Keep the GPU free for madVR.
There's one use case where hardware decoding makes sense. If you have an Intel iGPU as well as an NVIDIA/AMD dGPU, you can have LAV using the iGPU while madVR is using the dGPU (this is assuming you have a motherboard/BIOS that allows simultaneous use).

When I'm doing a Handbrake encode, the CPU is at or near 100%. If I want to watch a movie at the same time, off-loading the decoding to QuickSync works great. No dropped frames.
jkauff is offline   Reply With Quote
Old 5th March 2014, 08:13   #24263  |  Link
yok833
Registered User
 
Join Date: Aug 2012
Posts: 73
I would be happy to participate and send your message to the NVIDIA developers... Could you post 2 or 3 addresses where we can contact them?

"Sent from my GT-I9300 using Tapatalk"
yok833 is offline   Reply With Quote
Old 5th March 2014, 09:07   #24264  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
Quote:
Originally Posted by madshi View Post
This question is asked a lot, and there's no simple answer. It all depends on which algorithms you want to use, which input/output resolutions need to be supported and which movie framerate and display refresh rate etc. Just as an example: If you need support for 60p, the GPU has to be almost 2.5x as fast as it would have to be if you limited yourself to 24p. So that shows clearly how much all of this depends on your exact requirements
My setup is a 4K projector, and 99% of the content I use is 23.976fps Blu-Ray content.

I'm running madVR with:
- OpenCL option in general settings disabled
- Image doubling disabled
- Luma and chroma upscaling at Jinc 4 taps with AR enabled
- All "trade quality for performance" boxes disabled
- Dithering option 2 with the 2 checkboxes disabled
- Debanding at 'MID+HIGH'
- GPU hardware decoding disabled (my CPU takes care of this because you want your GPU free for all other madVR options as much as possible)

My GPU is an AMD 280X, overclocked fairly heavily, with the latest 14.2 beta driver. Besides this I'm running an i7 2600K.

With all these settings my OC-ed 280X (which is basically an HD7970 with a little more power) runs at ~65% GPU usage.

Last edited by THX-UltraII; 5th March 2014 at 13:10.
THX-UltraII is offline   Reply With Quote
Old 5th March 2014, 09:17   #24265  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by THX-UltraII View Post
- GPU hardware decoding disabled (my CPU takes care of this because you want your GPU free for all other madVR options as much as possible)
If I'm not mistaken, with Nvidia cards there is a separate Video Engine (PureVideo) decoder processor on the GPU for that.
So everything madVR uses as GPU processing is done on the main processor of the GPU and not on the Video Engine decoder.
Isn't that the case with ATI also?

On the other hand CPU decoding is "less buggy" and is constantly updated.
Maybe nevcairiel can clarify how it works.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.

Last edited by James Freeman; 5th March 2014 at 09:26.
James Freeman is offline   Reply With Quote
Old 5th March 2014, 09:27   #24266  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Shiandow View Post
You seem to assume that you would still calculate the error using the original image.
"newError = originalPixel + cumulatedErrors - drawnPixel". How else would you do it?
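That formula is the core of error diffusion dithering. A one-dimensional sketch (illustrative only, not madVR's actual code; `dither_1d` is my naming):

```python
def dither_1d(pixels, levels=4):
    """Quantize each pixel to the available levels, carrying the
    quantization error forward into the next pixel."""
    step = 1.0 / (levels - 1)
    out, carried = [], 0.0
    for original in pixels:
        target = original + carried          # originalPixel + cumulatedErrors
        drawn = round(target / step) * step  # nearest displayable value
        carried = target - drawn             # newError = original + cumulatedErrors - drawn
        out.append(drawn)
    return out
```

Because the carried error telescopes, the mean of the output stays essentially equal to the mean of the input, which is why only the pattern changes, not the overall brightness.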

Quote:
Originally Posted by Shiandow View Post
Otherwise you're suggesting that changing the values of the image somehow doesn't change the resulting brightness.
You're only changing the pattern, but not the overall brightness. I'm sure about this - unless you also change the way the error is calculated.

Quote:
Originally Posted by iSunrise View Post
It was caused by the backbuffer queue that went down to 0-2/4 or 0-2/8, when smooth motion was active. I solved it by increasing the general CPU queue and the GPU queue, while also increasing the backbuffers for both of the respective modes (windowed/exclusive) to 8, while I had them at 4 before.

It seems smooth motion and 1080p120 require the queues to be (a lot) higher in general.
Yes, smooth motion does often require higher queue values. To be honest, I don't really see any benefit (worth mentioning) in using short queues.

Quote:
Originally Posted by seiyafan View Post
I found that with deinterlacing enabled there are massive frame drops, even though the rendering time is actually lower than when I disabled deinterlacing and used a more demanding upscaling algorithm, where I noticed no frame drops. What's happening?
Quote:
Originally Posted by Asmodian View Post
Deinterlacing is probably doubling the frame rate, so you can only use half the max rendering time before you get dropped frames. Compare "movie frame interval" and the average rendering times in madVR's OSD.
I agree with Asmodian.

Look at your OSD screenshots: With deinterlacing disabled you have rendering times of 39.57ms, with 25fps. Now if you do the math (1000 / 25fps = 40ms), the rendering times are just small enough. With deinterlacing enabled you have combined rendering times + deinterlacing times of 21.95ms, and you get 50fps out of the deinterlacer. Do the math (1000 / 50fps = 20ms) and you'll see that the rendering times are too high. The deinterlacer doubles the framerate which means rendering times must be half as high to avoid dropped frames.
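The budget arithmetic above can be sketched as follows (a hedged sketch; the function names are mine, not madVR's):

```python
def frame_budget_ms(fps):
    """Time available to render one frame at the given output frame rate."""
    return 1000.0 / fps

def will_drop_frames(rendering_ms, fps):
    """Frames drop once rendering takes longer than the frame interval."""
    return rendering_ms > frame_budget_ms(fps)

# Deinterlacing off: 39.57ms rendering vs a 40ms budget at 25fps -> just OK.
# Deinterlacing on: 50fps output halves the budget to 20ms, so 21.95ms drops.
```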
madshi is offline   Reply With Quote
Old 5th March 2014, 09:42   #24267  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
madshi,
The greater the rendering time, the more the video will lag behind the audio?
Is there synchronization between them even with higher rendering time?

Or is it how much time it takes for a single frame to be rendered, and if that time is longer than the video's frame interval (1000/24 = 41.7ms), frames will be dropped?
(as I understood from your last post).

Last edited by James Freeman; 5th March 2014 at 09:50.
James Freeman is offline   Reply With Quote
Old 5th March 2014, 10:07   #24268  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 496
Quote:
Originally Posted by madshi View Post
Yes, smooth motion does often require higher queue values. To be honest, I don't really see any benefit (worth mentioning) in using short queues.
I kind of liked it when the media player felt snappier (the madVR OSD also reacted much faster); that's why I went with lower queues before. But I agree that after revisiting the queue settings yesterday, the difference is almost non-existent now. Even though I upped the queues I didn't overdo it, so the OSD still reacts almost instantly when I enable/disable it, and that test file should be a good worst-case scenario to test against. Of course, I could probably just use the defaults and be done with it; that's what I do for my bug reports anyway.

And while we're at it, I just got an answer from Blaire (current state of the NV madVR driver fix), who wrote me that NV is currently working on the fix. The exact phrase is "Yes. We are still working on it", so this seems to be good news for an upcoming fix in the next 1-2 driver releases, hopefully.
iSunrise is offline   Reply With Quote
Old 5th March 2014, 10:21   #24269  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by James Freeman View Post
The greater the rendering time, the more the video will lag behind the audio?
Is there synchronization between them even with higher rendering time?
As long as it's not a total slide show audio/video sync should always be maintained. If the rendering times are too high, this will just result in frames being dropped.

Quote:
Originally Posted by iSunrise View Post
And while we're at it, I just got an answer from Blaire (current state of the NV madVR driver fix), who wrote me that NV is currently working on the fix. The exact phrase is "Yes. We are still working on it", so this seems to be good news for an upcoming fix in the next 1-2 driver releases, hopefully.
Great - thanks!
madshi is offline   Reply With Quote
Old 5th March 2014, 10:37   #24270  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by iSunrise View Post
And while we're at it, I just got an answer from Blaire (current state of the NV madVR driver fix), who wrote me that NV is currently working on the fix. The exact phrase is "Yes. We are still working on it", so this seems to be good news for an upcoming fix in the next 1-2 driver releases, hopefully.
High hopes, fingers crossed, let it be true.

Quote:
Originally Posted by madshi View Post
As long as it's not a total slide show audio/video sync should always be maintained. If the rendering times are too high, this will just result in frames being dropped.
Thanks.
James Freeman is offline   Reply With Quote
Old 5th March 2014, 11:14   #24271  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 753
Quote:
Originally Posted by madshi View Post
You're only changing the pattern, but not the overall brightness. I'm sure about this - unless you also change the way the error is calculated.
There seems to be some kind of misunderstanding. The shader is meant to change the input of the dithering algorithm, it would be very odd if this didn't affect the output.

It would be nice if we could agree on a way to make the shader 'correct', since that gives a way of estimating how 'wrong' the dithering algorithm currently is. If the shader, as I posted it, is correct, then the mistakes caused by dithering in gamma light are almost insignificant (i.e. not noticeable with 16 bits) except in some very rare cases where a pixel value is in between two colours that are almost black.
Shiandow is offline   Reply With Quote
Old 5th March 2014, 11:23   #24272  |  Link
Aikibana
Registered User
 
Join Date: Aug 2012
Posts: 12
Quote:
Originally Posted by iSunrise View Post
And while we're at it, I just got an answer from Blaire (current state of the NV madVR driver fix), who wrote me that NV is currently working on the fix. The exact phrase is "Yes. We are still working on it", so this seems to be good news for an upcoming fix in the next 1-2 driver releases, hopefully.
Now let's start the same initiative at AMD for fixing the OpenCL-D3D9 Interop lag.

Although I have high hopes that a D3D9 > D3D10/11 > OpenCL routing might improve this, when madshi finds the time.
Aikibana is offline   Reply With Quote
Old 5th March 2014, 12:00   #24273  |  Link
toniash
Registered User
 
Join Date: Oct 2010
Posts: 131
What to look for if my render queues don't fill up completely?
toniash is offline   Reply With Quote
Old 5th March 2014, 13:12   #24274  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
Quote:
Originally Posted by James Freeman View Post
If I'm not mistaken, with Nvidia cards there is a separate Video Engine (PureVideo) decoder processor on the GPU for that.
So everything madVR uses as GPU processing is done on the main processor of the GPU and not on the Video Engine decoder.
Isn't that the case with ATI also?

On the other hand CPU decoding is "less buggy" and is constantly updated.
Maybe nevcairiel can clarify how it works.
Like you said, CPU processing is stable as hell, and since I'm not using SVP anymore I have loads of CPU power left.

Aren't you also using a Sony VW1000 as your display device, like me? What are your preferred settings for 2K to 4K Blu-Ray upscaling with madVR? Something like the settings I use?
THX-UltraII is offline   Reply With Quote
Old 5th March 2014, 13:16   #24275  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by THX-UltraII View Post
Aren't you also using a Sony VW1000 as your display device, like me?
No, it wasn't me (although I really wish it was).
I still don't have the $25,000 to shell out on a projector...
James Freeman is offline   Reply With Quote
Old 5th March 2014, 13:20   #24276  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
Quote:
Originally Posted by James Freeman View Post
No, it wasn't me (although I really wish it was).
I still don't have the $25,000 to shell out on a projector...
I must have mistaken you for someone else. My bad.

But what are your thoughts on my settings for '2K to 4K' upscaling?
THX-UltraII is offline   Reply With Quote
Old 5th March 2014, 13:34   #24277  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by THX-UltraII View Post
But what are your thoughts on my settings for '2K to 4K' upscaling?
Debanding (MID) softens your image.
If you watch original (non-internet) Blu-Rays, disable debanding (or set it to Low if it's 10GB+ internet content).

JINC is good, but I still prefer Lanczos because it preserves detail at higher frequencies, whereas Jinc blurs them a little (as I see it).


One more (important) thing:
Use this MPC LumaSharpen shader. It's the best sharpening shader available (IMO), way better (and smarter) than Sharpen Complex 2.
It will not touch the finest detail (by adding contrast) but will sharpen the detail/textures that aren't sharp enough.

Copy it into an .hlsl file (if using MPC-HC) and activate the pixel shader:

Code:
/*
_____________________

LumaSharpen 1.4.1
_____________________

by Christian Cann Schuldt Jensen ~ CeeJay.dk

It blurs the original pixel with the surrounding pixels and then subtracts this blur to sharpen the image.
It does this in luma to avoid color artifacts and allows limiting the maximum sharpening to avoid or lessen halo artifacts.

This is similar to using Unsharp Mask in Photoshop.

Compiles with 3.0
*/

/*-----------------------------------------------------------.
/                      User settings                          /
'-----------------------------------------------------------*/

#define sharp_strength 0.65
#define sharp_clamp 0.035
#define pattern 8
#define offset_bias 1.0
#define show_sharpen 0

/*-----------------------------------------------------------.
/                      Developer settings                     /
'-----------------------------------------------------------*/
#define CoefLuma float3(0.2126, 0.7152, 0.0722)      // BT.709 & sRGB luma coefficient (Monitors and HD Television)
//#define CoefLuma float3(0.299, 0.587, 0.114)       // BT.601 luma coefficient (SD Television)
//#define CoefLuma float3(1.0/3.0, 1.0/3.0, 1.0/3.0) // Equal weight coefficient

/*-----------------------------------------------------------.
/                          Main code                          /
'-----------------------------------------------------------*/

float4 p0 :  register(c0);
sampler s0 : register(s0);

#define px (1.0 / p0[0])
#define py (1.0 / p0[1])

float4 main(float2 tex : TEXCOORD0) : COLOR0
{
    // -- Get the original pixel --
    float3 ori = tex2D(s0, tex).rgb;       // ori = original pixel
    float4 inputcolor = tex2D(s0, tex);

        // -- Combining the strength and luma multipliers --
        float3 sharp_strength_luma = (CoefLuma * sharp_strength); //I'll be combining even more multipliers with it later on

        /*-----------------------------------------------------------.
        /                       Sampling patterns                     /
        '-----------------------------------------------------------*/
        //   [ NW,   , NE ] Each texture lookup (except ori)
        //   [   ,ori,    ] samples 4 pixels
        //   [ SW,   , SE ]

        // -- Pattern 1 -- A (fast) 7 tap gaussian using only 2+1 texture fetches.
#if pattern == 1

        // -- Gaussian filter --
        //   [ 1/9, 2/9,    ]     [ 1 , 2 ,   ]
        //   [ 2/9, 8/9, 2/9]  =  [ 2 , 8 , 2 ]
        //   [    , 2/9, 1/9]     [   , 2 , 1 ]

        float3 blur_ori = tex2D(s0, tex + (float2(px, py) / 3.0) * offset_bias).rgb;  // North West
        blur_ori += tex2D(s0, tex + (float2(-px, -py) / 3.0) * offset_bias).rgb; // South East

    //blur_ori += tex2D(s0, tex + float2(px,py) / 3.0 * offset_bias); // North East
    //blur_ori += tex2D(s0, tex + float2(-px,-py) / 3.0 * offset_bias); // South West

    blur_ori /= 2;  //Divide by the number of texture fetches

    sharp_strength_luma *= 1.5; // Adjust strength to approximate the strength of pattern 2

#endif

    // -- Pattern 2 -- A 9 tap gaussian using 4+1 texture fetches.
#if pattern == 2

    // -- Gaussian filter --
    //   [ .25, .50, .25]     [ 1 , 2 , 1 ]
    //   [ .50,   1, .50]  =  [ 2 , 4 , 2 ]
    //   [ .25, .50, .25]     [ 1 , 2 , 1 ]


    float3 blur_ori = tex2D(s0, tex + float2(px, -py) * 0.5 * offset_bias).rgb; // South East
        blur_ori += tex2D(s0, tex + float2(-px, -py) * 0.5 * offset_bias).rgb;  // South West
    blur_ori += tex2D(s0, tex + float2(px, py) * 0.5 * offset_bias).rgb; // North East
    blur_ori += tex2D(s0, tex + float2(-px, py) * 0.5 * offset_bias).rgb; // North West

    blur_ori *= 0.25;  // ( /= 4) Divide by the number of texture fetches

#endif

    // -- Pattern 3 -- An experimental 17 tap gaussian using 4+1 texture fetches.
#if pattern == 3

    // -- Gaussian filter --
    //   [   , 4 , 6 ,   ,   ]
    //   [   ,16 ,24 ,16 , 4 ]
    //   [ 6 ,24 ,   ,24 , 6 ]
    //   [ 4 ,16 ,24 ,16 ,   ]
    //   [   ,   , 6 , 4 ,   ]

    float3 blur_ori = tex2D(s0, tex + float2(0.4*px, -1.2*py)* offset_bias).rgb;  // South South East
        blur_ori += tex2D(s0, tex + float2(-1.2*px, -0.4*py) * offset_bias).rgb; // West South West
    blur_ori += tex2D(s0, tex + float2(1.2*px, 0.4*py) * offset_bias).rgb; // East North East
    blur_ori += tex2D(s0, tex + float2(-0.4*px, 1.2*py) * offset_bias).rgb; // North North West

    blur_ori *= 0.25;  // ( /= 4) Divide by the number of texture fetches

    sharp_strength_luma *= 0.51;
#endif

    // -- Pattern 4 -- A 9 tap high pass (pyramid filter) using 4+1 texture fetches.
#if pattern == 4

    // -- Gaussian filter --
    //   [ .50, .50, .50]     [ 1 , 1 , 1 ]
    //   [ .50,    , .50]  =  [ 1 ,   , 1 ]
    //   [ .50, .50, .50]     [ 1 , 1 , 1 ]

    float3 blur_ori = tex2D(s0, tex + float2(0.5 * px, -py * offset_bias)).rgb;  // South South East
        blur_ori += tex2D(s0, tex + float2(offset_bias * -px, 0.5 * -py)).rgb; // West South West
    blur_ori += tex2D(s0, tex + float2(offset_bias * px, 0.5 * py)).rgb; // East North East
    blur_ori += tex2D(s0, tex + float2(0.5 * -px, py * offset_bias)).rgb; // North North West

    //blur_ori += (2 * ori); // Probably not needed. Only serves to lessen the effect.

    blur_ori /= 4.0;  //Divide by the number of texture fetches

    sharp_strength_luma *= 0.666; // Adjust strength to approximate the strength of pattern 2
#endif

    // -- Pattern 8 -- A (slower) 9 tap gaussian using 9 texture fetches.
#if pattern == 8

    // -- Gaussian filter --
    //   [ 1 , 2 , 1 ]
    //   [ 2 , 4 , 2 ]
    //   [ 1 , 2 , 1 ]

    half3 blur_ori = tex2D(s0, tex + float2(-px, py) * offset_bias).rgb; // North West
        blur_ori += tex2D(s0, tex + float2(px, -py) * offset_bias).rgb;     // South East
    blur_ori += tex2D(s0, tex + float2(-px, -py)  * offset_bias).rgb;  // South West
    blur_ori += tex2D(s0, tex + float2(px, py) * offset_bias).rgb;    // North East

    half3 blur_ori2 = tex2D(s0, tex + float2(0, py) * offset_bias).rgb; // North
        blur_ori2 += tex2D(s0, tex + float2(0, -py) * offset_bias).rgb;    // South
    blur_ori2 += tex2D(s0, tex + float2(-px, 0) * offset_bias).rgb;   // West
    blur_ori2 += tex2D(s0, tex + float2(px, 0) * offset_bias).rgb;   // East
    blur_ori2 *= 2.0;

    blur_ori += blur_ori2;
    blur_ori += (ori * 4); // Probably not needed. Only serves to lessen the effect.

    // dot()s with gaussian strengths here?

    blur_ori /= 16.0;  //Divide by the number of texture fetches

    //sharp_strength_luma *= 0.75; // Adjust strength to approximate the strength of pattern 2
#endif

    // -- Pattern 9 -- A (slower) 9 tap high pass using 9 texture fetches.
#if pattern == 9

    // -- Gaussian filter --
    //   [ 1 , 1 , 1 ]
    //   [ 1 , 1 , 1 ]
    //   [ 1 , 1 , 1 ]

    float3 blur_ori = tex2D(s0, tex + float2(-px, py) * offset_bias).rgb; // North West
        blur_ori += tex2D(s0, tex + float2(px, -py) * offset_bias).rgb;     // South East
    blur_ori += tex2D(s0, tex + float2(-px, -py)  * offset_bias).rgb;  // South West
    blur_ori += tex2D(s0, tex + float2(px, py) * offset_bias).rgb;    // North East

    blur_ori += ori.rgb; // Probably not needed. Only serves to lessen the effect.

    blur_ori += tex2D(s0, tex + float2(0, py) * offset_bias).rgb;    // North
    blur_ori += tex2D(s0, tex + float2(0, -py) * offset_bias).rgb;  // South
    blur_ori += tex2D(s0, tex + float2(-px, 0) * offset_bias).rgb; // West
    blur_ori += tex2D(s0, tex + float2(px, 0) * offset_bias).rgb; // East

    blur_ori /= 9;  //Divide by the number of texture fetches

    //sharp_strength_luma *= (8.0/9.0); // Adjust strength to approximate the strength of pattern 2
#endif


    /*-----------------------------------------------------------.
    /                            Sharpen                          /
    '-----------------------------------------------------------*/

    // -- Calculate the sharpening --
    float3 sharp = ori - blur_ori;  //Subtracting the blurred image from the original image

#if 0 //New experimental limiter .. not yet finished
        float sharp_luma = dot(sharp, sharp_strength_luma); //Calculate the luma
    sharp_luma = (abs(sharp_luma)*8.0) * exp(1.0 - (abs(sharp_luma)*8.0)) * sign(sharp_luma) / 16.0; //I should probably move the strength modifier here

#elif 0 //SweetFX 1.4 code
        // -- Adjust strength of the sharpening --
        float sharp_luma = dot(sharp, sharp_strength_luma); //Calculate the luma and adjust the strength

    // -- Clamping the maximum amount of sharpening to prevent halo artifacts --
    sharp_luma = clamp(sharp_luma, -sharp_clamp, sharp_clamp);  //TODO Try a curve function instead of a clamp

#else //SweetFX 1.5.1 code
        // -- Adjust strength of the sharpening and clamp it--
        float4 sharp_strength_luma_clamp = float4(sharp_strength_luma * (0.5 / sharp_clamp), 0.5); //Roll part of the clamp into the dot

        //sharp_luma = saturate((0.5 / sharp_clamp) * sharp_luma + 0.5); //scale up and clamp
        float sharp_luma = saturate(dot(float4(sharp, 1.0), sharp_strength_luma_clamp)); //Calculate the luma, adjust the strength, scale up and clamp
    sharp_luma = (sharp_clamp * 2.0) * sharp_luma - sharp_clamp; //scale down
#endif

    // -- Combining the values to get the final sharpened pixel	--
    //float4 done = ori + sharp_luma;    // Add the sharpening to the original.
    inputcolor.rgb = inputcolor.rgb + sharp_luma;    // Add the sharpening to the input color.

    /*-----------------------------------------------------------.
    /                     Returning the output                    /
    '-----------------------------------------------------------*/
#if show_sharpen == 1
    //inputcolor.rgb = abs(sharp * 4.0);
    inputcolor.rgb = saturate(0.5 + (sharp_luma * 4)).rrr;
#endif

    return saturate(inputcolor);
}
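For anyone who wants the idea outside HLSL: a rough NumPy sketch of the same pipeline using pattern 2's 3x3 gaussian (illustrative only; `luma_sharpen` and its signature are my naming, not part of the shader):

```python
import numpy as np

def luma_sharpen(img, strength=0.65, clamp_amt=0.035):
    """Blur, subtract to get a high-pass, weight by BT.709 luma,
    clamp to limit halos, add back. `img` is float RGB in [0, 1]."""
    coef = np.array([0.2126, 0.7152, 0.0722])  # BT.709 luma coefficients
    # 3x3 gaussian [1 2 1; 2 4 2; 1 2 1] / 16 via shifted, edge-padded copies
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    k = np.array([1.0, 2.0, 1.0])
    blur = np.zeros_like(img)
    h, w = img.shape[:2]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            weight = k[dy + 1] * k[dx + 1] / 16.0
            blur += weight * pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    sharp = img - blur                                # high-pass component
    sharp_luma = (sharp * coef * strength).sum(axis=-1)
    sharp_luma = np.clip(sharp_luma, -clamp_amt, clamp_amt)  # halo limiter
    return np.clip(img + sharp_luma[..., None], 0.0, 1.0)
```

A flat region comes back unchanged (the blur equals the original, so the high-pass is zero), while edges gain contrast up to the clamp limit.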

Last edited by James Freeman; 5th March 2014 at 13:51.
James Freeman is offline   Reply With Quote
Old 5th March 2014, 13:44   #24278  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
Quote:
Debanding (MID) softens your image. If you watch original (non-internet) Blu-Rays, disable debanding.
I thought that uncompressed Blu-Ray content can have banding too?

Quote:
JINC is good, but I still prefer Lanczos because it preserves detail at higher frequencies, whereas Jinc blurs them a little (as I see it).
I took JINC because it is a newer algo. What do you think about using NNEDI3 for Chroma Upscaling and Lanczos for Luma Upscaling? Or is it preferable to always use the same algorithm for both luma and chroma upscaling? (Why is NNEDI3 not available for Luma Upscaling, btw?)

Quote:
One more (important) thing,
Use this MPC LumaSharpen shader. It's the best sharpening shader available (IMO).
You say to use this on top of the scaling algorithms in madVR? Won't that cause any major artifacts?
THX-UltraII is offline   Reply With Quote
Old 5th March 2014, 13:56   #24279  |  Link
James Freeman
Registered User
 
Join Date: Sep 2013
Posts: 919
Quote:
Originally Posted by THX-UltraII View Post
I thought that uncompressed Blu-Ray content can have banding too?
Seldom, and not enough to deserve the MID debanding treatment.

Quote:
Originally Posted by THX-UltraII View Post
I took JINC because it is a newer algo. What do you think about using NNEDI3 for Chroma Upscaling and Lanczos for Luma Upscaling?
Or is it preferable to always use the same algorithm for both luma and chroma upscaling? (why is NNEDI3 not available for Luma Upscaling btw?)
I use Lanczos 3 AR for all, NNEDI3 still does not work for me.
Maybe someone else can answer you that more specifically.

Quote:
Originally Posted by THX-UltraII View Post
You say to use this on top of the scaling algorithms in madVR? Won't that cause any major artifacts?
Try it and thank me later.
I updated the code, so be sure to copy the 1.4.1 version.


@madshi
Can madVR load pixel shaders?
I think I heard something like that somewhere...

Last edited by James Freeman; 5th March 2014 at 14:01.
James Freeman is offline   Reply With Quote
Old 5th March 2014, 13:59   #24280  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by Shiandow View Post
There seems to be some kind of misunderstanding. The shader is meant to change the input of the dithering algorithm
Oh, ok, I didn't understand it this way. Alright, it could be interesting to test this. I might release a test build which uses your pre-processing.

Quote:
Originally Posted by toniash View Post
What to look for if my render queues don't fill up completely?
Render queues? There is only one render queue. Maybe post a screenshot of the debug OSD, then we might be able to say more.

Quote:
Originally Posted by James Freeman View Post
Debanding (MID) softens your image.
If you watch original (non-internet) Blu-Rays, disable debanding (or set it to Low if it's 10GB+ internet content).

[...]

One more (important) thing,
Use this MPC LumaSharpen shader. It's the best sharpening shader available (IMO), way better (and smarter) than Sharpen Complex 2.
It will not touch the finest detail (by adding contrast) but will sharpen the detail/textures that aren't sharp enough.
James, posting suggestions and recommendations is fine, but please make sure you mark them as your personal subjective opinion. You sound a bit as if your recommendations were what everybody agreed on as being the best settings. Which is definitely not the case here.

IMHO, using a deband setting of "low" can still make sense for Blu-Ray, but that's only my personal opinion and I know that some people will disagree. Sharpening is also very controversial. Some people hate it. Some people love it.

Quote:
Originally Posted by THX-UltraII View Post
I thought that uncompressed Blu-Ray content can have banding too?
Yes, it can happen.

Quote:
Originally Posted by THX-UltraII View Post
I took JINC because it is a newer algo
FYI, from what I've seen, most people prefer Jinc. A minority of people prefer Lanczos. You may want to make up your own mind which you prefer.

Quote:
Originally Posted by James Freeman View Post
Can MadVR load pixel shaders?
I think I hear something similar somewhere...
I don't understand what you mean. madVR does support custom pixel shaders, if that is your question.
madshi is offline   Reply With Quote