Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
#24281
Registered User
Join Date: Aug 2008
Location: the Netherlands
Posts: 850
Quote:
Aren't you also using a Sony VW1000 as a display device, like me? What are your preferred settings for 2K to 4K Blu-ray content upscaling with madVR? Something like the settings I use?
#24282
Registered User
Join Date: Sep 2013
Posts: 919
Quote:
I still don't have the 25,000$...
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
#24284
Registered User
Join Date: Sep 2013
Posts: 919
Quote:
If you watch original (non-internet) Blu-rays, disable debanding (or set it to Low if it's 10GB+ internet content). Jinc is good, but I still prefer Lanczos because it preserves detail at higher frequencies, whereas Jinc blurs them a little (as I see it).

One more (important) thing: use this MPC LumaSharpen shader. It's the best sharpening shader available (IMO), way better (and smarter) than Sharpen Complex 2. It will not touch the finest detail (add contrast) but will sharpen the detail/textures that are not sharp enough. Copy it into a .hlsl file (if using MPC-HC) and activate the pixel shader:

Code:
/*
 _____________________
   LumaSharpen 1.4.1
 _____________________
 by Christian Cann Schuldt Jensen ~ CeeJay.dk

 It blurs the original pixel with the surrounding pixels and then subtracts this blur to sharpen the image.
 It does this in luma to avoid color artifacts and allows limiting the maximum sharpning to avoid or lessen halo artifacts.

 This is similar to using Unsharp Mask in Photoshop.

 Compiles with 3.0
*/

/*-----------------------------------------------------------.
/                       User settings                         /
'-----------------------------------------------------------*/
#define sharp_strength 0.65
#define sharp_clamp    0.035
#define pattern        8
#define offset_bias    1.0
#define show_sharpen   0

/*-----------------------------------------------------------.
/                     Developer settings                      /
'-----------------------------------------------------------*/
#define CoefLuma float3(0.2126, 0.7152, 0.0722)      // BT.709 & sRBG luma coefficient (Monitors and HD Television)
//#define CoefLuma float3(0.299, 0.587, 0.114)       // BT.601 luma coefficient (SD Television)
//#define CoefLuma float3(1.0/3.0, 1.0/3.0, 1.0/3.0) // Equal weight coefficient

/*-----------------------------------------------------------.
/                         Main code                           /
'-----------------------------------------------------------*/
float4 p0 : register(c0);
sampler s0 : register(s0);

#define px (1.0 / p0[0])
#define py (1.0 / p0[1])

float4 main(float2 tex : TEXCOORD0) : COLOR0
{
  // -- Get the original pixel --
  float3 ori = tex2D(s0, tex).rgb;     // ori = original pixel
  float4 inputcolor = tex2D(s0, tex);

  // -- Combining the strength and luma multipliers --
  float3 sharp_strength_luma = (CoefLuma * sharp_strength); //I'll be combining even more multipliers with it later on

  /*-----------------------------------------------------------.
  /                      Sampling patterns                      /
  '-----------------------------------------------------------*/
  //   [ NW,   , NE ] Each texture lookup (except ori)
  //   [   ,ori,    ] samples 4 pixels
  //   [ SW,   , SE ]

  // -- Pattern 1 -- A (fast) 7 tap gaussian using only 2+1 texture fetches.
  #if pattern == 1
    // -- Gaussian filter --
    //   [ 1/9, 2/9,    ]     [ 1 , 2 ,   ]
    //   [ 2/9, 8/9, 2/9]  =  [ 2 , 8 , 2 ]
    //   [    , 2/9, 1/9]     [   , 2 , 1 ]
    float3 blur_ori = tex2D(s0, tex + (float2(px, py) / 3.0) * offset_bias).rgb;  // North West
    blur_ori += tex2D(s0, tex + (float2(-px, -py) / 3.0) * offset_bias).rgb;      // South East
    //blur_ori += tex2D(s0, tex + float2(px,py) / 3.0 * offset_bias);   // North East
    //blur_ori += tex2D(s0, tex + float2(-px,-py) / 3.0 * offset_bias); // South West

    blur_ori /= 2; //Divide by the number of texture fetches

    sharp_strength_luma *= 1.5; // Adjust strength to aproximate the strength of pattern 2
  #endif

  // -- Pattern 2 -- A 9 tap gaussian using 4+1 texture fetches.
  #if pattern == 2
    // -- Gaussian filter --
    //   [ .25, .50, .25]     [ 1 , 2 , 1 ]
    //   [ .50,   1, .50]  =  [ 2 , 4 , 2 ]
    //   [ .25, .50, .25]     [ 1 , 2 , 1 ]
    float3 blur_ori = tex2D(s0, tex + float2(px, -py) * 0.5 * offset_bias).rgb; // South East
    blur_ori += tex2D(s0, tex + float2(-px, -py) * 0.5 * offset_bias).rgb;      // South West
    blur_ori += tex2D(s0, tex + float2(px, py) * 0.5 * offset_bias).rgb;        // North East
    blur_ori += tex2D(s0, tex + float2(-px, py) * 0.5 * offset_bias).rgb;       // North West

    blur_ori *= 0.25; // ( /= 4) Divide by the number of texture fetches
  #endif

  // -- Pattern 3 -- An experimental 17 tap gaussian using 4+1 texture fetches.
  #if pattern == 3
    // -- Gaussian filter --
    //   [   , 4 , 6 ,   ,   ]
    //   [   ,16 ,24 ,16 , 4 ]
    //   [ 6 ,24 ,   ,24 , 6 ]
    //   [ 4 ,16 ,24 ,16 ,   ]
    //   [   ,   , 6 , 4 ,   ]
    float3 blur_ori = tex2D(s0, tex + float2(0.4*px, -1.2*py) * offset_bias).rgb; // South South East
    blur_ori += tex2D(s0, tex + float2(-1.2*px, -0.4*py) * offset_bias).rgb;      // West South West
    blur_ori += tex2D(s0, tex + float2(1.2*px, 0.4*py) * offset_bias).rgb;        // East North East
    blur_ori += tex2D(s0, tex + float2(-0.4*px, 1.2*py) * offset_bias).rgb;       // North North West

    blur_ori *= 0.25; // ( /= 4) Divide by the number of texture fetches

    sharp_strength_luma *= 0.51;
  #endif

  // -- Pattern 4 -- A 9 tap high pass (pyramid filter) using 4+1 texture fetches.
  #if pattern == 4
    // -- Gaussian filter --
    //   [ .50, .50, .50]     [ 1 , 1 , 1 ]
    //   [ .50,    , .50]  =  [ 1 ,   , 1 ]
    //   [ .50, .50, .50]     [ 1 , 1 , 1 ]
    float3 blur_ori = tex2D(s0, tex + float2(0.5 * px, -py * offset_bias)).rgb; // South South East
    blur_ori += tex2D(s0, tex + float2(offset_bias * -px, 0.5 * -py)).rgb;      // West South West
    blur_ori += tex2D(s0, tex + float2(offset_bias * px, 0.5 * py)).rgb;        // East North East
    blur_ori += tex2D(s0, tex + float2(0.5 * -px, py * offset_bias)).rgb;       // North North West

    //blur_ori += (2 * ori); // Probably not needed. Only serves to lessen the effect.

    blur_ori /= 4.0; //Divide by the number of texture fetches

    sharp_strength_luma *= 0.666; // Adjust strength to aproximate the strength of pattern 2
  #endif

  // -- Pattern 8 -- A (slower) 9 tap gaussian using 9 texture fetches.
  #if pattern == 8
    // -- Gaussian filter --
    //   [ 1 , 2 , 1 ]
    //   [ 2 , 4 , 2 ]
    //   [ 1 , 2 , 1 ]
    half3 blur_ori = tex2D(s0, tex + float2(-px, py) * offset_bias).rgb; // North West
    blur_ori += tex2D(s0, tex + float2(px, -py) * offset_bias).rgb;      // South East
    blur_ori += tex2D(s0, tex + float2(-px, -py) * offset_bias).rgb;     // South West
    blur_ori += tex2D(s0, tex + float2(px, py) * offset_bias).rgb;       // North East

    half3 blur_ori2 = tex2D(s0, tex + float2(0, py) * offset_bias).rgb; // North
    blur_ori2 += tex2D(s0, tex + float2(0, -py) * offset_bias).rgb;     // South
    blur_ori2 += tex2D(s0, tex + float2(-px, 0) * offset_bias).rgb;     // West
    blur_ori2 += tex2D(s0, tex + float2(px, 0) * offset_bias).rgb;      // East
    blur_ori2 *= 2.0;

    blur_ori += blur_ori2;
    blur_ori += (ori * 4); // Probably not needed. Only serves to lessen the effect.

    // dot()s with gaussian strengths here?

    blur_ori /= 16.0; //Divide by the number of texture fetches

    //sharp_strength_luma *= 0.75; // Adjust strength to aproximate the strength of pattern 2
  #endif

  // -- Pattern 9 -- A (slower) 9 tap high pass using 9 texture fetches.
  #if pattern == 9
    // -- Gaussian filter --
    //   [ 1 , 1 , 1 ]
    //   [ 1 , 1 , 1 ]
    //   [ 1 , 1 , 1 ]
    float3 blur_ori = tex2D(s0, tex + float2(-px, py) * offset_bias).rgb; // North West
    blur_ori += tex2D(s0, tex + float2(px, -py) * offset_bias).rgb;       // South East
    blur_ori += tex2D(s0, tex + float2(-px, -py) * offset_bias).rgb;      // South West
    blur_ori += tex2D(s0, tex + float2(px, py) * offset_bias).rgb;        // North East

    blur_ori += ori.rgb; // Probably not needed. Only serves to lessen the effect.

    blur_ori += tex2D(s0, tex + float2(0, py) * offset_bias).rgb;  // North
    blur_ori += tex2D(s0, tex + float2(0, -py) * offset_bias).rgb; // South
    blur_ori += tex2D(s0, tex + float2(-px, 0) * offset_bias).rgb; // West
    blur_ori += tex2D(s0, tex + float2(px, 0) * offset_bias).rgb;  // East

    blur_ori /= 9; //Divide by the number of texture fetches

    //sharp_strength_luma *= (8.0/9.0); // Adjust strength to aproximate the strength of pattern 2
  #endif

  /*-----------------------------------------------------------.
  /                          Sharpen                            /
  '-----------------------------------------------------------*/
  // -- Calculate the sharpening --
  float3 sharp = ori - blur_ori; //Subtracting the blurred image from the original image

  #if 0 //New experimental limiter .. not yet finished
    float sharp_luma = dot(sharp, sharp_strength_luma); //Calculate the luma
    sharp_luma = (abs(sharp_luma)*8.0) * exp(1.0 - (abs(sharp_luma)*8.0)) * sign(sharp_luma) / 16.0; //I should probably move the strength modifier here

  #elif 0 //SweetFX 1.4 code
    // -- Adjust strength of the sharpening --
    float sharp_luma = dot(sharp, sharp_strength_luma); //Calculate the luma and adjust the strength

    // -- Clamping the maximum amount of sharpening to prevent halo artifacts --
    sharp_luma = clamp(sharp_luma, -sharp_clamp, sharp_clamp); //TODO Try a curve function instead of a clamp

  #else //SweetFX 1.5.1 code
    // -- Adjust strength of the sharpening and clamp it --
    float4 sharp_strength_luma_clamp = float4(sharp_strength_luma * (0.5 / sharp_clamp), 0.5); //Roll part of the clamp into the dot

    //sharp_luma = saturate((0.5 / sharp_clamp) * sharp_luma + 0.5); //scale up and clamp
    float sharp_luma = saturate(dot(float4(sharp, 1.0), sharp_strength_luma_clamp)); //Calculate the luma, adjust the strength, scale up and clamp
    sharp_luma = (sharp_clamp * 2.0) * sharp_luma - sharp_clamp; //scale down
  #endif

  // -- Combining the values to get the final sharpened pixel --
  //float4 done = ori + sharp_luma; // Add the sharpening to the original.
  inputcolor.rgb = inputcolor.rgb + sharp_luma; // Add the sharpening to the input color.

  /*-----------------------------------------------------------.
  /                     Returning the output                    /
  '-----------------------------------------------------------*/
  #if show_sharpen == 1
    //inputcolor.rgb = abs(sharp * 4.0);
    inputcolor.rgb = saturate(0.5 + (sharp_luma * 4)).rrr;
  #endif

  return saturate(inputcolor);
}
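To make the shader's blur-subtract-clamp idea concrete outside HLSL, here is a rough NumPy sketch of the same unsharp-mask-in-luma logic. The 3x3 Gaussian corresponds to the shader's pattern 8; the function and variable names are my own, and this is an illustration of the technique, not the shader itself:

```python
import numpy as np

# BT.709 luma coefficients, as in the shader's CoefLuma
COEF_LUMA = np.array([0.2126, 0.7152, 0.0722])

def luma_sharpen(img, sharp_strength=0.65, sharp_clamp=0.035):
    """Sketch of LumaSharpen's core: blur, subtract, reduce to luma, clamp, add back.

    img: float array of shape (H, W, 3) with values in [0, 1].
    """
    h, w = img.shape[:2]
    # Separable 3x3 Gaussian [1 2 1; 2 4 2; 1 2 1] / 16 (the shader's pattern 8)
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = np.zeros_like(img)
    for dy, wy in zip((-1, 0, 1), k):
        for dx, wx in zip((-1, 0, 1), k):
            blur += wy * wx * pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    sharp = img - blur                                 # high-pass residual
    sharp_luma = sharp @ (COEF_LUMA * sharp_strength)  # sharpen in luma only
    sharp_luma = np.clip(sharp_luma, -sharp_clamp, sharp_clamp)  # halo limiter
    return np.clip(img + sharp_luma[..., None], 0.0, 1.0)
```

On a flat region the blur equals the original, so nothing changes; across an edge, the clamped luma residual pushes the bright side up and the dark side down by at most sharp_clamp, which is why halos stay small.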
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410. Last edited by James Freeman; 5th March 2014 at 13:51.
#24285
Registered User
Join Date: Aug 2008
Location: the Netherlands
Posts: 850
#24286
Registered User
Join Date: Sep 2013
Posts: 919
Quote:
Not deserving MID Debanding treatment.

Quote:
Maybe someone else can answer that for you more specifically.

Quote:
I updated the code, so be sure to copy the 1.4.1 version.

@madshi: Can madVR load pixel shaders? I think I heard something similar somewhere...
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410. Last edited by James Freeman; 5th March 2014 at 14:01.
#24287
Registered Developer
Join Date: Sep 2006
Posts: 9,137
Quote:
Render queues? There is only one render queue. Maybe post a screenshot of the debug OSD, then we might be able to say more.

Quote:
IMHO, using a deband setting of "low" can still make sense for Blu-ray, but that's only my personal opinion and I know that some people will disagree. Sharpening is also very controversial. Some people hate it. Some people love it.

Quote:
FYI, from what I've seen, most people prefer Jinc. A minority of people prefer Lanczos. You may want to make up your own mind which you prefer.

I don't understand what you mean. madVR does support custom pixel shaders, if that is your question.
#24288
Registered User
Join Date: Sep 2013
Posts: 919
Quote:
Everything I post is IMO only. I'm thinking about putting a big disclaimer in my signature... Maybe...

How?
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
#24289
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,836
Just load them in MPC-HC like you would with its built-in renderer (Options -> Playback -> Shaders), madVR accepts them the same way.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
#24290
Registered User
Join Date: May 2011
Posts: 94
I really would love to use a sharpen filter, but I use JRiver and can't find a way to do it. madshi did mention once or twice that he could add a means to include a filter, and that it wouldn't be too difficult to do. I can only hope.
#24291
Registered User
Join Date: Sep 2013
Posts: 919
Quote:
I thought there was a way to load shaders directly into madVR.

P.S. madshi, I realize most people don't like the ringing or other artifacts sharpening may introduce (I hate them too)... but not with this one. I'm not just saying this is the best one there is; when I say try it, I really mean it, and I urge people to try it (even if they disliked sharpening before).

This post is IMO only.
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410.
#24292
Registered User
Join Date: Dec 2013
Posts: 752
Quote:
Unfortunately it doesn't seem that it will fix the 'blinking' problem, and the more I think about it, the less sense it makes. I can't understand why it seems to occur regardless of smooth motion, but only when the frame rate doesn't match the display frame rate.
#24293
Registered User
Join Date: Oct 2012
Posts: 5,977
Quote:
Not sure if any GPU can handle NNEDI3 with 32 neurons for 1080 -> 2160, but give it a try. You'll find it under image doubling.

Quote:
#24296
MPC-HC Developer
Join Date: May 2010
Location: Poland
Posts: 556
Quote:
You haven't even bothered to test it... You can't do HIGH/LOW and others... It's a matter of taste. The same with scaling. If there were a "best" option, madshi wouldn't have included those settings.

Tapatalk 4 @ GT-I9300
#24298
Registered User
Join Date: Sep 2013
Posts: 919
I have done a bunch of testing with patterns and movies.
The remaining aliasing (ever so small) with Lanczos translates to better detail at higher frequencies, whereas Jinc will blur the detail. Jinc looks better (smoother) on perfect diagonals only; on anything else it looks just like Lanczos. Also, Lanczos 8 uses less juice than Jinc 3.

For upscaling I use: Lanczos 3 + AR (no LL).
For downscaling I use: Catmull-Rom (no LL).

This post is IMO only.
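For anyone wondering what "detail at higher frequencies" means here: the Lanczos kernel has negative lobes around each sample, which preserve sharp transitions (at the cost of a little ringing/aliasing), the same trade-off described above. A minimal 1-D sketch for illustration (the function name is mine, not madVR's):

```python
import numpy as np

def lanczos_weight(x, taps=3):
    """1-D Lanczos kernel: sinc(x) windowed by sinc(x/taps); zero for |x| >= taps.

    taps=3 corresponds to the "Lanczos 3" setting mentioned above.
    """
    x = abs(float(x))
    if x >= taps:
        return 0.0
    # np.sinc is the normalized sinc: sin(pi*x) / (pi*x), with sinc(0) == 1
    return float(np.sinc(x) * np.sinc(x / taps))
```

The weight is 1 at the sample itself, 0 at every other integer offset, and dips negative in between (e.g. around x = 1.5); those negative lobes are where the extra "bite" on fine detail, and the slight ringing, come from.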
__________________
System: i7 3770K, GTX660, Win7 64bit, Panasonic ST60, Dell U2410. Last edited by James Freeman; 5th March 2014 at 15:39.
#24299
Registered User
Join Date: Feb 2014
Posts: 161
Quote:
I wonder if it's possible for QuickSync to do the deinterlacing, because if I give the job to madVR it would need twice the rendering power, since it doubles the frames per second.
Tags |
direct compute, dithering, error diffusion, madvr, ngu, nnedi3, quality, renderer, scaling, uhd upscaling, upsampling |
|
|