Old 20th August 2023, 19:20   #141  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
# Using shaders (mpv glsl hook) and scripts on mpv-android

Shaders and scripts work the same in mpv-android as in desktop mpv.
The build I was using only had Lua script support, however (no .js).
So I converted my countdown script to Lua (with minimal modification, the script could also be used to pass time values to vo_gpu-next shaders, for instance to trigger transition effects).

My device isn't rooted. To use shaders or scripts, you need to edit a user mpv.conf file and specify the corresponding paths. The file paths must be readable by mpv-android.
- Create an mpv folder in your Android phone's internal storage and place your shaders and scripts in it.
- Using your file manager, go into the properties of one of the shader files (ex: a black & white shader such as bw.hk is useful for testing) and copy the file path.
ex, on my phone: '/storage/emulated/0/mpv/bw.hk'

- Open the mpv-android app:
Settings > Advanced > `edit mpv.conf`: paste the required paths
glsl-shaders=my_path/bw.hk
script=my_path/countdown.lua

This sets the black & white shader and countdown script permanently until disabled. If a correct path was specified, the video should now be shown in black & white.
# To disable the shader or script, comment out the corresponding line.

It's also possible to edit input.conf and configure some double-tap gestures to perform custom actions such as toggling a shader.
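For example, a minimal input.conf sketch of such a toggle (the key/gesture name here is only illustrative; mpv's change-list command appends the shader, or removes it if it is already active):

Code:
# toggle the black & white shader on/off (path as configured above)
b change-list glsl-shaders toggle "/storage/emulated/0/mpv/bw.hk"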
Old 11th September 2023, 15:15   #142  |  Link
Alexkral
Registered User
 
Join Date: Oct 2018
Posts: 316
Quote:
Originally Posted by butterw2 View Post
# Comparison of user shader features in the main video players for Windows

To Everyone: Please point out any mistakes/changes, so I can update this guide as needed. The reason for this thread is that the available documentation is sparse and mostly buried in very old/long threads.
You don't need to be a C++ dev to customize or write pixel shaders.
I did some tests to see how different renderers handle the alpha channel. This can be useful because the alpha channel can be used to pass data between shaders in shader chains. I used dx9 shaders in MPC-HC; I'm not sure whether other factors can change the result as well. The test consisted of simply setting the alpha channel to a value in one shader and checking the value received by the next shader in the chain.
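A minimal sketch of such a writer/reader pair (not the exact test shaders; standard MPC-HC dx9 pixel-shader conventions assumed, the two parts being separate .hlsl files in the chain):

Code:
// shader 1 (earlier in the chain): write a known value into the alpha channel
sampler s0 : register(s0);

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float4 c0 = tex2D(s0, tex);
    c0.a = 0.75;        // test value
    return c0;
}
and the next shader in the chain (a separate file) visualizes what it receives:

Code:
// shader 2: visualize the alpha value actually received
sampler s0 : register(s0);

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float4 c0 = tex2D(s0, tex);
    return float4(c0.aaa, 1.0);   // grey level = surviving alpha value
}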

EVR-CP:

- Always returns 1 on the alpha channel regardless of the value passed.

MPC-VR (Very creative as you'll see):

- With 8-bit textures, always returns 1 like EVR-CP.
- With 10-bit textures, it returns 1 if you pass 1 and 0 if you pass 0, but if for example you pass 0.5, it returns 0.33, or if you pass 0.75, it returns 0.66.
- With 16-bit textures it returns the same value passed.

madVR:

- Returns the same value passed.

BTW, is there any reason for you not to consider madVR when talking about renderers? It also allows you to run shaders, and is the only one with an option (removed in the latest betas) to store the results with 32-bit precision.
__________________
AviSynth AiUpscale
Old 11th September 2023, 18:31   #143  |  Link
v0lt
Registered User
 
Join Date: Dec 2008
Posts: 1,934
Quote:
Originally Posted by Alexkral View Post
- With 10-bit textures, it returns 1 if you pass 1 and 0 if you pass 0, but if for example you pass 0.5, it returns 0.33, or if you pass 0.75, it returns 0.66.
Only 2 bits are allocated to the alpha channel for this texture format (D3DFMT_A2R10G10B10 or DXGI_FORMAT_R10G10B10A2_UNORM). Therefore, only 4 values are possible: 0, 0.333, 0.667, 1.
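As a worked example (assuming round-to-nearest quantization): passing 0.75 stores round(0.75*3) = 2 in the 2-bit channel, which reads back as 2/3 ≈ 0.667, matching the 0.66 observed above. A passed value of 0.5 lands exactly halfway between the two representable values 0.333 and 0.667, which is why it came back as 0.33 rather than 0.5.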
Old 11th September 2023, 21:06   #144  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,853
Quote:
Originally Posted by Alexkral View Post
BTW, is there any reason for you not to consider madVR when talking about renderers? It also allows you to run shaders, and is the only one with an option (removed in the latest betas) to store the results with 32-bit precision.
because it didn't age well: it can't display UHD BD correctly, it's quite buggy without the beta version, and it's inefficient.

MPC-VR just works; I have a hard time breaking it, and if something does break it's usually the GPU driver that I broke.
Old 12th October 2023, 21:34   #145  |  Link
jedi93
Registered User
 
Join Date: Mar 2016
Posts: 11
Does anyone know how to get debanding for MPC-HC/MPC-BE? The shaders from an old thread on this site don't seem to work anymore (at least not on my setup)...
Old 16th October 2023, 02:56   #146  |  Link
BetA13
cosmic entity
 
Join Date: May 2011
Location: outside the Box
Posts: 258
Quote:
Originally Posted by jedi93 View Post
Does anyone know how to get debanding for MPC-HC/MPC-BE ? The shaders from an old thread on this site don't seem to work anymore (atleast not on my setup)...
Probably because you are trying to use a dx9 shader with a dx11 renderer.
Find a deband shader that works with DX11.
Old 19th October 2023, 15:45   #147  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
Video renderer surface/texture format (MPC-vr)

If multiple shaders are chained, it is best to avoid the 8-bit integer surface/texture format in the video renderer because of precision loss. The default MPC Video Renderer texture format is Auto 8/10-bit Integer, which means that depending on whether the source is 8- or 10-bit, the internal format used will be RGBA uint8 or uint10.

Test on mpc-be/mpc-vr dx11 with Intel HD graphics: there is some GPU usage overhead when using the 16-bit floating point (fp16) surface/texture format vs 8-bit integer (uint8); uint10 performs similarly to fp16. This would only really be an issue for older/low-end machines, but it should be noted that this overhead applies whether shaders are used or not.
EDIT: For shaders that pass information using the alpha channel, you need to use fp16.

Last edited by butterw2; 21st December 2023 at 23:57. Reason: +Alpha
Old 22nd October 2023, 11:44   #148  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
Classic sharpen shaders (mods)

I plan to release modded versions of the classic sharpen shaders in the different shader formats (dx9 hlsl, dx11 hlsl, glsl.hook vo_gpu and libplacebo vo_gpu-next).
https://github.com/butterw/bShaders/...r/edge/sharpen

Use case:
- improve a slightly soft/blurry video at native resolution.
- or restore sharpness to upscaled video
Ex: x264/x265 compressed (web) video or bluray rip.
video resolution: 720p, 1080p
display: 1080p or 1440p screen/monitor, windowed (100% zoom) or fullscreen.

The sharpen pack will include:
- sharpen (Sharpen_Amount). The fastest kernel is Laplacian1 (5 texture, 8 arithmetic); a rough sketch is given after this list. >> uploaded initial dx9 version.
- unsharp mask (Sharpen_Amount, Threshold) based on gaussian 3x3 blur sigma: 0.85
- luma_sharpen (Sharpen_Amount, Clamp) pattern 3: Laplacian1
- sharpen_complex(Sharpen_Amount, Sharpen_Edge, Edge_Threshold)
- edge_sharpen (Sharpen_Edge, Edge_Threshold)
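A rough dx9 hlsl sketch of the Laplacian1 sharpen from the first list item (not the released bShaders version; standard MPC-HC shader conventions are assumed, with p1.xy holding the texel size):

Code:
sampler s0 : register(s0);
float4 p1 : register(c1);      // p1.xy assumed: 1/width, 1/height
#define Sharpen_Amount 0.5

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float2 d = p1.xy;
    float4 c0 = tex2D(s0, tex);
    // Laplacian detail: centre minus the average of the 4 direct neighbours (5 texture samples)
    float4 detail = c0 - 0.25*( tex2D(s0, tex + float2(-d.x, 0)) + tex2D(s0, tex + float2(d.x, 0))
                              + tex2D(s0, tex + float2(0, -d.y)) + tex2D(s0, tex + float2(0, d.y)) );
    return float4(saturate(c0.rgb + Sharpen_Amount*detail.rgb), 1.0);
}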

Shaders will have easy-to-use parameters (ex: Sharpen strength). It will be possible to display the edge detect/details image (Show_Edge).
Hlsl versions of the shaders will not use hardware linear sampling as this feature is disabled in mpc-hc/be (mpc-hc/be does point sampling for non-integer pixel offsets).

Last edited by butterw2; 29th October 2023 at 01:32. Reason: edit-3
Old 25th October 2023, 14:39   #149  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
unsharp shader (Sharpen_Amount=0.8, Threshold=0.6, Show_Edge=-1)
(9 texture, 16 arithmetic), threshold: +4 ari

Sharpens using the classic unsharp mask method (available in photo editing applications such as Photoshop, PhotoDemon, etc.).
Sharpening means increasing the contrast of details. In the unsharp mask method, a low-pass filtered (blurred) image is subtracted from the original image to obtain the high-frequency detail image.
This shader uses a 3x3 gaussian filter (sigma=0.85) to calculate the blurred image.
detail = original - blurred
sharpened = original + Sharpen_Amount*detail
Detail can have positive or negative values. This corresponds to the positive and negative overshoot on edges and causes the brightness of sharpened details to be increased or decreased.
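A rough dx9 hlsl sketch of this method (not the released unsharp shader, and without the Threshold option; the binomial 1-2-1 blur weights used here closely approximate a 3x3 gaussian with sigma 0.85; MPC-HC conventions assumed):

Code:
sampler s0 : register(s0);
float4 p1 : register(c1);      // p1.xy assumed: 1/width, 1/height
#define Sharpen_Amount 0.8

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float2 d = p1.xy;
    float3 blur = 0;
    [unroll] for (int y = -1; y <= 1; y++)
        [unroll] for (int x = -1; x <= 1; x++) {
            float w = (2.0 - abs(x))*(2.0 - abs(y)) / 16.0;   // 3x3 weights 1-2-1 x 1-2-1, sum = 1
            blur += w * tex2D(s0, tex + float2(x, y)*d).rgb;
        }
    float3 c0 = tex2D(s0, tex).rgb;
    float3 detail = c0 - blur;                                 // detail = original - blurred
    return float4(saturate(c0 + Sharpen_Amount*detail), 1.0);  // sharpened = original + Amount*detail
}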

- Sharpening shaders are typically used post-resize.
- Over-sharpening should be avoided as it will cause a halo on high-contrast edges. Sharpening requires a reasonably clean source video (in particular if the source is upscaled on a large display), as it will reinforce existing artefacts such as aliasing, etc.
- A detail threshold value can be applied to only sharpen strong edges or to prevent sharpening of noise/artifacts.
- The Show_Edge parameter allows the display of the threshold mask and edges for analysis.

Last edited by butterw2; 29th October 2023 at 15:28.
Old 29th October 2023, 17:07   #150  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
Comparing edge detection results
The two sharpening methods (sharpen and unsharp mask) are based on obtaining a detail image: sharpened = original + Sharpen_Amount*detail

Comparing detail image (from sharpener) vs edge image (from edge detector, ex: Sobel):
- both can be thresholded.
- both are obtained from convolution kernels.
- the sharpening kernel (ex: Laplacian) generates thin, natural-looking edges. The sharpener detects texture but also noise/artifacts.
- the edge detector produces thicker edges but a more reliable output vs noise.
- edge (from the edge detector) is a positive grayscale value usually below 1.0, whereas detail is positive or negative rgb.

To compare the edges (A, B) obtained from different edge detector kernels (visually, the results look similar):
It is first necessary to match scales, as kernel results typically have different amplitudes: adjust the scaling factor k so that the displayed differences 10*(A - k*B) and 10*abs(A - k*B) are as close to zero (black) as possible. A display of float4(A, k*B, 0, 1) would then be yellow (red if something is detected in A but not in B, green for detection in B but not in A).
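A rough dx9 hlsl sketch of such a comparison display, using a 3x3 Sobel and a simplified 2-gradient Prewitt purely as examples (not the exact kernels tested below; MPC-HC conventions assumed for s0/p1; tune k to match amplitudes):

Code:
sampler s0 : register(s0);
float4 p1 : register(c1);      // p1.xy assumed: 1/width, 1/height
#define k 1.0                  // scaling factor to match amplitudes (tune per kernel pair)

float lum(float2 pos) { return dot(tex2D(s0, pos).rgb, float3(0.2126, 0.7152, 0.0722)); }

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float2 d = p1.xy;
    float tl = lum(tex - d),                 tc = lum(tex + float2(0, -d.y)), tr = lum(tex + float2(d.x, -d.y));
    float ml = lum(tex + float2(-d.x, 0)),                                    mr = lum(tex + float2(d.x, 0));
    float bl = lum(tex + float2(-d.x, d.y)), bc = lum(tex + float2(0, d.y)),  br = lum(tex + d);
    // Sobel gradients (edge map A)
    float A = length(float2((tr + 2*mr + br) - (tl + 2*ml + bl), (bl + 2*bc + br) - (tl + 2*tc + tr)));
    // Prewitt gradients (edge map B, horizontal/vertical only in this sketch)
    float B = length(float2((tr + mr + br) - (tl + ml + bl), (bl + bc + br) - (tl + tc + tr)));
    return float4(A, k*B, 0, 1);   // yellow where both detect, red/green where only one does
}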

Compared kernels:
- Sobel 3x3 (8 texture, 20 arithmetic) with 2 gradients (horizontal and vertical)
- Prewitt 3x3 (8 texture, 28 arithmetic) with 4 gradients (incl. diagonal), amplitude: 1.025*Sobel
- Adaptive-sharpen_pass1 5x5 diamond (13 texture, 45 arithmetic), amplitude: 1.7*Sobel

Conclusions:
- detected edges are very similar, especially between Prewitt and Sobel.
- Sobel is fastest.
- Adaptive-sharpen sometimes detects (weak) edges that aren't picked up by Sobel.

Last edited by butterw2; 29th October 2023 at 21:48. Reason: typos
Old 2nd November 2023, 02:12   #151  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
Contrast adjustment via S-Curve

Contrast increase
Contrast(0.10) is an effective method to improve the viewing of hazy web videos, but it isn't always applicable (it blows up the white/black ranges and then clips them).
An S-curve (which boosts midtone contrast at the cost of degraded contrast in highlights/shadows!) can avoid this problem if used with a suitably low blending parameter (ex: S in [0, 0.125]).

Symmetrical s-curves:
x: pixel rgb in [0, 1.0].
The max boost occurs at the midpoint (x=0.5).

- Sigmoid(0.05), out = 1.0 / (1.0 + exp(-14*x + 7)), (1 texture, 12 arithmetic) >> my sCurve shader.
max contrast boost: S*2.5
contrast degradation at the black and white points: S.

- S2(0.125), out = 0.5 + x1 / (0.5 + abs(x1)) with x1 = x - 0.5, (1 texture, 9 arithmetic)
max contrast boost: S
contrast degradation at the black and white points: S/2.
Curve #2 in sweetfx.curves.

- S3(0.10), out = x*x*(3-2*x), (1 texture, 5 arithmetic).
max contrast boost: S/2.
contrast degradation at the black and white points: S.

S-curve vs contrast(0.10) comparison: https://raw.githubusercontent.com/bu...vs_sCurves.png
- Contrast(C), out = x*(1 + C) -0.5*C, (1 texture, 1 arithmetic)
- contrast boost= C.
! But when the curve clips, the contrast is zero (contrast.10 clips below x=0.04).

Conclusion: Sigmoid sCurve(0.05) and S2(0.125) are good alternatives to the contrast.10 or expand10_240 adjustment. Contrast is only increased in the midtones, but using an s-curve means the input is not clipped.
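As an illustration, a rough dx9 hlsl sketch of the blended Sigmoid curve (not necessarily identical to the linked sCurve shader):

Code:
sampler s0 : register(s0);
#define S 0.05        // blending strength, ex: 0.05

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float4 c0 = tex2D(s0, tex);
    float3 curve = 1.0 / (1.0 + exp(-14.0*c0.rgb + 7.0));   // sigmoid centred at x = 0.5
    c0.rgb = lerp(c0.rgb, curve, S);                         // max contrast boost at midpoint: ~2.5*S
    return c0;
}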

Contrast reduction
S-curves can also be used to reduce contrast. For negative values of the parameter S, midtone contrast is decreased, and near-Black (darks) and near-White (brights) contrast is increased.

Notes:
- It is not typically necessary to increase contrast for movies (though a low Strength s-curve adjustment could be beneficial in special cases, ex: faded colors in older films).
! Because s-curves (with S positive) degrade the contrast (and lower brightness) in shadows, they can have a detrimental effect in dark scenes.

Last edited by butterw2; 11th November 2023 at 10:38. Reason: + link to sCurve shader
Old 11th November 2023, 14:07   #152  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
# Display on a 1440p monitor using mpv v0.37 (gpu-next) with igpu (Intel UHD 730)

mpv has recently changed its default scalers, and I've tested different sources on my 1440p monitor.
Win10> mpv --vo=gpu-next --scale=lanczos --cscale=catmull-rom --sigmoid-upscaling --dither-depth=auto

- low-res sources (low quality web videos) >> upscale 2x or 1.5x
Sources up to 480p will need scaling (the displayed image is otherwise too small), ex: 2x upscale: --window-scale=2. The quality of the video limits how much you can upscale/sharpen.
igv\FSRCNNX_x2_16_0_4_1_distort.glsl sometimes works OK for 2x upscale with strong artefact removal. It runs for sources up to 360p on my igpu.
For 480p sources, I would probably limit the upscale to 1.5x and not apply any sharpener.

- Low quality web sources (720p, 1080p) >> watch at 1x or downscale 0.75x
Bad quality (upscaled) 1080p web videos are quite common. Downscaling 0.75x is an option.
Use --correct-downscaling, dscale=hermite (bicubic with b=c=0) with --linear-downscaling.

- OK quality videos (720p, 1080p), ex: 1080p BDRip 2000kbps x265 >> upscale to 1440p.
Upscaling to fullscreen with lanczos or spline36 should work OK (scaling ratio: 2x, 1.33x). A little extra sharpening might be beneficial.
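For reference, a hedged mpv.conf sketch collecting the same options in config-file form (values as used above, adjust per source):

Code:
vo=gpu-next
scale=lanczos
cscale=catmull-rom
sigmoid-upscaling=yes
dither-depth=auto
# downscaling (ex: 0.75x for upscaled 1080p web video)
correct-downscaling=yes
linear-downscaling=yes
dscale=hermite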

Current mpv spatial sharpeners/external upscalers (by igpu usage):
lumasharpen.glsl (luma) < AMD_CAS_lite_rgb.glsl < agyild/FSR.glsl (amd FSR v1.02 mod), upscale ratio: >1 to 2.0 < igv/SSimSuperRes.glsl (SSSR Scaler>1) < igv/Adaptive-sharpen.glsl (0.4)

I will be looking more at sharpeners (at 1 to 2x upscale) as they can improve perceived quality. This typically requires an OK quality source. Filmgrain shaders should also be considered.
For low quality videos, a little contrast boosting can often help, ex: sCurve(0.05).

Last edited by butterw2; 22nd November 2023 at 16:54. Reason: mpv v0.37, dscale=hermite
Old 12th November 2023, 15:26   #153  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
Resolution info available to shaders (there are major differences)

Image resolution (W, H) info available to user shaders with the different renderers:
- EVR-CP pre-resize (dx9) shader: you can access video file resolution. Shader does not apply to padding with black bars.
- EVR-CP post-resize (dx9) shader: you can only access the fullscreen resolution not the video file resolution. Shader applies to black bars. >> Not great, could still work in fullscreen.
- MPC-VR (dx11) shader (post-resize only): you can only access windowed resolution (including black bars). Shader does not apply to black bars.
- Mpv shaders (vo=gpu-next)
(LUMA, MAIN: pre-resize rgb, SCALED, OUTPUT: post-resize rgb): you can access both video file resolution (input_size) and resized resolution (target_size, doesn't include black bars). Shader does not apply to black bars.

Trying to get a consistent sharpen strength across shaders (by adjusting the detection level):
Without adjustment:
1) Sharpening depends on the sharpening kernel used.
2) For a given (fixed-size) kernel, the sharpening effect depends on the resolution of the image
- at 100% zoom the source resolution, otherwise the scaled resolution.
The higher the resolution, the lower the sharpening effect for the same sharpen strength parameter.
3) Because different renderers do not provide the same resolution information, it isn't possible to make this work 100% the same across renderers. mpv does provide the full information pre-resize and post-resize.

EDIT: A pre-resize sharpen has a consistent effect°, whatever the display resolution. For a post-resize sharpen, the idea is to normalize the detection for a picture display size of 1280 pixels. If the picture is displayed at a higher resolution, we increase the detection. The following seems to work for an unsharp shader:
detail = detail * max(W, H)/1280. with W, H scaled picture dimensions in pixels.
° In the case of a pre-resize sharpen the effect will depend on the source resolution, so it may make sense to normalize the detection vs source resolution.
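As a worked example of the post-resize rule above (assuming the picture fills a 2560x1440 screen): detail would be scaled by max(2560, 1440)/1280 = 2.0, i.e. twice the detection level used for a 1280-pixel-wide picture.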

Last edited by butterw2; 13th November 2023 at 20:15. Reason: edit-1a: normalize detection
Old 18th November 2023, 19:34   #154  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
Contrast Adaptive Sharpening dx9/dx11 hlsl shader (AMD FidelityFX CAS): https://gist.github.com/butterw/ceb8...17660fb4bacaa3
(9 texture, 47 arithmetic).

The algorithm adjusts the amount of sharpening per pixel to target an even level of sharpness across the image. Areas of the input image that are already sharp are sharpened less, while areas that lack detail are sharpened more. This allows for higher overall natural visual sharpness with fewer artifacts.

parameters:
- CAS: Contrast sharpening Amount. ex: 0.35 [.. to 2.66]. Negative values are possible for lighter sharpening.
- Show_Edge: integer [0 or 1] 0: sharpened image, 1: detail.

Note: doesn't generate a bright pixel at the border with the padded black bars in post-resize fullscreen mode.
Old 21st November 2023, 09:58   #155  |  Link
stax76
Registered User
 
Join Date: Jun 2002
Location: On thin ice
Posts: 6,837
I've updated awesome-mpv today, syncing it with the user scripts list in the wiki. I noticed you added a script that is similar to an old script of mine, showing a simpler time format when seeking.

https://github.com/butterw/bShaders/...ripts/btime.js

https://github.com/stax76/mpv-script.../misc.lua#L138
https://github.com/stax76/mpv-script.../misc.lua#L276

In the past I was also using the seek event, but this caused a problem with another popular mpv feature and users were dissatisfied, so I changed it to not use the seek event. I don't remember what the problem was, unfortunately; maybe changing chapters. I checked the issue tracker, but couldn't find it. I also had a JavaScript version; that was before I learned Lua.
Old 22nd November 2023, 12:17   #156  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
I would probably never have coded btime.js without the inspiration from your original JavaScript code, so thanks for that.
I prefer the JavaScript syntax and haven't run into any performance issues with my scripts so far, but translating to Lua isn't impossible. I use Lua scripts on mpv-android (they work on my years-old phone, but also on a weak Android TV device I have).

Using the seek event, you can display the time each time there is a seek (screenshot). It should work perfectly with core mpv; if not, it's a bug (or a design flaw) IMO. Now that mpv v0.37 has been released (ex: https://github.com/shinchiro/mpv-win...cmake/releases) I would encourage anyone to report any such issues at github.com/mpv-player/mpv.
Obviously when mixing (complex) user scripts there is always potential for conflict, so manually binding a time display function (to every seek command) can avoid such issues.
To avoid these issues, mpv should make one alternative time-display format available by default as ${playback-time/short}. The mpc-hc format (00:10:05 / 02:15:00, 03:05 / 55:00) would be good here.

Last edited by butterw2; 5th December 2023 at 17:46. Reason: +mpv v0.37 screenshot
Old 25th November 2023, 15:44   #157  |  Link
stax76
Registered User
 
Join Date: Jun 2002
Location: On thin ice
Posts: 6,837
I think I remember now what the problem was: something with the built-in OSC, maybe changing chapters with button controls in the OSC; things overlap then.
Old 28th November 2023, 19:13   #158  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
# Filmgrain in shaders, and how to blend it.
https://github.com/butterw/bShaders/...ilmGrain_Noise

Filmgrain is a physical characteristic of film media; digital camera sensors don't have grain, only noise.
Adding filmgrain can be beneficial for sources that have been heavily denoised or are completely devoid of noise/grain (ex: some generative AI outputs). A small amount of grain paradoxically increases perceived sharpness and quality. Adding too much just degrades the signal-to-noise ratio.

Grain shaders typically generate a dynamic luma noise pattern and blend it with the original image. Grain is calculated for every (x, y) pixel of the image using a pseudo-random hash function. This requires a number of arithmetic operations but no texture samples, so it can be handled quite easily by current gpus.
float grain = good_hash_function(tex.xy, seed); // random variable with a gaussian distribution.

The simplest way to blend the grain with the original image is by adding it:
out = c0.rgb + Strength*grain
with c0 and out pixel.rgb values in [0, 1.0].
grain, positive or negative grayscale value, ex: in [-0.125, 0.125].
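A rough hlsl sketch of this kind of grain shader (the hash below is a common illustrative one, not the good_hash_function of the actual shader; MPC-HC conventions assumed, with p0.w taken as the player clock in seconds):

Code:
sampler s0 : register(s0);
float4 p0 : register(c0);      // p0.w assumed: player clock in seconds
#define Strength 0.125

// pseudo-random value in [0, 1.0] from pixel position and seed
float hash(float2 p, float seed) {
    return frac(sin(dot(p, float2(12.9898, 78.233)) + seed) * 43758.5453);
}

float4 main(float2 tex : TEXCOORD0) : COLOR {
    float4 c0 = tex2D(s0, tex);
    // approximate a gaussian by summing a few uniform samples (central limit theorem), range ~[-1, 1]
    float grain = 0.5*(hash(tex, p0.w) + hash(tex + 0.07, p0.w) + hash(tex + 0.19, p0.w) + hash(tex + 0.31, p0.w)) - 1.0;
    c0.rgb = saturate(c0.rgb + Strength*grain);   // additive blend
    return c0;
}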

The grain pattern can be displayed with: return 0.5+grain; // Show_Grain
Screenshots of the grain pattern can be analysed using the histogram feature of photo editors.
The grain pattern is dynamic, meaning it changes every frame. This is done by updating the seed of the pseudo-random hash function using a uniform variable provided by the video player (mpc or mpv),
ex: the player clock in seconds, the integer frame counter, or a random variable from a uniform distribution in [0, 1.0].

grain shaders:
1) per pixel pseudo-random grayscale noise (with approximate gaussian distribution).
2) additive blending method does not significantly alter the original brightness and contrast of the image.
3) the grain is applied uniformly to the image.
4) grain pattern is updated every frame.

Grain should only be added after any sharpening operations are performed (sharpening the generated grain is undesirable).
When upscaling with no post-resize sharpening, grain can be added pre-resize. There are 2 benefits with regards to 1):
- any resizing (ex bicubic A=-0.60, scaling ratio=1.33x) smooths the shape of the distribution and results in a clean gaussian.
- the grain size is scaled. This may achieve a more pleasing result for scaling ratios up to 2x, as single pixels are fairly small at 1080p or higher.

With regards to 4) it can be beneficial to slow down the grain at higher framerates (30, 50 or 60fps video) vs 24 frames-per-second film. A shader using the frame counter as seed can limit the grain update to ex: fps/2 with seed = floor(0.5*counter);

We can change 3) by using a luma-based curve (ex: a parabola). The idea is to apply less grain to dark and bright areas vs the midtones.
To test this out, I used an online palette generator to generate a 9-color grayscale RGB gradient palette. The first color is black, the last is white, and the middle color is mid-gray (128). The palette image was converted to a x264 mp4 video with correct colors (and the desired duration and frame rate) using ffmpeg. https://github.com/butterw/bShaders/...91-223-255.mp4

Results with grain.hlsl shader (1 texture, 42 arithmetic), no resize:
grain.35 (strong grain). (top) C:1 uniform grain vs C0:C1:0.30

EDIT: An alternative blend that avoids having too much grain on darks is:
c0.rgb = c0.rgb*(1 + 1.30*Strength*grain) = c0.rgb + c0.rgb*1.30*Strength*grain;
This is a simple linear shaping function (y = a*x + b with a=1.30 and b=0). It has less grain on darks and midtones vs parabola(C:0.25) but more grain in highlights (the grain will be visible there).

Last edited by butterw2; 5th December 2023 at 17:26. Reason: +linear shaping function, +grain.hlsl/glsl shaders
Old 2nd December 2023, 21:45   #159  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
# tunable parabolic shaping function, in/out: [0, 1.0]

The grain.hlsl/.glsl shader (previous post) can apply less grain to dark and bright areas vs midtones by multiplying grain with a shaping function based on luma. grain = gshape(luma)*grain;

The shaping function is a tunable parabola centered at mid-gray, with a zero derivative at x=0.5 and parabola(x=0.5) = 1.

tuning parameter C, ex: 0.30, typically in [0, 1.0]
By default, the values at x=0 and x=1.0 are set by parameter C: Curve01. parabola(x=0) = parabola(x=1) = C
But it's possible to use different values of C for x>0.5 and x<=0.5, ex: float C = (x>0.5) ? Curve1 : Curve0;
For negative C, use: max(gshape(luma, C), 0)

Code:
float pow2(float x) { return x*x; }

float gshape(float x, float C) {
    // tunable parabola(x, C) centered in x=0.5: C + (1-C)*(1 -(2*x-1)^2) = (C-1)*(2*x-1)^2 +1 = (C-1)*(2*(x-0.5))^2 +1
    return lerp(1 - pow2(2*x - 1), 1, C);
}
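A possible use in the grain shader from the previous post (hypothetical fragment to drop into that shader; BT.709 luma weights assumed):

Code:
float luma = dot(c0.rgb, float3(0.2126, 0.7152, 0.0722));   // luma of the original pixel
grain *= gshape(luma, 0.30);                                // less grain in darks/highlights, full grain at mid-gray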

Last edited by butterw2; 5th December 2023 at 16:42.
Old 7th December 2023, 17:18   #160  |  Link
butterw2
Registered User
 
Join Date: Jun 2020
Posts: 303
# restart-mpv to recompile shaders and reload configuration

mpv-daily (v0.37.0-70-g562450f5) fixes --watch-later-options-append=glsl-shaders
This applies to input.conf: Q quit-watch-later
It also corrects the glsl-shaders property display, which now uses the platform file-list separator (";" on Windows, ':' elsewhere) instead of "," previously. The separator is required when multiple shaders are used.
This does mean scripts using the glsl-shaders property may need to be updated.

Ex: here is a standalone update to my script restart-mpv.js. Use case: restart mpv to recompile shaders (and reload scripts, .conf files) after they have been modified.
v0.10 is backwards compatible: https://github.com/butterw/bShaders/...e08d4b5206a8d7. I'll provide updated versions of my mpv shader/OSD scripts in the next release of A-pack.

With mpv v0.37-dev, restart-mpv can now be done directly from input.conf without any script required (tested on Windows):
F5 run mpv.com --glsl-shaders=${glsl-shaders} --start=${playback-time} --pause=${pause} "${path}"; quit #restart-mpv

# Script Keybindings in mpv:
The script adds a Shift+F5 script-binding by default. I would recommend setting the keybinding explicitly in input.conf: this is where all your keybindings should be defined, not in scripts. The option --no-input-default-bindings disables regular script key bindings and the built-in defaults.
mp.add_key_binding("Shift+F5", "restart-mpv", restart_mpv); // input.conf: Shift+F5 script-binding restart-mpv
script-message or script-message-to (no default keybinding defined, string parameter support) can be used instead of script-binding.
//mp.register_script_message("restart-mpv", restart_mpv); // input.conf: Shift+F5 script-message restart-mpv
restart_mpv is the name of the function that gets called when the message is received.
mp.add_key_binding(null, ...) can be good (no default keybinding defined), but you should use _ rather than - in the script name, ex: X script-binding my_script/restart-mpv. Also, unlike script-message-to, it doesn't support parameters.
With X script-message-to my_script restart-mpv, the message "restart-mpv" is only sent to my_script, which avoids the possibility of random collisions.

Last edited by butterw2; 21st January 2024 at 21:08. Reason: Script Keybindings++, +input.conf: restart mpv v0.37