Hello everyone.
My attention was caught by the
"Perceptually Based Downscaling of Images (needs implementation)" post, which is linked from
avisynth's resize article.
I also started using vapoursynth, which has a ton of useful and curious functions and plugins, among them
SSIM and SSIM_downsample. The latter is a pseudo implementation based on
Öztireli and Gross' original paper.
Thanks to having a native SSIM, I was able to run tests on the perceptual resizing; for that, I used an image from the
DPID Supplemental Material:
A vanilla resize had a score of 0.78107535635354
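For anyone curious how such a score is computed: muvsfunc's SSIM works over local windows and averages the result, but the underlying formula can be sketched with a simplified single-window variant in plain NumPy. The function name and the "global" simplification below are mine, not muvsfunc's:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM between two grayscale images.

    Real SSIM is computed over sliding local windows and averaged;
    this global variant only illustrates the formula itself.
    """
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0, and any distortion pulls the score below that, which is why the numbers above are all between 0 and 1.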
After that, I made an approximation of my own
(which you can see here) and got 0.870382479581167 as a result, using
box pre-resizing and a
smooth parameter of
0.4285
And this is how it should look
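Roughly, what the SSIM downsampler does (per the Öztireli and Gross paper) is a plain box downscale followed by a local contrast restoration, so each output neighbourhood keeps the variance the original image had there. A very loose NumPy sketch of that idea, assuming grayscale input and ignoring the paper's exact patch weighting (all names here are mine, not muvsfunc's):

```python
import numpy as np

def box_downscale(img, f):
    """Plain box downscale by integer factor f (patch averages)."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def mean3(a):
    """3x3 box filter with edge padding (local mean)."""
    p = np.pad(a, 1, mode='edge')
    n, m = a.shape
    return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

def ssim_downscale(img, f=2, eps=1e-6):
    l  = box_downscale(img, f)        # box-downscaled image
    l2 = box_downscale(img ** 2, f)   # downscale of the squared input
    m  = mean3(l)                     # local mean of the downscaled image
    sl = np.maximum(mean3(l ** 2) - m ** 2, 0)  # local variance after downscaling
    sh = np.maximum(mean3(l2) - m ** 2, 0)      # local variance of the original
    r  = np.sqrt(sh / (sl + eps))     # contrast-restoration factor
    # pull each pixel away from the local mean until the variance matches
    return m + r * (l - m)
```

Box filtering kills local variance, so r > 1 wherever the original had detail; that is what keeps the downscale perceptually crisp.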
As you can see, muvsfunc's SSIM downsampler is really close to the intended result, but somewhat darker... it might be useful to use gamma-aware resizing, but since we don't have the original transfer information for this image, we would have to guess it, which would result in an inaccurate image.
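For reference, gamma-aware resizing just means averaging in linear light instead of on the gamma-encoded values, which avoids exactly this kind of darkening. A minimal sketch assuming the image were sRGB-encoded (which, as said, we don't actually know for this one):

```python
import numpy as np

def srgb_to_linear(c):
    """sRGB electro-optical transfer function (decode to linear light)."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Inverse transfer function (re-encode to sRGB)."""
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def gamma_aware_box_downscale(img, f):
    """Average in linear light, then re-encode.

    Averaging the gamma-encoded values directly underestimates the
    brightness of mixed patches, hence the darker result.
    """
    lin = srgb_to_linear(img)
    h, w = lin.shape
    small = lin[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    return linear_to_srgb(small)
```

On a black-and-white checkerboard, naive averaging gives 0.5 while the linear-light average re-encodes to roughly 0.735, which is the difference you can see in the darker output.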
As a side note, I also tried another implementation, using the "SSimDownscaler" shader from mpv on vsplacebo, guided by
iamscum's mpv guide, and thus using gaussian as the auxiliary resizer: 0.8378410553123103