Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
15th November 2015, 15:09 | #1 | Link |
Registered User
Join Date: Aug 2007
Posts: 79
|
Perceptually Based Downscaling of Images (needs implementation)
hi there. I wanted to draw some attention to a fairly recently developed downscaler from ETH Zurich which gets pretty convincing results:
https://graphics.ethz.ch/~cengizo/imageDownscaling.htm It seems that they didn't publish the code, but the algorithm is explained in the paper and presented in pseudo code; it consists only of convolutions and sums. I developed some shaders for visually enhancing retro games in the past, but I don't have any experience with avisynth development. I'd love to see an avisynth implementation of this; I think it could be a great addition. Last edited by Sp00kyFox; 15th November 2015 at 15:20. |
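To make the "convolutions and sums" remark concrete, here is a rough numpy sketch of the core idea as I read it from the paper: downscale with a box filter, then amplify each pixel's deviation from its local mean so the small image keeps roughly the local variance of the input. This is a simplification for illustration, not the authors' exact algorithm; all function names are mine.

```python
import numpy as np

def box_downscale(img, s):
    """Average non-overlapping s x s patches (dims must be divisible by s)."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def box3(a):
    """3x3 box blur with edge padding."""
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def perceptual_downscale(img, s, eps=1e-6):
    """Variance-matching downscale: restore the local contrast that the
    box filter averaged away, so the small image looks as 'busy' as the
    original.  A simplification of the paper's idea, not its exact algorithm."""
    L = box_downscale(img, s)             # local means of the input
    L2 = box_downscale(img ** 2, s)       # local means of the squared input
    Sh = np.maximum(L2 - L ** 2, 0.0)     # variance inside each s x s patch
    M = box3(L)                           # smoothed (low-frequency) output
    Sl = np.maximum(box3(L ** 2) - M ** 2, 0.0)   # local variance of L
    R = np.sqrt((Sl + box3(Sh)) / (Sl + eps))     # contrast amplification
    return np.clip(M + R * (L - M), 0.0, 1.0)
```

A constant image passes through unchanged, while textured regions get their lost within-patch variance added back to the downscaled result.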
17th November 2015, 23:18 | #2 | Link |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
|
If you've done shader development, one option is to do a shader version and run it through AviSynthShader. If it works well, someone may decide to implement a native AviSynth version.
Plus, perhaps a better HLSL downscaler could improve the results of Shiandow's SuperRes. |
27th November 2015, 18:14 | #4 | Link | |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
|
Quote:
Adding any kind of preprocessing or postprocessing can only distort the original image, losing or amplifying details. There is still nothing like getting just the right amount of detail during the downscaling itself. Whether it's actually 'better' or not isn't the issue; it may be better in some cases and not in others. It's always better to have more filter options to work with so that everyone can experiment on their own. If the algorithms are in the paper, that's enough to implement it. If it were patented, that would be another issue, but it's only patent pending. |
|
4th December 2015, 13:16 | #8 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
I did my comparisons of image downscaling like this:
1. Scale to half size with various algorithms.
2. Scale back to the original size with nnedi3 and see which result looks closest to the original.
BicubicResize(b=-1, c=0) won in most cases. |
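That round-trip methodology can be sketched as a small harness. A hedged numpy version: nnedi3 isn't available outside AviSynth, so a crude nearest-neighbour upscale stands in for it, and the box-average `down2` is a placeholder for whichever downscaler is under test; the helper names are mine.

```python
import numpy as np

def psnr(a, b):
    """Peak signal-to-noise ratio for images in [0, 1]."""
    mse = np.mean((a - b) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(1.0 / mse)

def down2(img):
    """2x box-average downscale; swap in the filter under test here."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(img):
    """2x nearest-neighbour upscale, a crude stand-in for nnedi3."""
    return np.kron(img, np.ones((2, 2)))

def roundtrip_score(img):
    """Downscale to half size, scale back up, compare with the original."""
    return psnr(img, up2(down2(img)))
```

Ranking candidate downscalers then just means calling `roundtrip_score` with each one substituted for `down2` and picking the highest PSNR.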
4th December 2015, 13:46 | #9 | Link |
The image enthusiast
Join Date: Mar 2015
Location: Brazil
Posts: 270
|
The downscaling algorithm discussed here isn't good at keeping fine, small edges, although it doesn't add many artifacts; just look at the paper's results and you'll see it with your own eyes. Downscaling, IMHO, is almost never a good thing, because a large factor, i.e. 4x or more, kills weak, small edges, and no upscaling method can recover those details.
__________________
Searching for great solutions |
4th December 2015, 14:12 | #10 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Here are bicubicresize(b=-1,c=0) downscaling results for the samples here: https://graphics.ethz.ch/~cengizo/Fi...ralImages.html and bicubicresize(b=-1,c=0) looks better than "Perceptual" imho... |
4th December 2015, 14:39 | #12 | Link |
The image enthusiast
Join Date: Mar 2015
Location: Brazil
Posts: 270
|
Anyway, I will test. It's notable that Bicubic for upscaling has two extremes: aliasing or blur... kinda bad. I prefer the latest update of Adobe Detail Preserving. It's sharper than Bicubic 0,1, keeps fine and small details, and has no artifacts apart from those in the original images, which it doesn't smooth out.
__________________
Searching for great solutions |
4th December 2015, 14:56 | #13 | Link | |
Registered User
Join Date: Dec 2013
Posts: 753
|
Quote:
By the way, for some incomprehensible reason they have scaled the images on that page from 256x256 to 266x266, which has made them a lot blurrier; you should open them on a separate page if you want to compare quality. In my opinion their perceptual algorithm is better at preserving fine lines and texture, although it might overdo it on some of the images. |
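A quick 1-D demonstration of why that 256 to 266 page scaling softens the comparisons: after a non-integer resample, most output samples fall between pixels and get averaged, so single-pixel detail loses contrast. Linear interpolation via `np.interp` is a stand-in here; a browser's actual filter may differ, but the effect is the same in kind.

```python
import numpy as np

# A row of single-pixel detail: 256 alternating black/white samples.
src = np.tile([0.0, 1.0], 128)
x_src = np.arange(256)

# Linearly resample to 266 samples, mimicking a page that displays
# a 256px-wide image at 266px.
x_dst = np.linspace(0.0, 255.0, 266)
dst = np.interp(x_dst, x_src, src)

print(src.std(), dst.std())  # contrast drops after the non-integer resample
```

Almost every output sample lands between a 0 and a 1, so the resampled signal's standard deviation is strictly below the original 0.5.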
|
4th December 2015, 15:13 | #14 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
|
|
4th December 2015, 15:28 | #15 | Link |
Retired Guesser
Join Date: Jun 2012
Posts: 1,373
|
Hmm, gotta test this for myself...
Note the small images are downsized, then enlarged x2 to exaggerate the artifacts and to match the look of the sample. Even so, the sample images don't look right; they appear to have been softened somehow. (EDIT: missed the posts above discussing this.)
EDIT: This is not a good way to test, IMHO, unless you plan to view the final product at x2 scale.
EDIT: "Perceptual" is the only one besides "Point" to preserve the eye sparkle, but that can be fixed with gamma-aware resizing, I think.
Feel free to use this script with any downscale method you want to test. Code:
ImageSource("faceComparisons.png")
O=Crop(0, 0, 636, 0) ## face 1, all sizes
Crop(0, 0, 288, 380) ## face 1, cropped
scale=148.0/364.0 ## per sample image
scale=Min(Max(0.1, scale), 0.5) ## downsize to 1/2 this scale, then upsize x2
wid = mod(2, Round(scale*Width))
hgt = mod(2, Round(scale*Height))
StackHorizontal( \
  BlankClip(Last, color=$ffffff, width=O.Width-(4*wid)) \
, StackVertical( \
    PointResize(wid/2, hgt/2).x2.sub("Point") \
  , Lanczos4Resize(wid/2, hgt/2).x2.sub("Lanczos") \
  , BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt)) \
  ) \
, StackVertical( \
    BilinearResize(wid/2, hgt/2).x2.sub("Bilinear") \
  , BicubicResize(wid/2, hgt/2).x2.sub("Bicubic") \
  , BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt)) \
  ) \
, StackVertical( \
    BicubicResize(wid/2, hgt/2, b=-1, c=0).x2.sub("Bicubic(-1, 0)") \
  , BicubicResize(wid/2, hgt/2, b=0, c=0.75).x2.sub("Bicubic(0, 0.75)") \
  , BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt)) \
  ) \
, StackVertical( \
    Spline16Resize(wid/2, hgt/2).x2.sub("Spline16") \
  , Spline64Resize(wid/2, hgt/2).x2.sub("Spline64") \
  , BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt)) \
  ) \
)
return StackVertical(O, Last)

function mod(int m, int i) { return i - i % m }
function x2(clip C) { C return PointResize(2*Width, 2*Height) }
function sub(clip C, string s) { return C.Subtitle(s, size=C.Height/10, align=2) }
Last edited by raffriff42; 16th March 2017 at 23:54. Reason: (fixed image link) |
4th December 2015, 17:10 | #16 | Link | |
Registered User
Join Date: Dec 2013
Posts: 753
|
Quote:
By the way, bicubic(-1,0) looks pretty neat; I think I might use it for downscaling from now on. |
|
4th December 2015, 18:05 | #17 | Link | |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
|
Quote:
How does that Bicubic.hlsl need to be updated: changing B and C to -1 and 0, or is it something else in this implementation? https://github.com/mysteryx93/AviSyn...s/Bicubic.hlsl
|
4th December 2015, 20:33 | #18 | Link | |
Registered User
Join Date: Dec 2013
Posts: 753
|
Quote:
|
|
4th December 2015, 21:06 | #19 | Link |
Registered User
Join Date: Jun 2013
Posts: 24
|
Or you can just use Shader with Param2 = "-1,0f"
IIRC, VirtualDub's documentation mentions that using B = -0.6 is mathematically more accurate, so that's something you could try out. Last edited by sqrt(9801); 4th December 2015 at 21:09. |
4th December 2015, 22:28 | #20 | Link |
Soul Architect
Join Date: Apr 2014
Posts: 2,559
|
Here are some tests with SuperRes using these different downscalers:
SuperRes(Passes=2, Strength=.43) with NNEDI3(nns=4) as a prescaler.
Current (1/3, 1/3)
-1,0
-.6,0
These clearly make the image softer. Let's try with increased Strength.
-.6,0, Strength=1
My observations: -1,0 is too soft. -.6,0 looks more natural than 1/3,1/3 but requires increased strength; it removes some weird darker areas in the trees that appear with 1/3,1/3. Overall, -.6,0 with Strength=1 still looks a bit softer, but it also looks more natural. I like that one. What about an ultra-sharp version with Passes=3 and Strength=1? It looks pretty decent! It's worth noting that the colors are slightly different, especially the greens. Which one is the most accurate? The greens are already darker than in the NNEDI3 version, so -.6,0 is more accurate on the colors.
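For readers unfamiliar with what Passes and Strength do here: as I understand Shiandow's SuperRes, each pass downscales the current upscaled guess, compares it with the source, and adds back a Strength-weighted upscaled residual, i.e. roughly iterative back-projection. A toy numpy sketch under that assumption, with a box average standing in for the HLSL downscaler and a deliberately soft doubling standing in for NNEDI3 (all names mine):

```python
import numpy as np

def downscale2x(img):
    """2x box-average downscale (stand-in for the HLSL downscaler)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(img):
    """Nearest-neighbour doubling plus a 3x3 box blur, a deliberately
    soft stand-in for NNEDI3."""
    big = np.kron(img, np.ones((2, 2)))
    p = np.pad(big, 1, mode='edge')
    return sum(p[i:i + big.shape[0], j:j + big.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def superres(low, passes=2, strength=0.43):
    """Iterative back-projection: each pass downscales the current guess,
    compares it with the source, and adds back a Strength-weighted
    upscaled residual."""
    hi = upscale2x(low)
    for _ in range(passes):
        residual = low - downscale2x(hi)
        hi = hi + strength * upscale2x(residual)
    return hi
```

This also shows why the downscaler choice matters so much to SuperRes: the residual being fed back each pass is defined entirely by it, and more passes or a higher strength push the result harder toward whatever that downscaler considers "consistent" with the source.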
__________________
FrameRateConverter | AvisynthShader | AvsFilterNet | Natural Grounding Player with Yin Media Encoder, 432hz Player, Powerliminals Player and Audio Video Muxer Last edited by MysteryX; 4th December 2015 at 22:41. |