Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Development

Old 15th November 2015, 15:09   #1  |  Link
Sp00kyFox
Registered User
 
Join Date: Aug 2007
Posts: 76
Perceptually Based Downscaling of Images (needs implementation)

Hi there. I wanted to draw some attention to a recently developed downscaler from ETH Zurich which gets pretty convincing results:
https://graphics.ethz.ch/~cengizo/imageDownscaling.htm

It seems they didn't publish the code, but the algorithm is explained in the paper and presented in pseudocode; it consists only of convolutions and sums.
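Since the method is nothing but convolutions and sums, the core idea is easy to prototype. Below is a rough NumPy sketch of the variance-matching concept as I read it from the paper; the 3x3 patch size, edge padding, and clamping are my own assumptions, not the authors' exact pseudocode:

```python
import numpy as np

def box_downscale(img, s):
    # plain box filter: average non-overlapping s x s blocks
    h, w = img.shape
    img = img[:h - h % s, :w - w % s]
    return img.reshape(img.shape[0] // s, s, img.shape[1] // s, s).mean(axis=(1, 3))

def local_mean(a, k=3):
    # k x k box convolution with edge padding (patch size is an assumption)
    p = k // 2
    padded = np.pad(a, p, mode='edge')
    h, w = a.shape
    acc = np.zeros((h, w))
    for i in range(k):
        for j in range(k):
            acc += padded[i:i + h, j:j + w]
    return acc / (k * k)

def perceptual_downscale(img, s, eps=1e-6):
    # Variance-matching downscale in the spirit of the paper: amplify local
    # deviations so the small image keeps the detail the box filter removed.
    l = box_downscale(img, s)           # naive downscale
    l2 = box_downscale(img ** 2, s)     # downscale of the squared image
    m = local_mean(l)                   # local mean of the small image
    sl = local_mean(l ** 2) - m ** 2    # variance surviving the downscale
    sh = local_mean(l2) - m ** 2        # variance of the original content
    r = np.sqrt(np.maximum(sh, 0.0) / np.maximum(sl, eps))
    return m + r * (l - m)              # match output variance to the input
```

On flat regions nothing changes; on textured regions r exceeds one, which is where the extra perceived detail comes from.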

I developed some shaders for visually enhancing retro games in the past, but I don't have any experience with AviSynth development. I'd love to see an AviSynth implementation of this; I think it could be a great addition.

Last edited by Sp00kyFox; 15th November 2015 at 15:20.
Old 17th November 2015, 23:18   #2  |  Link
MysteryX
Soul Architect
 
Join Date: Apr 2014
Posts: 2,040
If you've done shaders development, one option is to do a shader version and run it through AviSynthShader. If it works well, then someone may decide to implement a native AviSynth version.

Plus perhaps a better HLSL downscaler could improve the result of Shiandow's SuperRes.
Old 26th November 2015, 04:08   #3  |  Link
SSH4
Registered User
 
Join Date: Nov 2006
Posts: 87
The result is not better than BilinearResize(w/x, h/x), or XSharpen(128,128).BilinearResize(w/x, h/x), or any kind of ImagePreprocess().BilinearResize(w/x, h/x).ImagePostProcess().
And closed source + Patent Pending.
Old 27th November 2015, 18:14   #4  |  Link
MysteryX
Soul Architect
 
Join Date: Apr 2014
Posts: 2,040
Quote:
Originally Posted by SSH4 View Post
The result is not better than BilinearResize(w/x, h/x), or XSharpen(128,128).BilinearResize(w/x, h/x), or any kind of ImagePreprocess().BilinearResize(w/x, h/x).ImagePostProcess().
And closed source + Patent Pending.
The idea is to get a result as close to the source as possible.

Adding any kind of preprocessing or postprocessing can only distort the original image, losing or amplifying details.

There is still nothing like getting just the right amount of details during the downscaling itself.

Whether it's actually 'better' or not isn't the issue. It may be better in some cases and not in others. It's always better to have more filter options to work with so that everyone can experiment on their own.

If the algorithms are in the paper, that's enough to implement it.

If it gets patented, that's another issue; for now it's only patent pending.
Old 2nd December 2015, 08:26   #5  |  Link
SSH4
Registered User
 
Join Date: Nov 2006
Posts: 87
Ah, yes. You're right. I thought the samples on the web site were the results of this downscaling algorithm, but those are just the competitors. In the paper they show good results.
Old 4th December 2015, 12:33   #6  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
BicubicResize(b=-1, c=0) works fairly nicely at keeping details when downscaling, I think.
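For context, b and c in BicubicResize are the Mitchell-Netravali (BC-spline) parameters, so b=-1, c=0 is just one particular cubic kernel. A small sketch of the kernel family (my own transcription of the standard formula, not AviSynth's source) shows why that setting sharpens: its center tap is 4/3, so it overshoots, while the weights at any sampling offset still sum to 1:

```python
def bc_kernel(x, b, c):
    # Mitchell-Netravali BC-spline kernel, the family behind BicubicResize
    x = abs(x)
    if x < 1:
        return ((12 - 9 * b - 6 * c) * x ** 3
                + (-18 + 12 * b + 6 * c) * x ** 2
                + (6 - 2 * b)) / 6
    if x < 2:
        return ((-b - 6 * c) * x ** 3
                + (6 * b + 30 * c) * x ** 2
                + (-12 * b - 48 * c) * x
                + (8 * b + 24 * c)) / 6
    return 0.0
```

The b=0, c=0.75 and default 1/3, 1/3 settings fall out of the same formula, and the whole family reproduces constants, so none of these settings shift overall brightness.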
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated
Old 4th December 2015, 12:39   #7  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 745
That's true, but 'fairly nice' doesn't exactly compare to defining what 'keeping details' means and making a downscaler that gives the optimal result according to that definition.
Old 4th December 2015, 13:16   #8  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
I did my comparisons of image downscaling like this:
1. Scale to half size with various algorithms
2. Scale back to the original size with nnedi3, and see which one looks closest to the original
And BicubicResize(b=-1, c=0) won in most cases.
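That round-trip protocol is easy to script outside AviSynth too. Here is a minimal Python sketch of the idea, with a box downscale and nearest-neighbour upscale as stand-ins for the real resizers (nnedi3 and BicubicResize are AviSynth plugins) and PSNR as a stand-in for "looks closest" — all of which are my assumptions:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    # peak signal-to-noise ratio in dB; higher means closer to the reference
    mse = np.mean((ref - img) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def down2_box(img):
    # stand-in downscaler: average 2x2 blocks
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up2_nearest(img):
    # stand-in upscaler: duplicate every pixel 2x2
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def round_trip_score(img, down, up):
    # 1. halve  2. restore to original size  3. score against the original
    return psnr(img, up(down(img)))
```

Swap different downscalers in for `down`, keep the upscaler fixed, and rank them by score.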
Old 4th December 2015, 13:46   #9  |  Link
luquinhas0021
The image enthusyast
 
Join Date: Mar 2015
Location: Brazil
Posts: 267
The downscaling algorithm discussed here isn't good at keeping fine, small edges, although it doesn't add many artifacts. Just look at the paper's results and you will see with your own eyes. Downscaling, IMHO, is almost never a good thing, because a large factor, e.g. 4x or more, kills weak, small edges, and there's no upscaling method able to recover those details.
__________________
Searching for great solutions
Old 4th December 2015, 14:12   #10  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119


BicubicResize(b=-1, c=0) downscaling results for the samples here: https://graphics.ethz.ch/~cengizo/Fi...ralImages.html
and BicubicResize(b=-1, c=0) looks better than "Perceptual" IMHO...
Old 4th December 2015, 14:24   #11  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
certainly not.
Old 4th December 2015, 14:39   #12  |  Link
luquinhas0021
The image enthusyast
 
Join Date: Mar 2015
Location: Brazil
Posts: 267
Anyway, I will test. It's notable that Bicubic for upscaling has two extremes, aliasing or blur... kind of bad. I prefer the latest update of Adobe's Detail Preserving resize. It's sharper than Bicubic(0, 1), keeps fine, small details, and has no artifacts apart from those in the original images, which it doesn't smooth.
Old 4th December 2015, 14:56   #13  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 745
Quote:
Originally Posted by feisty2 View Post

[...]

BicubicResize(b=-1, c=0) downscaling results for the samples here: https://graphics.ethz.ch/~cengizo/Fi...ralImages.html
and BicubicResize(b=-1, c=0) looks better than "Perceptual" IMHO...
Interesting. Could you upload (some of) those files separately? That would make it easier to compare.

By the way, for some incomprehensible reason they have scaled the images on that page from 256x256 to 266x266, which has made them a lot blurrier; you should open them on a separate page if you want to compare quality. In my opinion their perceptual algorithm is better at preserving fine lines and texture, although it might overdo it on some of the images.
Old 4th December 2015, 15:13   #14  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by Shiandow View Post
By the way, for some incomprehensible reason they have scaled the images on that page from 256x256 to 266x266, which has made them a lot blurrier; you should open them on a separate page if you want to compare quality. In my opinion their perceptual algorithm is better at preserving fine lines and texture, although it might overdo it on some of the images.
Well, I re-checked the downscaled images at 256x256 and it makes sense now; Perceptual looks better than BicubicResize(b=-1, c=0).
Old 4th December 2015, 15:28   #15  |  Link
raffriff42
Retried Guesser
 
Join Date: Jun 2012
Posts: 1,376
Hmm, gotta test this for myself...


Note the small images are downsized, then enlarged x2 to exaggerate the artifacts, and to try to look like the sample.

Even so, the sample images don't look right; it looks like they have been softened or something. (EDIT missed the posts above discussing this)

EDIT This is not a good way to test IMHO -- unless you plan to view the final product at x2 scale.

EDIT "Perceptual" is the only one besides "Point" to preserve the eye sparkle, but that can be fixed with gamma-aware resizing, I think.
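Gamma-aware resizing here means filtering in linear light rather than on gamma-encoded pixel values; averaging encoded values underestimates the brightness of small highlights like that eye sparkle. A rough NumPy sketch with the standard sRGB transfer functions and a 2x box downscale standing in for a real resizer:

```python
import numpy as np

def srgb_to_linear(c):
    # standard sRGB decoding (gamma-encoded -> linear light), values in 0..1
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    # standard sRGB encoding (linear light -> gamma-encoded)
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def gamma_aware_down2(img):
    # decode, average 2x2 blocks in linear light, re-encode
    lin = srgb_to_linear(img)
    h, w = lin.shape
    lin = lin[:h - h % 2, :w - w % 2]
    lin = lin.reshape(lin.shape[0] // 2, 2, lin.shape[1] // 2, 2).mean(axis=(1, 3))
    return linear_to_srgb(lin)
```

A 2x2 black/white checker averages to 0.5 naively but to about 0.735 in linear light, which is exactly the kind of highlight-preserving difference described above.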

Feel free to use this script with any downscale method you want to test.

Code:
ImageSource("faceComparisons.png")
O=Crop(0, 0, 636, 0) ## face 1, all sizes
Crop(0, 0, 288, 380) ## face 1, cropped

scale=148.0/364.0 ## per sample image
scale=Min(Max(0.1, scale), 0.5)
## downsize to 1/2 this scale, then upsize x2

wid = mod(2, Round(scale*Width))
hgt = mod(2, Round(scale*Height))

StackHorizontal(
\   BlankClip(Last, color=$ffffff, width=O.Width-(4*wid))
\ , StackVertical(
\       PointResize(wid/2, hgt/2).x2.sub("Point")
\    ,  Lanczos4Resize(wid/2, hgt/2).x2.sub("Lanczos")
\    ,  BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt))
\   )
\ , StackVertical(
\       BilinearResize(wid/2, hgt/2).x2.sub("Bilinear")
\    ,  BicubicResize(wid/2, hgt/2).x2.sub("Bicubic")
\    ,  BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt))
\   )
\ , StackVertical(
\       BicubicResize(wid/2, hgt/2, b=-1, c=0).x2.sub("Bicubic(-1, 0)")
\    ,  BicubicResize(wid/2, hgt/2, b=0, c=0.75).x2.sub("Bicubic(0, 0.75)")
\    ,  BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt))
\   )
\ , StackVertical(
\       Spline16Resize(wid/2, hgt/2).x2.sub("Spline16")
\    ,  Spline64Resize(wid/2, hgt/2).x2.sub("Spline64")
\    ,  BlankClip(Last, color=$ffffff, width=wid, height=(Height-2*hgt))
\   )
\ )
return StackVertical(O, Last)

function mod(int m, int i)
{
    return i - i % m
}

function x2(clip C)
{
    C
    return PointResize(2*Width, 2*Height)
}

function sub(clip C, string s)
{
    return C.Subtitle(s, size=C.Height/10, align=2)
}

Last edited by raffriff42; 16th March 2017 at 23:54. Reason: (fixed image link)
Old 4th December 2015, 17:10   #16  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 745
Quote:
Originally Posted by feisty2 View Post
Well, I re-checked the downscaled images at 256x256 and it makes sense now; Perceptual looks better than BicubicResize(b=-1, c=0).
Yeah, I still have no clue why they felt the need to make their images 10 pixels bigger. They seem to make similar mistakes on other parts of the web site, which makes things really confusing. It's almost as if they don't want people to use their algorithm.

By the way, bicubic(-1,0) looks pretty neat, I think I might use that for downscaling from now on.
Old 4th December 2015, 18:05   #17  |  Link
MysteryX
Soul Architect
 
Join Date: Apr 2014
Posts: 2,040
Quote:
Originally Posted by Shiandow View Post
By the way, bicubic(-1,0) looks pretty neat, I think I might use that for downscaling from now on.
Does that mean your SuperRes algorithm might work better with that?

How does that Bicubic.hlsl need to be updated? Is it just a matter of changing B and C to -1 and 0, or is it something else in this implementation?

https://github.com/mysteryx93/AviSyn...s/Bicubic.hlsl
Old 4th December 2015, 20:33   #18  |  Link
Shiandow
Registered User
 
Join Date: Dec 2013
Posts: 745
Quote:
Originally Posted by MysteryX View Post
Does that mean your SuperRes algorithm might work better with that?

How does that Bicubic.hlsl need to be updated? Is it just a matter of changing B and C to -1 and 0, or is it something else in this implementation?

https://github.com/mysteryx93/AviSyn...s/Bicubic.hlsl
It might; I haven't checked it yet. If you want to try, it should indeed work if you just change "B" and "C" to -1 and 0 respectively.
Old 4th December 2015, 21:06   #19  |  Link
sqrt(9801)
Registered User
 
Join Date: Jun 2013
Posts: 24
Or you can just use Shader with Param2 = "-1,0f"

IIRC, VirtualDub's documentation mentions that using B = -0.6 is mathematically more accurate, so that's something you could try out.

Last edited by sqrt(9801); 4th December 2015 at 21:09.
Old 4th December 2015, 22:28   #20  |  Link
MysteryX
Soul Architect
 
Join Date: Apr 2014
Posts: 2,040
Here are some tests with SuperRes using these different downscalers:
SuperRes(Passes=2, Strength=.43) with NNEDI3(nns=4) as a prescaler

Current (1/3, 1/3)


-1,0


-.6,0


These clearly make the image softer. Let's try with increased Strength.

-.6,0, Strength=1


My observation: -1,0 is too soft. -.6,0 looks more natural than 1/3,1/3 but requires increased strength. It removes some weird darker areas in the trees that happen with 1/3,1/3.

Overall, -.6,0 with Strength=1 still looks a bit softer, but it also looks more natural. I like that one.

What about an ultra-sharp version of it with Passes=3 and Strength=1? It looks pretty decent!


It's worth noting that the colors are slightly different, especially the greens. Which one is the most accurate?

The greens are already darker than in the NNEDI3 version, so -.6,0 is more accurate on the colors.

Last edited by MysteryX; 4th December 2015 at 22:41.