
 Doom9's Forum on the theory of the Spline16/36/64 resizers

17th May 2009, 13:06   #21  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
Hmm? The first tangent for the Akima spline constraints on page 4 of the pdf you linked was:

1/2 * ((z2-z1)/(y2-y1) + (z3-z2)/(y3-y2))

so essentially the tangent between P1 and P3 with uniform control points (y2-y1 = y3-y2).

Last edited by MfA; 17th May 2009 at 13:11.
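As a quick sanity check of that reduction, here is a small Python sketch (the function name `end_tangent` is just for illustration): with uniform control points the average of the two one-sided slopes collapses to (z3-z1)/2.

```python
def end_tangent(y1, y2, y3, z1, z2, z3):
    # average of the two one-sided slopes, as in the Akima end-point tangent above
    return 0.5 * ((z2 - z1) / (y2 - y1) + (z3 - z2) / (y3 - y2))

# With uniform control points (y2 - y1 == y3 - y2 == 1) the two denominators
# are equal and the expression telescopes to (z3 - z1) / 2.
print(end_tangent(0.0, 1.0, 2.0, 5.0, 7.0, 11.0))  # (11 - 5) / 2 = 3.0
```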
17th May 2009, 13:38   #22  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
BTW, I should add that as far as linear interpolators go this is all really old hat ... stuff like this is a little more fresh:
http://www-ist.massey.ac.nz/elai/Pub...eilei_2008.pdf

This method is pretty cute and easy to implement (basically doing a little unsharp before interpolation):
http://www.ing.unibs.it/~marco.dalai/pub/DLM_VLBV05.pdf

Last edited by MfA; 17th May 2009 at 15:39.
19th May 2009, 19:37   #23  |  Link
mikenadia
Registered User

Join Date: Nov 2007
Posts: 246
Trying to demonstrate that the sum of the weights equals 1. I assume that all the weights are polynomials of order n-1 and do not depend on the yi values. I will then take all yi = 1 and try to prove that the polynomial G(x) = (sum of weights) - 1 has n zeros and is therefore null.

Based on this paper http://online.redwoods.cc.ca.us/inst...kyMeg/Proj.PDF (equation 23), for i = 1 to n-1:
ai + bi + ci = 0 and di = 1 (h = 1 and all yi = 1), so
Si(xi + h) = ai*h^3 + bi*h^2 + ci*h + di = 1 because h = 1, and Si(xi+1) equals the sum of the weights at xi+1 because all yi = 1. So all the xi+1 (for i = 1 to n-1), i.e. x2 to xn, are zeros of G(x). Also S1(x1) = d1 = 1, so x1 is a zero of G(x) too. So G(x) has n zeros while being a polynomial of order n-1, hence G(x) = 0.

Edit: a different algorithm for Akima (which is not a Catmull-Rom) from the one given by "fugroairbone": http://www.iue.tuwien.ac.at/phd/rottinger/node60.html
They compute different derivatives at all points and the interpolation is a parabola in the end intervals. They use five points to compute the derivatives. It is probably too hard compared to Catmull-Rom (only two points). We could probably modify Catmull-Rom to make it harder. The tangent at point 2 is (s3-s1)/2; we could take (s3-s1)/k or k*ln(s3/s1). We could also compute the first and second derivatives at all points and, based on a "magic formula", decide to increase the radius, or to favour the natural cubic spline or Catmull-Rom.

Edit2: I read that pixels in a low-gradient environment should be given more weight. Maybe when we interpolate along the X axis we should keep an "indicator" of the gradient along X of each interpolated "pixel" and, before interpolating along Y, take those "gradients along X" into account and normalize. And if there is value in that, is there a difference (in human perception) depending on whether we start the interpolation along the X or the Y axis?

Last edited by mikenadia; 21st May 2009 at 02:59. Reason: Local gradient interpolation
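The Catmull-Rom construction mentioned above, with the (s3-s1)/2 tangent rule, fits in a few lines of Python (the name `catmull_rom` is mine; the cubic Hermite basis is standard):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2 at parameter t in [0, 1]."""
    # Catmull-Rom tangents: central differences, i.e. the (s3 - s1)/2 rule
    m1 = 0.5 * (p2 - p0)
    m2 = 0.5 * (p3 - p1)
    t2, t3 = t * t, t * t * t
    # cubic Hermite basis applied to the two endpoints and two tangents
    return ((2 * t3 - 3 * t2 + 1) * p1 + (t3 - 2 * t2 + t) * m1
            + (-2 * t3 + 3 * t2) * p2 + (t3 - t2) * m2)
```

Replacing `m1 = 0.5 * (p2 - p0)` with `(p2 - p0) / k` is exactly the kind of tweak suggested above.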
19th May 2009, 20:54   #24  |  Link
Gavino
Avisynth language lover

Join Date: Dec 2007
Location: Spain
Posts: 3,380
Quote:
 Originally Posted by mikenadia Trying to demonstrate that some [sum!] of the weights equal 1.
Here's what I think is a simpler proof:

If all the yi are equal (to k, say), then it is clear that the solution Si(x)=k (for all i,x) satisfies the initial constraints.
(intuitively, interpolating a set of constant values gives that same value).
Since S1(x) (= sum(Wn(x)*yn)) in this case reduces to k times the sum of the weights, it follows that the sum is 1.
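This can also be checked numerically for any interpolator whose weights do not depend on the yi. As an example (a stand-in, not one of the splines under discussion), the four Catmull-Rom weights sum to 1 at every phase t:

```python
def catmull_rom_weights(t):
    # weights applied to samples p0..p3 when interpolating at phase t in [0, 1]
    t2, t3 = t * t, t * t * t
    return [0.5 * (-t3 + 2 * t2 - t),
            0.5 * (3 * t3 - 5 * t2 + 2),
            0.5 * (-3 * t3 + 4 * t2 + t),
            0.5 * (t3 - t2)]

# the t^3, t^2 and t terms cancel in the sum, leaving exactly 1
for t in (0.0, 0.25, 0.5, 0.9):
    assert abs(sum(catmull_rom_weights(t)) - 1.0) < 1e-12
```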

20th May 2009, 08:34   #25  |  Link
madshi
Registered Developer

Join Date: Sep 2006
Posts: 9,137
Quote:
 Originally Posted by MfA BTW, I should add that as far as linear interpolators go this is all really old hat ... stuff like this is a little more fresh : http://www-ist.massey.ac.nz/elai/Pub...eilei_2008.pdf This method is pretty cute and easy to implement (basically doing a little unsharp before interpolation) : http://www.ing.unibs.it/~marco.dalai/pub/DLM_VLBV05.pdf
Thanks, these papers do look interesting. Unfortunately, "easy to implement" probably only applies to math professors. Being a programmer with only school-level math knowledge, I don't understand most of those papers... I guess there's no source code available for any of these techniques?

20th May 2009, 14:59   #26  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
Well, I didn't say the first one was simple. For the second one you simply apply a FIR filter with taps 7/720, -11/90, 49/40, -11/90, 7/720 in both dimensions and then you do bilinear interpolation.
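A minimal numpy sketch of that recipe (function names are mine, and the reflected border handling is an assumption the paper does not prescribe): run the 5-tap FIR along columns and rows, then upsample bilinearly. Note that the taps sum to exactly 1, so flat areas pass through unchanged.

```python
import numpy as np

# the 5-tap prefilter quoted above; the taps sum to 1
TAPS = np.array([7/720, -11/90, 49/40, -11/90, 7/720])

def prefilter(img):
    """Apply the FIR separably (columns, then rows) with reflected borders."""
    def conv1d(v):
        return np.convolve(np.pad(v, 2, mode='reflect'), TAPS, mode='valid')
    return np.apply_along_axis(conv1d, 1, np.apply_along_axis(conv1d, 0, img))

def bilinear_upscale(img, out_h, out_w):
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.minimum(ys.astype(int), h - 2)
    x0 = np.minimum(xs.astype(int), w - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    return ((1 - fy) * (1 - fx) * img[np.ix_(y0, x0)]
            + (1 - fy) * fx * img[np.ix_(y0, x0 + 1)]
            + fy * (1 - fx) * img[np.ix_(y0 + 1, x0)]
            + fy * fx * img[np.ix_(y0 + 1, x0 + 1)])

def sharpen_then_upscale(img, out_h, out_w):
    # prefilter acts like a mild unsharp mask; bilinear does the resizing
    return bilinear_upscale(prefilter(img), out_h, out_w)
```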
20th May 2009, 16:01   #27  |  Link
madshi
Registered Developer

Join Date: Sep 2006
Posts: 9,137
Quote:
 Originally Posted by MfA Well I didn't say the first one was simple

Quote:
 Originally Posted by MfA For the second one you simply apply a FIR filter with taps 7/720, -11/90, 49/40, -11/90, 7/720 in both dimensions and then you do bi-linear interpolation.
That sounds easy enough! But are those taps identical for all up- and downscaling ratios? Normally, when doing (big) downscaling, a lot more taps are used than when upscaling...

Thanks!

20th May 2009, 16:15   #28  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
No, but as I've said before, interpolation and resizing are two fundamentally different tasks. For downsizing you can simply add an "appropriate" Gaussian blur before doing the interpolation.
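As a sketch of that idea (the sigma heuristic below is my own assumption, not a formula from this thread): blur with a Gaussian whose width scales with the shrink factor, then let a cheap interpolator pick the output samples. For integer factors the samples land on the grid, so plain point-sampling stands in for the bilinear step here.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = max(1, int(3.0 * sigma + 0.5))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()  # normalize so flat areas are preserved

def blur(img, sigma):
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    def conv1d(v):
        return np.convolve(np.pad(v, pad, mode='reflect'), k, mode='valid')
    return np.apply_along_axis(conv1d, 1, np.apply_along_axis(conv1d, 0, img))

def downscale(img, factor):
    """Shrink by an integer factor: Gaussian prefilter, then point-sample."""
    # assumed heuristic: blur strength proportional to the shrink factor;
    # "appropriate" means you tune this constant to taste
    sigma = 0.5 * factor
    return blur(img, sigma)[::factor, ::factor]
```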
20th May 2009, 16:22   #29  |  Link
madshi
Registered Developer

Join Date: Sep 2006
Posts: 9,137
Quote:
 Originally Posted by MfA No, but as I've said before interpolation and resizing are two fundamentally different tasks. For downsizing you can simply add an "appropriate" gaussian blur before doing interpolation.
Ok, but the FIR configuration you posted would be alright for any upscaling factor, correct?

For downscaling, should I also use bilinear as the interpolation filter (after having done the Gaussian blur)? To be honest, I'm not sure about Gaussian blur + bilinear. I have already tried Gaussian resampling for downscaling and the results were much too soft.

20th May 2009, 16:31   #30  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
As I said, an appropriate amount ... you can make it as strong or weak as you want. Direct Gaussian resampling only makes sense for very large ratios.
20th May 2009, 17:46   #31  |  Link
madshi
Registered Developer

Join Date: Sep 2006
Posts: 9,137
Quote:
 Originally Posted by MfA BTW, I should add that as far as linear interpolators go this is all really old hat ...
Maybe it's old hat. But I've just tried the 2nd "fresh" paper which you kindly explained to me. And I don't like the results much. Old hat Spline36 looks worlds better to me. For those interested, here is a photo upsampled 425% by various resampling filters:

- Bilinear.bmp
- Prefiltering + Bilinear.bmp
- Bicubic50.bmp
- Bicubic75.bmp
- Spline36.bmp

The "Prefiltering + Bilinear" is what MfA's 2nd "fresh" paper produces. It's a lot less blurry compared to simple Bilinear resampling, but it has quite visible ringing and isn't any better in terms of aliasing compared to simple Bilinear filtering. Spline36 has a similar amount of ringing, but has *MUCH* reduced aliasing.

---------------------

Sooooo, back to Spline16/36/64/256?

20th May 2009, 18:24   #32  |  Link
Egh
Registered User

Join Date: Jun 2005
Posts: 630
Quote:
 Sooooo, back to Spline16/36/64/256?
Has somebody calculated the Spline256 coefficients by now? I was also thinking about the less computationally expensive option of Spline144...

20th May 2009, 18:40   #33  |  Link
Gavino
Avisynth language lover

Join Date: Dec 2007
Location: Spain
Posts: 3,380
Quote:
 Originally Posted by Egh I was also thinking about less computationally expensive option of spline144....
Well, Spline100 (5 taps) would actually be the next one in the sequence.

20th May 2009, 18:41   #34  |  Link
madshi
Registered Developer

Join Date: Sep 2006
Posts: 9,137
Quote:
 Originally Posted by Egh has somebody calculated spline256 coefficients by now? I was also thinking about less computationally expensive option of spline144....
In my experience, the more taps you use, the less obvious the difference becomes. To my eyes, going from 3 to 4 taps is a bigger difference than going from 4 taps to 8 taps. Mind you, I'm talking about positive differences here; negative differences (ringing) are quite obvious when going from 4 taps to 8 taps. Personally, I don't think more than 4 taps is really useful, because the positive aspects are extremely small while the negative aspects are quite noticeable. But that's just my personal opinion...

20th May 2009, 19:42   #35  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
Quote:
 Originally Posted by madshi For those interested, here is a photo upsampled 425% by various resampling filters
That's not pure interpolation though ... with that amount of resizing you are testing the filters on how believably they can make up extra detail.

20th May 2009, 19:52   #36  |  Link
Gavino
Avisynth language lover

Join Date: Dec 2007
Location: Spain
Posts: 3,380
Quote:
 Originally Posted by MfA That's not pure interpolation though ...
Why not? What do you mean by 'pure interpolation' then?

20th May 2009, 19:54   #37  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
Small ratios, shifts, rotations. For large-ratio upsizing, something like Tritical's filters seems more appropriate.
20th May 2009, 20:02   #38  |  Link
madshi
Registered Developer

Join Date: Sep 2006
Posts: 9,137
Quote:
 Originally Posted by MfA That's not pure interpolation though ... with that amount of resizing you are testing the filters on how believably they can make up extra detail.
Even if I upscale by "only" 33% (anamorphic stretch) I still get similar results, the differences are just smaller. I think the main point of Spline16/36/64 (which is the topic of this thread) is doing DVD -> HD upscaling. That's still over 200% enlargement. So I think my comparison screenshots are valid.

I see in the paper that they tested by rotating images. Maybe the results would be different in that case, I don't know. But I do find it kind of funny that both papers seem to use some simple cubic interpolation (probably bicubic 0.5) for the "conventional" interpolation comparison screenshots, and then they wonder why the results are so blurry?! Are they not aware that e.g. Lanczos exists? It usually produces much better results when doing multiple operations one after another (like rotating multiple times, or upscaling/downscaling multiple times)...

20th May 2009, 20:22   #39  |  Link
MfA
Registered User

Join Date: Mar 2002
Posts: 1,075
Quote:
 Originally Posted by mikenadia Edit:different algorithm for Akima ( that is not a Catmull-Rom ) from the one from "fugroairbone"
Unfortunately this one no longer simplifies to a convolution kernel ... so it's going to be quite slow.

20th May 2009, 20:56   #40  |  Link
tetsuo55
MPC-HC Project Manager

Join Date: Mar 2007
Posts: 2,317
Quote:
Are you using that one in MadVR?