8th February 2011, 11:13   #241
absence
I'm toying with the idea of implementing some kind of EDI on a GPU. Not as advanced as NNEDI, more like NEDI or MEDI. The MEDI paper claims NEDI only uses the training window shown in figure 4a (or 4b), but according to a more practical description of NEDI, all 4x4 pixels surrounding the purple unknown high-resolution pixel dot on the blue grid are used, in a way that leaves the purple dot centred. The MEDI paper and the figures in the NEDI and MEDI papers make it sound like the unknown pixel is off-centre and only 3x3 surrounding pixels are used, while the formulas in both are quite clear about using all the pixels surrounding the unknown one. What am I misunderstanding here?
8th February 2011, 16:04   #242
tritical
I had the same discussion with madshi about a year ago about MEDI, and the conclusion was that the MEDI paper is just weird. They talk about NEDI using a single training window; it could be implemented that way, but never is in practice. All the implementations I've seen use 16 (4x4) or more training windows around the current point, and in both steps the combination of all of them is always centered about the point to be interpolated. Here is what I wrote to madshi:

Quote:
Originally Posted by tritical
I don't understand that paper either. Saying that NEDI uses a single training window is weird, but I guess it could be implemented that way. Their point about covariance mismatch is true either way, though. If you use all training windows available inside an NxN window (8x8, 16x16, etc.) around the current point, some of those windows may have the same structure (the same linear relationship between the predictors and the center pixel) as the real edge and some may not. The ones that don't could cause bad coefficient estimates. My take is that they are trying to select the training window that most closely resembles the window being used for interpolation, but their criterion of highest covariance signal energy doesn't seem like the best solution. Anyway, based on the results they report, it doesn't seem like their method is much of an improvement over NEDI.
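
To make the window geometry concrete, here is a minimal numpy sketch of the NEDI coefficient estimate as described above: every low-resolution pixel in a 4x4 block around the unknown high-resolution sample contributes one training case, and the block as a whole is centred on that sample. The window size, border handling, and names are illustrative assumptions, not code from any actual NEDI implementation.

Code:
import numpy as np

def nedi_coefficients(img, x, y, radius=2):
    # One training case per low-res pixel in the 4x4 block from
    # (x-2, y-2) to (x+1, y+1); because the unknown high-res sample sits
    # half a pixel down-right of (x, y), this block is centred on it.
    # Predictors are the four diagonal neighbours; the target is the
    # pixel itself. (Border handling is ignored for brevity.)
    C, r = [], []
    for j in range(y - radius, y + radius):
        for i in range(x - radius, x + radius):
            C.append([img[j - 1, i - 1], img[j - 1, i + 1],
                      img[j + 1, i - 1], img[j + 1, i + 1]])
            r.append(img[j, i])
    C = np.asarray(C, dtype=np.float64)   # 16 training cases x 4 predictors
    r = np.asarray(r, dtype=np.float64)
    # Least-squares fit of the four edge-directed coefficients:
    # a = argmin ||C a - r||^2
    a, *_ = np.linalg.lstsq(C, r, rcond=None)
    return a

# The unknown pixel is then predicted from its own four diagonal
# low-res neighbours: p = a[0]*NW + a[1]*NE + a[2]*SW + a[3]*SE

This is the combination-of-16-windows behaviour described above, and it is why the estimate stays centred on the unknown pixel rather than using a single off-centre window as the MEDI paper suggests.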
8th February 2011, 16:36   #243
absence
Quote:
Originally Posted by tritical
I had the same discussion with madshi about a year ago about MEDI, and the conclusion was that the MEDI paper is just weird.
Glad it's not just me.

Quote:
Originally Posted by tritical
Anyway, based on the results they report it doesn't seem like their method is much of an improvement over nedi.
MEDI does a better job than NEDI at connecting pixels in the spoke images in the paper, but I haven't seen it in action anywhere else, so I don't know whether the difference matters much for "normal" images.

Thanks for the info!
9th February 2011, 00:40   #244
tritical
Like I said, I would not trust the MEDI paper. If you actually implemented what they describe, which is NEDI with a 2x2 window (so you have 4 training cases per pixel, which is still too few) followed by selecting only one training case based on its energy, I would expect very large artifacts in any kind of detailed area. The reason is that you would be using only one training case to estimate four parameters.
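
A quick way to see the problem with a single training case: the least-squares system becomes rank-deficient, so three of the four coefficient degrees of freedom are completely unconstrained. A hypothetical toy example in numpy:

Code:
import numpy as np

# One training case (one equation), four coefficients (four unknowns).
C = np.array([[0.2, 0.8, 0.3, 0.7]])   # hypothetical single predictor row
r = np.array([0.5])                     # its target value
a, _, rank, _ = np.linalg.lstsq(C, r, rcond=None)
print(rank)  # 1 -> the system has a 3-dimensional null space
# Any a + n with C @ n == 0 fits the single case exactly, so the chosen
# coefficients can swing wildly with the slightest noise in that case.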
21st February 2011, 13:37   #245
henryho_hk
Just curious, is the idea of gamma drift in resizing relevant to the error calculation in NNEDI training?
21st February 2011, 20:02   #246
tritical
The short answer is yes. I think it would relate to all image quality metrics. It would be interesting to know whether mean squared error, mean absolute error, SSIM, etc. correlate better with human perception when computed on luminance (Y, relative luminance, computed from linear RGB) versus luma (Y', computed from gamma-corrected RGB). Lightness (human-perceived brightness) is not linear with respect to relative luminance, though; it is roughly proportional to Y^(1/3) after normalization by the reference white (CIE76).
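
For reference, a small numpy sketch of the quantities being distinguished, using the standard sRGB transfer curve and the CIE76 lightness formula (the function names are mine; swap in a pure power law if you prefer):

Code:
import numpy as np

def srgb_to_linear(c):
    # Undo the sRGB transfer curve (inputs in [0, 1]).
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def luma(rgb_prime):
    # Y': Rec.709 weights applied to gamma-corrected R'G'B'.
    r, g, b = np.asarray(rgb_prime, dtype=np.float64).T
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def relative_luminance(rgb_prime):
    # Y: the same weights applied to linear RGB.
    r, g, b = srgb_to_linear(rgb_prime).T
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def cie_lightness(Y, Yn=1.0):
    # CIE76 L*: roughly Y^(1/3) after normalization by reference white Yn.
    t = Y / Yn
    return np.where(t > 216 / 24389, 116.0 * np.cbrt(t) - 16.0,
                    (24389.0 / 27.0) * t)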

As far as interpolation goes, it is not quite as cut and dried as that webpage makes it out to be. Even if you undo gamma correction, there is no guarantee that the function being used for interpolation will fit (approximate) the underlying data more accurately. I could easily generate an image that is a linear ramp after gamma correction, so linear interpolation would give worse results if performed on the linear values. One could argue that such images are unlikely, or that natural images are generally better modeled by standard interpolation functions after removing gamma correction, which I guess is the argument of that webpage and is probably the case.

I should clarify: if you want to compute the average (or weighted average) luminance around a point (i.e. within some area), then you most definitely want to work with the linear values. Interpolation is a different matter.
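
A minimal sketch of the two averaging pipelines, with a plain 2.2 power law standing in for the exact transfer function; the gap grows as the inputs get further apart, and the gamma-space average always comes out darker:

Code:
GAMMA = 2.2

def average_gamma(a, b):
    # (1) average the gamma-corrected values directly
    return (a + b) / 2.0

def average_linear(a, b):
    # (2) undo gamma, average in linear light, redo gamma
    return ((a ** GAMMA + b ** GAMMA) / 2.0) ** (1.0 / GAMMA)

print(average_gamma(1.0, 0.0))   # 0.5
print(average_linear(1.0, 0.0))  # ~0.73, noticeably brighter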

24th February 2011, 17:14   #247
pbristow
@tritical: Regarding interpolation, the ramp example isn't that relevant, as the differences between the two methods would be negligible when interpolating between roughly similar brightness levels.

Where this effect shows up most strongly is when interpolating between a bright pixel (or several of them) and a much darker one (or several). The effect is that small bright details (e.g. stars in the night sky) become much dimmer and less visible than they should, while small dark details (e.g. speckles of dirt on a white sheet) become more obvious than they should.
24th February 2011, 22:32   #248
tritical
Yes, you are correct that when averaging pixel values, the difference between (1) averaging the gamma-corrected values directly and (2) undoing gamma correction, averaging the linear values, and then redoing the gamma correction will be greatest when the values to be averaged are very far apart... and (1) will always result in a smaller final value than (2) (assuming the applied gamma correction factor is < 1.0). However, that is not interpolation. For example, if I am given y=1.0 at x=0.0 and y=0.0 at x=1.0, and am asked to give a value for y at x=0.5, then without any other knowledge about the function there is no reason to believe that averaging y at x=0.0 and x=1.0 will be anywhere close to the correct value at x=0.5. Now if I know that y is piecewise linear, that is another matter. However, I would argue that most images, especially edge areas, are much closer to piecewise constant than piecewise linear... i.e. when you have two neighboring pixels that are very different, rarely in the original continuous (infinite resolution) image would the value directly between them be exactly half the luminance.

Also, in my ramp example the difference between neighboring function values (which are linear after gamma correction) could be made arbitrarily large, resulting in as large a difference between the two approaches as you want. That point was only to say that the accuracy of interpolation will be limited by how well the model you use for interpolation fits the underlying data.

Actually, I did a quick test using the training data for nnedi (primarily directional edges and complex textures) and linear interpolation (so the goal is to predict the pixel value at (x,y) given (x,y+1) and (x,y-1)). Linear interpolation on the linear values (undo gamma correction, interpolate, redo gamma) versus linear interpolation on the gamma-corrected values did not result in any significant reduction in absolute error or squared error, in either gamma-corrected or linear value space. The same held for cubic interpolation.
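
For anyone who wants to repeat that quick test on their own images, a rough sketch of the comparison (the 2.2 power law and measuring error in gamma space are my assumptions; the nnedi training data itself isn't available here):

Code:
import numpy as np

GAMMA = 2.2

def vertical_interp_errors(img):
    # Predict each pixel from the pixels directly above and below it.
    above, below, target = img[:-2], img[2:], img[1:-1]
    pred_gamma = (above + below) / 2.0                   # interpolate Y'
    pred_linear = (((above ** GAMMA) + (below ** GAMMA)) / 2.0) ** (1.0 / GAMMA)
    mae = lambda p: np.abs(p - target).mean()
    mse = lambda p: ((p - target) ** 2).mean()
    return ((mae(pred_gamma), mse(pred_gamma)),
            (mae(pred_linear), mse(pred_linear)))

# img: 2-D float array of gamma-corrected values in [0, 1], e.g.
# img = np.random.default_rng(0).random((256, 256))
# print(vertical_interp_errors(img))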

25th February 2011, 01:12   #249
pbristow
Hmmm... perhaps I was assuming too simple a model of interpolation (thinking of bilinear interpolation as the simplest case, and over-generalising)?

Today I've been looking at various images of faces, many where light is reflected in small highlights off dark hair, and seeing what happens when I resize the image. Subjectively it does seem that the small highlights are getting dimmed (not just shrinking) during the resize, but then it's probably just a crude bilinear resize (whatever IE8 uses to resize images on webpages). I might do some proper testing tomorrow with various resizers, if I get time.