Old 9th October 2012, 21:28   #14581  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
That's quite interesting. Won't that change the look of the image a bit, though?

What happens if I simply ignore that some values may lie outside of the [0,1] range? Will I get wildly inaccurate results, like division by zero, infinity or something like that?
Old 9th October 2012, 21:29   #14582  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Question: Wouldn't it be better to simply "linearize" the Y channel and only sigmoidize that?
(I need to reread these various definitions. Is Y' physically a linear light channel (luminance) or a gamma-encoded light channel (luma)?)
P.S. Just reread. So, you normally work and enlarge in a perceptual space (R'G'B').
The formulas I gave assumed that I was working with physically linear light channels. Apparently you can convert to some form of linear light (light RGB with something-something-something primaries). But maybe this is a waste.
-----
Need to think.
Maybe some form of the dirty hack would turn out better in the end?
P.S.
Something like:
Y'CbCr -> R'G'B' -> negate -> pretend it's RGB -> R'G'B' -> resample -> RGB -> negate -> "retag" as R'G'B' -> Y'CbCr
?
P.S.2 How does gamma correction handle negative values? By clamping?

Last edited by NicolasRobidoux; 9th October 2012 at 21:52.
Old 9th October 2012, 21:42   #14583  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Quote:
Originally Posted by madshi View Post
That's quite interesting. Won't that change the look of the image a bit, though?
Not really: If your scheme is interpolatory, it will still be interpolatory using the "old" version of sigmoidization or this "fixed" one. Does not matter.
For example, assuming no rounding error, nearest neighbour would give the same result with all variants.
However, it changes somewhat how sigmoidization affects the result, just like going through linear light or La*b* to upsample would affect the result. But one could argue that it's an equally good, or better, sigmoidization.
I don't really know.
Quote:
Originally Posted by madshi View Post
What happens if I simple ignore that some values may lie outside of the [0,1] range? Will I get wildly inaccurate results, like division by zero, infinity or something like that?
You can go out of the [0,1] range "a little bit" without feeding values outside of (-1,1) to atanh (no matter how it's computed) because "s", which characterizes where 0 and 1 go, provides some headroom within (-1,1). But once you hit -1 or 1 or go past, all hell breaks loose.
-----
In ImageMagick, this is fixed by clamping the argument of atanh to [-1+MagickEpsilon,1-MagickEpsilon]. You would not lose all the blacker than black and whiter than white, esp. when the contrast is low. But with high contrast, you may lose a chunk of whiter than white.
The solution works within ImageMagick because we're mostly concerned about sRGB and it converts to and from linear RGB with sRGB primaries without over- and undershoots.
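(For concreteness, here is a minimal sketch of the clamped sigmoidize/desigmoidize pair, assuming the scaled logistic with midpoint 0.5, so that s = tanh(contrast/4) and the argument fed to atanh is the -s+(2.0*s)*x discussed a few posts below; "eps" stands in for MagickEpsilon and the function names are made up for the sketch.)

Code:
import math

def sigmoidize(x, contrast=7.5, eps=1e-6):
    # Applied before resampling. s marks where 0 and 1 land inside (-1, 1),
    # which is the "headroom" mentioned above; eps plays the role of MagickEpsilon.
    s = math.tanh(contrast / 4.0)
    v = -s + 2.0 * s * x
    v = min(max(v, -1.0 + eps), 1.0 - eps)  # ImageMagick-style clamp
    return 0.5 + 2.0 * math.atanh(v) / contrast

def desigmoidize(y, contrast=7.5):
    # Applied after resampling to undo the curve; maps 0 -> 0 and 1 -> 1.
    s = math.tanh(contrast / 4.0)
    return 0.5 + math.tanh(contrast * (y - 0.5) / 2.0) / (2.0 * s)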

Last edited by NicolasRobidoux; 9th October 2012 at 21:53.
Old 9th October 2012, 21:42   #14584  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by NicolasRobidoux View Post
P.S. Just reread. So, you normally work and enlarge in a perceptual space (R'G'B').
The formulas I gave assumed that I was working with physically linear light channels. Apparently you can convert to some form of linear light (light RGB with something-something-something primaries). But maybe this is a waste.
-----
Need to think.
Maybe some form of the dirty hack would turn out better in the end?
It's not as complicated as it seems. After all the conversions I end up with R'G'B', usually with BT.709 primaries, which happen to be identical to sRGB primaries, IIRC. So basically I end up with the very same thing you get when you load a BMP file. The formulas you gave me are for linear light, I understood that. Of course, when implementing all this in madVR an hour ago, I first converted R'G'B' to RGB and then applied your sigmoidization formulas. And it's working well! So no problem here. The only thing I'm worried about is those values outside of [0,1].
Old 9th October 2012, 21:45   #14585  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
P.S: Although I do end up with similar data to a BMP file, there's one small difference: The values can go outside of the [0,1] range, as discussed before.
Old 9th October 2012, 21:46   #14586  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by NicolasRobidoux View Post
Not really: If your scheme is interpolatory, it will still be interpolatory using the "old" version of sigmoidization or this "fixed" one. Does not matter.
For example, assuming no rounding error, nearest neighbour would give the same result with all variants.
However, it changes somewhat how sigmoidization affects the result, just like going through linear light or La*b* to upsample would affect the result. But one could argue that it's an equally good, or better, sigmoidization.
I don't really know.
Ok, I'll give this a try tomorrow, thanks so much for your help!!
Old 9th October 2012, 22:00   #14587  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Mathias:
This is exciting.
And I think that we both understand and agree that the issue is how to handle values that go "out of nominal gamut" without diminishing the ability of sigmoidization to enhance what's within the nominal gamut.
Old 9th October 2012, 22:10   #14588  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Mathias:
I just clicked on something: It should be easy to compute a maximum allowable contrast ("a") that guarantees that things don't go to hell.
The maximum allowable contrast would depend on A and B in an elegant way.
Then, sigmoidization would be "intact". There just would be "less of it" when there is a possibility of large overshoots.
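(For what it's worth: with the parametrization where the argument of atanh is -s+(2.0*s)*x and s = tanh(contrast/4), which reproduces the numbers quoted further down in this thread, clamping is avoided as long as s*max(2*B-1, 1-2*A) < 1, so the maximum allowable contrast would be 4*atanh(1/max(2*B-1, 1-2*A)). With B around 1.2143 and A near 0, that works out to about 3.468.)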
-----
The simplest fix is simply to clamp like in ImageMagick. If the clamping messes things up, we should be able to see it.
I'm a bit concerned that we are messing up sigmoidization by trying hard to preserve outliers (very large whiter than white, for example).
Old 9th October 2012, 23:55   #14589  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Mathias: I think that my "one multiply-add at the beginning, one at the end" is probably a pretty good solution, in part because giving more intense sigmoidization near darks (which is the case since A apparently is generally quite close to zero) may match what's best given the HVS: We're less sensitive to light halo because our eyes are more sensitive to darks.
-----
Otherwise, I think that the clamping solution built into ImageMagick would be both good and simple. All it is is clamping -s+(2.0*s)*x to the interval [-1+fudgeEpsilon,1-fudgeEpsilon] before feeding to atanh. No more and no less.
Part of why I think this may work well is that even for the huge overshoot you estimated, namely 1.2143440435408780674380747705496, you'd need contrast > 3.468 before any clamping occurs.
Sure, I like something like 7.5. But we are not talking orders of magnitude.
This being said, with contrast=7.5, clamping will start at 1.024 or so.
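(A quick sanity check of those two numbers, assuming the same s = tanh(contrast/4) parametrization; the helper name is just for the sketch.)

Code:
import math

def clamp_start(contrast):
    # Input value at which -s + (2.0*s)*x first reaches 1, i.e. where the
    # ImageMagick-style clamp starts trimming whiter-than-white.
    s = math.tanh(contrast / 4.0)
    return (1.0 + 1.0 / s) / 2.0

print(clamp_start(3.468))  # ~1.214, the overshoot quoted above
print(clamp_start(7.5))    # ~1.024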
-----
I have a third solution, but I'll leave it out until I know better whether the above two work or fail.

Last edited by NicolasRobidoux; 10th October 2012 at 00:05.
Old 10th October 2012, 00:05   #14590  |  Link
aufkrawall
Registered User
 
Join Date: Dec 2011
Posts: 1,812
I have a little problem:
I created a custom resolution with 59.94 Hertz and added it and the native 60 Hertz resolution to the madVR display modes list (both 1280x1024).
It works well with 720p59.94 and 1080p60 videos but for a 1080p59.94 it refuses to choose 59.94 Hertz.
What's going wrong?
Old 10th October 2012, 00:09   #14591  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
One comment to all of you out there:
When enlarging, it may be that you like the results obtained with sigmoidization better than those obtained by explicitly sharpening.
Old 10th October 2012, 00:40   #14592  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Quote:
Originally Posted by madshi View Post
...
To my eyes the "sigmoidal" image clearly looks best. I've still left the anti-ringing filter enabled, though. Although the "sigmoidal" stuff does reduce the ringing artifacts, it doesn't completely remove them. So image quality is still noticeably better with the added anti-ringing filter on top.

So, only 2 questions remaining:

(1) Which is the "best" sigmoidal contrast? I've seen 7.5 (named with "safe") and 11.5. The image from above is with 11.5, which I prefer with this test image. Should I always use 11.5? Or would that be a bad idea?
...
I had missed this post, so busy I was fixing the "out of gamut" issue.
Now:
Sigmoidization is a method that I invented less than 3 months ago. So I don't know everything there is to know.
Let's assume that you use the "clamp the argument of atanh" fix, like it's done in ImageMagick. Otherwise contrast values are not really comparable.
My hunch is this:
Given that:

1) You actually "come from" Y'CbCr, not sRGB
2) Your luminance channel is better resolved. (Is it?)
3) Your video images are generally somewhat "soft" (by DSLR standards)

I would say that it is quite likely that contrast 11.5 is almost always safe. Maybe even 12.

I'd consider, however, using the slightly larger LanczosSharp deblur I discuss earlier in this thread and in my new ImageMagick recommendations: Let sigmoidization take care of "tightening things" and let EWA Lanczos take care of "smoothing things".

If people start complaining about something, it will be this:

They will notice that sometimes a colour that has a very high component (say R=255) mixed in with more midtone-like values in the G and B channels will "bleed" near sharp interfaces. Same if you have R=0.
What's happening is that the extreme colour values are "sharpened" differently by the low pass filter than the midtones, and mixed extreme/midtone interfaces thus are "rolled up" differently in the three channels.

Fixing this would be hugely complicated...

sRGB does this too. All the perceptual color spaces, actually, to the best of my understanding.

But then enlarging through linear light sucks!

-----

If you use the [A,B] gamut control "system", I'd say you may want even higher sigmoidization.

Does my answer make sense?
(You understand that I'm the one who invented EWA LanczosSharp in all its multiple variants, yes? Was kind of an obvious extension of what people had done before, and Anthony Thyssen and Fred Weinhaus, as well as publications by Gustafson and deForest, were strongly pointing in the right direction, but still. I'm the one who finally put all the pieces together, and then added "optimizing the deblur" to the mix, mostly in response to Anthony Thyssen's knowledgeable list of shortcomings of the earlier ImageMagick implementation.)
P.S. TTBOMK I'm the one who finally got it just right. If anybody knows of predecessors I should acknowledge, by all means, please point me to them. "As usual," Paul Heckbert and associates got this going.
At some point I may take the time to carefully track down everybody who had pieces of this lying around.

Last edited by NicolasRobidoux; 10th October 2012 at 15:06.
Old 10th October 2012, 02:14   #14593  |  Link
blackjack12
Registered User
 
 
Join Date: Aug 2012
Location: Silicon Valley
Posts: 46
Quote:
Originally Posted by madshi View Post
But when using the official v0.84.2 build, the problems with 6000 series are not specific to deinterlacing, correct? They also occur with progressive content, correct? Could you please double check, just to be safe? It's important to know.

So the special test build I created for you makes everything progressive work just fine, and the only problems left are when using deinterlacing? Is that correct? How about the official v0.84.2 then? Could you please double check if maybe the official v0.84.2 also plays progressive content fine? I'm asking because the only difference between the official v0.84.2 and your special test build is deinterlacing. So for progressive content v0.84.2 and your test build should behave identically.

I'm sorry that fixing this problem takes so long. If only I could reproduce it on my PC !!!
Okay ... re-test results.

AMD Radeon 6570/1GB DDR3
Driver ver. 12.8
MPC-HC or MPC-BE (latest versions)
LAV Filters: 51.3-120
Windows 7 and Windows 8 test – identical systems (64 bit)

Official 0.84.2
Windowed Mode
Interlaced: BAD
Progressive: BAD

FSE Mode
Interlaced: GOOD
Progressive: GOOD

0.84.2 – special madVRblackjack test build
Windowed Mode
Interlaced: CRASH
Progressive: BAD (4 plus jumps = dropped frames and buffers don’t fill)

FSE Mode
Interlaced: CRASH
Progressive: GOOD

Go back to 0.82.5 and all is GOOD...
__________________
MPC-HC and MPC-BE (latest), MadVR 0.92.17, LAV 0.73.1
Intel NUC w_650 internal, Roku Ultra, Nvidia Shield, Apple TV 4K
PLEX Server with QUADRO 2000
Windows 10 Pro (all latest updates)
Old 10th October 2012, 06:02   #14594  |  Link
turbojet
Registered User
 
Join Date: May 2008
Posts: 1,840
Quote:
Originally Posted by madshi View Post
Well, I would much rather implement a *real* sharpening algorithm, to be honest. That's why I'm also reluctant to offer so many more tweaks: It costs many hours of my time which I could instead spend on doing the "real thing". Personally, I don't like Darbee pop at all. I much prefer something like Didee's FineSharp. So if you ask me to add this and that, and a couple of other tweaks in between, basically you're keeping me busy with so-so stuff, instead of allowing me to move on to bigger and greater things. I've said it so many times before: Let me first add all the missing features, before spending my time twiddling with 1% improvements. I'm feeling held back at the moment by all the constant requests for small tweaks and changes everywhere. Aren't you guys interested in getting things like custom shader support, 3D support etc?


Interesting, never heard of vapoursynth yet.
I'm not trying to change your priorities, but I didn't think it would take hours to implement: create a dropdown and a label, change the blur parameter to a variable, set the variable based on the dropdown state. Without variables, some copy/paste and changing one number, but I have no clue how the code is. The first place I saw negative blur suggested was in the FineSharp thread, by the FineSharp author. I think the reasoning is that FineSharp should give/take as much information as possible, and blurring is counter-intuitive.

Shader support sounds exciting, but my experience is that avisynth can do everything shaders can do, and often better, except madVR's anti-ringing (you did a great job on it!). Sharpen complex is bested by a simple avisynth sharpen() imo, and the deband shaders aren't effective at all. Shaders shift CPU load to the GPU, which is a plus, but that's kind of irrelevant with madVR loading the GPU more than the CPU.

3D support sounds nice for those that have 3DTVs. Are you talking SBS or Top/Bottom (both can be done via avisynth), or is it something else?

Here's a parking meter 4x upsize test with -1 and 0 blur if you want to take a look; it's more of a difference than the pics I posted earlier. Here's hoping you haven't given up on it.

Vapoursynth is cross-platform and requires avisynth plugins to be rewritten.
Old 10th October 2012, 08:07   #14595  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by aufkrawall View Post
I have a little problem:
I created a custom resolution with 59.94 Hertz and added it and the native 60 Hertz resolution to the madVR display modes list (both 1280x1024).
It works well with 720p59.94 and 1080p60 videos but for a 1080p59.94 it refuses to choose 59.94 Hertz.
What's going wrong?
If you create a madVR debug log, I can tell you for sure.

Quote:
Originally Posted by blackjack12 View Post
Okay ... re-test results.

Official 0.84.2
Windowed Mode
Interlaced: BAD
Progressive: BAD

FSE Mode
Interlaced: GOOD
Progressive: GOOD

0.84.2 – special madVRblackjack test build
Windowed Mode
Interlaced: CRASH
Progressive: BAD (4 plus jumps = dropped frames and buffers don’t fill)

FSE Mode
Interlaced: CRASH
Progressive: GOOD

Go back to 0.82.5 and all is GOOD...
So in other words: The test build didn't help at all, correct? Oh well, this is going to be difficult to fix. There's one thing remaining for you to test, though: When going back to v0.82.5, are you using any of the tweak settings that I removed in v0.83.0 (and higher)? You can test this by simply resetting v0.82.5 to default settings. If it still works fine, and if v0.84.2 does not work fine with default settings then the tweak settings are not the problem.

If the tweak settings are not the problem, would you mind creating another debug log or 2 or 3? Please seek until you get those frame drops, then let it run for ca. 10-20 seconds, then close the media player. Please don't reseek after you achieved the frame drop situation. That will make it easier for me to find the right location in the monster sized log files. Please also use a progressive source, that makes the logs a bit easier because no deinterlacing is performed. Thanks!

Quote:
Originally Posted by turbojet View Post
I'm not trying to change your priorities but didn't think it would take hours to implement. Create a dropdown, label, changing blur parameter to a variable, set variable based on dropdown state. Without variables some copy/paste and changing one number but no clue how the code is.
Creating a different advanced/simple user interface would take hours, days even. Just adding a blur parameter would not take that long, of course. But just adding a dropdown and label etc isn't all there is to it: I'd have to:

(1) Extend the settings records.
(2) Modify madVR to read, write, understand and use the new setting.
(3) Probably add a toggle keyboard shortcut.
(4) Make sure the setting is only available for cubic algorithms.
(5) Change my resampling weight calculator to support custom blur.
(6) Think about how to make the dropdown work with Bicubic, SoftCubic, CatmullRom and Mitchell. Using the same blur value for all wouldn't work because they have different blur values on their own. Of course it would be possible to limit negative blur capability to Bicubic, only.

All added up it might easily take 2-3 hours. More if I'm stupid and introduce bugs. Add to that that the user interface gets more complicated again, which means that new users will ask not only "which is the best algorithm" but also "which blur factor should I use" etc.

I prefer the idea of offering a fully configurable cubic resampler, which could replace all the other cubic based algorithms (Mitchell, Bicubic, Catmull-Rom, SoftCubic). But then, that would definitely be a setting for experts only, and it will have to wait until I have separate beginner/expert settings.
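(For reference, the usual way to express such a fully configurable cubic is the two-parameter Keys/Mitchell-Netravali kernel: Mitchell and Catmull-Rom are exact special cases, the common Bicubic variants correspond to B=0 with varying C, and whether SoftCubic fits the same family is not something this sketch claims. A minimal version:)

Code:
def bc_cubic(x, B=1.0/3.0, C=1.0/3.0):
    # Mitchell-Netravali two-parameter cubic kernel, support [-2, 2].
    # B=C=1/3: Mitchell; B=0, C=0.5: Catmull-Rom; B=1, C=0: cubic B-spline.
    x = abs(x)
    if x < 1.0:
        return ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6.0
    if x < 2.0:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2 + (-12*B - 48*C) * x + (8*B + 24*C)) / 6.0
    return 0.0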

Quote:
Originally Posted by turbojet View Post
Shader support sounds exciting but my experience is avisynth can do everything shaders can do and often better, except madvr's anti-ringing, you did a great job on it! Sharpen complex is bested by simple avisynth sharpen() imo and deband shaders aren't effective at all. Shaders shift cpu load to gpu which is a plus but kind of irrelevant with madvr loading the gpu more than cpu.
I somewhat agree, but you can't run avisynth scripts *after* scaling, when using madVR, so shaders have their use. Also GPUs get much faster every year, while CPUs get only slightly faster. So I think in the long run it makes sense to run these kind of algorithms all on the GPU. Furthermore: If you have a good GPU, you can run algorithms on it which avisynth scripts couldn't do in realtime on a CPU. Whether it is easily possible to convert such complex avisynth scripts to a GPU shader is another question, though.

Quote:
Originally Posted by turbojet View Post
3D support sounds nice for those that have 3DTV's are you talking SBS, Top/Bottom, both can be done via avisynth, or is it something else?
I was mainly thinking about HDMI 1.4 frame packed output. That will not be easy because ATI, Nvidia and Intel all have custom development solutions for that. But this should be the ultimate goal. Doing SBS or Top/Bottom is kind of yesterday.

Quote:
Originally Posted by turbojet View Post
Here's parking meter 4x upsize test with -1 and 0 blur if you want to take a look, it's more of a difference then the pic's I posted earlier. Here's hoping you haven't given up on it.
It is not uninteresting, but I much prefer Lanczos or Jinc over Bicubic with negative blur. Bicubic has too much aliasing for my taste.

Look, if I knew a way to support negative blur without making the settings dialog more complicated, I would consider it. Maybe I will add it in the future when I find the time to do separate settings dialogs for noobs and experts. But for now I don't want to implement it because I don't want to spend that time and because I don't want to make the settings dialog more complicated. Just have a bit of patience.

Quote:
Originally Posted by NicolasRobidoux View Post
I had missed this post, so busy I was fixing the "out of gamut" issue.
I've had another idea on how to fix the "out of gamut" issue: For values outside the valid range I could practically mirror the sigmoidization S curve. Meaning e.g. for values "> 1" I would do "2 - value" before *and* after sigmoidization. And for values "< 0" I could do "abs" before sigmoidization and then simply restore the "-" sign after sigmoidization. What do you think?
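(To make the idea concrete, a minimal sketch of the mirroring, reusing the sigmoidization formula assumed earlier in the thread (s = tanh(contrast/4)); the function names are made up, and it still assumes overshoots stay above -1 and below 2.)

Code:
import math

def sigmoidize01(x, contrast=7.5):
    # Plain sigmoidization, only valid for x in [0, 1].
    s = math.tanh(contrast / 4.0)
    return 0.5 + 2.0 * math.atanh(-s + 2.0 * s * x) / contrast

def sigmoidize_mirrored(x, contrast=7.5):
    # Mirror the S curve for out-of-range values, as described above:
    # "2 - value" before and after for values > 1, abs + sign restore for values < 0.
    if x > 1.0:
        return 2.0 - sigmoidize01(2.0 - x, contrast)
    if x < 0.0:
        return -sigmoidize01(-x, contrast)
    return sigmoidize01(x, contrast)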

Quote:
Originally Posted by NicolasRobidoux View Post
My hunch is this:
Given that:

1) You actually "come from" Y'CbCr, not sRGB
2) Your luminance channel is better resolved. (Is it?)
3) Your video images are generally somewhat "soft" (by DSLR standards)

I would say that it is quite likely that contrast 11.5 is almost always safe. Maybe even 12.
I think I'll stick with 11.5. In one of your threads I found "11.6933", but that was for Ginseng. I guess you used some formula to end up with exactly "11.6933", right? But I guess that only applies to Ginseng? Is it alright to use exactly "11.50000" for 2D Jinc and 1D Lanczos? Or is there some math reason to choose slightly different values?

Quote:
Originally Posted by NicolasRobidoux View Post
I'd consider, however, using the slightly larger LanczosSharp deblur I discuss earlier in this thread
Will do. You mean 0.9891028367558475, right?

Quote:
Originally Posted by NicolasRobidoux View Post
sRGB does this too. All the perceptual color spaces, actually, to the best of my understanding.

But then enlarging through linear light sucks!
Agreed. But sigmoidization is not perceptual, is it? I mean it's linear light + S curve. That's not really perceptual, I think.

Quote:
Originally Posted by NicolasRobidoux View Post
You understand that I'm the one who invented EWA LanczosSharp in all its multiple variants, yes? Was kind of an obvious extension of what people had done before, and Anthony Thyssen and Fred Weinhaus, as well as publications by Gustafson and deForest, were pointing in the right direction, but still.)
Yes, and I very much appreciate the invention you did there. Funny enough, in my own experiments I thought that 2D resampling should be better than scaling X and Y separately, because looking at a circle of source pixels to find a target pixel seems so much more intuitive, but I never got it to work right in my experiments. Well, I tried Sinc-Sinc (Lanczos) with the real 2D distance (Sqrt(xDist^2 + yDist^2)), but the results were pretty bad. So I gave up on that, my math skills were simply not good enough to understand why it didn't work. So I was quite happy when I understood that basically the key thing about your EWA resamplers is that they work in 2D instead of 1D, using a circle of source pixels... I will add a paragraph to madVR's readme to thank you for your contributions!
Old 10th October 2012, 08:28   #14596  |  Link
bugmen0t
Banned
 
Join Date: May 2012
Location: _Lies|Greed|Misery_
Posts: 114
Not sure what you guys want to achieve but you could modify the atanh to fit every range (a, b): atanh_mod(x) = log((a + x)/(b - x))/2. If a and b are known this is really cheap.
Old 10th October 2012, 11:35   #14597  |  Link
Neeto
Registered User
 
Join Date: Feb 2009
Posts: 77
Quote:
Originally Posted by madshi View Post
Yes, and I very much appreciate the invention you did there. Funny enough, in my own experiments I thought that 2D resampling should be better than scaling X and Y separately, because looking at a circle of source pixels to find a target pixel seems so much more intuitive, but I never got it to work right in my experiments. Well, I tried Sinc-Sinc (Lanczos) with the real 2D distance (Sqrt(xDist^2 + yDist^2)), but the results were pretty bad. So I gave up on that, my math skills were simply not good enough to understand why it didn't work. So I was quite happy when I understood that basically the key thing about your EWA resamplers is that they work in 2D instead of 1D, using a circle of source pixels... I will add a paragraph to madVR's readme to thank you for your contributions!
NicolasRobidoux & madshi, I only understand about a 10th of what you're discussing, but you two seriously rock - brilliant maths/understanding & implementation - brilliant combination!!
__________________
ASUS H97 Plus, Intel i5-4690 2.50Ghz, 16GB DD3 1600, XFX R9 270X 2GB DDR5, LynxTwo B
Win 8.1 Pro with WMC 64Bit, Kodi, MPC-HC 1.7.8, LAV 0.65.0, Reclock 1.8.8.5, HD AnyDVD
Old 10th October 2012, 11:56   #14598  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Quote:
Originally Posted by madshi View Post
... I prefer the idea to offer a fully configurable cubic resampler, which could replace all the other cubic based algorithms (Mitchell, Bicubic, Catmull-Rom, SoftCubic). But then, that would definitely be a settings for experts, only, and will have to wait until I have separate beginner/expert settings.
You are aware that there is a discussion of exactly this in "The Recommendations", yes?
Quote:
Originally Posted by madshi View Post
It is not uninteresting, but I much prefer Lanczos or Jinc over Bicubic with negative blur. Bicubic has too much aliasing for my taste.
Agreed: The bicubics don't compete. (See "The Recommendations", which unfortunately don't make this point strongly enough, partly because some commercial clients of mine really really like the Robidoux cubic EWA filter for downsampling, and consequently I'm wondering if there is something I'm missing (I try not to argue with panels of top web designers). The runner-up was EWA LanczosSharp (the "older" one; the new one did not exist then). I'll give another try at making them change, but speed matters and they're using an 8-bit toolchain, and they are processing sRGB values directly with lots of sky lines, and sRGB is bad with "light" haloing...)
This can be done with a "knob" that changes the deblur of the filter, like actually used with Jinc right now, except hardwired to .98...
P.S. (I realize that you probably have this figured out already.)
Actually, some DSLR people have gotten acceptable results doing this with tensor Lanczos as well (Lanczos 3 with roots spaced tighter than 1 by scaling the support of the filter), which will make some people shudder in horror big time, when downsampling.
But it appears that careful USM or other sharpening is better, at least when downsampling. (The jury is still out.)

Quote:
Originally Posted by madshi View Post
I've had another idea on how to fix the "out of gamut" issue: For values outside the valid range I could practically mirror the sigmoidization S curve. Meaning e.g. for values "> 1" I would do "2 - value" before *and* after sigmoidization. And for values "< 0" I could do "abs" before sigmoidization and then simply restore the "-" sign after sigmoidization. What do you think?
Here is a better idea, inspired by yours (and somewhat reminiscent of the sRGB handling of the gamma curve's slope singularity at 0): Extend the curves past 0 and 1 with straight lines (which I feel like an idiot for not having thought of before).
What do you think?
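(One way to write the straight-line extension down, assuming s = tanh(contrast/4) as before and matching the curve's slope at the endpoints; the slope-matching choice and the function name are assumptions of the sketch.)

Code:
import math

def sigmoidize_linear_ext(x, contrast=7.5):
    # Sigmoidization on [0, 1], extended past 0 and 1 with straight lines
    # that match the curve's value and slope at the endpoints.
    s = math.tanh(contrast / 4.0)
    if 0.0 <= x <= 1.0:
        return 0.5 + 2.0 * math.atanh(-s + 2.0 * s * x) / contrast
    slope = 4.0 * s / (contrast * (1.0 - s * s))  # derivative at x = 0 and x = 1
    return slope * x if x < 0.0 else 1.0 + slope * (x - 1.0)

The matching desigmoidization would extend past 0 and 1 with slope 1/slope, so out-of-range values round-trip exactly.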

Quote:
Originally Posted by madshi View Post
I think I'll stick with 11.5. In one of your threads I found "11.6933", but that was for Ginseng. I guess you used some formula to end up with exactly "11.6933", right? But I guess that only applies to Ginseng? Is it alright to use exactly "11.50000" for 2D Jinc and 1D Lanczos? Or is there some math reason to choose slightly different values?
These fancy values were obtained by uniformly applying an exact method of determining the contrast values for each filter that used a numerical approach based on ... vaguely determined parameters (based on usually accepted values for JND, namely 2.3 in Lab = 8 in 8-bit sRGB).
Honest advice: Use your eyes, and the eyes of your users. There is a possibility that BluRay people ask you to back this down. Or that some people ask for more of this "strange halo free sharpening". Or that some people will want less of it even when enlarging somewhat blurry images.
11.5 is a good, solid, pretty strong value. Maybe a little too strong? Probably not with enlargements of slightly blurry images that can't be scrutinized because they go by at 30fps.
But remember that sigmoidization, as a resampling technique, is not even 3 months old. You're in terra incognita.

Quote:
Originally Posted by madshi View Post
Will do. You mean 0.9891028367558475, right?
Yes.

Quote:
Originally Posted by madshi View Post
Agreed. But sigmoidization is not perceptual, is it? I mean it's linear light + S curve. That's not really perceptual, I think.
Even though it was not meant to be perceptual, there is a not totally explained connection with Weber's law, which is an approximation of the HVS that gamma correction itself approximates, at the dark end.
So, I could explain sigmoidization purely "physically", but actually I think it works well partly because it connects to the perceptual: "Midtone" halos don't offend us. "Extreme" ones do.
This is why I can fake sigmoidization with linear RGB -> sRGB -> linear RGB conversions + inversions (going "through the photo negative").

What I'm doing, in a sense, is symmetrizing the perceptual spaces so that white does not creep too much into the darker end, in other words, I'm symmetrizing w.r.t. black <-> white interchange.

Will read the rest after making coffee for my wife!

-----

My earlier LUT advice was backward. Will fix in a later post.

Last edited by NicolasRobidoux; 10th October 2012 at 14:08.
Old 10th October 2012, 13:13   #14599  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Quote:
Originally Posted by bugmen0t View Post
Not sure what you guys want to achieve but you could modify the atanh to fit every range (a, b): atanh_mod(x) = log((a + x)/(b - x))/2. If a and b are known this is really cheap.
Thank you! (My complex variables teacher would hit me with a ruler if he knew I overlooked this.)
Will need to think of the consequences. I think it may be equivalent to the "two multiply-add" suggestion, saving flops.
Basically, it's like simplifying the fraction (I think).
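For reference, the identity does work out with the -s+(2.0*s)*x parametrization used above: atanh(-s+2*s*x) = log((1-s+2*s*x)/(1+s-2*s*x))/2, and dividing numerator and denominator by 2*s gives log((x + (1-s)/(2*s)) / ((1+s)/(2*s) - x))/2, i.e. bugmen0t's form with a = (1-s)/(2*s) and b = (1+s)/(2*s). The singularities at x = -a and x = b are exactly the points where the clamp would otherwise have to kick in.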
Old 10th October 2012, 13:40   #14600  |  Link
NicolasRobidoux
Nicolas Robidoux
 
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Mathias:
RE: "Is sigmoidization perceptual?"
Near the dark end, the lightly truncated logistic curve (same as tanh, really) approximates a gamma correction curve (or its inverse: depends how you define forward/backward).