Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 11th June 2004, 13:03   #1  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
New Filters: TTempSmooth and TBilateral

Well, I started a little project a few days ago, which I've been planning for a long time: a mod of SmoothHiQ (for YV12 support and a few algorithmic changes), and this is where it ended up. My changes to SmoothHiQ yielded almost no quality difference, but succeeded in making it 5 to 10 times slower. At that point I thought, why not make a temporal filter based on it? The result was TTempSmooth. It uses the same weighting/averaging as SmoothHiQ, but in the temporal domain. It processes the chroma and luma planes together when testing for pixel similarity, but separately when computing averages. It also uses a 2-frame-distance pixel difference test in addition to the single-frame test, which helps to avoid some artifacts. Surprisingly, it is not really all that slow, though it is not as fast as TemporalSoften.

TTempSmooth v0.9.4

While I was doing that, I started looking into bilateral filtering. It is a very similar inverse-distance and inverse-pixel-difference weighting algorithm, but slightly more complex. From that came TBilateral, a spatial smoother that uses the bilateral filtering algorithm. It is very good at smoothing out noise while maintaining picture structure. I didn't have time to test it extensively (more than to make sure it was working), so the defaults aren't set very well. Speed-wise this filter is roughly equivalent to SmoothHiQ... though maybe a little faster?

TBilateral v0.9.11

Last edited by tritical; 17th May 2006 at 02:54.
Old 12th June 2004, 01:45   #2  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
No replies... I love making useless filters. The defaults for TBilateral seemed to be not so bad, though maybe a little on the conservative side. After looking over the code again, I noticed there was a little problem with the calculation of boundary pixels near the edges of the image: pixels in those windows were getting incorrect spatial weights from the lookup table. I posted a fixed version 0.9.1 that now has these defaults:

(7,5,2.0,7.0,2.0,7.0)

I could speed up the filter a little more by unrolling the x loops to avoid unneeded boundary checks in the interior. I might try to do that later.
Old 12th June 2004, 02:16   #3  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
BTW, there is a way of efficiently implementing the bilateral filter without a LUT, although it is a bit of a severe hack.

You can do an approximation of the exponential function by interpreting the exponent as the exponent part of a floating-point number and doing a float->int conversion. Dunno if it represents a speedup though; you can only do 2 conversions at a time with SIMD, so that doesn't buy you much... it depends on the throughput of the lookups.
Old 12th June 2004, 07:29   #4  |  Link
Dali Lama
Registered User
 
Join Date: Jan 2002
Posts: 331
Hi tritical,

I tried out both filters and I am very impressed with TBilateral. It was more than commendably fast for smoothing chroma aberrations and luma blocking at radii as high as 13, and it does indeed preserve edge detail in the process. I tested it on anime/cartoon content.

On the other hand, TTempSmooth was good, but not up to the level of Dust (faerydust). It was able to remove substantial noise without ghosting at the default settings, but even when I cranked up the settings it could not remove as much noise as Dust, especially in movement. Perhaps you could try allowing more powerful smoothing? It's really not a bad filter, but in a short test on anime and regular footage it could not overcome Dust.

Thanks a lot for both filters. If you want some more objective results I can post pics for you.

Take Care,

Dali
Old 12th June 2004, 13:08   #5  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@Dali Lama
Thanks for the feedback. I'm not too surprised by the results you got with TTempSmooth. It would never be able to compete with faerydust, especially on moving areas, since it is designed not to touch them. faerydust uses motion estimation to do temporal smoothing on moving areas (even though it is a temporal smoother, I think it also does light spatial smoothing). TTempSmooth is a simple fixed-position smoother like TemporalSmoother, TemporalSoften, etc. After I played around with it more, it seems the averaging/smoothing didn't turn out exactly how I thought it would, though the motion detection works well. I may have to go back to the drawing board on that one and experiment with some different methods. I might try to combine it with TBilateral and make a motion-adaptive smoother.

@MfA
I looked into that alternative approach to a LUT. Actually, I searched doom9 for bilateral and it turns out that someone had already coded a bilateral filter using that method. After looking at it, I have no idea whether it would be faster than the LUT method. I didn't/don't plan to do an asm version of TBilateral though, and the LUT is easier for me to get right without messing something up. I'll probably just stick with it.


P.S. Posted version 0.9.2 of TBilateral. I unrolled some of the inner loops to avoid boundary calculations and checks, giving a slight speed gain.
Old 13th June 2004, 06:12   #6  |  Link
DarkNite
Almost Silent Member
 
Join Date: Jun 2002
Location: Purgatory
Posts: 271
Quote:
I might try to combine it with TBilateral and make a motion adaptive smoother.
We can never have too many of those.

I too was impressed with TBilateral's ability to maintain edge detail while helping compression a fair bit more than I expected from looking at the effect in preview.


And now, every filter author's favorite thing: questions!

Are there plans for further development of TBilateral, or is the feature set locked and you're just tweaking it now?

What papers did you find the most useful when implementing a bilateral filtering algorithm? I've never read into this method seriously, but now my curiosity is piqued.
__________________
Rethinking the "Why?" chromosome.
Old 13th June 2004, 10:12   #7  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Well, the reason TTempSmooth was so bad was a major bug that I have no idea how I missed; it was keeping the filter from doing anything useful at all. I'll post a fixed version here in a second with just the needed change. I still can't believe I was that dumb. Anyway, I also experimented with different and faster weighting/blending schemes that seem better, but I don't have a fully working version of any of them yet.

@DarkNite

This paper:

http://www.cse.ucsc.edu/~manduchi/Papers/ICCV98.pdf

And here is a paper suggesting some improvements, such as using an alternate kernel and replacing the mean with a weighted median. The second section of this paper also shows the formula for the regular bilateral filter in notation that is a little easier to follow (at least for me):

http://dsp7.ee.uct.ac.za/~jfrancis/p.../PRASA2003.pdf

As for further development: as a straight bilateral filter I would consider it pretty much finished except for a little tweaking, though I might try implementing some of the suggested improvements.
Old 13th June 2004, 11:46   #8  |  Link
Boulder
Pig on the wing
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 4,412
Regarding quality and compression, how would TTempSmooth differ from TemporalSoften or ye olde TemporalSmoother (YUY2 only)? I'm interested in playing with your filter on my TV caps (as soon as you get the fixed version out).
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Old 13th June 2004, 19:33   #9  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Computing a mean of 25+ values per pixel is already a lot of work though.

Making bilateral filtering suitable for impulsive noise is nice... but IMO one of the biggest downfalls of bilateral filtering is that its noise-removal ability takes a nosedive on gradients. Michael Elad says an efficient variation which works well on piecewise-smooth images exists; unfortunately he never describes the actual algorithm, and deriving it is beyond my skills.

http://www.cs.technion.ac.il/~elad/J..._Filter_IP.pdf
Old 14th June 2004, 02:43   #10  |  Link
You Know
Registered User
 
Join Date: Jun 2004
Posts: 75
Very good filter, it works great on noisy old anime (Asterix).

On standard settings it works better than eDeen on soft settings, eDeen(2,9,11,3,4), and it doesn't destroy background detail.

Only one disadvantage... it's twice as slow as eDeen (2.2fps vs 5fps) :(

@MfA
interesting pdf
Old 14th June 2004, 06:29   #11  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
@MfA
Thanks very much for that link. I had read something very much like it, but only bits and pieces of it as part of a PowerPoint file. After reading that paper, the improvement for handling piecewise-linear signals makes sense. As I understand it, when you calculate the intensity difference you use (assuming (0,0) is the upper left of the current filtering window):

cP = center pixel
tP = current position pixel
abs(cP - average(tP,inv-tP)) instead of simply
abs(cP - tP)

where inv-tP is the pixel at the inverse position inside the filtering window: if the current position is (0,0), then inv-tP is at (n,n). Anyway, I could be far off. This approach would definitely improve filtering for gradients and the like that fit the piecewise-linear model; however, it would tend to hurt filtering of actual sharp edges. Whether the improvement would be worth the trade-off on "normal" video I don't really know. The nice thing is that the spatial weights stay the same; the downside is that more calculations will have to go inside the innermost loops. I'm gonna try it in a bit.
Old 14th June 2004, 06:53   #12  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
I kind of doubt that would be the result of actually performing what Michael Elad suggests (using the new penalty function, applying the Jacobi iteration, choosing the weights, etc.). Not that I'm gonna do that.

What you suggest wouldn't really be slower though, since you can skip the weight determination for half the pixels (the weights become pointwise symmetric due to the averaging).

BTW, to increase the performance on impulsive noise you can also include an extra weight for the center pixel: the smaller the weight for the center pixel, the more resistant the filtering becomes to impulsive noise. SUSAN denoising didn't include the center pixel at all, for instance, i.e. a weight of 0 in the above scheme, of which bilateral filtering is really only a trivial variation (the inventor of bilateral filtering was apparently not aware of how close it was to SUSAN denoising though; he never mentioned it at any rate).

Last edited by MfA; 14th June 2004 at 07:14.
Old 14th June 2004, 07:21   #13  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Hm, the new penalty function he proposes for piecewise-linear handling is simply one that uses the second derivative instead of the first. As he says in the paper, that is the only thing that changes, and looking at the formulas he gives for the new filter coefficients, that average is how he calculates it. Also, even though the diffs inside the filtering window would then become symmetric, it wouldn't be possible for me to calc them once and reuse them unless I store them. That would probably be slower, or at least not much faster, than just calcing them a second time, since it's just an add and a shift. The other part about speeding up the filter seems only to deal with speeding up multiple iterations, and I'm not even considering doing multiple iterations.

Adding an extra weight for the center pixel sounds interesting. I didn't know it was so close to SUSAN denoising either, probably because I've never heard of SUSAN denoising.

Last edited by tritical; 14th June 2004 at 07:37.
Old 14th June 2004, 07:31   #14  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
It is the lookup which hurts you, not the averaging... anyway, there is nothing to store: you can just multiply the final weight with (tP + inv-tP) and only work on half the pixels. It should be faster in the end.

Dunno if dropping the LSB on the difference is a problem; you might want to use abs(2*cP - (tP + inv-tP)) as an index instead and change the LUT accordingly.
Old 14th June 2004, 07:44   #15  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Quote:
Dunno if dropping the LSB on the difference is a problem, might want to use abs(2*cP - (tP + inv-tP)) as an index instead and change the LUT accordingly.
Actually, abs(2*cP - (tP + inv-tP)) is exactly what I was thinking of using as the reference for the diff LUT, since that would fit with how it is currently set up.
Old 15th June 2004, 13:57   #16  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Posted version 0.9.3; it now has the option of using the second derivative instead of the first. It is definitely better at filtering smooth gradients. Interestingly, it doesn't affect large gradients as much as I thought it might. It really only affects small gradients (actual edges that don't have a large difference) and tends to blend them more, though I guess that was to be expected. The effect was rather subtle on the video I tested it on, a couple of rather new anime sources. I also added a center-weight modifier. Probably not exactly what MfA had in mind, but it gets the job done. After the spatial-weights LUT is made, the centerScale/centerScaleC factors are multiplied into the center pixel's weight, so a value < 1 gives it less weight than normal, a value > 1 gives it more weight than normal, and a value of 0 makes the center pixel have no weight.

Also posted the fixed version of TTempSmooth. Now it actually works, though it still isn't quite what I had in mind. I can't figure out exactly how it should work, which is a rather big problem.
Old 16th June 2004, 10:08   #17  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
Posted version 0.9.4 of TBilateral. I was in too much of a hurry when adding the d2 option to 0.9.3 and forgot all about the rounding error in how I was calculating the average when d2=true; it could have changed final pixel values by +-2. On the plus side, the fix also sped up d2=true processing by 10-15% on my computer. It is still slower than d2=false, but not nearly as much as it was.
Old 16th June 2004, 13:24   #18  |  Link
Malevolent
Registered User
 
Join Date: Jun 2003
Location: Here
Posts: 51
@ tritical:

I tried TBilateral (0.9.4) and it looks good,
but when I started experimenting I got an error message whenever I added iDevC to the parameter list (with whatever value):
"Script error: The named argument "iDevC" to TBilateral had the wrong type"

Here's the line used:
TBilateral(Diameter=7,DiameterC=5,sDev=2.0,sDevC=2.0,iDev=7.0,iDevC=7.0,Chroma=True,D2=False)
If I remove iDevC, it works fine. Any ideas what's wrong?
__________________
- Homo homini lupus -

Current Computer Configuration:
WinXP Pro SP1, NF-7, 1.833Ghz Barton @ 2.200Ghz, Ati Radeon 9600PRO, 1024MB 400 DDR, 2x40GB HDD (Deathstars).
Old 16th June 2004, 21:19   #19  |  Link
tritical
Registered User
 
Join Date: Dec 2003
Location: MO, US
Posts: 999
The problem was that I accidentally had an i instead of an f in the parameter list for iDevC, so it was only accepting integers. Fixed that and posted v0.9.5.
Old 17th June 2004, 23:14   #20  |  Link
You Know
Registered User
 
Join Date: Jun 2004
Posts: 75
Tried 0.9.4: no speed gain. I think it only produces better results?