Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 3rd July 2002, 01:24   #1  |  Link
Leuf
Registered User
 
Join Date: Apr 2002
Posts: 67
Averaging two analog captures for noise reduction

I've long thought the best way to clean up analog captures is to capture the source multiple times and then average them together. Good ol' statistics will beat fancy threshold-using algorithms every time. But I always dismissed the idea as impractical. First, I thought you'd need a statistically valid sample - an average of just two caps would probably not improve things much. Second, getting even two captures would be difficult, much less more. Recently, though, I started capturing music videos overnight, and if you ever watch VH1 or MTV you know they play 95% of the same videos *every* day. Plus, when your source is only 3:30 long, you can afford to play around to get the best results.
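The statistics really do favor this: averaging N independent captures of the same frame cuts the noise standard deviation by a factor of the square root of N, so even two captures remove roughly 29% of the noise. A quick Python sketch (a pure simulation with made-up numbers, nothing to do with actual capture code) shows the effect:

```python
import random
import statistics

def simulate(n_captures, noise_sigma=10.0, n_pixels=20000, seed=1):
    """Average n independent noisy captures of the same flat frame and
    return the standard deviation of the residual noise."""
    rng = random.Random(seed)
    residuals = []
    for _ in range(n_pixels):
        true_value = 128.0
        caps = [true_value + rng.gauss(0, noise_sigma) for _ in range(n_captures)]
        residuals.append(sum(caps) / n_captures - true_value)
    return statistics.pstdev(residuals)

# Two captures should cut the noise std by about 1/sqrt(2), i.e. ~29%.
print(simulate(1))  # roughly 10
print(simulate(2))  # roughly 7.1
```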

So, I captured a video (Jimmy Eat World - In the Middle) twice, combined the two caps in AviSynth with StackHorizontal(), then wrote up a quick VirtualDub filter to do the averaging - unfortunately I don't know AviSynth well enough to write a plugin for it.

For a first comparison, I encoded the averaged result back to huffyuv to compare size with the original capture.

Capture 1: 661,810 kb
Averaged: 593,810 kb

Subjectively comparing the two, the amount of noise is very visibly reduced. There is still noise in the averaged file, but it's hard to tell how much just stepping through at 1:1, and huffyuv isn't exactly meant for fullscreen playback.

So, I then encoded both to DivX 5, 1-pass CQ, Quant=4, with B-frames. I put the raw capture file through my normal noise filtering: 2D Cleaner (default settings) + Smart Smoother HQ (Average mode, r=5, threshold=12). These settings give consistently good results for the station I capped from. The averaged file got no further processing.

Raw + noise filters: 31,262 kb
Averaged: 30,312 kb

Sometimes the simple ways are better. Subjectively, it's difficult to tell them apart. The experienced eye can tell that filters have been used on the filtered file, while the averaged file looks like it came straight from a very clean analog source. There are none of the artifacts temporal filters can leave, nor the loss of sharpness from spatial filters, and it works equally well on moving and motionless areas. Plus, I can always follow up the averaging with noise filters at more conservative settings.

And now the reason for my post: as I said, I don't know AviSynth well enough (yet) to write the entire thing as one function. Using StackHorizontal() plus a VirtualDub filter is cumbersome and less efficient, but for someone who does know it, this should be trivial to write - the vdub version took me all of 10 minutes. So I was hoping some kind soul could help me out with this. You can assume that the two clips have already been aligned and made the same length with Trim().

-Leuf
Old 3rd July 2002, 01:45   #2  |  Link
Richard Berg
developer wannabe
 
Richard Berg's Avatar
 
Join Date: Nov 2001
Location: Brooklyn, NY
Posts: 1,211
You're in luck. It does seem to be a very good technique, though I haven't tried it myself.
Old 3rd July 2002, 03:01   #3  |  Link
Leuf
Registered User
 
Join Date: Apr 2002
Posts: 67
Hmm, well, it seems my luck only goes so far. Layer requires RGB32; Telecide requires YUY2. So either I have to Telecide each clip separately first, then ConvertToRGB32 and Layer... or ConvertToRGB32 both first, Layer, ConvertToYUY2, Telecide. I chose the latter, and it's just barely faster than my StackHorizontal + VirtualDub filter. But here's the odd thing:

Test clip (first 500 frames of the same video as before):

mine 96,356 kb
layer 104,120 kb

Which is pretty odd, and I can't explain it yet. I used 128 for "amount", as in your second link - I'm assuming it's on a 0-255 scale.
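For reference, the second ordering described above would look something like this as a script (filenames are hypothetical, and the Layer argument order follows the 2.0.x convention used later in this thread):

```
# LoadPlugin line for the Decomb plugin (Telecide) assumed
c1 = AVISource("cap1.avi").ConvertToRGB32()
c2 = AVISource("cap2.avi").ConvertToRGB32()
Layer(c1, c2, "add", 128, 0, 0, use_chroma=true)
ConvertToYUY2()
Telecide()
```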
Old 3rd July 2002, 06:13   #4  |  Link
poptones
Registered User
 
Join Date: Jun 2002
Posts: 135
Layer used to work with YUY2, and it will be made to again. If no one else does it first, I'll get my code repaired and share it ASAP. This feature was removed because... well, long story. I'm sure you've heard it.

The difference might come down to how the averaging is done. Since the vdub combine is intended to combine two adjacent fields, which may or may not align, it is probably blurring the image somewhat. Layer doesn't do this - it just adds 'em up and divides by two at each pixel.

Man, I suck as a typist.

Last edited by poptones; 3rd July 2002 at 06:17.
Old 3rd July 2002, 07:17   #5  |  Link
trbarry
Registered User
 
trbarry's Avatar
 
Join Date: Oct 2001
Location: Gainesville FL USA
Posts: 2,092
I always wondered if this would work. I never got around to it (and won't) but thought it might be useful to capture from VHS multiple times, maybe even from different tapes and VCRs, and then average them.

But I was never sure I'd be able to keep them lined up properly and was too lazy to write a program that would do it right, even if I knew how.

But the way I was thinking, I'd capture 4 times and then use a median filter, taking the average of the center two samples of each frame and tossing out the outliers.

- Tom
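That middle-two average (sometimes called a midmean) is easy to sketch per pixel; here is a quick Python illustration with made-up sample values, showing how one bad capture gets tossed:

```python
def midmean(samples):
    """Sort the per-pixel samples across captures, toss the min and max
    (the likely outliers), and average the middle two."""
    s = sorted(samples)
    mid = len(s) // 2
    return (s[mid - 1] + s[mid]) / 2

# Four captures of one pixel; the 255 is a dropout spike in one capture.
print(midmean([100, 102, 101, 255]))   # -> 101.5
print(sum([100, 102, 101, 255]) / 4)   # -> 139.5 (plain average is dragged up)
```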
Old 3rd July 2002, 15:24   #6  |  Link
Leuf
Registered User
 
Join Date: Apr 2002
Posts: 67
Quote:
Originally posted by poptones
Layer used to work with YUY2, and it will be made to again. If no one else does it first, I'll get my code repaired and share it ASAP. This feature was removed because... well, long story. I'm sure you've heard it.
No rush

Quote:
The difference might come down to how the averaging is done. Since the vdub combine is intended to combine two adjacent fields, which may or may not align, it is probably blurring the image somewhat. Layer doesn't do this - it just adds 'em up and divides by two at each pixel.
The vdub filter works the same way, except that instead of two source files it has two pointers into the same frame, one offset half the width to the right. Both clips are aligned the same with Trim beforehand. The only difference I can see is that in my version Telecide runs before the averaging, while with Layer it runs after - so mine works on progressive frames and Layer works on interlaced ones. They shouldn't produce exactly the same results, but for the most part it ought to come out in the wash; I have a hard time accepting that as the explanation for a ~10% difference. I had to delete the source files to make room for a cap last night, or else I would try running Telecide on the two clips first and then Layer... but that has its own set of problems, since even though both clips start out aligned, Telecide may produce different output for each.

-Leuf
Old 3rd July 2002, 17:38   #7  |  Link
sh0dan
Retired AviSynth Dev ;)
 
sh0dan's Avatar
 
Join Date: Nov 2001
Location: Dark Side of the Moon
Posts: 3,480
You could use luma/chroma merging to combine the two sources in AviSynth, with a script like this:

Code:
(import stuff)
avisource("file 1")
avi2=avisource("file 2")
mergeluma(avi2,0.5)
mergechroma(avi2,0.5)
That should give you an exact average of the two sources (and be much easier and faster than using vdub). Keep an eye on your colors - there may be a bug in MergeChroma in weighted mode - but it may also be related to a specific test case; if so, please mail me or post here.
__________________
Regards, sh0dan // VoxPod

Last edited by sh0dan; 3rd July 2002 at 17:47.
Old 7th July 2002, 06:44   #8  |  Link
Milkman Dan
Registered User
 
Join Date: Oct 2001
Posts: 243
How did you deal with dropped frames?

This sounds like a very interesting method indeed.
Old 7th July 2002, 12:27   #9  |  Link
poptones
Registered User
 
Join Date: Jun 2002
Posts: 135
On the examples I posted from the Garbage video, it was a direct stream rip from a satellite; there were no dropped frames. For other stuff I've found it's not terribly difficult if you have a decent capture. If there are more than a very few drops, I just capture it over. For a few, it's pretty easy to find the frame that loses sync, duplicate it (or trim the other clip), and go on to the next. It's confusing the first time or two, but it's something you learn quickly.
Old 7th July 2002, 19:16   #10  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,390
A direct stream rip from satellite... do I get you right: the satellite's MPEG-2 data directly to HD?

If so, could you share some more of your skills? (although you already share very much!)
Old 7th July 2002, 20:32   #11  |  Link
poptones
Registered User
 
Join Date: Jun 2002
Posts: 135
My secret: connections in Germany.

I didn't do the rip. Apparently there is a very big underground in Europe of people who have created Linux PC add-ons for certain models of satellite receivers. I can't recall the brand, but I'm certain it's a very well known Euro brand and - get this - it has a SCSI port on the back! People have created all sorts of useful tools around this. Rather than ripping DVDs, they can just as easily rip straight from the satellite in "perfect" (ahem) digital quality.

Through indirect connections with mutual Garbage fans, a couple of us here in the States managed to get "An Evening with Garbage" - WDR broadcast three back-to-back episodes of the German rock-concert show "Rockpalast" (very cool - just go to wdr.de or whatever it is).

The supremely cool part was that they rebroadcast two of the shows (the best ones) immediately after! So I have about a dozen discs here (somewhere around 11 GB) which I've been hoping to "remaster" and commit to a pristinely beautiful DVD collection (all three shows, or at least the best two).

It's what got me "serious" about working on AviSynth, in fact, because I was frustrated to death futzing around in Adobe's stuff (not to mention I just kinda despise them in general) and I really, really want to make this look good. So, in a way, just getting this already-broadcast show to DVD has been the source of about a year of on-and-off creation for me.

Yeah, I could just "rip" it. But there's no fun (to me) in that. I wanna "art it all up", and to do that I need good tools. Basically this is the same thing I ended up doing when I took scientific visualization in college: as soon as I learned the file formats in Wavefront, I spent the rest of the semester writing cool tools to morph models rather than actually trying to make a demo reel.

Too much info?

Anyway, with the Loop() filter dividee recently added, I think it just got a LOT easier to fix misalignments between analog captures.

Old 10th July 2002, 19:17   #12  |  Link
Leuf
Registered User
 
Join Date: Apr 2002
Posts: 67
Quote:
Originally posted by sh0dan
That should give you an exact average of the two sources (and be much easier and faster than using vdub). Keep an eye on your colors - there may be a bug in MergeChroma in weighted mode - but it may also be related to a specific test case; if so, please mail me or post here.
Well, I've had the chance this week to capture Witchblade twice, since TNT is nice enough to repeat it the following day. I've been anxious to try this out, as my TNT source is among my noisiest channels, and so far I haven't come up with a filter chain for it whose results I'm happy with.

So, I tried again, this time using all three methods we've discussed in this thread: my vdub filter, Layer, and Merge.
First I encoded everything up to the opening credits back to huffyuv to compare file sizes again.

vdub: 1,377,366
merge: 1,349,376
layer: 1,472,616

So, again Layer comes out high for some reason. I tried again on just the first 500 frames, without Decomb this time so I could compare against the original frames.

vdub: 111,456
merge: 109,722
layer: 119,562

I then picked a frame at random and compared the two sources and the outputs close up in Photoshop. What I've concluded is that Layer doesn't seem to be producing an average of the two; it's weighted toward the first clip. I've been using amount = 128. For example, given source 1 (179,180,177) and source 2 (181,184,177), Layer produced (180,181,178) - though there are some colorspace conversions going on that account for some error. This is the only explanation I can come up with for the different file sizes.

With Merge there *does* seem to be something fishy going on with the color, though it's difficult to pin down exactly; between the difference in the two sources and the colorspace conversions, it's hard to tell. But my first reaction when I loaded it was that there was more green. Looking at the values, it seems more like the green is right but R and B are too low - sometimes lower than in either source pixel. Example: source 1 (19,15,16), source 2 (20,18,19); Merge produced (18,16,14).

-Leuf
Old 10th July 2002, 19:42   #13  |  Link
poptones
Registered User
 
Join Date: Jun 2002
Posts: 135
There are a couple of sources of error. One is simply that the math is based on 255 (8 bits) but then divided by 256 (a shift by 8). So right there you have a tiny bit of error. And because of the weightings applied in the colorspace, it's going to affect blue the least and green the most. No getting around it.

One could use "C math" to do it all, but then it's significantly slower. The tradeoff is a tiny bit of error in exchange for much faster execution.
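The 255-versus-256 mismatch is easy to see in isolation. Here's a generic fixed-point blend sketch in Python (not AviSynth's actual code), comparing a cheap shift-by-8 against the exact divide-by-255 the amount scale implies:

```python
def blend_shift8(a, b, alpha=128):
    # cheap fixed-point blend: the >>8 makes the effective weight
    # alpha/256, even though alpha itself lives on a 0..255 scale
    return (a * (256 - alpha) + b * alpha) >> 8

def blend_exact(a, b, alpha=128):
    # what alpha=128 on a 0..255 scale actually means: weight 128/255
    return round((a * (255 - alpha) + b * alpha) / 255)

print(blend_shift8(0, 255))  # -> 127
print(blend_exact(0, 255))   # -> 128: off by one at full swing
```

The error is at most one code value per blend, but it is always in the same direction, which is enough to nudge file sizes.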

This is where I started thinking about going to 16 bits before. If all calculations were done in 16 bits there'd be far less error (which is a good reason the "high end" applications have gone there, considering the much smaller speed tradeoffs), but it would be a LOT of work to port all these filters to 16-bit planes.

For merging there's going to be far more error, especially when you're just "merging". Color is determined by luminance and chroma together, so blindly overwriting one with the other is going to have indeterminate results.

Code:
const int scaled_y = (y1+y2 - 32) * int(255.0/219.0*32768+0.5);
const int b_y = ((rgb[0]+rgb_next[0]) << 15) - scaled_y;
yuv[1] = ScaledPixelClip((b_y >> 10) * int(1/2.018*1024+0.5) + 0x800000); // u
const int r_y = ((rgb[2]+rgb_next[2]) << 15) - scaled_y;
yuv[3] = ScaledPixelClip((r_y >> 10) * int(1/1.596*1024+0.5) + 0x800000); // v

The most "perfect" solution would be to go to 16 bits. This would also put AviSynth on par (in this respect) with stuff like Premiere and Vegas. I think I could even handle most of the filters, but I have no idea what's going on deeper down where frames get created and passed around.
Old 10th July 2002, 22:13   #14  |  Link
waka
Registered User
 
Join Date: Jul 2002
Posts: 29
I played around with averaging clips together not too long ago, and it works quite well. With my source (so-so cable), the benefit became pretty minor after 4 or 5 blends - almost indistinguishable in individual frames. File sizes do give a rough idea of the improvement you can expect. The following is from a 100-frame huffyuv sample.

1 source: 23.9 megs
2 sources: 22.0
3 sources: 21.5
4 sources: 20.8
5 sources: 20.5

I didn't bother going any further, as the difference was very minor at that point. I imagine dirtier samples would still show improvement beyond 5 sources. The first blend was done using a method I put together in vdub; subsequent blends used Layer in an earlier AviSynth build by poptones.
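Those diminishing returns match the square-root law for averaging independent noise: the residual noise falls as 1/sqrt(n), so most of the win is in the first couple of blends. A tiny check (pure math, nothing to do with the actual clips):

```python
import math

# Relative residual noise after averaging n independent captures:
# the standard deviation scales as 1/sqrt(n)
residual = {n: 1 / math.sqrt(n) for n in range(1, 6)}
for n, r in residual.items():
    print(n, round(r, 3))
# 1 -> 1.0, 2 -> 0.707, 4 -> 0.5: the first blend removes ~29% of the
# noise, while going from 4 to 5 sources only buys ~11% of what's left
```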

I am currently trying out AviSynth 2.0.1.1, and Layer isn't working the way I expect it to.
I expect clip1.Layer(clip2,"add",128,0,0,use_chroma=true) to average the two clips, but more often than not it seems to just spit out clip1 rather than a blend. Have issues in the Layer code been worked out since 2.0.1.1, or am I misunderstanding Layer? Any help would be greatly appreciated, as this is driving me insane.

Thanks
Old 10th July 2002, 22:51   #15  |  Link
dividee
Registered User
 
Join Date: Oct 2001
Location: Brussels
Posts: 358
In RGB32, you should make sure the second clip has an opaque mask in the alpha channel before calling Layer:
clip2.Mask(clip2.BlankClip(color=$FFFFFF))
__________________
dividee
Old 10th July 2002, 23:08   #16  |  Link
waka
Registered User
 
Join Date: Jul 2002
Posts: 29
That did it.
Thanks again.
Old 10th July 2002, 23:43   #17  |  Link
Leuf
Registered User
 
Join Date: Apr 2002
Posts: 67
Hmmm, well, I tried just doing MergeLuma and leaving the chroma alone, and ended up with a size of 1,360,944 - pretty much square in the middle between the RGB and luma/chroma averages. No color problems, and it's the fastest method. For, say, a VHS source I wouldn't want to leave out the chroma, but it seems to be okay for this cable source... and it's still more compressible than the RGB average, which is pretty interesting. I guess since the codec ultimately has to store it as YUY2, that makes sense.

Time to try a full encode..


-Leuf
Old 11th July 2002, 04:05   #18  |  Link
Richard Berg
developer wannabe
 
Richard Berg's Avatar
 
Join Date: Nov 2001
Location: Brooklyn, NY
Posts: 1,211
If I had a P4 I'd push for 16-bit in a second -- it can twiddle 4x16 integers exactly the way MMX procs do 4x8, so there'd be no loss of speed at all.
Old 11th July 2002, 05:48   #19  |  Link
poptones
Registered User
 
Join Date: Jun 2002
Posts: 135
But most of the core stuff isn't yet MMX'd. I mean, a lot is, but not so much as to be daunting. A good bit of it would be changing BYTEs to WORDs and such. The C code math is (mostly) ints anyway, right? A lot of that would just be adjusting shifts and coefficients.

Compare "Layer":

:loopstart

movd mm6, [esi + ecx*4] ;src2
//change to movq


movq mm2,mm6
//no change


psrlq mm2,24 ;mm2= 0000|0000|0000|00aa
//change to psrlq mm2,48 - big deal


pmullw mm2,mm1 ;mm2= pixel alpha * script alpha
//changes to pmadd mm2,mm1 - easy cuz it's mostly zeroes
//BTW we use 15 bit coefficient here and now we have "almost perfect" math


movd mm7, [edi + ecx*4] ;src1/dest
//another movq


psrlw mm2,8 ;mm2= 0000|0000|0000|00aa*
// scale change - big deal again


punpcklwd mm2,mm2 ;mm2= 0000|0000|00aa*|00aa*
punpckldq mm2, mm2 ;mm2=00aa*|00aa*|00aa*|00aa*
//no change


movd mm3, rgb2lum
//one more movq
//----- alpha mask now in all four channels of mm3


punpcklbw mm7,mm0 ;mm7= 00aa|00bb|00gg|00rr
punpcklbw mm6,mm0 ;mm6= 00aa|00bb|00gg|00rr
//these two actually go away!


pandn mm6, mm4
//for subtract only - no change


//----- begin the fun stuff
//----- start rgb -> monochrome
pmaddwd mm6,mm3
punpckldq mm3,mm6
paddd mm6, mm3 ;32 bit result
psrld mm6, 47 ;8 bit result
punpcklwd mm6, mm6 ;propogate words
punpckldq mm6, mm6
//----- end rgb -> monochrome
//with 15 bit coefficients, this is essentially unchanged!


psubsw mm6, mm7
//no change


pmullw mm6, mm2 ;mm6=scaled difference*255
//this part actually adds a few ops - perhaps as many as five
//for pmov mm1,mm6/pmullw mm1,mm2/pmullhw mm6,mm2 and then recombining
//- but I don't feel like figgering it out right now


psrlw mm6, 8 ;scale result
//scale changes - big deal


paddb mm6, mm7 ;add src1
//becomes paddw


//----- end the fun stuff...
packuswb mm6,mm0
//more history


movd [edi + ecx*4],mm6
//becomes another movq


inc ecx
cmp ecx, edx
jnz loopstart


Granted it's only one routine, but it does quite a bit - one RGB-to-monochrome conversion (which costs nothing extra at all!) - and the "math" part is only negligibly longer, part of which is offset by the fewer pack/unpack instructions - which occupy most of the time here, BTW, and which cannot be parallelized. So the entire routine could end up executing at almost exactly the same speed, but with greater accuracy!

I wouldn't mind tackling the colorspace conversions with this, and I'd definitely tackle the C-code filters, but I don't know how much of the core relies on the assumption of 8-bit planes.

Sure would be nice. One could even leave the "old" routines alone and just add new support for 16-bit formats.

Hmmm... How hard would that part be?

Last edited by poptones; 11th July 2002 at 05:50.
Old 11th July 2002, 05:57   #20  |  Link
Richard Berg
developer wannabe
 
Richard Berg's Avatar
 
Join Date: Nov 2001
Location: Brooklyn, NY
Posts: 1,211
I'd just have to double-check every filter to make sure it doesn't implicitly assume "if it's not RGB24 or RGB32, it must be YUY2" or similar. Way easier than the planar formats someone proposed in another thread, in any case.