Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
14th July 2002, 09:43 | #41 | Link |
Registered User
Join Date: Oct 2001
Location: Brussels
Posts: 358
|
hehe, got it now. I just assumed clip=filt and didn't give it a second thought.
But I was talking about the source in the CVS, not the zip! I just did a Rebuild All to be sure, and the mask is incorrect just after AVISource. And source.cpp hasn't been updated for 8 days. Maybe the behaviour depends on the decompressor of the AVI file?
__________________
dividee |
14th July 2002, 17:59 | #43 | Link |
Registered User
Join Date: Jun 2002
Posts: 135
|
And with the avisynth version currently in the CVS, I still had to use Mask after AVISource.
Doh! Got it now. I fixed the part in convertToRGB32() but never went after the source. Mkay, so it works after conversion, but not if you just "open" an RGB32 file. I'll look into that one. |
15th July 2002, 09:20 | #44 | Link |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,364
|
@kaizen,
Thanks for attaching the file. Maybe it is because the filename had the form "filename..zip". I didn't see that until Chris told me so.

@Poptones and the other sourceforge people:
Quote:
# showed only shakira:
clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi")) # uncompressed
clip2=ConvertToRGB32(AVISource("G:\atomic_kitten.avi")) # compressed with ASUS video codec
clip3=trim(clip2,100,506)
Layer(clip1,clip3,"add",128,0,0,use_chroma=true)

# showed shakira and atomic kitten:
clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi"))
clip2=ConvertToRGB32(AVISource("G:\atomic_kitten.avi"))
clip3=trim(clip2,100,506)
Layer(clip3,clip1,"add",128,0,0,use_chroma=true)

# showed shakira and atomic kitten:
clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi")) # uncompressed
clip2=ConvertToRGB32(AVISource("G:\atomic_kitten.avi")) # compressed with Huffyuv codec
clip3=trim(clip2,100,506)
Layer(clip1,clip3,"add",128,0,0,use_chroma=true)

# showed shakira and atomic kitten:
clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi")) # uncompressed
clip2=ConvertToRGB32(AVISource("G:\atomic_kitten.avi")) # compressed with ASUS video codec
clip3=trim(clip2,100,506)
clip3=Mask(clip3,BlankClip(clip3,color=$FFFFFF))
Layer(clip1,clip3,"add",128,0,0,use_chroma=true)

Haven't tried any other codecs.

@all: Why does the following result in a red clip, and not in a white clip?

white = blankclip(width=352,height=288,color=$ffffff)
red = blankclip(width=352,height=288,color=$ff0000)
interleave(white,red).assumefieldbased.weave # result: line1 red, line2 white, etc. (why?), with resolution 352x576
separatefields.selectodd() # red clip

@Waka:
Quote:
Quote:
Ending on "selecteven" results in frames with: line1 = (line1_clip1+line1_clip2)/2 line2 = (line2_clip1+line2_clip2)/2 and the deinterlacing effects remain. Ending on "selectodd" results in frames with: line1 = line1_clip1 line2 = (line1_clip2+line2_clip1)/2 line3 = (line2_clip2+line3_clip1)/2 and the deinterlacing effects are smoothed out. What do you think of the last method, as a method of filtering noise and cleaning up the remaining deinterlacing effects (= method 1)? I also use a deinterlacer (smart deinterlace gave good result, wasn't satisfied with area based deinterlace) to clean up the deinterlacing effects on the method 2) output clip --> time: 2:54 method 3) input clips --> time: 3:09 Although the result are a bit better than method 1. If you take the encoding time into account, 1:15, the first method is also nice. What do you about that? @Poptones and others: Quote:
clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi")) clip2=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes2.avi")) clip3=Layer(clip1,clip2,"add",128,0,0,use_chroma=true) Levels(clip3,3,1.03,255,8,240) # time 1:18 (without cleaning up deinterlacing effects) clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi")) clip2=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes2.avi")) clip3=Layer(clip1,clip2,"fast") Levels(clip3,3,1.03,255,8,240) # time 1:19 (without cleaning up deinterlacing effects) Sorry, the reduces version appears to be slower ... |
15th July 2002, 21:09 | #45 | Link |
Registered User
Join Date: Oct 2001
Location: Brussels
Posts: 358
|
Quote:
Quote:
Both Weave and SeparateFields use the parity to combine or separate the fields. If you hadn't looked at the intermediate result, you wouldn't have been surprised by the final result. If the top line of the intermediate result was red, it means that the parity was bottom field first in the clip. If you add ComplementParity just before Weave, you'll have the top line white, but the final result should still be the red clip. If you add ComplementParity before SeparateFields, you'll obtain the white clip.
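For reference, the three placements described above can be sketched in one script. This is a sketch only: the white/red clips reuse Wilbert's example earlier in the thread, and the outcomes assume the default bottom-field-first parity mentioned later in this thread.

```avisynth
white = BlankClip(width=352, height=288, color=$FFFFFF)
red   = BlankClip(width=352, height=288, color=$FF0000)
base  = Interleave(white, red).AssumeFieldBased  # parity assumed to default to BFF

# 1) No ComplementParity: red becomes the top line after Weave,
#    and SelectOdd picks the red field back out -> red clip.
# return base.Weave.SeparateFields.SelectOdd

# 2) ComplementParity before Weave: the top line becomes white,
#    but SeparateFields undoes the swap the same way -> still red.
# return base.ComplementParity.Weave.SeparateFields.SelectOdd

# 3) ComplementParity before SeparateFields: SelectOdd now picks
#    the other field -> white clip.
return base.Weave.ComplementParity.SeparateFields.SelectOdd
```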
__________________
dividee |
15th July 2002, 22:46 | #46 | Link |
Registered User
Join Date: Jun 2002
Posts: 135
|
The problems are multifold. Some software returns RGB and RGBA as the same thing - RGBA - except RGB has an indeterminate alpha channel. And convertToRGB32 will ignore it if it sees it's already RGBA, so the "indeterminate" mask doesn't get cleaned up. And there's not much that can be done about this problem unless you're willing to live with never being able to import RGBA video! (Which, quite frankly, might not be too big an issue... see below.) In these cases, all you can do is set a white mask yourself. Perhaps this could be better done with a special, optimized "mask" switch - I'll look into that.
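A minimal sketch of the "set a white mask yourself" workaround, reusing the Mask/BlankClip pattern from Wilbert's fourth script earlier in the thread (the filename and the second layer source here are placeholders):

```avisynth
clip = ConvertToRGB32(AVISource("capture.avi"))
# Overwrite whatever garbage the decompressor left in the alpha
# channel with a fully opaque (white) mask before layering.
clip = Mask(clip, BlankClip(clip, color=$FFFFFF))
return Layer(clip, clip, "add", 128, 0, 0, use_chroma=true)
```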
Wilbert, I have a question about this, 'tho...

clip1=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes.avi"))
clip2=ConvertToRGB32(AVISource("F:\shakira-underneath_your_clothes2.avi"))
clip3=Layer(clip1,clip2,"add",128,0,0,use_chroma=true)
Levels(clip3,3,1.03,255,8,240)

Why are you converting this to RGB twice? Or why at all? Layer works just fine with YUY2. In fact, it's twice as fast. So unless you're I/O bound on this, getting rid of the redundant RGB conversions and using the FAST mode as I suggested will cut these times dramatically. Since there's only one LAYER in there, this really isn't much of a benchmark, except as a measure of total system throughput.

clip1=AVISource("F:\shakira-underneath_your_clothes.avi")
clip3=Layer(clip1,clip1,"fast")
convertToRGB32().Levels(clip3,3,1.03,255,8,240)

Give that a shot and see whatcha get. I just (last night, before I committed the latest revision) went through all these modes on my PII/450 using this:

clip=avisource("garbage-test.avi")
clip=clip.layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0)
return clip.trim(0,99)

I made two different source clips, one in RGBA and one in YUY2, by changing the conversion on the first line above and saving the file using direct stream copy. So no conversions in avisynth were done for the first round, and I threw away the first run because it was always about 50% slower. What I ended up with was approximately:

YUY2(ADD) - 12 sec
YUY2(LIGHTEN) - 12-13 sec
YUY2(FAST) - 9 sec
RGBA(ADD) - 43 sec
RGBA(LIGHTEN) - 48 sec
RGBA(FAST) - 40 sec

Then I used the YUY source for the RGBA tests, this time doing the conversion internally (on the first line, above). The results were pretty revealing:

RGBA(ADD) - 20 sec
RGBA(LIGHTEN) - 24 sec
RGBA(FAST) - 16 sec

So even 'tho I used an RGBA "source" it was much, much slower.
Was it just because the YUY source file could fit in memory and the RGBA couldn't? I doubt it, as I have only 256MB of memory to begin with. Was it the file handler? I don't know how I would test that; the files were both saved uncompressed using the standard windows file handler via veedub. On my system, at least, 100 uncompressed frames (640x480) are more I/O bound than processing. And by extrapolating the above, it would seem I could run this test on 500 frames (still) in under a minute - and my test was with a whole chain of 'em. I really doubt you're going to see much difference in a benchmark of these filters unless you stack 'em. My system will barely direct stream copy straight through avisynth at more than ten fps, and using the above benchmark it was often maxing out (on the YUY2) at more than six fps. I'd love to know what others get with higher-end systems. If you'd care to try the above, just use a YUY2 source at 640x480. The content is irrelevant, as all commands use the exact same operations for every pixel in the frame (even the booleans). All that matters is the 640x480 YUY2 source - and run it more than once.

Should I post the DLL? I hate to keep introducing new versions. These latest additions are, to me, "stable", so if mr berg wants to put together a new rev on sourceforge that'd be rockin'...

Last edited by poptones; 15th July 2002 at 22:50. |
16th July 2002, 04:56 | #47 | Link |
Registered User
Join Date: Jul 2002
Posts: 29
|
I did some preliminary runs using the dll from a couple posts back. This is on an xp@1710(190x9), epox 8k3a+, 512MB ram. Times for first run thrown out.
100 frames:
yuy2(add) - 2 sec
rgba(add) - 4 sec
yuy2>rgba(add) - 4 sec
fast was the same

500 frames:
yuy2(add) - 11 sec
rgba(add) - 49 sec
yuy2>rgba(add) - 22 sec
fast was the same

If you are interested I can do something more comprehensive: different fsb speeds, reading/writing to drives on separate ide channels, etc.

For Wilbert: I haven't done much with interlaced stuff recently, but when I did I was never very happy with the smart deinterlacers and usually ended up blending the fields, in which case using convolution would seem to kill two birds with one stone. That said, I'm sure the deinterlacers have improved since then, so I really couldn't say which way I would prefer. As for quality vs. time, I always try to go for the best quality, within reason. Of course, what is reasonable for one person isn't for another. I prefer to ivtc by hand; most people would probably think that a waste. If you have the time and patience, I say go with what looks best.

Edit: Now that I actually think about it, doing the selectodd wouldn't be any better than a blend deinterlace on one source, oops.

Last edited by waka; 16th July 2002 at 07:02. |
16th July 2002, 10:19 | #48 | Link |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,364
|
@poptones,
I'm a bit confused, a couple of posts back you said: Quote:
Does that mean that if your input is YUY2 you have to use Mask? But above you asked: Quote:
I will do your proposed checks tonight. @dividee Quote:
Quote:
interleave(white,red)
Result: frame1=white, frame2=red, frame3=white, right?

interleave(white,red).assumefieldbased
Result: frame1_topfield = frame1 = white, frame1_bottomfield = frame2 = red. Is this right, or is it false? And why is the parity bottom field first?

Sorry for asking those basic questions

@waka
Quote:
|
17th July 2002, 01:05 | #49 | Link |
Registered User
Join Date: Jun 2002
Posts: 135
|
But if you don't need the mask features (and you don't for simple layering for noise reduction) then you don't need to run the conversions unless your source is RGB (24 bit, not 32 bit). If the source is YUY or RGBA, all you need to do is layer them:
source1.layer(source2,"Fast") No need to set masks or convert, because "fast" ignores the masks. It just adds'em up and divides by 2 - that's it. The other part - I dunno. I've not looked into it too deeply, as the only thing I use separatefields for is to perform fieldwise processing. |
17th July 2002, 08:47 | #50 | Link |
C64
Join Date: Apr 2002
Location: Austria
Posts: 830
|
Wouldn't it be good to combine
MergeLuma, MergeChroma and Layer into one function? It seems that all three do similar things; only the user gets confused. Or is it not true that MergeChroma(MergeLuma(clip1, clip2, 0.5), clip2, 0.5) = Layer(clip1, clip2, "fast")? |
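Spelled out, the equivalence being asked about would look something like this. A sketch only: the clip names and filenames are placeholders, and whether the rounding in the two paths matches exactly is precisely the open question.

```avisynth
a = AVISource("capture.avi")
b = AVISource("capture1.avi")

# 50/50 blend of the luma planes, then 50/50 blend of the chroma planes ...
merged  = MergeChroma(MergeLuma(a, b, 0.5), b, 0.5)

# ... versus Layer's mask-free 50/50 averaging mode:
layered = Layer(a, b, "fast")
```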
17th July 2002, 09:25 | #51 | Link |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,364
|
@dividee
Quote:
@poptones Quote:
I did some preliminary runs using the dll from a couple posts back. This is on an Athlon@1200(133x9), ASUS A7M266, 384MB DDR SDRAM.

500 frames [no audio]:
yuy2(add) - [44 sec]
yuy2(lighten) - [37 sec]
yuy2(fast) - [44 sec]
rgba(add) - [78 sec]
rgba(lighten) - [49 sec]
rgba(fast) - [56 sec]
rgba -> yuy2(add) - [51 sec]
rgba -> yuy2(lighten) - [46 sec]
rgba -> yuy2(fast) - [47 sec] |
17th July 2002, 09:27 | #52 | Link |
Registered User
Join Date: Oct 2001
Location: Brussels
Posts: 358
|
@Wilbert:
Quote:
__________________
dividee |
17th July 2002, 09:43 | #54 | Link |
Registered User
Join Date: Oct 2001
Location: Brussels
Posts: 358
|
Quote:
Quote:
[EDIT:] I have written the filter. Note that you should use it after AssumeFieldBased or AssumeFrameBased, since they both reset the parity to BFF (which makes sense, if you think about it). BFF seems to be the default in avisynth. It will be in my next commit to the CVS.
__________________
dividee Last edited by dividee; 17th July 2002 at 13:25. |
17th July 2002, 18:06 | #55 | Link |
Registered User
Join Date: Jun 2002
Posts: 135
|
Wilbert, thanks for the feedback. Now, that's really odd. I mean, I did a good bit of instruction reordering because the lighten and darken ops are more complicated, but I didn't think I had actually made them MORE efficient than add, mul and sub!
Would you mind running the script below for all three types, replacing "lighten" with "add" and "fast"? If you and a couple of others could do it and report back, it would help me a lot. I honestly cannot understand how lighten could be FASTER than a simple add, but if this is common on many machines it seems obvious I need to look again at my mmx instruction sequences. I know on my "old fashion" PII it acts as I expect, and your results, while pleasing, are also kinda puzzling.

clip=avisource("garbage-test.avi")
clip=clip.layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0).layer(clip,"lighten",128,0,0)
return clip.trim(0,499)

edit: never mind, I think. Are you using the latest version off sourceforge? If you are using one with C code I can understand, as it's probably just doing a bunch of null ops and returning. To get valid lighten and darken functions you need to make a build from the CVS, as I don't believe there's a fresher build just yet.

Last edited by poptones; 17th July 2002 at 18:09. |
18th July 2002, 12:13 | #56 | Link |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,364
|
@poptones,
Quote:
27th July 2002, 18:09 | #57 | Link |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,364
|
@dividee,
Quote:
27th July 2002, 18:21 | #58 | Link |
Registered User
Join Date: Oct 2001
Location: Brussels
Posts: 358
|
Yeah, that's the one. Internally it's only one function, but it maps to two script filters. But I'm sorry, I committed the filter after Richard released 2.02. I'll compile a version and send it to you.
__________________
dividee |
28th December 2002, 11:26 | #60 | Link |
Registered User
Join Date: Apr 2002
Posts: 272
|
gee, big old bump
conclusion - it works bloody well! Successfully reduces noise and saves me on average about 100MB on my 1 CD caps.

The simplest script would be:

clip0=avisource("capture.avi")
clip1=avisource("capture1.avi")
union=clip0.layer(clip1,"fast")
return union |