Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 1st August 2009, 15:23   #1  |  Link
D'zzy
paint it red
 
Join Date: Jul 2009
Posts: 19
First time encoding, need some help.

Hello, I'm totally new to avisynth and basically all the video processing stuff.

What brings me here is this DVD that I'm trying to encode. The content is 6 episodes of the Anime FLCL.

OK, as always, samples first :

Clip01-1.CrackedGuitar.m2v 4.2 MB
Clip01-2.BlackManga.m2v 8.9 MB
Clip01-3.LightScene.m2v 17.0 MB
Clip01-4.Fade.m2v 9.2 MB

About the samples:
:: They're all from the 1st episode :: Cut using "Save Project and Demux Video" in DGIndex ::

01-1.CrackedGuitar:
High-motion scene where noise/blockiness peaks. Step through it frame by frame to see the rubbish...
01-2.BlackManga:
There're several minutes in the episode where it's totally black and white manga, with some high motion among them, where it's very noisy/blocky...
01-3.LightScene:
Light scene during the dark blue night. I'm not sure whether it's haloing or ringing... as in the other parts of the video.
01-4.Fade:
Scene change, where the previous scene shrinks to the top-right. High motion produces some noise. Also look at the girl's skirt (haloing or ringing?).
The noise after the scene change is intentional, I think, though there are non-intentional ones too... Complicated...

I want to use avisynth(version 2.58) to do some filtering and to frameserve to x264.exe for encoding, then mux video and audio through mkvmerge.

Used DGIndex "Respect Pulldown Flags" to produce .d2v file, with Clipping set to "8,8,0,0".

My avisynth script looks like this so far:

Code:
LoadPlugin("G:\FLCL\software\dgmpgdec155\DGDecode.dll")
LoadPlugin("G:\FLCL\software\TIVTC\TIVTC.dll")
LoadPlugin("G:\FLCL\software\dedup\Dedup.dll")

MPEG2Source("G:\FLCL\output\VTS_01_1.d2v", cpu=0)

# 2nd pass of TIVTC(VFR mkv), use ovr to override mis-postprocessing.
tfm(d2v="G:\FLCL\output\VTS_01_1.d2v",mode=3,pp=6,slow=2,ovr="ovr.txt",input="matches.txt")
tdecimate(mode=5,hybrid=2,vfrDec=1,hint=true,input="metrics.txt",tfmIn="matches.txt",mkvOut="tivtc_timecodes.txt")

# Replace Bad frames with images
# Remove duplicated frames with DeDup
# Denoise/Deblock/Dering/Dehalo...
# Sharpen/Linedarken...
"matches.txt" and "metrics.txt" are generated by 1st pass of TIVTC:

Code:
tfm(d2v="G:\FLCL\output\VTS_01_1.d2v",mode=3,pp=1,slow=2,output="matches.txt")
tdecimate(mode=4,output="metrics.txt")
You may notice that I'm making the video a Variable FrameRate one. This brings further confusion, as described later.

By using TIVTC and "ovr.txt" I see almost no combing now. Fine.

Obviously, there's a lot left to be done...

One thing is that around frames 904 to 911 there are tiny blocks hanging between the girl's nose and mouth, which means a weird flash of about 0.33 seconds.
Experimented with some denoisers with no good result. I thought, "How about painting it by hand?"

So here is the first challenge:

>> Replace frames with external images <<

>Solution 1.0:

Code:
SetMemoryMax(512)

___LoadPlugin___

___MPEG2Source___

function imagesplice (clip c, string "filename", int "frameno")
{
frame=imagesource(filename)
frame=frame.selectevery(frame.framecount,0).assumefps(c.framerate).converttoyv12()
c.trim(0,frameno-1)++frame++c.trim(frameno+1,0)
}

___TIVTC___

imagesplice("904.png",904)
imagesplice("905.png",905)
imagesplice("906.png",906)
imagesplice("907.png",907)
imagesplice("908.png",908)
imagesplice("909.png",909)
imagesplice("910.png",910)
imagesplice("911.png",911)
It basically does a great job, but there's a problem:
"Avisynth read error: CAVIStreamSynth: System exception - Access Violation at 0x16cbf7d, reading from 0x3fad004" -- Memory Leak
This happened when I added DegrainMedian().deen() and LimitedSharpenFaster() to the script.
Too many trims and splices make it crash.

Note: the imagesplice function is from Mug Funky's post
I don't know how ReplaceFrame works since the link seems dead.
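The trim arithmetic in imagesplice can be sanity-checked outside AviSynth. Here is a minimal Python model (frame labels only; the helper name is mine, not an AviSynth function) showing that replacing frame N keeps the total count and the neighbours intact:

```python
# Model a clip as a list of frame labels and splice one replacement in,
# mirroring trim(0, frameno-1) ++ frame ++ trim(frameno+1, 0) from the script.
def imagesplice_model(clip, frameno, replacement):
    # clip[:frameno] keeps frames 0..frameno-1;
    # clip[frameno+1:] keeps frames frameno+1..end.
    return clip[:frameno] + [replacement] + clip[frameno + 1:]

clip = [f"src{i}" for i in range(10)]
out = imagesplice_model(clip, 4, "904.png")
assert len(out) == len(clip)                   # frame count unchanged
assert out[3] == "src3" and out[5] == "src5"   # neighbours untouched
assert out[4] == "904.png"                     # only frame 4 replaced
```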

>Solution 1.1:

Code:
SetMemoryMax(512)

___LoadPlugin___

___MPEG2Source___

function imagesplice2 (clip c, string "filename", int "frameno",int "replaceno")
{
frames=imagesource(filename,frameno,frameno+replaceno-1,c.framerate).converttoyv12()
c.trim(0,frameno-1)++frames++c.trim(frameno+replaceno,0)
}

___TIVTC___

imagesplice2("%03d.png",904,8)
It looks like a better solution to me... But unfortunately it crashes in the same situation.

>> I just find Freezeframe(21433,21442,21443) works great in replacing bad frames with a good version which is inside the video.
>> Also I've read about LBKiller. Haven't tried it yet, but there I read: "avs script generation, using frame freezing". So... is it using Freezeframe() to replace frames with external images?

>Solution 2.0:
Help me with this version bump... ^^

>> Note: Since I'm making the clip VFR, the image replacing the corresponding frame should read exactly the framerate of the replaced frame......Or, maybe NOT??

Now it's challenge 2:

>> Denoise <<

I see there are always a lot of discussions dedicated to this topic. But... I think I'd better ask here about this specific case anyway.

The video is anime material but there is something to be taken into account:

1. Sometimes there are backgrounds with more detail than normal anime, e.g. a wall or bridge with some detailed texture.
2. High-motion scenes bring much more noise than static scenes.
3. The manga part. It's black and white and some frames are really noisy. I guess the denoising method differs between manga and normal anime?

So... different parts need different strategy.
If I need to apply different filters to different parts of the video, what do I do? ApplyRange? ConditionalFilter? (I don't really know how to use them, yet.)

Also, I'm not quite sure about those terms: noise / ring / halo, maybe also block / grain etc...

Another challenge:

>> Color Problem <<

I'm confused a little about the color display in VirtualDub:
1. When playing the script in VDub, there's some "rainbow effect" (if I understand it right... it's like "posterization", or "banding") -- the picture is not smooth at all...
2. When paused, the color is obviously off (darker / less saturated). But the rainbow is gone and it looks smooth.
3. Still paused, but with the VDub window unfocused, the color seems fine. It's the same as "Copy source frame to clipboard" and paste it in photoshop.
4. Play the .avs in Windows Media Player 6.4.09.1130, and the color is the same as 2. : color is off but no rainbow effect. No matter focused or not, no matter playing or paused.

So, do you also experience this? Which is the color in the final encoded video?

Also, I've read about ColorMatrix()... [ Makes someone doubt if they've been encoding crap all those years...lol ]

With "MPEG2Source("G:\FLCL\output\VTS_01_1.d2v", cpu=0,info=1)" I get:

Colorimetry: ITU-R BT.470-2 System B, G (5)

Which is equal to ITU-R BT.601 or Rec.601.

Do I need to add ColorMatrix() to my script? If I do, where?
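For intuition on why a matrix mismatch shifts colors: the practical difference between Rec.601 and Rec.709 is the set of luma coefficients, so the same RGB pixel maps to different Y'CbCr values. A small sketch (the coefficient triples are the standard published ones; everything else here is just illustration, not ColorMatrix's actual code):

```python
# Luma from R'G'B' under the two common matrices.
def luma(r, g, b, coeffs):
    kr, kg, kb = coeffs
    return kr * r + kg * g + kb * b

BT601 = (0.299, 0.587, 0.114)      # what this DVD is flagged as
BT709 = (0.2126, 0.7152, 0.0722)   # what HD players often assume

# A saturated red pixel gets a noticeably different luma under each matrix,
# which is why decoding 601-encoded video with a 709 matrix looks off.
y601 = luma(255, 0, 0, BT601)   # ~76.2
y709 = luma(255, 0, 0, BT709)   # ~54.2
assert abs(y601 - 76.245) < 0.01
assert abs(y709 - 54.213) < 0.01
```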

>> Sharpen <<

Anime materials always look better when those dark lines are sharp with smooth outlines. I tried LimitedSharpenFaster() and am already amazed:

original
--> deen().LimitedSharpenFaster()
I think I only need to do the denoising better before sharpening, and it's gonna be great!
Maybe you guys have something else to recommend too? Or help me tweak the parameters ^^

>> Filter Order <<

I don't know how avisynth reads the commands... I mean the order they're read/applied.
Say, normally, in a programming language, we have "if... then... elif... then... fi"... A structure like this is read from the top until a condition is matched, then the command is run. While in avisynth, there is:
Code:
___Filter_for_all___
A = Last
___Filter_for_Bad_Frames___
B = Last
ConditionalFilter(A, B, "Bad", "==", "False", false)
ConditionalReader("If.txt", "Bad", false)
The "Bad" values are placed in a file called "If.txt"... ConditionalReader is placed at the bottom, which looks strange to me... Can someone explain this to me?
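One way to picture why ConditionalReader can sit at the bottom: the script is evaluated top to bottom only to build a filter graph; frames are then pulled one at a time, and the runtime variable is looked up at request time. A hedged Python model of that idea (my own names, not AviSynth internals):

```python
# Build-time: wire up two branches and a per-frame switch.
def conditional_filter(a, b, bad_frames):
    # Returns a "clip": a function from frame number to frame,
    # choosing branch b when the frame is marked bad.
    def get_frame(n):
        return b(n) if n in bad_frames else a(n)
    return get_frame

plain = lambda n: f"plain{n}"      # stands in for ___Filter_for_all___
heavy = lambda n: f"filtered{n}"   # stands in for ___Filter_for_Bad_Frames___

# ConditionalReader-style data: frame numbers marked "Bad" in If.txt.
bad = {3, 7}
clip = conditional_filter(plain, heavy, bad)

assert clip(2) == "plain2"      # normal frame takes branch A
assert clip(3) == "filtered3"   # marked frame takes branch B
```

The switch only needs the "Bad" table to exist by the time the first frame is requested, which is after the whole script has been parsed; hence the position in the script doesn't matter the way it would in an imperative language.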

Back to the practical side, I want to ask what is the best order for those filters/commands to be placed?
Say, I want to use dedup to reduce unnecessary frames, it is for sure placed after tdecimate, but where exactly? Before replacing frames with images or after?
And how about VFR? When replacing frames with images, how to keep the framerate intact?
Also, is it OK, in terms of compression, to replace frames with images? (Though the image size is not big anyway, but... just in case.^^)

So much so far. Thanks in advance.

Last edited by D'zzy; 5th August 2009 at 17:55.
Old 2nd August 2009, 05:18   #2  |  Link
10L23r
Registered User
 
Join Date: Apr 2009
Posts: 122
1. You may want to try animeivtc.
2. the imagesplice thing is not very efficient when used repetitively. You're better off doing it manually (i.e. trim(1,902) ++ imagesource("903.png").converttoyv12() ++ imagesource("904.png").converttoyv12()...)

Compression-wise, replacing frames doesn't really matter because it's only a few frames. As long as you don't change things too much, it won't hurt compression

also, don't forget audiodub, or you will lose audio in those regions. this is an example:
a=last
b=a.trim(1,902) ++ imagesource("903.png").converttoyv12() ++ imagesource("904.png").converttoyv12() ++ a.trim(905,1000)
audiodubex(b,a)

3. I have had the same problem in vdub. Just ignore it. Give AvsP a try too.
Afaik, rec. 601 means that colormatrix is unnecessary.

definitions/examples:
- ringing
- haloing (and a more extreme example)
- film grain (probably not found in anime, I think)
- noise (film grain is a type of noise)
- an example of severe blocking

order of filters, a general guide:
source
deinterlace/ivtc
make mod16 with borders(usually not required)
deblock
degrain/denoise
sharpen
replace frames
resize/crop
trim

Crop before resizing if there are black borders. Otherwise, use the crop-like functionality of resize. Read this

Each filter builds off the previous one, so just use common sense. For example, you don't want to sharpen before degraining, because sharpening (usually) amplifies noise, giving the degrainer a harder time.

A note on blocking: find a high motion area, copy the frame to the gimp/photoshop/whatever, and zoom in about 400%. If you see blocks (not pixels) in areas of motion or dark areas, then deblock. I prefer deblock_qed.
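The zoom-and-look test above can also be done numerically. As a rough, hypothetical heuristic (names and numbers are mine): DCT blocking shows up as larger luma jumps across 8-pixel block boundaries than inside blocks:

```python
# Compare average horizontal luma steps at 8-pixel boundaries vs elsewhere.
def blockiness(rows):
    at_boundary, inside = [], []
    for row in rows:
        for x in range(1, len(row)):
            step = abs(row[x] - row[x - 1])
            (at_boundary if x % 8 == 0 else inside).append(step)
    return sum(at_boundary) / len(at_boundary), sum(inside) / len(inside)

# Synthetic blocky row: flat 8-pixel blocks with a level jump between them.
blocky = [[16, 16, 16, 16, 16, 16, 16, 16, 48, 48, 48, 48, 48, 48, 48, 48]]
b, i = blockiness(blocky)
assert b > i  # boundary steps dominate -> deblocking is worth trying
```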

Last edited by 10L23r; 2nd August 2009 at 05:38.
Old 2nd August 2009, 09:27   #3  |  Link
D'zzy
Thanks for your patient explanation, 10L23r.

1. Animeivtc must be great and I will definitely try it later. But this time I'd go step by step so that I'd understand things better.

2. I don't do audio at this stage, but plan to mux the audio with video through mkvmerge. So, no worries I think. (as long as I can keep the video timing intact.)

I followed your advice and used this (fps needs to be added I think):

Code:
__TIVTC_etc.___
function imageimport (clip c, string "filename")
{
frame=imagesource(filename,end=0,fps=c.framerate).converttoyv12()
return frame
}

trim(0,903)++imageimport("904.png")++imageimport("905.png")++imageimport("906.png")++imageimport("907.png")++imageimport("908.png")++imageimport("909.png")++imageimport("910.png")++imageimport("911.png")++trim(912,0)

DegrainMedian()
deen()
LimitedSharpenFaster()
At the first try with VDub, it crashed. But I didn't just give up. I opened Process Explorer to monitor total memory usage; memory stayed at a high level even after I closed the video in VDub. I had VDub, AvsP and many browser tabs open. I decided to close some tabs and close AvsP too (it takes 100+ MB of memory...), and then the script ran well in VDub. Memory usage went up to about 500 MB (I put SetMemoryMax(512) at the top, so maybe it's just the full 512 MB).

Then I thought, imagesplice2 ("Solution 1.1") SHOULD do the job just the same! Why? Because imagesource() accepts a range, and out of common sense :wink: , I believe it imports an image sequence in a similar manner to the code above. So I tried imagesplice2 again... (also followed by DegrainMedian(), deen() and LSF, loaded in VDub)

Guess what?! The memory usage is just the same (about 512 MB) (the memory growth curve may or may not be the same... the final peak memory is.)
It amazes me even further: imagesplice ("Solution 1.0") works the same. -- The same peak memory as with NO image replacement at all!

Oops! It fails again! All three solutions! In VDub, that is. If I comment out all the image replacement stuff, VDub plays it.
On the other hand, AvsP can preview it and also play it through an external player (windows media player 6.4.9.1126).
What is wrong with VDub? I quit and restarted VDub, and it played well (replacing frames in the script with 10L23r's method). After a while I tried again; it was fine at first, then I seeked to a new frame and it told me about the memory leak again... After another while, I loaded the script using imagesplice2 and it played well again! Very, very strange

Anyway... basically I believe imagesplice2("%03d.png",904,8) to be equivalent to the slightly longer chain in the code above. My PC has 2 GB of memory, so I don't doubt its capability. AvsP's preview and play functions work fine in both cases. Maybe I should just leave VDub alone? Or should I use a different method for frame replacing? :?

What I'm concerned about is: since I'm gonna feed x264.exe with this avs script, will it crash as VDub does?

Oops! It happens for the first time with AvsP (when seeking):
Quote:
Traceback (most recent call last):
File "AvsP.pyo", line 6291, in OnSliderReleased
File "AvsP.pyo", line 8925, in ShowVideoFrame
File "AvsP.pyo", line 9467, in PaintAVIFrame
File "pyavs.pyo", line 322, in DrawFrame
File "pyavs.pyo", line 301, in _GetFrame
File "avisynth.pyo", line 277, in GetFrame
WindowsError: exception: access violation reading 0x027ED004
Although it's just once... and recovered soon with no restart... :?

3. AvsP is a nice Editor. I have been using it for several days, but never tried the preview/play function because of VDub. Now I'm using it!

About the color again(in AvsP):
a. When previewing in AvsP, the picture color looks right to me.
b. When playing, the color is off (darker), just like I described in my first post. The external player is windows media player 6.4.9.1126.
Question is: which is the same as the finally encoded color?

4. >> Confusion about framerate <<

By adding info() to the end of the script, it shows(never changes):
Frames Per Second: 24.1665 (32510000/1345253)
But I'm using TIVTC in VFR mode, the resulting video should be VFR. How does avisynth treat VFR material?
For example, if I do the frame replacing with imagesplice2 for frames 904-911, would that "c.framerate" be accurate? I mean, is it the framerate read from frame 903, or the average (as with info()), or something else? Or are they all the same to avisynth?
In the timecodes file generated by tdecimate there are two framerate values, 23.976024 and occasionally 17.982018. How come info() tells me it's 24.1665?
Since I'm gonna add dedup to the script too, this has to be made clear so I don't mess it all up.
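The true average framerate of a VFR stream has to come from the timecodes, not from what the clip reports. A sketch that derives the average fps from v2-style timecodes (one millisecond timestamp per frame, as mkvmerge consumes; treat the parsing details as an assumption):

```python
def average_fps(timestamps_ms):
    # n timestamps span (last - first) ms; average fps = intervals / seconds.
    span_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return (len(timestamps_ms) - 1) / span_s

# 24 frames at 23.976 fps: timestamps every 1000/23.976 ms.
ts = [i * 1000 / 23.976 for i in range(24)]
assert abs(average_fps(ts) - 23.976) < 1e-6

# Mixing in slower 17.982 fps sections can only pull the average DOWN,
# so a reported 24.1665 cannot be the timecode average; it must be the
# clip's nominal (assumed-CFR) rate.
ts_mixed = ts + [ts[-1] + (i + 1) * 1000 / 17.982 for i in range(12)]
assert average_fps(ts_mixed) < 23.976
```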

5. Cropping.
The source has 2 black lines on the left/right side so I have to crop them. I did it in DGIndex so the .d2v file contains "Clipping=8,8,0,0". This should be OK?

Has anybody played with the samples already? Gimme some news!

Last edited by D'zzy; 2nd August 2009 at 13:30.
Old 2nd August 2009, 09:50   #4  |  Link
TheRyuu
warpsharpened
 
Join Date: Feb 2007
Posts: 788
Quote:
Originally Posted by D'zzy View Post
How does avisynth treat VFR material?
Assumed CFR essentially.
http://forums.animesuki.com/showpost...&postcount=140

http://forums.animesuki.com/showthread.php?t=34738 is a good read about vfr (and anime).

I'll look at the samples tomorrow.
Old 2nd August 2009, 10:05   #5  |  Link
D'zzy
TheRyuu, thanks very much for the quick answer!! Reading now.

>> You're warpsharpened? Makes me chuckle
Old 4th August 2009, 07:02   #6  |  Link
D'zzy
Gavino has pointed me to this, and now I understand better how a script works. (Thanks, Gavino!)

How about this script:
Code:
DGSource() #Source has no sound. #[1]

R1=imagesource("%03d.png", 904, 911, fps=last.framerate, pixel_type="rgb32").converttoyv12() #[2]
R2=imagesource("%04d.png", 5163, 5163, fps=last.framerate, pixel_type="rgb32").converttoyv12() #[3]

trim(0,903) ++ R1 ++ trim(912,5162) ++ R2 ++ trim(5164,0)#[4]
Let me describe how I understand it:

When a frame is called, [4] is requested, and [4] will request something according to the called frame number:
If the called frame is in range [0,903], then [4] requests it from [1].
If the called frame is in range [904,911], then [4] requests it from [2]; and only for frames in this range is imagesource() for R1 run.
If the called frame is in range [912,5162], then [4] requests it from [1].
If the called frame is in range [5163,5163], then [4] requests it from [3]; and only for frames in this range is imagesource() for R2 run.
If the called frame is in range [5164,0], then [4] requests it from [1].


If this theory is true, then the "range" is determined prior to all the requests above. How is this done? Maybe imagesource() is called just for the frame-count part of this "range"? Sounds unlikely... [But it would be good, because that way only one instance of imagesource() runs at a time.]
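The "range" question has a simple mechanical answer if ++ just records each constituent's frame count when the graph is built; a frame request is then routed by subtracting lengths. A hedged Python model of that dispatch (names are mine):

```python
# Model of how a splice (++) can route frame requests to its parts:
# it only needs each constituent clip's frame count, known at load time.
def make_splice(parts):  # parts: list of (clip_name, frame_count)
    def route(n):
        for name, count in parts:
            if n < count:
                return name, n   # serve local frame n of this part
            n -= count
        raise IndexError("frame out of range")
    return route

# 904 source frames, 8 replacement images, then the rest of the source.
route = make_splice([("main_a", 904), ("images", 8), ("main_b", 100)])
assert route(903) == ("main_a", 903)   # last frame before the images
assert route(904) == ("images", 0)     # first replacement image
assert route(912) == ("main_b", 0)     # back to the source clip
```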

Another assumption:

R1 and R2 each create one instance of imagesource() as soon as the script is loaded. Hence, 2 instances of imagesource() are running at the same time throughout the whole run of the script. This requires more memory, I think. Or maybe imagesource() takes too little memory to be an issue?



In my current script there are R1, R2, ..., R7, and the script runs without crashing. That's fine. Well, I just need to make it clear

BTW, is there a plugin that shows how much memory each function takes in a script?

I've also come up with another way of doing frame replacement from external images:

Code:
DGSource() #Source has no sound.

#Renamed images to 000.png 001.png ... 011.png
R=imagesource("%03d.png", 000, 011, fps=last.framerate, pixel_type="rgb32").converttoyv12()
RemapFrames(mappings="
[904 911] [0 7]
5163 8
[5400 5402] [9 11]", sourceClip=R)
How is this compared to the R1,R2 solution? (memory-wise)
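The mappings string is compact enough to unit-test by hand. A small parser sketch (my own helper, not part of the RemapFrames plugin) that expands it into a frame-to-source-frame table:

```python
import re

def parse_mappings(text):
    # "[a b] [c d]" maps the ranges pairwise; "a b" maps a single frame.
    table = {}
    for line in text.strip().splitlines():
        nums = [int(x) for x in re.findall(r"-?\d+", line)]
        if line.lstrip().startswith("["):
            a, b, c, d = nums
            for off in range(b - a + 1):
                table[a + off] = c + off
        else:
            table[nums[0]] = nums[1]
    return table

m = parse_mappings("""
[904 911] [0 7]
5163 8
[5400 5402] [9 11]
""")
assert m[904] == 0 and m[911] == 7   # range maps element-wise
assert m[5163] == 8                  # single-frame mapping
assert m[5402] == 11
```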

Hey, anybody tested the samples already? Your advice is always appreciated!

Last edited by D'zzy; 4th August 2009 at 07:32.
Old 4th August 2009, 09:19   #7  |  Link
Gavino
Avisynth language lover
 
Join Date: Dec 2007
Location: Spain
Posts: 3,377
Quote:
Originally Posted by D'zzy View Post
Let me describe how I understand it:
...
If this theory is true, then the "range" is determined prior to all the requests above. How is this done?
Your description of the way frames are requested is correct. The ++ filter knows the frame counts of its constituent clips and hence for any given frame number requested, it knows which frame of which constituent to request from it.
Quote:
Another assumption:

R1 and R2 each create one instance of imagesource() as soon as the script is loaded. Hence, 2 instances of imagesource() are running at the same time throughout the whole run of the script. This requires more memory, I think. Or maybe imagesource() takes too little memory to be an issue?
Again correct. Two instances are created, although any given frame will use only one (or neither) of them. This can be a memory issue with a large number of ImageSource instances (especially if they are digital photos with a large frame size).
Quote:
BTW, is there a plugin that shows how much memory each function takes in a script?
I'm not aware of any, and indeed I don't think it's possible to attribute memory to a specific function.
Quote:
RemapFrames(mappings="...", sourceClip=R)
How is this compared to the R1,R2 solution? (memory-wise)
Better, since only one instance of ImageSource is created.
Old 4th August 2009, 09:46   #8  |  Link
D'zzy
Very clear answer, got it all. Thanks again dude.

I'm experimenting with Deblock_QED. Stronger values make some tiny blocks grow bigger. I also tried Deblock_QED().Deblock_QED().Deblock_QED(), and the tiny blocks grow too. I don't like this result, so maybe I'll leave it as Deblock_QED(), then use some denoiser for the rest of the cleanup.

Ugh, zooming in to 400% brings so much blockiness and noise to my eyes!
Old 4th August 2009, 10:49   #9  |  Link
TheRyuu
Quote:
Originally Posted by D'zzy View Post
Very clear answer, got it all. Thanks again dude.

I'm experimenting with Deblock_QED. Stronger values make some tiny blocks grow bigger. I also tried Deblock_QED().Deblock_QED().Deblock_QED(), and the tiny blocks grow too. I don't like this result, so maybe I'll leave it as Deblock_QED(), then use some denoiser for the rest of the cleanup.

Ugh, zooming in to 400% brings so much blockiness and noise to my eyes!
You can try:
Deblock_QED_MT2().gradfunkmirror() (or Deblock_QED_MT2().gradfun2dbmod() if you like grain) to try and counter some blocking.

If anything needs more than that, then just break out some strong(er) fft3d/dfttest or something, since at that point it's kinda really fucked anyway, what's the point... just try to minimize the way it looks as best you can.

As far as your samples are concerned, I see a bunch of plain starved-bitrate mpeg2 artifacts. Jeez, I don't remember FLCL being that bad of a source, considering there were only 2 episodes per DVD.
Old 4th August 2009, 13:17   #10  |  Link
D'zzy
With Deblock_QED().GradFunkMirror() I get about the same result as just Deblock_QED()... Look at this picture (zoomed to 400% in AvsP's preview ; the Deblock_QED is the MaskTools v2 version.)

This is called blockiness, right?

Yeah, it is a bad source (even worse now that I'm watching closer )

I think I'll have to try some mighty denoiser to remove those.

BTW, I thought gradfunkmirror is for debanding? I plan to put it at the end of my script...
Old 4th August 2009, 13:38   #11  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,390
That's not (macro-)blocking. That kind of artifact is often called "mosquito noise" - it's an artifact caused by the DCT quantization process, which introduces artificial frequencies.

In short, any "deblocking" filter is rather unlikely to act on these artifacts.
Removing this kind of artifact is the realm of "dering" or possibly "dehalo" filters.
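That mechanism can be reproduced with a toy 1D example: take a sharp edge, run an 8-point DCT (the transform MPEG-2 applies per 8x8 block, here in one dimension), quantise the coefficients coarsely, and invert. The reconstruction ripples on both sides of the edge. (Pure-Python sketch; the quantiser step of 40 is an arbitrary "starved bitrate" stand-in.)

```python
import math

N = 8
A = [math.sqrt(1 / N)] + [math.sqrt(2 / N)] * (N - 1)  # orthonormal scale factors

def dct(x):   # DCT-II
    return [A[k] * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                       for n in range(N)) for k in range(N)]

def idct(X):  # DCT-III, the inverse of the above
    return [sum(A[k] * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N)) for n in range(N)]

edge = [0, 0, 0, 0, 255, 255, 255, 255]   # a hard anime-style line
Q = 40                                    # coarse quantiser step
coarse = idct([round(c / Q) * Q for c in dct(edge)])

# Without quantisation the round trip is (numerically) exact...
assert max(abs(a - b) for a, b in zip(edge, idct(dct(edge)))) < 1e-9
# ...but coarse quantisation leaves ripples next to the edge: mosquito noise.
assert max(abs(a - b) for a, b in zip(edge, coarse)) > 5
```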
Old 4th August 2009, 14:37   #12  |  Link
D'zzy
I thought they were blocks because they do form a block-like shape. Nice to know they're not, Didée (Your scripts are always good at what they're made for. Many thanks!!)

So I just tried HQDering(); the difference is very obvious around the dark lines! You can compare this picture with the former one to see.
Is there some alternative to HQDering too?

I'm also going to try Dehalo_alpha()

I think in my source it's likely to be ringing. Halo is different but I'm not sure about its difference from ringing. :?

Eh, MVDegrain3 is so heavy! And there are also MC_spuds and MCTemporalDenoise waiting for me to try out: which will be the king of slowness? =.=

Also, HQDering by default uses deen; any suggestions for an efficient alternative?
Old 4th August 2009, 14:51   #13  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,924
Try the VirtualDub masked smoothers, such as MSmooth or Smart Smoother. If you like the result, there are Avisynth equivalents.

MSmooth (with strong settings):


Last edited by Guest; 4th August 2009 at 15:01.
Old 4th August 2009, 15:48   #14  |  Link
D'zzy
Thank you, neuron2.

MSmooth() is fast. I think it takes some tweaking if I want the best result.

What I've just tried is HQDering() with default settings(smoother: Deen("a3d",4,15,15,20)) and HQDering() with Msmooth() as smoother.

Mostly their results are just about the same. Only very occasionally does MSmooth() introduce a little unwanted smoothing on the edge line. Take a look at the pictures below:
HQDering with Default: http://omploader.org/vMjNlaw
HQDering with MSmooth(): http://omploader.org/vMjNlbA

Probably MSmooth isn't meant for such usage I think.

I find masking to be really important. Many scripts use Masktools and I believe it's excellent. VirtualDub's masking should be a faster way to go, but maybe not as accurate?

But wait, I'm viewing the difference at a zoom of 400%! lol, just trying things out.

---------------------------------------------------------------------------

Quote:
Originally Posted by TheRyuu
then just break out some strong(er) fft3d/dfttest or something
Tested FFT3DGPU() against dfttest(): they're both strong and preserve detail well. Very impressive. dfttest() removes more noise while retaining as much detail as FFT3DGPU(). But again, things might change if options are tweaked.

Last edited by D'zzy; 4th August 2009 at 18:42.
Old 5th August 2009, 10:16   #15  |  Link
D'zzy
I've tested a bit with dfttest. What a great job tritical has done!
The problem, though, is that at a scene change the first frame of the new scene gets blended with the last frame of the previous scene (and several other frames, depending maybe on the tbsize value [default=5]). Look at the 2 pictures below in case you don't see what I mean:
first frame after scene change : http://omploader.org/vMjNtNA
last frame before scene change : http://omploader.org/vMjNtNQ
[Note:: Both zoomed to 400%]

Then there is the MC version! Did it like this (adapted from the example by tritical) :

Code:
function MCdfttest (clip c)
{
vf1=c.mvanalyse(pel=2,blksize=8,isb=false,idx=1,overlap=4,sharp=2,truemotion=true)
vf2=c.mvanalyse(pel=2,blksize=8,isb=false,idx=1,delta=2,overlap=4,sharp=2,truemotion=true)
vb1=c.mvanalyse(pel=2,blksize=8,isb=true,idx=1,overlap=4,sharp=2,truemotion=true)
vb2=c.mvanalyse(pel=2,blksize=8,isb=true,idx=1,delta=2,overlap=4,sharp=2,truemotion=true)
interleave(\
mvcompensate(c,vf2,idx=1,thSCD1=800)\
, mvcompensate(c,vf1,idx=1,thSCD1=800)\
, c\
, mvcompensate(c,vb1,idx=1,thSCD1=800)\
, mvcompensate(c,vb2,idx=1,thSCD1=800))
dfttest(sigma=24)
selectevery(5,2)
}
And now that side effect is gone. Also there's slightly more noise removed.

Now my script looks like this:
Code:
SetMemoryMax(512)

#__LoadPlugins__#

MPEG2Source("G:\FLCL\output\VTS_01_1.d2v", cpu=0)

### IVTC
tfm(d2v="G:\FLCL\output\VTS_01_1.d2v",mode=3,pp=6,slow=2,ovr="ovr.txt",input="matches.txt")
tdecimate(mode=5,hybrid=2,vfrDec=1,hint=true,input="metrics.txt",ovr="ovr_tdec.txt",tfmIn="matches.txt",mkvOut="tivtc_timecodes.txt")

### Replace bad frames with internal frames.
RemapFrames(mappings="5190 5189
                      [5686 5687] 5685
                      [5713 5734] 5735
                      [21433 21442] 21443")

### Replace real bad frames with external images
R=imagesource("%03d.png", 000, 045, fps=last.framerate, pixel_type="rgb32").converttoyv12()
RemapFrames(mappings="[904 911] [0 7]
                      5163 8
                      [5400 5402] [9 11]
                      [5688 5710] [12 34]
                      8471 35
                      [35371 35373] [36 38]
                      [35542 35548] [39 45]", sourceClip=R)

### Deblock
Deblock_QED()

### Dering
HQDering()

### Denoise
function MCdfttest (clip c)
{
vf1=c.mvanalyse(pel=2,blksize=8,isb=false,idx=1,overlap=4,sharp=2,truemotion=true)
vf2=c.mvanalyse(pel=2,blksize=8,isb=false,idx=1,delta=2,overlap=4,sharp=2,truemotion=true)
vb1=c.mvanalyse(pel=2,blksize=8,isb=true,idx=1,overlap=4,sharp=2,truemotion=true)
vb2=c.mvanalyse(pel=2,blksize=8,isb=true,idx=1,delta=2,overlap=4,sharp=2,truemotion=true)
interleave(\
mvcompensate(c,vf2,idx=1,thSCD1=800)\
, mvcompensate(c,vf1,idx=1,thSCD1=800)\
, c\
, mvcompensate(c,vb1,idx=1,thSCD1=800)\
, mvcompensate(c,vb2,idx=1,thSCD1=800))
dfttest(sigma=24)
selectevery(5,2)
}
MCdfttest()
FluxSmoothST()

### ---- Planned to Do the Following ---- ###

### Remove duplicated frames, resulting in VFR(mkv).
#DupMC(log="dup.txt")  #first pass
#DeDup(threshold=0.4, maxcopies=20, maxdrops=20, log="dup.txt", times="dup.times.txt",timesin="tivtc_timecodes.txt")

### Sharpen the cleaned-up frames
#LSFmod()

### LineDarkening
#FastLineDarkenMOD(thinning=0)

### Tweak Color if you feel like
#Tweak(sat=1.06)

### Fast Debanding
#GradFunkMirror()
Any suggestions? Say, the filter order?

So far, the majority of the noise has been removed. What I find strange is some occasional jagged edges; look at this picture.
Is there a tool designed for handling jagged edges? I consider this a special kind of smoother... one which smooths only line outlines.

Another thing I'd like to ask: FluxSmoothST() after MCdfttest() does very slight noise removal, which leaves the details well intact but is also not so efficient at denoising.
I think applying some light smoother before or after MCdfttest() would help. Which smoother should I use in that place?

Yeah, I think I'll start encoding soon!

P.S. What is that tool that reports the speed of the script? Mine is running really slow... I'd like to see how slow it really is.

Edit::
Where should I put DeDup()?
I think it's best to do DeDup() as early as possible -- that way the filters after DeDup() only have to process the frames that DeDup() doesn't drop, which may reduce the total cost a lot! (Correct me if I'm wrong.)
Then why do I put it after MCdfttest()?? Because I think MCdfttest() needs the original frames for temporal denoising as well as for the MC part... If it's put after DeDup, the number of frames MCdfttest() can work on is reduced, so the result might not be as good -- similar frames are dropped by DeDup, so the temporal comparison might be less accurate. :? I'm really not sure about this.

My script is already very slow... Maybe it's gonna take about 10 hours for just 20 minutes of video. So I think if the filter order can be optimized for better speed, it's well worth asking

Last edited by D'zzy; 5th August 2009 at 14:24.
Old 5th August 2009, 14:34   #16  |  Link
TheFluff
Excessively jovial fellow
 
Join Date: Jun 2004
Location: rude
Posts: 974
don't use dedup
Old 5th August 2009, 14:45   #17  |  Link
D'zzy
Hello, TheFluff.

Amazing... tell me why?
Old 5th August 2009, 15:18   #18  |  Link
thewebchat
Advanced Blogging
 
Join Date: May 2009
Posts: 483
DeDup saves next to no bitrate and has a high tendency to cause failure.
Old 5th August 2009, 15:33   #19  |  Link
TheFluff
what he said, it just causes a lot of complications for extremely small tangible benefits.
Old 5th August 2009, 15:46   #20  |  Link
D'zzy
TheFluff, I found what you said at animesuki

Quote:
Originally Posted by TheFluff
because h.264 is so efficient that identical frames compress to almost zero (IIRC it's 10 bytes per frame).
Makes me wonder if VFR is actually useless when encoding to h.264...

---

Even if, compression-wise, DeDup does about nothing, I still consider it helpful speed-wise...

Like what I described before, DeDup will reduce the work for the filters after it, especially when the filters after are heavy. Am I right about this?
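The speed argument only pays off if the duplicate test itself is much cheaper than the filters it would let you skip. For the record, here is a hypothetical duplicate detector in the DupMC spirit (mean absolute luma difference against a threshold; the names and the 0.4 default are mine, not DeDup's actual code):

```python
def mean_abs_diff(a, b):
    # Per-pixel mean absolute difference between two frames (flat lists).
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_dups(frames, threshold=0.4):
    # Frames whose difference from the previous frame stays below the
    # threshold (in luma units) are candidates for dropping/copying.
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) < threshold]

f0 = [16] * 64
f1 = [16] * 64           # exact duplicate of f0
f2 = [16] * 63 + [80]    # one pixel changed: mean diff 64/64 = 1.0 > 0.4
assert find_dups([f0, f1, f2]) == [1]
```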

Last edited by D'zzy; 5th August 2009 at 15:51.