Old 20th September 2003, 00:44   #1  |  Link
Didée
Restore24: Script to recover FILM from field-blended PAL

Hello all.

Beware, this post is LONG.

The beginning of this effort lies here. I couldn't live with simple deinterlacing, and I couldn't live with SmartBobbed video either. So I started just another attempt - probably the X-thousandth one in digital video history - to somehow restore the 24p content of such mistreated sources.
The task seems easy: don't use the blended fields, only the good ones, while neither dropping nor duplicating any of the good ones. Well, it would be easy, if computers weren't dumb like grass ...
Now I have scripted something together that, although far from perfect yet, already works pretty well. As of now it's hardly more than a proof of concept, and lots of improvement should be possible. But since even the very first and *dumbest possible* version worked like a charm, I really *do believe* in this concept, and I strongly encourage you all to step in and help improve it!

The basic principle is in no way spectacular:

1. Bob the video to 50fps
2. Detect fields that are blends
3. Replace those blended fields with unblended neighbors
4. Decimate the cleaned stream to 24fps
5. Smile and enjoy
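
In script form, the whole pipeline boils down to the skeleton below. This is NOT the real script, just its shape: steps 2 and 3 are only hinted at, since they are the actual substance of Restore24. TomsBobSoft and TomsUnBob come from the TomsBob.avs posted further down; SmartDecimate is Kevin Atkinson's plugin.

Code:
AviSource("blended_pal.avi")
ConvertToYUY2()
TomsBobSoft()              # 1. bob the 25fps interlaced stream to 50fps
# 2. + 3. detect blended frames and replace them with clean neighbors
#         (this is the heart of Restore24, see the full script below)
TomsUnBob()                # fold the cleaned stream back to fields
AssumeTFF()
SmartDecimate(24, 50)      # 4. decimate the cleaned stream to 24fps
# 5. smile and enjoy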

Step 1 is obvious.
Step 2 works quite nicely, although I cannot analyze the frames in an optimal way through an AviSynth script alone - that would need additional commands that aren't available yet. In the end, the best solution would be to turn all of this into a filter plugin.
Steps 3 & 4 are directly related, and these are the really tricky ones! Avoiding the blends is not so hard, but if we get a jerky stream like "go-stop-go-go-stop-jump-go-jump-go-..." in exchange, that would be even more distracting. Exactly these two steps are the first target for optimisation. Currently my solution still has jerks every now and then - it also depends a lot on scene content. Straight pans work really well, and so do most sequences with noticeable motion in them. It gets harder in still scenes with little motion (e.g. mouth movement only), and in scenes with very low contrast or a blurry picture.

Now let me explain the method. The script comes at the very end.


1. Bobbing: Doubling the frame rate

This is the first step: separate the fields, and convert them to fullsize frames, so that we have something to work with.
For analysis, I use an unsmart bob, to retain the fields as pure as they are. In fact, "TomsBob" is used with line averaging activated ( TomsMoComp(x,-1,1) ), as this seems to work out best with the given concept.
However, it is perfectly possible to use any other method of (Smart)Bobbing to finally render the stream, as long as the output of the bob filter used for rendering is exactly in-phase with TomsBob.
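
Boiled down, the analysis bob is nothing more than this (essentially the TomsBobSoft() function from the TomsBob.avs posted further down, minus the crop/addborders field re-alignment):

Code:
AviSource("blended_pal.avi")
ConvertToYUY2(interlaced=true)
SeparateFields()
TomsMoComp(1, -1, 1)   # the TomsMoComp(x,-1,1) call from above: line averaging on
# ...followed by crop/addborders to put both fields back onto the same grid
# (see TomsBob.avs below for the complete function)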


2. Detecting blended fields

At first, I tried the plugins "deblend" and "unblend" for this step. Most probably it was me being too dumb to use them correctly. Quite some blends were detected, but lots of them sneaked through. Changing the parameters seemed to have almost no effect at all, and the filters' work was too much of a "black box" for me: I simply don't understand their working principle. Then I would have liked to try the PFR plugin, but every version of it crashes for me with an Access Violation after about 3 to 5 frames - sorry, Simon. So I started my own approach to detecting blends.

Look at a clean frame and a blended frame side by side. The clean frame has well-defined content. The blended frame's content is not so well defined: the detail in it has lower contrast, simply because the blending has diminished it.
Well, that's it: if we calculate an edge mask of these two frames, the edge mask of the clean frame will appear noticeably brighter than that of the blended frame. Or, conversely, the edge mask of the blended frame will appear darker.
This, simple as it is, is my way to detect the blended fields:
- create an edge mask on the bobbed stream
- for every frame, check if the current frame's edge mask is darker than that of both the preceding and the following frame. If yes, it's a blend: on to step 3. If no, it's clean: leave it as it is.
This works wonders, detecting almost every blend!
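
Stripped of all refinements, the detection looks like this in script form (a bare sketch only - the full script below additionally darkens static edges and looks at more neighbors):

Code:
bobbed  = AviSource("blended_pal.avi").ConvertToYUY2().TomsBobSoft()
edgeRGB = bobbed.GreyScale().ConvertToRGB32().GeneralConvolution(0,"
           -1 -1 -1 
           -2 10 -2 
           -1 -1 -1 ")
global edge_c = edgeRGB.ConvertToYV12()               # edge mask of the current frame
global edge_p = BlankClip(edge_c).Trim(1,1) + edge_c  # edge mask, shifted: previous frame
global edge_n = edge_c.Trim(1,0)                      # edge mask, shifted: next frame
# a frame whose edge mask is darker than BOTH neighbors' is flagged as a blend
ScriptClip(bobbed, """ (AverageLuma(edge_c) < AverageLuma(edge_p)) && (AverageLuma(edge_c) < AverageLuma(edge_n)) ? Subtitle("BLEND") : last """)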
BUT, one problem is left here. See "Possible improvements" below.


3. Replacing the blends

Replacing a blend is quite easy, as soon as we have detected one. The bigger question is: replace with what?
The obvious answer seems to be: replace it with the more similar neighbor. In practice, however, this is not necessarily right. My first attempts did use the most similar neighbor, but the end result was quite jerky in places. After fiddling a long time without noticeable improvement, I lost my nerve and simply replaced with the preceding frame *always*. To my surprise, the result was much, much smoother - but still not satisfying. The goal is to achieve an evenly distributed mix of 2-frame dups and 3-frame dups in the bobbed stream, and by all means to avoid leaving single frames, because these are in danger of being missed by the final decimation step. For now, I have some simple checks in the script that try to do exactly this, but they are not working well enough - yet.
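
The replacing itself is the trivial part: 'previous' and 'next' are just one-frame-shifted copies of the output stream, and the per-frame decision returns one of them. This is exactly the out/out0/out1 idiom used in the script below:

Code:
global out  = AviSource("blended_pal.avi").ConvertToYUY2().TomsBobSoft()
global out0 = BlankClip(out).Trim(1,1) + out  # at every position: the previous frame
global out1 = out.Trim(1,0)                   # at every position: the next frame
return out   # per frame, the decision logic then returns out, out0 or out1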


4. Decimating the cleaned stream

After replacing the blends, we need to decimate the 50fps stream down to 24fps (or maybe to 25fps, in case the NTSC stream was created from PAL, not FILM).
To make a long story short: after lots of trying, waiting, swearing and tearing out my hair, I stepped away from Donald's Decimate for this particular task. I don't want to belittle Donald's work in any way - what would we do without his great Decomb package? - but in the given case, I couldn't get from it what I wanted. (Keep in mind that, as a PAL user, I am no deinterlacing expert at all.) With decimate(), I'd need to decimate in at least two steps to get to 24fps: "decimate(2).decimate(25)". Firstly, decimating 1-of-25 is somewhat slow, making previewing difficult, and secondly, decimate(2) is not smart enough for the stream created so far. It got better with "decimate(4, mode=2).decimate(3, mode=2).decimate(25)", but this is even slower, and still somehow not satisfying.
Thanks go to Kevin Atkinson and his filter "SmartDecimate", which comes to the rescue here. In fact, I used SmartDecimate for the very first time. Since this filter does field matching + decimation in one step, it is much better suited to the given problem. Or at least it was easier for me to get it working like it should.
Briefly: the cleaned stream is un-bobbed again and fed to SmartDecimate (while providing the cleaned bobbed stream as bob source to the filter, but that's a detail). SmartDecimate then picks out a given number of unique fields (according to the decimation ratio, here 24/50), searches for matching fields for these, and uses the bob source for those fields without a match.
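
In isolation, this step looks like the following (mirroring the DoIt() function at the bottom of the script; 'cleaned' stands in for the blend-cleared 50fps stream):

Code:
cleaned = AviSource("blended_pal.avi").ConvertToYUY2().TomsBobSoft() # stand-in for the cleaned stream
cleaned.TomsUnBob()                 # fold back to 25fps interlaced for field matching
AssumeTFF()
SmartDecimate(24, 50, bob=cleaned)  # match fields and decimate 50 -> 24 in one step;
                                    # frames without a field match come from 'cleaned'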

Et voilà: it works.


So far, so good. The results are quite promising, but there are several

Possible improvements

1. Detecting blends

As stated above, the edge mask of a blended frame appears darker, but let's have a closer look.
What I am doing is simply taking "AverageLuma" through Conditional Filtering, and comparing that. It works quite well, but it is not strictly correct:
- a clean frame has higher-contrast edges, leading to brighter lines in the edge mask, but
- a blended frame has lower contrast on its edges, yet it also might have *more* edges than a corresponding clean frame, because of the second frame overlaid onto it.
Therefore, it is possible that the sum of "darker, but more" edge lines is equal to, or even higher than, the sum of "fewer, but brighter" edge lines, which obviously makes detection by AverageLuma fail. What would be needed is something like "average distance from the overall average", or a similar statistical measurement. Comparing the lightness distributions of the edge masks: a blended frame will have more samples in the medium range and fewer in the higher range, while a clean frame will produce more samples in the higher range and fewer in the medium range.

I don't see an easy solution to do this with AviSynth's current instruction set - or maybe I just forgot about something?
Moreover, I think it could be quite beneficial not to compare whole frames at once, but to implement a "windowed" comparison. Done correctly, this should help to ignore static parts of the frames, and to detect blended frames even if the blending occurs only in small areas like mouth movement.
This would even be possible through scripting somehow, but it would get awfully slow for sure.
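
Just to illustrate, a single window of such a comparison could be scripted like this (a sketch only, with an arbitrary 64x64 window - a real version would have to tile the whole frame this way, which is exactly what would make it so slow):

Code:
src    = AviSource("blended_pal.avi").ConvertToYUY2().TomsBobSoft()
edge_c = src.GreyScale().ConvertToRGB32().GeneralConvolution(0,"
          -1 -1 -1 
          -2 10 -2 
          -1 -1 -1 ").ConvertToYV12()
global win_c = edge_c.Crop(0, 0, 64, 64)           # one 64x64 window of the edge mask
global win_p = BlankClip(win_c).Trim(1,1) + win_c  # the same window, previous frame
global win_n = win_c.Trim(1,0)                     # the same window, next frame
# test AverageLuma(win_c) against both neighbors as before, repeat for every
# window position, and flag a blend if ANY window is darker than its neighbors
return win_c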


2. Replacing blends

The current procedure is really dumb. The problems to solve are:
- preferably not leaving single frames standing alone
- by all means avoiding the creation of 4-frame dups.

Imagine the bobbed source has a sequence
...- A - AB - B - B - BC - C - CD - ...
If the algorithm should decide to replace "AB" with "next", and "BC" and "CD" with "previous", we would get
...- A - B - B - B - B - C - C -...
This is very bad, since four dups in a row are very likely to come out as "B-B" after decimation. A jerk.
We should replace either "AB" with "previous", or "BC" with "next". But which is better ... ?

Or imagine this sequence:
...- A - AB - B - C - CD - D -...
If (worst case!) "AB" gets "previous" and "CD" gets "next", we get
...- A - A - B - C - D - D -...
Also very, very bad. We should avoid leaving even one 'single' frame, because it would be in danger of being dropped by decimation.
The last example shows that it can easily happen that two 'singles' in a row are produced, one of which will be dropped by decimation almost for sure. Another jerk.
Although SmartDecimate seems to handle the stream with very reasonable intelligence, we should definitely take care to create an even distribution in the replacing process!
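
For reference, the safety checks in the script below guard against exactly these two cases:

Code:
# from the replace logic further down: a 'prev' shortly after a 'next' would
# build a 4-dup, and a 'next' shortly after a 'prev' would leave a 'single'
function PREV( ) { 
         (frametype_p3 == 1) ? PutNext() : PutPrev()
         return( last ) 
         }
function NEXT( ) { 
         (frametype_p2 == -1) ? PutPrev() : PutNext()
         return( last ) 
         }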

Interlacing experts - Coding experts - Hello !?! - I'm neither, and my brain is smoking ...


So far the explanation.

You will notice that several things in the script could be improved immediately. For example, the script would run much faster if Kurosu's MaskTools were used for all the edge-related stuff ... - that should not be too hard, as long as there's no nifty little memory leak in them
There are also commented-out fragments left over from some more things I played with. I hope it's readable, comments are a little sparse ...

Try it, play with it, tell me something.

Thanks for your time


- Didée
Old 20th September 2003, 00:51   #2  |  Link
Didée
Here comes the script:

[edit]

** DELETED **

Scroll down for an updated script.

[/edit]
Last edited by Didée; 27th September 2003 at 00:05.
Old 20th September 2003, 08:19   #3  |  Link
HomiE FR
OH MY GOD!

Didée, you're simply the best. I haven't tried it yet, but I'll do it right now. By the way, I have VOBs from bad PAL animes with those ugly blended fields, so... let's try your script.

Sorry for this useless post, but I'm really happy you're trying to do something about those clips. I'll come back later with some results.
Old 20th September 2003, 19:16   #4  |  Link
kevina
HomiE FR, be sure to let me know of any way my filter can be changed to make it more suited to your task.
Old 20th September 2003, 22:38   #5  |  Link
Didée
Script is updated - it didn't work before.
A little error crept in while cleaning up the script - a function call was empty where it mustn't be ... it's OK now.

I changed a little more:
- stepped back to a simpler replace decision
- restructured the scriptclip part a little. It should be clearer now.

Please do copy'n'paste once more.


Kevin:
I'm not completely sure what exactly would be needed. The current problem is that the produced stream sometimes has an irregular distribution of unique frames. Ideally, the stream should contain only 2-dups and 3-dups after the blend clearing, but it still happens that single frames or 4-dups are produced (speaking of the 50fps bobbed stream).
I would assume that SmartDecimate, for this task, should search a little more loosely for fields to use (before matching).
But honestly, I don't understand SmartDecimate well enough. I'm not a programmer, so the sources of SmartDecimate are like Chinese to me
Old 22nd September 2003, 05:13   #6  |  Link
Mug Funky
nice, nice work.

one situation i can see problems with, even with all your suggested improvements, would be hybrid NTSC clips converted to PAL. this, after all, is the reason loads of anime is simply run through a converter box rather than IVTC'd and sped up.

take "excel saga" for example - there are chunks of raw video in there at very unpredictable places (anybody seen "bowling girls"?). so on top of blended fields, you'll have some nightmares with field matching and detecting progressive frames when the script hits one of these bits.

this may not be too bad a problem in practice - i haven't tested your script yet, but in the past other methods i've tried haven't fallen over too badly, at least perceptually.


Quote:
Therefore, it is possible that the sum of "darker, but more" edge lines is equal, or even higher than the sum of "fewer, but brighter" edge lines
it seems to me this could be solved by simply whacking a levels(0,.75,255,0,255) in there - by decreasing gamma, the middle range will be pushed down more than the higher range, making the sum of 2 blends less than 1 not-blend. hope you get what i mean.
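
in script form, the suggestion amounts to something like this (a sketch, with 'edges' standing in for the edge-mask clip in Didée's script):

Code:
edges = AviSource("src.avi").ConvertToYV12().GreyScale()  # stand-in for the edge mask clip
edges.Levels(0, 0.75, 255, 0, 255)  # gamma < 1 pushes the mids down harder than the highs
# AverageLuma on this now weights "many dim lines" less than "few bright lines"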

keep up the good work! i'd love to see this as a plugin (i'm not expecting you to bust yourself writing one for me though)
Old 22nd September 2003, 16:25   #7  |  Link
Didée
Things are improving.

Currently under investigation & debugging:
- better replacing strategy
- pattern guidance (sort of)

I *hope* to have the script ready for use within the next 48 hours - however, don't bet too much money on it

Mug Funky:
I get you. In fact, shortly after my last post I thought about it again and came to the same obvious conclusion of gamma reduction. Shame on me.
As for hybrid clips: yes, that may get - interesting. Currently the script is only tested on pure FILM content, and I rely on the assumption that the two neighbors of a blended field are clean. That's a limitation for general usage, but it fits MY actual problem perfectly

And, for sure *I* will *not* be the person who makes a plugin out of this:
my knowledge about compilers ends with spelling the word ...

- Didée
Old 22nd September 2003, 17:18   #8  |  Link
mf
Quote:
Originally posted by Didée
And, for sure *I* will *not* be the person who makes a plugin out of this:
my knowledge about compilers ends with spelling the word ...
Join the club. Great to see another interesting AviSynth function.
Old 22nd September 2003, 18:10   #9  |  Link
Mug Funky
hey, to go off topic here... mf? is that Nabeshin in your signature? i fekking love excel saga (and i've got a funky fro, too).

@ Didée:

pure film... that's fair enough. around my parts, an anime is only run through a converter box if the authoring house gets a crappy (or nonexistent) digibeta tape of the original film source, or the original was simply mastered in NTSC. this happens quite often. i think i've only seen 1 or 2 titles that are blended for no good reason (the original UK release of Ghost in the Shell re-packaged for australia was a prime culprit, but there have since been several better releases).

so once you've got this working nicely for film sources, it'd be interesting to take on this problem (i might just do it myself, but i'm kinda lazy, and much better at talking about a problem than solving it :P)

oh, another edit... have you tried DGbob by donald graft? i think it'll give a smoother result (not sure)
Last edited by Mug Funky; 22nd September 2003 at 18:23.
Old 23rd September 2003, 15:13   #10  |  Link
crOOk
@Didée
You rock!

It sure was time for something like this...
Old 23rd September 2003, 19:43   #11  |  Link
scharfis_brain
@Didée: Great script!
I hope that someone will take the time to code this... *begging*

btw.: would it be possible to use the purely bobbed video only for blend detection and decimation, but to render the output video with a bob-deinterlaced version of the input stream?
This might avoid jittery station logos...
Old 24th September 2003, 12:51   #12  |  Link
Didée
Okay people, here comes a new extended version.
This one is - well, it's not too bad

But first some answers:

mf: Yeah, I'd rather be a member of that club than always abusing AviSynth for nothing more than "XYsource()-FilterTheHellOutOfIt(max,max)-Return(ToDivX@UltraFast)"

@ all: Thanks for the flowers

Mug Funky, scharfis_brain:
It is already possible to use another bob filter for rendering - but currently it must be changed in the script itself (at the top). Since the script is still changing, I haven't parametrized that yet. The same goes for the 'debugging' mode (at the bottom).
However, using two different bob filters slows down the rendering additionally. Actually, I'm trying something ... read on.
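
Concretely, switching the rendering bob means editing this one line at the top of Restore24() - the DGbob alternative is already sitting there as a comment:

Code:
global out = clp.DGbob(order=1, thresh=12) # clp.TomsBobSpecial() # work # 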


What's new:
- The replacing decision works much better now.
- Simple pattern guidance helps a lot in predicting blends that are missed by the metrics.
So far, quite as promised.
- Just for the fun of it, a prototype of "TomsBobSpecial", which claims to be a SmartBob directed by static edges. It works quite well so far for rendering (greatly reduces shimmering on logos and (sub)titles, and as a bonus reduces rainbowing), but it is not yet tested well enough for blend detection. This one currently requires Kurosu's MaskTools - and works on P-III, too.

What's NOT new:
- I tried to utilize Kurosu's MaskTools for the edge-related stuff. It works flawlessly on my Athlon, but I had crashes on a Pentium-III.
So, for stability reasons, I post here the script with the native AviSynth implementation. Those who want to try it with MaskTools instead, please do the un/commenting in the script yourself - it is marked.

My results so far:
This version renders my test streams - captures of (the new) Enterprise Season 2 - almost perfectly. YUMMY! Jerks due to doubled or dropped frames are very, very rare now.
Still problematic are scenes where most of the frame's content is in focus and totally static, but motion appears in the blurry fore- or background. This problem is inherent in the design of the detection function. (But I already have an idea ... )

What's next:
- refine pattern guidance - currently, in rare cases, it might mislead
- perhaps deal with 'motion-only-in-blurry-areas'
- play a little more with 'TomsBobSpecial'


Try it out, friends!
Personally - please excuse me - I am impressed myself. When I started out with this, I didn't really expect it to become that good.

Have fun

- Didée


P.S.
If anyone would like to host the script(s), please do so! It's a little awkward to *post* a script like this. And I (still) have no webspace configured. Yes, I'm a lazy man.
Old 24th September 2003, 13:21   #13  |  Link
Didée
Oooops, message too long - reducing comments...

[edit on Sep 27 03:15 : v0.4b: cleanup & speedup]

[edit on Sep 28 02:30 : v0.4c: fixed bug - 0.4b swapped past&future]

[edit on Oct 02 16:00 : slightly better replace decision - still v0.4c]

Code:
# # Restore24 v0.4c
# This script tries to restore 24 progressive frames 
# out of 25 interlaced frames with blended fields resulting 
# from 24p -> TC -> NTSC -> FieldBlendDecimation -> PAL
#

Function Restore24( clip clp )
{
start=blankclip(clp).loop().trim(1,25)
clp=start+clp
input = clp.ConvertToYUY2()
global work = input.TomsBobSoft() # TomsBobSpecial() # TomsBobSmart() # 

###### OUTPUT STREAM ######-----------------------------------------------------------------------

global out = clp.TomsBobSpecial() # clp.DGbob(order=1, thresh=12) # work # 
#            ^ ^ ^ ^ ^ ^ ^ ^ ^ ^
#         PLEASE SET HERE *YOUR* BOB
#       FILTER OF CHOICE FOR RENDERING    -   ATM I prefer TomsBobspecial 

global out0=blankclip(out).trim(1,1)+out # .subtitle("p")
global out1=out.trim(1,0)                # .subtitle("n")

###### WORKING STREAM ######----------------------------------------------------------------------
work=work.crop(8,8,-8,-8).greyscale().levels(0,1.75,255,16,235).BicubicResize(work.width/2,work.height/2-32,.0,.5)

###### BUILD EDGEMASK ######----------------------------------------------------------------------
edgeRGB = work.ConvertToRGB32().\
          GeneralConvolution(0,"
           -1 -1 -1 
           -2 10 -2 
           -1 -1 -1 ")           
           
#####################################################################################
######  This following part is implemented with native AviSynth commands       ######
#####################################################################################
#
###### PRECEDING & FOLLOWING ######
global edge_c  = edgeRGB.ConvertToYUY2().levels(0,0.5,160,0,255).levels(0,0.5,160,0,255)
global edge_p2 = BlankClip(edge_c).trim(1,2)+edge_c
global edge_p1 = edge_p2.trim(1,0)
global edge_n1 = edge_c .trim(1,0)
global edge_n2 = edge_c .trim(2,0)

###### WEAKEN STATIC PARTS, BIAS TO MOTION ######
global edge_p2_dark_n = layer(edge_p2, edge_p1.levels(0,1.0,255,255,0),"darken").ConvertToYV12()
global edge_p1_dark_p = layer(edge_p1, edge_p2.levels(0,1.0,255,255,0),"darken").ConvertToYV12()
global edge_n2_dark_p = edge_p1_dark_p.trim(3,0)
global edge_n1_dark_n = edge_p2_dark_n.trim(3,0)
global edge_n1_dark_p = edge_p1_dark_p.trim(2,0)
global edge_c_dark_n  = edge_p2_dark_n.trim(2,0)
global edge_c_dark_p  = edge_p1_dark_p.trim(1,0)
global edge_p1_dark_n = edge_p2_dark_n.trim(1,0)

###### BACK'N FORTH, FORTH'N BACK ... ######
global edge_p2 = edge_p2.ConvertToYV12()
global edge_p1 = edge_p1.ConvertToYV12()
global edge_c  = edge_c .ConvertToYV12()
global edge_n1 = edge_n1.ConvertToYV12()
global edge_n2 = edge_n2.ConvertToYV12()

#################################################
###### End of native implementation        ######
#################################################

#------------------------------------------------------------------------

###########################################################################################
#####  The following part is implemented with MaskTools v1.4.0. Only tested on Athlon! #####
######  To try it, un-comment it, and comment-out the above native part.             ######
############################################################################################
#
###### PRECEDING & FOLLOWING ######
#global edge_c  = edgeRGB.ConvertToYV12().levels(0,0.5,160,0,255) #.levels(64,8.0,128,0,255)
#global edge_p2 = BlankClip(edge_c).trim(1,2)+edge_c
#global edge_p1 = edge_p2.trim(1,0)
#global edge_n1 = edge_c .trim(1,0)
#global edge_n2 = edge_c .trim(2,0)
#
####### WEAKEN STATIC PARTS, BIAS TO MOTION ######
#global edge_p2_dark_n = YV12layer(edge_p2, edge_p1.invert(),"mul",chroma=false)
#global edge_p1_dark_p = YV12layer(edge_p1, edge_p2.invert(),"mul",chroma=false)
#
#global edge_n2_dark_p = edge_p1_dark_p.trim(3,0)
#global edge_n1_dark_n = edge_p2_dark_n.trim(3,0)
#global edge_n1_dark_p = edge_p1_dark_p.trim(2,0)
#global edge_c_dark_n  = edge_p2_dark_n.trim(2,0)
#global edge_c_dark_p  = edge_p1_dark_p.trim(1,0)
#global edge_p1_dark_n = edge_p2_dark_n.trim(1,0)
#
########################################################
###### End of implementation with MaskTools       ######
########################################################


###### INITIALIZING MORE VARS ######
global btest_p  = 0
global btest_p1  = 0
global btest_pc = 0
global btest_c  = 0
global btest_cn = 0
global btest_n  = 0
global btest_n1  = 0
global btest_p2_n = 0
global btest_p2   = 0
global btest_p1_p = 0
global btest_p1   = 0
global btest_p1_n = 0
global btest_c_p  = 0
global btest_c    = 0
global btest_c_n  = 0
global btest_n1_p = 0
global btest_n1_n = 0
global btest_n2_p = 0
global IsBlend_n = false
global IsBlend_c = false
global IsBlend_p1 = false
global IsBlend_p2 = false
global frametype_p3 = 0
global frametype_p2 = 0
global frametype_p1 = 0
global frametype_c = 0
global frametype_n = 0
global single_ahead = false
global in_pattern = false
global pattern_guidance = 0
global count_p=0
global count_n=0
global P2_motion_btest = 0
global P1_motion_btest = 0
global N1_motion_btest = 0
global N2_motion_btest = 0


##### DEBUGGING, CHANGES ALL TIME ######
function ShowAll ()
{
stackvertical(stackhorizontal(edge_p2_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40),
 \                            edge_n2_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40) 
 \                            ),
 \            stackhorizontal(edge_p1_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40),
 \                            edge_n1_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40) 
 \                            ),
 \            stackhorizontal(edge_p1_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40),
 \                            edge_n1_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40) 
 \                            ),		  
 \            stackhorizontal(edge_c_dark_p.ColorYUV(analyze=true).crop(0,0,-0,40),
 \                            edge_c_dark_n.ColorYUV(analyze=true).crop(0,0,-0,40)  
 \                            ),
 \            Calculate(work).crop(0,0,-0,-16).ConvertToYV12(),
 \            out.crop(0,16,-0,-16).ConvertToYV12()
 \            )	
}

function ShowMetrics( clip clip )
{ 
clip=clip.subtitle("detected is "+string(IsBlend_c),       y=16)
clip=clip.subtitle("diff prev= "+string(btest_c_p-btest_p1_n) + "   nxt ="+string(btest_c_n-btest_n1_p) , y=32) 
clip=clip.subtitle("ratio prv= "+string(btest_c_p/btest_p1_n) + "   nxt ="+string(btest_c_n/btest_n1_p) , y=48) 
clip=clip.subtitle("pattern lock is "+string(in_pattern),  y=64)
clip=clip.subtitle("pattern guideance = "+string(pattern_guidance),  y=80)
clip=clip.subtitle("single ahead is "+string(single_ahead),y=96)
clip=clip.subtitle("frametype_p3 = "+string(frametype_p3) + "   IsBlend_p3 = "+string(IsBlend_p3),y=112)
clip=clip.subtitle("frametype_p2 = "+string(frametype_p2) + "   IsBlend_p2 = "+string(IsBlend_p2),y=128)
clip=clip.subtitle("frametype_p1 = "+string(frametype_p1) + "   IsBlend_p1 = "+string(IsBlend_p1),y=144)
clip=clip.subtitle("frametype_c  = "+string(frametype_c)  + "   IsBlend_c  = "+string(IsBlend_c), y=160)
clip=clip.subtitle(             "                               IsBlend_n  = "+string(IsBlend_n), y=176)
return( clip ) 
}

###### REPLACE FUNCTIONS ######
function PutCurr( ) { 
         global count_p = count_p+1
         global count_n = count_n+1
         global frametype_c = 0
         return( out ) 
         }
         
function PutPrev( ) { 
         global count_p = 0
         global count_n = count_n+1
         global frametype_c = -1
         return( out0 ) 
         }
         
function PutNext( ) { 
         global count_p = count_p+1
         global count_n = 0
         global frametype_c = 1
         return( out1 ) 
         }
         
###### REPLACE DECISION, SAFETY CHECK TO AVOID SINGLES & TRIPLES ######
function PREV( ) { 
         # it's no good idea to put a 'prev' if the decision two frames back was 'next'. 
         (frametype_p3 == 1) ? PutNext() : PutPrev()
         return( last ) 
         }
         
function NEXT( ) { 
         # it's a bad idea to put a 'next' if two frames earlier we put a 'prev': 
         # this is very likely to leave a 'single' frame
         (frametype_p2 == -1) ? PutPrev() : PutNext()
         return( last ) 
         }

function CURR( ) { 
         PutCurr()
         return( last ) 
         }

###### REPLACE BLEND WITH MOST SIMILAR NEIGHBOR ######
function UseMostSimilar( ) 
{
# The frame whose blend-test ratio is closer to 1 should be more similar
ratio_p = abs(btest_c_p/btest_p1_n)
ratio_n = abs(btest_c_n/btest_n1_p)
(ratio_p > ratio_n) ? PREV() : NEXT()   # putPrev() : putNext()

return( last ) 
}

###### REPLACE BLEND ACC. TO PATTERN GUIDANCE ######
# currently: only detect by pattern, but guidance not used (SOFT PATTERN)
function UsePattern( ) 
{
pattern_guidance == 1 ? NEXT() : NOP   # PutNext() : NOP
pattern_guidance == 0 ? CURR() : NOP   # PutCurr() : NOP
pattern_guidance ==-1 ? PREV() : NOP   # PutPrev() : NOP

return( last ) 
}

###### CHECK IF A SINGLE FRAME IS PROBABLY AHEAD ######
function CheckSingleAhead( ) 
{ 
# if brightness of *both* edges n+1 & n+2 is greater than past two frames, that means more motion
# in them, whilst n+1 can't be a double of n+2, cause it was bright enough to outrace n-1 & n-2
# Seems good theoretically, but it seems not to work quite as expected. No tragedy, 'cause false 
# decisions should be caught by the safety check in NEXT() & PREV()
#global single_ahead = (   ( (btest_n1_p > btest_c_p)  && (btest_n2_p > btest_c_p)  )
# \                     && ( (btest_n1_p > btest_p1_p) && (btest_n2_p > btest_p1_p) ) 
# \                       )
global single_ahead = (   ( (btest_n1_n > btest_p2_n) && (btest_n2_p > btest_p2_n) )
 \                     && ( (btest_n1_n > btest_p1_p) && (btest_n2_p > btest_p1_p) ) 
 \                       )  ### Is this second variant better??? ###

in_pattern = single_ahead ? false : true 
return( last ) 
}

###### CHECK IF HISTORY-OF-BLENDS SHOWS A PATTERN ######
function CheckPattern( ) 
{ 
#--- not bad, but still to improve
global in_pattern =
 \              ( (IsBlend_c == false) && (frametype_p1 == 0) && (frametype_p2 != 0) && (frametype_p3 == 0) && (Isblend_n == false) )
# \           || ( (IsBlend_c == false) && (frametype_p2 == 0) && ( (frametype_p1 != 0) || (frametype_p3 != 0) ) && (Isblend_n == true) )
# SOFT: use pattern only to detect blends that aren't detected by metrics

# \              ( (IsBlend_c == false) && (frametype_p1 == 0) && (frametype_p2 != 0) && (frametype_p3 == 0) && (Isblend_n == false) )
# \           || ( (IsBlend_c == false) && (IsBlend_p2 == true) && (frametype_p1 == 0) && (frametype_p3 == 0) )# && (Isblend_n == true) )
# STRONG: use pattern even if blend is detected at n+1: OVERRIDE! - requires at least one 'real' detection in the past

global pattern_guidance = in_pattern ? frametype_p2 : 99  # currently not used
return( last ) 
}

###### REPLACE DECISION ######
function Replace( ) 
{ 
# If next frame is detected as a single, replace current blend with it to double it. Else, use
# pattern guidance, if a pattern is actually locked. If not, simply use most similar neighbor.
CheckSingleAhead()
single_ahead ? NEXT() : UseMostSimilar()                                 # SOFT pattern: replace undetected blends
                      # ( in_pattern ? UsePattern() : UseMostSimilar() ) # STRONG pattern: replace acc. to pattern
return( last ) 
}

###### DO WE HAVE A BLEND TO REPLACE ? ######
function Evaluate( ) 
{ 
# If current frame is detected or predicted as a blend, replace it. Else it's clean, put it out
CheckPattern()
( IsBlend_c || in_pattern ) ? Replace() : PutCurr()
#( IsBlend_c || ( in_pattern && (pattern_guidance != 0)) ) ? Replace() : PutCurr() # Pattern guidance is not yet final

debug ? ShowMetrics() : NOP

return( last ) 
}

###### CALCULATE METRICS ######
function Calculate( clip clip ) 
{
c99=scriptclip(out, "Evaluate()")

c16=FrameEvaluate(c99, "global IsBlend_n = (btest_n1_p<btest_c_n)&&(btest_n1_n<btest_n2_p)")
c15=FrameEvaluate(c16, "global IsBlend_c = IsBlend_n")
c14=FrameEvaluate(c15, "global IsBlend_p1 = IsBlend_c")
c13=FrameEvaluate(c14, "global IsBlend_p2 = IsBlend_p1")
c12=FrameEvaluate(c13, "global IsBlend_p3 = IsBlend_p2")
c11=FrameEvaluate(c12, "global frametype_p1 = frametype_c")
c10=FrameEvaluate(c11, "global frametype_p2 = frametype_p1")
c9 =FrameEvaluate(c10, "global frametype_p3 = frametype_p2")
c8=FrameEvaluate(c9, "global btest_n1_n = AverageLuma(edge_n1_dark_n)") # 
c7=FrameEvaluate(c8, "global btest_n2_p = AverageLuma(edge_n2_dark_p)") # 
c6=FrameEvaluate(c7, "global btest_p2_n = btest_p1_n") # 
c5=FrameEvaluate(c6, "global btest_p1_p = btest_c_p") # 
c4=FrameEvaluate(c5, "global btest_p1_n = btest_c_n") # enhanced blend-test -
c3=FrameEvaluate(c4, "global btest_c_p  = btest_n1_p")  # test with darkened
c2=FrameEvaluate(c3, "global btest_c_n  = btest_n1_n")  # static edges, making
c1=FrameEvaluate(c2, "global btest_n1_p = btest_n2_p") # motion more important 
return(c1) 
}

###### DO IT ######
function DoIt ()
{
Calculate(work)
AlreadyBobbed = last
TomsUnBob()
assumetff()
SmartDecimate(24,50, bob=AlreadyBobbed, tel=0.9, t_max=0.0000050, console=false)
# assumefps(25)  # this should really be done externally, not within Restore24 ...
trim(24,0)
return(last)
}

###### DEBUG, OR NORMAL OUTPUT ? ######

global debug = false # true # 
debug ? ShowAll() : DoIt()

return( last )
}
Last edited by Didée; 2nd October 2003 at 15:07.
Old 24th September 2003, 13:26   #14  |  Link
Didée
Ah, finally squeezed it to get under 15000 chars

And here is TomsBob.avs, with the experimental 'TomsBobSpecial':

[edit: updated on Sep 27, 03:30]

Code:
# Functions based on Tom Barry's TomsMoComp
# Except for TomsUnBob, but I liked the name
#

function TomsBob (clip c) 
{   
    flag=c.isYV12()
    c = flag ? c.ConvertToYUY2(interlaced=true) : c
    c = c.SeparateFields.TomsMoComp(1,-1,0)
    
    top  = c.SelectEven 
    bottom  = c.SelectOdd 
    
    bottom = bottom.Crop(0,1,0,-1).addborders(0,1,0,1) 
    top = top.Crop(0,0,0,-2) .addborders(0,1,0,1)      
    work = interleave(bottom, top) 
    out = flag ? work.convertToYV12() : work
    return(out)
}

function TomsBobSoft (clip c) 
{   
    flag=c.isYV12()
    c = flag ? c.ConvertToYUY2(interlaced=true) : c
    c = c.SeparateFields.TomsMoComp(1,-1,1)
    
    top  = c.SelectEven 
    bottom  = c.SelectOdd 
    
    bottom = bottom.Crop(0,1,0,-1).addborders(0,1,0,1)
    top = top.Crop(0,0,0,-2) .addborders(0,1,0,1)     
    work = interleave(bottom, top) 
    out = flag ? work.convertToYV12() : work
    return(out)
}

function TomsBobSmart (clip c) 
{   
    flag=c.isYV12()
    c = flag ? c.ConvertToYUY2(interlaced=true) : c

# This script is only for TopFirst material 
    Top=c.AssumeFrameBased() 
    Bottom=Top.SeparateFields().Trim(1,0).weave()
    out = Interleave(Bottom.TomsMoComp(1,0,0),Top.TomsMoComp(0,0,0))
    out = flag ? out.convertToYV12() : out
    return(out)
}

function TomsBobSpecial (clip c) 
{   
#   '#' = fiddling with something
#  'old1' is the perhaps more precise one. 'old2' is a little more blobby on decision. To try 
#  it, swap the "ConvertToRGB32" with the "verticalReduceBy2.Convert...", and activate
# the two 'BicubicResize's
    flag=c.isYV12()
    c = flag ? c.ConvertToYUY2(interlaced=true) : c
    c = c.SeparateFields.TomsMoComp(1,-1,0)
    top  = c.SelectEven 
    bottom  = c.SelectOdd 
    bottom = bottom.Crop(0,1,0,-1).addborders(0,1,0,1)
    top = top.Crop(0,0,0,-2) .addborders(0,1,0,1)     
    work = interleave(bottom, top) 
    out = flag ? work.convertToYV12() : work
    
    tmp = flag ? out : out.ConvertToYV12()
    
    #                       .VerticalReduceBy2().ConvertToRGB32().\ 
    EdgeC = work.GreyScale().ConvertToRGB32().\
              GeneralConvolution(0,"
               -1 -2 -1 
               -0  8 -0 
               -1 -2 -1 ").ConvertToYV12().levels(16,8.0,128,0,255)

    EdgeP=blankclip(EdgeC).trim(1,1)+EdgeC
    EdgeN=EdgeC.trim(1,0)
    EdgeCN=YV12layer(EdgeC,EdgeN,"mul", chroma=false)
    EdgePC=YV12layer(EdgeC,EdgeP,"mul", chroma=false)
    #EdgePC1=EdgeCN.trim(1,0)
    oldEdgePN=YV12layer(EdgeP,EdgeN,"mul", chroma=false)#.BicubicResize(tmp.width,tmp.height,1.0,0)###
    oldEdgePCN=YV12layer(oldEdgePN,edgeC,"mul", chroma=false)#.BicubicResize(tmp.width,tmp.height,1.0,0)###
    P=BlankClip(tmp).trim(1,1) + tmp
    N=tmp.trim(1,0)
    oldPNavg=YV12layer(P,N,"add", 128, chroma=true) ###
    #PCavg=YV12layer(tmp,P,"add", 128, chroma=true)
    #CNavg=YV12layer(tmp,N,"add", 128, chroma=true)
    old1=MaskedMerge(tmp,oldPNavg,oldEdgePN, Y=3,U=3,V=3)###
    old2=MaskedMerge(tmp,oldPNavg,oldEdgePCN,Y=3,U=3,V=3)###
    #PC=MaskedMerge(tmp,PCavg,EdgePC,Y=3,U=3,V=3)
    ##PCboth=MaskedMerge(old,PCavg,EdgePC,Y=3,U=3,V=3)
    #CN=MaskedMerge(tmp,CNavg,EdgeCN,Y=3,U=3,V=3)
    #PCN=MaskedMerge(PC,CNavg,EdgeCN,Y=3,U=3,V=3)
    ##PCNboth=MaskedMerge(PCboth,CNavg,EdgeCN,Y=3,U=3,V=3)
    #    stackvertical(old,PCN)
    
    return(old1)
}

function TomsUnBob (clip c) 
{
    bottom=c.selecteven().separatefields().selecteven()
    top=c.selectodd().separatefields().selecteven()
    interleave(bottom,top)
    weave()
}

function TomsUnBob2 (clip c) 
{
    bottom=c.selecteven().separatefields().selecteven()
    top=c.selectodd().separatefields().selecteven()
    interleave(bottom,top)
    doubleweave()
} 
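
To wire everything together, a loader script along these lines should do (the file names here are just examples - adjust them to wherever you keep the plugins and these two scripts):

Code:
LoadPlugin("TomsMoComp.dll")    # Tom Barry's filter, used by all TomsBob* functions
LoadPlugin("SmartDecimate.dll") # Kevin Atkinson's decimator
LoadPlugin("MaskTools.dll")     # needed for TomsBobSpecial / the MaskTools variant
Import("TomsBob.avs")           # the functions above
Import("Restore24.avs")         # the Restore24 script from post #13
AviSource("blended_pal.avi")
Restore24()
# set the final framerate externally, as noted inside the script
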
Last edited by Didée; 11th January 2004 at 19:14.
Old 24th September 2003, 13:30   #15  |  Link
scharfis_brain
Quote:
YUMMY! Jerks due to doubled or dropped frames are very, very rare now.
It is possible that some dupes occur due to the 24p to 23.976p slowdown ... (every 1000th frame (~42 sec), one dupe ...)
Old 24th September 2003, 13:44   #16  |  Link
Didée
Quote:
Originally posted by scharfis_brain
It is possible that some dupes occur due to the 24p to 23.976p slowdown ... (every 1000th frame (~42 sec), one dupe ...)
I usually get confused when thinking about this conversion stuff.
Isn't it: I expect 24 progressive frames in a second, but there are only 23.976
--> in the timespan where I expect 1000 progressive frames, there are only 999 (since 23.976/24 = 0.999)
==> one frame drop, not a dup?

But really, I'm unsure from which direction I have to put the saddle on the horse.

[edit]
Ah, while cooking some coffee, I got it.
By the time I think I have the 1000th frame, the source has only progressed to the 999th frame, and I have to wait one frame -> a DUP!
I feel enlightened now

Well, we could try "SmartDecimate( (24*1000)-1, 25*1000, ...)" - but somehow, I have the feeling that's not really a good idea ...
Last edited by Didée; 24th September 2003 at 13:57.
Old 24th September 2003, 21:30   #17  |  Link
scharfis_brain
Oh, I think it would be okay to have one dupe in 1000 frames...

I think there are more important things to do!

I have made a script that simulates this kind of 24fps (30i) to 25fps PAL conversion.

It may help to have a well-defined input source for comparing the output...

Code:
loadplugin("c:\dvdrip'in\dgbob.dll")
avisource("any progressive video.avi")


assumefps(23.976)        #ensure 23.976 fps
selectevery(2,0,0,0,1,1) #telecine
separatefields()         #
selectevery(4,1,2)       #
weave()                  #re-interlacing

dgbob(order=1)           #starting conversion with deinterlacing
converttoyuy2()          
convertfps(50)           #60p -> 50p
separatefields()         #
selectevery(4,0,3)       #
weave()                  #re-interlacing
EDIT: I can't use your new script!
It says: "there is no function named YV12layer"

Last edited by scharfis_brain; 24th September 2003 at 22:18.
Old 24th September 2003, 22:12   #18  |  Link
BBQmyNUTZ
Could this script also be adapted to restore telecined 29.97fps NTSC with blended fields to 23.976 fps? I have been running into a good number of anime titles recently that have this kind of problem.

Kai
Old 24th September 2003, 22:23   #19  |  Link
scharfis_brain
@BBQmyNUTZ: I don't think that this will work...

23.976fps material is progressive, and this script handles 25fps interlaced video.

In a 24fps progressive video there are only 24 states per second, i.e. 24 full frames/sec, and according to your description about 20 to 50% of those frames are blends. But those blends don't have a neighbor that represents their unblended state.

In 25fps interlaced video there are 50 states per second, but only 24 frames are put into this stream. Here about 20 to 30% of the video consists of blends, so it is "easy" to select the non-blended frames and use them for the new output...

I hope my german-english was not that confusing...
Old 25th September 2003, 00:56   #20  |  Link
Didée
Quote:
I can't use your new script!
It says: "there is no function named YV12layer"
Argh.

The very next thing to do is to parametrize the function argument of Restore24.

My fault. Accidentally, I left TomsBobSpecial in as the rendering filter, and this one requires MaskTools. Hey, could it be your local filter collection is - ~ INCOMPLETE ?
I'll edit the above script in a minute, sorry.

Would you all please go to the following lines:
Quote:
Code:
# Restore24 v0.4a
# This script tries to restore 24 progressive frames 
# out of 25 interlaced frames with blended fields
#

Function Restore24( clip clip )
{
input = clip.ConvertToYUY2()
global work = input.TomsBobSoft() # TomsBobSpecial() # TomsBobSmart() # 

###### OUTPUT STREAM ######
global out=clip.TomsBobSpecial() # work # clip.DGbob(order=1, thresh=20) # 
global out0=blankclip(out).trim(1,1)+out # .subtitle("p")
global out1=out.trim(1,0)                # .subtitle("n")
and please set your bob filter of choice for rendering the output stream there. Either DGbob, or 'work' for TomsBobSoft, or whatever you like.
On Kurosu's site there is the following dead link: MaskTools1.4.1.zip
However, version 1.4.0 is there, but without an active link in sight ...

I've started talking to Kurosu about his MaskTools, and he said he'll try to do something about them soon, hopefully. That would speed up the whole thing quite a bit - GeneralConvolution is effectively called very frequently, you see...

BTW, Scharfi, nice explanation.
But is this truly correct? Or, in other words, aren't there different possible blending techniques? I could also think of a way to blend-decimate 29.97 to 25 so that 95% of all fields become blends - not so hard to do, really
And a handy script for generic testing, hah. Oh, we could start a little competition: who makes the shortest, or fastest-running, script for that job? - No, not me. I'm not such a genius at juggling fields, really!

I think you will get the script running now.
Sorry for the hassle.

- Didée
Last edited by Didée; 25th September 2003 at 01:14.