Old 25th October 2005, 22:54   #1  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Motion filter : +1st MC FPS-change script for real usage

FPS-change script in this post:
http://forum.doom9.org/showthread.ph...288#post747288
______________

This is an implementation of a filter based on some papers that I was pointed to here ... (thanks MfA, Fizick!). It needs several features (like scene detection) added before it's usable in real scripts, so just treat this version as something fun to look at and play with.

Motion, 25 October 2005

(Edit: HTML documentation/quick reference is now up... but this filter is still in development, and not meant for general usage.)

Usage is very simple:

#source = ...
motion = findmotion(source, useassembly = true )
compensate(source, motion) #approximate current frame with blocks from previous frame


This script runs at 200 FPS on my P4/2400 on PAL DVD material (measured with AVSTimer). It will slow down when I add subpixel accuracy, but should still be plenty fast!

If you want to see the motion vectors:

source = last
motion = findmotion(source, useassembly = true )
DrawMotion(source, motion) #relatively slow


Frames should be accessed from frame 0 in a linear order. Height and width must be multiples of 8. YV12 only. Assembly is iSSE.
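For concreteness, here is a minimal prep sketch to meet those requirements (the MPEG2Source call and the crop values are just illustrative assumptions; PAL DVD material is already YV12 and mod-8, otherwise convert/crop first):

Code:
# hypothetical source loading: PAL DVD via DGDecode is already YV12 and mod-8
MPEG2Source("movie.d2v")
# ConvertToYV12()        # only needed if the source is not YV12
# Crop(0, 0, -4, -4)     # example: trim a 724x580 clip down to multiples of 8
source = last
motion = findmotion(source, useassembly = true)
compensate(source, motion)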



It's taken me nearly all day to build this, so I'm rather out of it... I'm sure there are plenty of things I should mention here, but I can't think clearly at present. I will collect my thoughts in the next couple of days and reply to this and the other outstanding threads (as and when real life permits!)

Last edited by mg262; 6th December 2005 at 13:21.
Old 26th October 2005, 06:29   #2  |  Link
Mug Funky
interlace this!
 
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
you are the winrar! i was just trying to build denoisers with MVtools and thought true motion would be very cool to have (if i force mvtools to be too accurate, it gives me an exact copy of the frame to be compensated... not useful for denoising if it does the grain as well )

i'll test it immediately!

[edit]

hehe... the seeking from frame 0 got me, but apart from that it's looking very interesting. it follows motion pretty well, and isn't sensitive to crap and spots on the screen as far as i can tell. of course it still needs work (nonlinear access, backward prediction, etc).

where are you hoping to take this engine? will there be motion interpolators made? are the motion vectors compatible with mvtools (so we can use its interpolators on your more accurate vectors for standards-conversion)? i see a lot of potential here.

btw, what are your thoughts on constructing the compensated image? would mesh-warping or something similar be a good idea considering the motion search is so fast?

whatever happens, this is a sterling effort! thanks for the new tool
__________________
sucking the life out of your videos since 2004

Last edited by Mug Funky; 26th October 2005 at 06:38.
Old 26th October 2005, 06:36   #3  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
With MVTools, you can emulate a true-motion filter by raising lambda. It adds a penalty to the SAD when the motion vector differs from the spatial predictor; hence the motion field is far more coherent.
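For illustration, something along these lines (a rough sketch; exact parameter names and defaults depend on the MVTools version):

Code:
# raise lambda to penalise vectors that stray from the spatial predictor
vectors = MVAnalyse(isb = false, lambda = 2000)
MVCompensate(vectors)   # compensation from the resulting, more coherent motion field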

mg262 : good filter. Nice to see somebody working on ME filters.
Old 26th October 2005, 06:44   #4  |  Link
Mug Funky
interlace this!
 
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
quick hack for halfpel:

Code:
# estimate and compensate at doubled resolution (point-resized, so no new
# pixel values are invented), then scale back down
pointresize(last.width*2,last.height*2)
motion=findmotion(last,useassembly=true)
compensate(last,motion)
reduceby2()
[edit]

how are you thinking of handling occlusion and revealing of objects? i suppose that'll come with backward prediction. doing a subtract-from-source shows crossing objects are a problem area. i'm sure you already know all this stuff as you wrote the filter
__________________
sucking the life out of your videos since 2004

Last edited by Mug Funky; 26th October 2005 at 06:47.
Old 26th October 2005, 06:54   #5  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
That's not hpel. Hpel would be using bilinearresize as an upsampler. [edit : mm i'm wrong][edit2 : both of us are wrong, hpel can't be emulated that way]

Backward prediction isn't possible for true-motion filters (unless two passes are done, which goes against the principle of true motion, the principle being: fast).

Last edited by Manao; 26th October 2005 at 06:57.
Old 26th October 2005, 08:32   #6  |  Link
AVIL
Registered User
 
Join Date: Nov 2004
Location: Spain
Posts: 408
@Mug Funky and Manao

To emulate true motion, it is perhaps preferable to tweak MVTools' "search" parameter. Currently I use search=3 and searchparam=3, but I remember having tried search=1. The goal is to limit the search radius to the maximum expected inter-frame displacement of the objects in motion.
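In script form, roughly (a sketch; exact parameter names depend on the MVTools version):

Code:
# exhaustive search with a small radius, so vectors stay close to the predictor
vectors = MVAnalyse(search = 3, searchparam = 3)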
Old 26th October 2005, 08:43   #7  |  Link
AVIL
Registered User
 
Join Date: Nov 2004
Location: Spain
Posts: 408
@mg262

An important problem I have with MVTools is the blocks it generates when no appropriate predicted block is found. These blocks ruin denoising. I know it's impossible to predict a frame if the reference frame contains new objects, or parts of objects, that are not visible in the previous or next frames. But I think it's possible to minimise the problem if the predictor uses the reference block when the predicted block doesn't match well.
Old 26th October 2005, 09:42   #8  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
Very nice tool, Clouded. Very Nice.

(However, the Avisynth world is turning so fast these days that I can't follow anymore ... dizzy ...)


@ Mug Funky
Quote:
if i force mvtools to be too accurate, it gives me an exact copy of the frame to be compensated
Now, would you please tell how you achieve THAT (an *exact* copy) with MVTools??


@ Manao
Quote:
That's not hpel. Hpel would be using bilinearresize as an upsampler. [edit : mm i'm wrong][edit2 : both of us are wrong, hpel can't be emulated that way]
Do you mean it's not hpel "by definition"? Because, by working on doubled resolution, obviously a ~sort of~ subpixel accuracy is achieved, isn't it?

Quote:
Backward prediction isn't possible for true-motion filters (unless two passes are done, which goes against the principle of true motion, the principle being: fast).
For the time being, shouldn't

reverse()
MVstuff()
reverse()

work out? It halves the speed when forward & backward are both done, sure. But at least one can "have it" before Clouded implements it.
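Spelled out for the filter in this thread (a sketch only; note that the linear-access-from-frame-0 requirement then applies to the reversed clip):

Code:
# compensate each frame from its *next* neighbour by running the
# forward-only filter on a reversed clip, then reversing back
source = last
rev = source.Reverse()
motion_b = findmotion(rev, useassembly = true)
compensate(rev, motion_b).Reverse()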


Quote:
Originally Posted by AVIL
An important problem I have with MVTools is the blocks it generates when no appropriate predicted block is found. These blocks ruin denoising.
That's an issue indeed. Using the builtin MVDenoise, at least you can tweak SAD and other thresholds to circumvent the problem - but it's only a compromise, and not even a good one: in areas with "strong" detail, one would often like blocks with a (relatively) high SAD to still be used, whereas in "flat" areas, even a rather small SAD can already mean visible blocking ...
Lowering the pixel threshold is even poorer, since lots of the potential of MV-compensation is thrown away ...
In custom-built denoising (using MVCompensate & Co., but not MVDenoise) it gets even more difficult, with SAD, vector lengths etc. only available through dreadful backdoors.

That's why I thought out loud about a "new" deblocking filter. The existing deblocking filters are not suited to deblocking compensated frames if those are to be used for denoising: if the deblocking is cranked up enough to catch all the blocking, all the noise is killed from the compensation too ... dead end.
(The proposed method should work, I think ... but I can't dive into it, currently.)
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Old 26th October 2005, 09:47   #9  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
@Manao, @Mug Funky: thank you for the kind comments.
Quote:
i'm sure you already know all this stuff as you wrote the filter
Afraid not... it's just an implementation of standard methods. I haven't even tested/examined the algorithm (as opposed to implementation correctness) to any great degree. So feel free to throw things at me ...
Quote:
where are you hoping to take this engine?
There are four features I had in mind before I started coding:
  • Scene awareness (see below)
  • Sub-pixel accuracy
  • Feature-based motion estimation similar to this*
  • Mesh-warping similar to this
*(A very low/nonexistent priority after seeing results from the basic version.)

Beyond these, I'm not sure. I'm certainly not going to fiddle with the core algorithm -- looking through the papers you can see the algorithm evolving and being extensively tested over a decade, with the result that the current version is very fine-tuned.
Quote:
will there be motion interpolators made? are the motion vectors compatible with mvtools (so we can use its interpolators on your more accurate vectors for standards-conversion)? i see a lot of potential here.
Certainly possible (compatibility via a translation filter). The only thing I would say is that de Haan's website (papers Fizick linked to) has a huge amount of material on true-motion motion-compensated standards conversion and deinterlacing, and IMO we should assimilate that material before diving in -- scripts/methods appropriate for MVTools may turn out to be inappropriate for this filter.

Linear access, unidirectionality:
The algorithm was AFAICS designed for standalone hardware compatibility, so the limitations (linear access, unidirectionality) make good sense. As Manao says, there is no way of doing backwards prediction without at least a 2x slowdown. OTOH, with subpixel switched off, this may often be acceptable. I was thinking of working scene by scene, scanning forwards and backwards on each scene, to minimise space usage.

In any case, the filter will need to handle scene breaks. I strongly prefer scene detection to be done once and scene information to be passed to each filter -- for separation of functionality as much as efficiency. AVISynth has no native support for this, so I will pass scene information in a clip. Seeking to the 100th frame in a scene will then cost as much as seeking from frame 0 to frame 100 at present.* (Faster seeking is possible but ugly -- more on this later.)

*Each scene behaves like a separate clip passed through the filter. I sometimes consider an Animate-like meta-filter to do this to any filter.


Last: Development is likely to be sporadic as I'm beginning to get joint fatigue from interface issues... sorry to make you wait.

Clouded

Edit: AVIL, Didée, just seen your posts...

Last edited by mg262; 26th October 2005 at 17:01.
Old 26th October 2005, 11:36   #10  |  Link
krieger2005
Registered User
 
 
Join Date: Oct 2003
Location: Germany
Posts: 377
Is it not possible to use the results (output) of this filter as input for MVTools (for denoising), via the MVTools function "mvchangecompensate"?

But for that, a motion-compensated clip would first have to be produced by MVTools, or not? Just an idea...
Old 26th October 2005, 11:36   #11  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Motion, 26 October 2005

FindMotion(source, int initialise = ..., useassembly = ..., int reset = ... )

I added a quick feature to FindMotion to facilitate testing. There is an optional argument called reset; every reset frames, the filter calculates motion without looking at the motion computed for the previous frame. So (slow) seeking is possible.

Additionally, whenever a frame is calculated without motion vectors from the previous frame being available, the filter iterates the basic algorithm initialise times (default 0); if you don't set initialise, the motion vectors will reset to 0 every reset frames. Even a fairly small value of initialise should produce results that are very close to the non-reset version.

(Detail: if reset is 50, frames 1, 51, 101, etc. are computed without looking at previous frames. So seeking to frame 175 will compute 25 frames. I could have made it reinitialise at the exact frame seeked to, but this would be nondeterministic -- and very unfriendly to Didée's Reverse() script.)
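A typical test invocation of the new parameters might look like this (the values are just examples):

Code:
# re-seed the motion search every 50 frames; iterate 4 times on each
# re-seeded frame so the vectors recover quickly
motion = findmotion(source, useassembly = true, reset = 50, initialise = 4)
compensate(source, motion)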
________________________________________________________________

Didée: I'm glad you like it. As for this:
Quote:
Originally Posted by AVIL
An important problem I have with MVTools is the blocks it generates when no appropriate predicted block is found. These blocks ruin denoising.
The true-motion algorithm has a tendency to try and use contiguous (touching) blocks from the previous frame,* which should reduce this effect -- so play with the filter and see what you think. It will also be interesting to try the mesh-warping compensation (don't expect it soon!) to see if that helps. But I could at some point give you a mask of the per-block SAD if that is any use?

*AVIL: I'm not sure what you mean by reference block, but the algorithm would generally choose something that is better than the corresponding block in the previous frame.

Clouded

Edit: What about a stand-alone mask-creating BlockSAD(clip, clip) filter?
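If such a filter existed, AVIL's fallback could be scripted along these lines (purely hypothetical: BlockSAD doesn't exist yet, and MaskedMerge here is the MaskTools filter):

Code:
source  = last
motion  = findmotion(source, useassembly = true)
comp    = compensate(source, motion)
badmask = BlockSAD(source, comp)      # hypothetical: bright where the per-block SAD is high
MaskedMerge(comp, source, badmask)    # fall back to the original block where the match is poor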

Last edited by mg262; 26th October 2005 at 17:01.
Old 26th October 2005, 12:55   #12  |  Link
AVIL
Registered User
 
Join Date: Nov 2004
Location: Spain
Posts: 408
@mg262

I call the real frame the reference frame. In motion compensation we are trying to simulate this reference frame by moving blocks from another frame: the previous, the next, two frames back, etc. In this context I will call that frame the "seed" frame.

Then, I call a block in the reference frame the reference block. When the block chosen from the "seed" frame doesn't simulate its corresponding reference block well, I suggested using the reference block instead to build the mo-comped frame.

Sorry for my poor English.
Old 26th October 2005, 14:22   #13  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
AVIL, understood. What you want could be done by postprocessing using the kind of filter I suggested at the end of the last post.

@all, it has just struck me that I may need to be careful about patents. Does anyone know what the situation actually is?
Old 26th October 2005, 16:03   #14  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
You don't sell it --> you don't care about patents. It's at least true in some countries.
Old 26th October 2005, 16:03   #15  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
Quote:
Originally Posted by Manao
With MVTools, you can emulate a true-motion filter by raising lambda. It adds a penalty to the SAD when the motion vector differs from the spatial predictor; hence the motion field is far more coherent.
Predictive searches can get trapped in local minima though ... for coding they get it right often enough for it not to matter much, but for image processing it makes sense to do a little more work and use feature-based matching to get extra candidates.

As for patents, if you are really worried you can always release in source-code form only ... I'm not aware of anyone ever being sued for contributory infringement for releasing source code.

Last edited by MfA; 26th October 2005 at 16:05.
Old 26th October 2005, 17:23   #16  |  Link
Fizick
AviSynth plugger
 
 
Join Date: Nov 2003
Location: Russia
Posts: 2,183
@Mg262,
You are a fast code writer!
Probably even faster than me.
I will try your filter today.

IMHO, a BlockSAD function will not be useful. I tried a similar approach; a better SAD does not mean a better block for true motion.
Occlusion info is more important.

@Manao,
By the way, recently (last week) I tried to improve MVTools in two directions:
1. True motion.
Exactly as you say above, http://forum.doom9.org/showthread.ph...904#post728904
I added a penalty to the SAD when the motion vector differs from the spatial predictor.
My draft name for the parameter is "penalty".
2. Motion interpolation.
It uses some (clever?) combination of both shifted and fetched blocks (forward and backward), and a temporal average of some regions of the two source frames to produce the output.

The speed is not fast, of course.

I have an alpha version, but my work is still not finished. I want:
1. to limit the motion vector length (at every level, probably?)
2. to use the cleverest block combination for interpolation (I am trying many ways).

I wanted to publish my modified version next week (if real life gives me some time), after some conversation with you.
But mg262's work has provoked me to publish it sooner.

Would that be politically correct? Or are you working on MVTools too now?
__________________
My Avisynth plugins are now at http://avisynth.org.ru and mirror at http://avisynth.nl/users/fizick
I usually do not provide a technical support in private messages.
Old 26th October 2005, 17:53   #17  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
Fizick: please, go ahead. I've got no time to work on MVTools in the near future, so you're very welcome to add features to it.

The penalty you're speaking about already exists:
Code:
// candidate cost = SAD plus a lambda-weighted penalty for straying from the predictor
int cost = sad + MotionDistorsion(vx, vy);

inline int MotionDistorsion(int vx, int vy)
{
  // squared distance between the candidate vector and the spatial predictor
  int dist = SquareDifferenceNorm(predictor, vx, vy);
  return (nLambda * dist) >> 8;
}
> to limit the motion vector length (at every level, probably?)
Code:
   /* computes search boundaries */
   nDxMax = nPel * (pSrcFrame->GetPlane(YPLANE)->GetExtendedWidth() - x[0] - nBlkSize);
   nDyMax = nPel * (pSrcFrame->GetPlane(YPLANE)->GetExtendedHeight()  - y[0] - nBlkSize);
   nDxMin = -nPel * x[0];
   nDyMin = -nPel * y[0];
Quote:
but for image processing it makes sense to do a little more work and use feature-based matching to get extra candidates.
Indeed. But the main predictor I use comes from hierarchical ME, which tends to be rather good.
Old 26th October 2005, 18:17   #18  |  Link
AVIL
Registered User
 
Join Date: Nov 2004
Location: Spain
Posts: 408
@mg262

I find your suggested filter useful in the scope of postprocessing, or even as a general-purpose filter. IMHO, apart from the two clips, there should be a parameter for block size, more colourspaces, and perhaps another parameter for different block-difference calculations (sum(SQR(x**2 - y**2)), for example).

I've given your filter a try. I found it blockier and less accurate than MVTools, but I think that is because of the lack of hpel/qpel. The blockiness is especially hard at the vertical borders of the image, and frequent at edges with a hard luma step.

I've subtracted the mo-comped frame from the reference frame. An ideal mo-comped frame only gives, as the result of the subtraction, an arbitrary image containing the inter-frame noise. Any similarities between the difference image and the real one are the product of blocks that were not accurately calculated.
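As a script, that check is just (Subtract is the built-in AviSynth filter; an ideal compensation leaves only flat grey plus inter-frame noise):

Code:
source = last
motion = findmotion(source, useassembly = true)
comp   = compensate(source, motion)
Subtract(source, comp)   # ideally: near-uniform grey plus grain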

This filter is very promising.

@Fizick

A new filter (or a new version of an old one) is always good news. Thanks.
Old 26th October 2005, 19:47   #19  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
AVIL,

I looked at the issue. First I tried a number of minor variations on the method to see if they had any substantial effect, but the essential behaviour was unchanged. Supersampling (as Mug Funky does) certainly helps, though... so I think you are right that subpixel estimation is particularly important for this algorithm, as it changes not only the accuracy but also the average step size (which affects motion vector convergence). When I can, I will implement that and we can see how it looks.
__
Made a minor fix+change to improve behaviour of the new parameters. Please re-download Motion, 26 October 2005.

Last edited by mg262; 26th October 2005 at 20:28.
Old 26th October 2005, 21:51   #20  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,389
Quote:
Originally Posted by Fizick
@Mg262,
You are a fast code writer!
Note that in this very case it's not only true, but particularly impressive...


/* bows down deeply */


/* stumbles, and falls */
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)