Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.


 

Go Back   Doom9's Forum > Capturing and Editing Video > Avisynth Development

Old 31st October 2005, 13:57   #41  |  Link
castellandw
Registered User
 
Join Date: Sep 2005
Posts: 90
So basically, for phase correlation motion estimation, it's a mix of phase correlation first (for large-scale motion) and then overlapped block motion compensation (for small-scale motion and multiple object movements)?
(I say overlapped block motion compensation instead of block-matching motion compensation because overlapped-block MC is said to be more optimal than block-matching MC: http://forum.doom9.org/showthread.ph...376#post719376 )
Old 31st October 2005, 14:14   #42  |  Link
Mug Funky
interlace this!
 
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
it's a little simpler, but basically what you said.

instead of taking a block and finding a vector, phase correlation finds a load of vectors and passes them to some other algo which finds the block for each vector. it essentially works backwards. it requires more stages and is conceptually more complicated than block matching, but in the end there's a fair bit less work for the computer to do.
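(To make the "works backwards" idea concrete, here is a minimal single-peak sketch of phase correlation in Python/NumPy. The function name is mine, not from any actual plugin, and a real implementation would pull several peaks out of the correlation surface as candidate vectors and hand them to a block-assignment stage; this just takes the strongest one.)

```python
import numpy as np

def phase_correlation(ref, cur):
    # Cross-power spectrum: keep only the phase difference between frames.
    R = np.fft.fft2(ref)
    C = np.fft.fft2(cur)
    cross = C * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # normalise away magnitude
    surface = np.real(np.fft.ifft2(cross))  # correlation surface
    # Each peak in the surface is a candidate motion vector; here we
    # just take the strongest one.
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = surface.shape
    if dy > h // 2:
        dy -= h                             # wrap to signed shifts
    if dx > w // 2:
        dx -= w
    return dx, dy
```

(Shifting a frame by a whole-pixel amount and feeding both copies in recovers that shift exactly; with sub-pixel motion or several objects moving differently, the peak smears or splits, which is where the candidate-vector stage comes in.)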

i might have thrown you off a tad with the depan thing...
__________________
sucking the life out of your videos since 2004
Old 31st October 2005, 16:37   #43  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Minor point, but it could cause confusion down the line... I want some standard terminology. Ideas on sensible names, please, for

a) the frame we are trying to approximate

and

b) the frame from which we are extracting blocks to be used to approximate a)?

I particularly ask because I've seen the word "reference" used for both a) and b)...
Old 31st October 2005, 16:46   #44  |  Link
Mug Funky
interlace this!
 
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
what about "source frame" and "target frame"? but then these could get confused too, i suppose. i'm thinking of the source frame as the one blocks are moved from, and the target frame as the one blocks are being moved to match.

for forward compensation, think of the source frame as n-1 and the target frame as n.
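(A toy sketch of how those roles play out in block-based compensation, in Python/NumPy with made-up names; an illustration of the idea only, not how any actual plugin does it: given one vector per block, blocks are copied out of the source frame n-1 to build an approximation of the target frame n.)

```python
import numpy as np

def compensate(source, vectors, block=8):
    # Approximate the target frame by copying blocks out of the source
    # frame, each displaced by its per-block motion vector (dx, dy).
    h, w = source.shape
    out = np.zeros_like(source)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = vectors[by // block][bx // block]
            sy = min(max(by + dy, 0), h - block)   # clamp to frame edges
            sx = min(max(bx + dx, 0), w - block)
            out[by:by + block, bx:bx + block] = source[sy:sy + block, sx:sx + block]
    return out
```

(With all-zero vectors the target is just a copy of the source; the interesting cases are the non-zero vectors found by the estimation stage.)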
__________________
sucking the life out of your videos since 2004
Old 31st October 2005, 17:44   #45  |  Link
MfA
Registered User
 
Join Date: Mar 2002
Posts: 1,075
I use the terms forward reference and backward reference (although I use them exactly in reverse of how they are usually used in video coding, where the direction of prediction rather than the direction in time determines what's forward/backward).
Old 31st October 2005, 18:05   #46  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
I'm trying to avoid forwards/backwards/previous/next as much as possible, partly because of this:
Quote:
I use them exactly in reverse of how they are usually used in video coding, where the direction of prediction rather than the direction in time determines what's forward/backward
Both conventions make sense in their own way, but having both around at once can make it very confusing for the reader! Plus, much of the time you want to describe the compensation process itself in a way that is direction-independent...

Source and target are much better, but still a bit prone to confusion. I think I might go with calling a) "current"... that kind of makes sense for both directions.
Old 31st October 2005, 19:19   #47  |  Link
castellandw
Registered User
 
Join Date: Sep 2005
Posts: 90
Quote:
Originally Posted by mg262
Minor point, but it could cause confusion down the line... I want some standard terminology. Ideas on sensible names, please, for

a) the frame we are trying to approximate
mg262, don't you think we should also use separate terminology for block portions of a frame? After all, a load of vectors from a block portion of a frame are used to approximate the (as Mug Funky put it, possibly "local maxima") vector for that block portion.

In any case, I can see that motion estimation using phase correlation uses various things, such as the Fourier transform, to output the motion vectors used for motion compensation. But do you think this can really be implemented for AviSynth, or does motion estimation using phase correlation followed by motion compensation look like it's not going to happen? It would be great to see how it works out on broadcast standards conversion.

Last edited by castellandw; 31st October 2005 at 19:25.
Old 31st October 2005, 19:31   #48  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Quote:
mg262, don't you think we should also use separate terminology for block portions of a frame? After all, a load of vectors from a block portion of a frame are used to approximate the (as Mug Funky put it, possibly "local maxima") vector for that block portion.
I'm sorry, I don't understand this comment.

Motion estimation using phase correlation... I don't see why it wouldn't be possible in AviSynth; it just requires someone who wants to code it.
Old 31st October 2005, 20:03   #49  |  Link
castellandw
Registered User
 
Join Date: Sep 2005
Posts: 90
Quote:
Originally Posted by mg262
Quote:
"mg262, don't you think we should also use separate terminology for block portions of a frame? After all, a load of vectors from a block portion of a frame are used to approximate the (as Mug Funky put it, possibly "local maxima") vector for that block portion."

I'm sorry, I don't understand this comment.

Well, you said you wanted standard terminology for "the frame we are trying to approximate". But since each motion vector approximated by phase correlation is derived from a portion (a.k.a. block) of the frame, do we need separate terminology (besides "source frame", "target frame" and "current frame") for each of those blocks, to avoid even more confusion? Basically, to distinguish between discussing a whole frame and a block or portion of it.

Last edited by castellandw; 31st October 2005 at 23:49.
Old 1st November 2005, 19:57   #50  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Unlike the backwards/forwards terminology, I can't at present see a situation in which that kind of confusion could arise*... but at the moment I'm not using phase plane correlation, so it's rather academic.

*although I'm open to being proved wrong... you could suggest an example of a sentence which could be misconstrued, with two sensible readings.


_______________________________________

Maybe this is obvious in the Development Forum, or maybe I should have said it before, but anyway: like most other temporal filters which don't explicitly state otherwise, this filter currently takes no account of interlacing. If you are using interlaced content, either deinterlace or bob first. (I do actually plan to make it treat interlaced content sensibly... but there are a couple of prerequisites.) Simply separating fields is a bad idea because the up-down jitter will mess with the true-motion algorithm.
Old 1st November 2005, 20:22   #51  |  Link
scharfis_brain
brainless
 
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
maybe a special interlaced mode could be implemented that takes care of the bobbing?

So it isn't fooled by static areas (which obviously are bobbing) AND creates non-bobbing full-sized (frame height, not field height) vector frames.

This would be a GREAT help for mocomped deinterlacing.
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
Old 1st November 2005, 21:12   #52  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
scharfis,

I'm being a bit dense atm and I'm not sure how to read that -- do you mean that you want the filter to take bobbed input but try to be intelligent about using original scanlines rather than interpolated lines? Or do you mean that you want the filter to be/include a bobber?
Old 1st November 2005, 21:21   #53  |  Link
scharfis_brain
brainless
 
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
sorry, I wrote it a bit mixed up.

I mean this:

source("blah.xxx")
assume?ff()
vectors=separatefields().analyse(interlaced=true)
compensated=somesmartbob().mvcompensate(vectors)

this means: take a field-separated video and do the analysis on it.
of course this analysis must take care of static areas and has to ensure that the vector movement is smooth (no bob-jitter!)
also the vector field has to have the size of the full frame (like the smart-bobbed one)
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
Old 1st November 2005, 21:24   #54  |  Link
Fizick
AviSynth plugger
 
 
Join Date: Nov 2003
Location: Russia
Posts: 2,183
I do not know what scharfis_brain is talking about,
but I am considering (in some future) field-based input, half-pel interpolation,
and one-pixel-shift field compensation (I have it in DePan).
Old 1st November 2005, 21:30   #55  |  Link
scharfis_brain
brainless
 
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
Quote:
Originally Posted by Fizick
but I am considering (in some future) field-based input, half-pel interpolation,
and one-pixel-shift field compensation (I have it in DePan).

but this will create wobbly vectors in static areas
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
Old 1st November 2005, 21:42   #56  |  Link
Fizick
AviSynth plugger
 
 
Join Date: Nov 2003
Location: Russia
Posts: 2,183
Half-pel assumes bilinear interpolation.

Another approach (I saw it in some de Haan article?):
without half-pel, simply add a penalty to the SAD cost for candidate vectors with an odd vertical shift.
Take full interlaced frames. (It is used for fast realtime processing.)
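(A rough sketch of that penalty idea in Python/NumPy — names and the penalty weight are made up, and this is a plain exhaustive search rather than anything a real plugin would use. On full interlaced frames, an odd vertical shift would match lines from opposite fields, so those candidates get a flat cost added on top of their SAD.)

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def best_vector(cur_block, ref_frame, x, y, radius, odd_penalty=256):
    # Exhaustive search around (x, y); candidate vectors with an odd
    # vertical shift would mix lines from opposite fields, so their
    # SAD cost gets a flat penalty (the weight here is made up).
    bh, bw = cur_block.shape
    best, best_cost = None, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bh > ref_frame.shape[0] or xx + bw > ref_frame.shape[1]:
                continue
            cost = sad(cur_block, ref_frame[yy:yy + bh, xx:xx + bw])
            if dy % 2 != 0:
                cost += odd_penalty
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

(The even-shift candidates compete on SAD alone, so a true even-vertical match still wins outright; an odd shift can only win if it beats every even candidate by more than the penalty.)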
Old 1st November 2005, 22:41   #57  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
scharfis,

what you ask is definitely possible. I was in any case thinking of having motion vectors for interlaced clips actually use distances measured in bobbed space;* it removes a lot of headache, and makes the algorithm trivially resistant to jitter. So it would just be a matter of moving 8x16 blocks instead of 8x8 blocks, or equivalently duplicating each row of motion information. You can almost do that right now by writing:

Interleave(motioninformation, motioninformation).Weave()

(The only reason that wouldn't work is because motion information is currently padded to make sure it contains an even number of scanlines -- but I was planning to move it from YV12 luma to YUY2 or forthcoming Y8 anyway, which would obviate the need for padding.)

*I hope that comment is clear... poke me if it isn't. I tried to write a bit more about it but it looked like I was just making it more confusing. Key point is that still scenes will have stored motion vectors of (0,0), not (0,±0.5).

Also bear in mind that there are proper motion compensated bobbing algorithms... cf one of the papers Fizick pointed me to: http://www.ics.ele.tue.nl/~dehaan/pdf/111_IVCP_ZHAO

Edit: I have yet to find any explicit discussion of interlaced->interlaced FPS conversion. AFAICS most papers implicitly expect you to use a MC bob + a progressive MC frame rate change (both discussed extensively in the literature), then discard unneeded fields. Quite apart from the usage of interpolated values for further interpolation, I think that this misses the point completely -- if you are applying half pixel compensation, direct interlaced->interlaced FPS conversion is almost the same problem as progressive->progressive conversion, and the methods extend directly. I will expand on that later.

[That's not to imply that bob + frame rate change + discard is a bad idea in scripts, given the currently available tools.]

Last edited by mg262; 1st November 2005 at 23:25. Reason: Script corrected to include Weave()
Old 1st November 2005, 23:12   #58  |  Link
scharfis_brain
brainless
 
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
IMO one cannot go any other way than

mvbobbing -> mvfpsconversion -> resize -> reinterlace

working directly with the fields won't work. You need to make it progressive.
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
Old 2nd November 2005, 00:00   #59  |  Link
mg262
Clouded
 
 
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
To keep the argument as simple as possible, let me first ask you this: if resizing were not an issue, would you consider a one step process to be feasible?
Old 2nd November 2005, 01:49   #60  |  Link
castellandw
Registered User
 
Join Date: Sep 2005
Posts: 90
Oh yeah, the Doctor Who restoration team video engineers said the motion compensation with phase correlation was done on full frames, meaning the 50-60 fps full progressive frames were deinterlaced from the 25-30 fps interlaced frames. So you have to make the frames progressive for motion compensation. As for frame rate conversion, a motion compensated fps function (like MVFps in MVTools) is required for anything involving motion compensation, because the frame rate change occurs during the motion compensation process. Doing motion compensation first with something like MVTools and then using ConvertFPS or ChangeFPS won't work properly.

@scharfis, on the subject of bobbing for interlaced frames involving motion compensation, you could look at sections 2.2-2.3 of Snell's Guide To Motion Compensation, on Pre-Processing and Motion Estimation:
http://www.snellwilcox.com/knowledge...ks/emotion.pdf
I'm not sure how helpful it is, but you should also try out that Block Overlap plugin by Fizick in your mvbob function. And let's hope someone can achieve coding phase correlation.