Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 23rd April 2018, 22:36   #21  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 447
Quote:
Originally Posted by StainlessS View Post
Code:
            n   n+delta
            |    |
            <---- BV        Vectors used for eg MDegrain, vectors stored @ frame where arrowhead points (ie n).
     FV ---->               Pixels moved from frame at back end of arrow (n+/- delta), along vector to synth frame where arrowhead points.
        |
        n-delta


            n   n+delta
            |    |
            <---- BV        Vectors used for eg MFlowInter, Uses the next one along forward vector so that vectors used are from
            |    |          either side of the frame that will be synthesized (reason, only part of the vector distances are
        FVI ---->           used, based on Time arg).
            n    n+delta
                            For MFlowInter, the created vectors are exactly the same as for eg MDegrain, its just that it uses the
                            next forward vector so that backward and forward vectors straddle the predicted frame, MFlowInter
                            just uses the vectors in a slightly different way to MDegrain.
                            The synthesized frame lies logically somewhere between n and n + delta (based on Time arg), and is
                            physically created at frame n, and so needs to be relocated to the required bad frame position in clip.
Hope the above makes sense.
That in itself makes sense. It's only when we also look at the screenshots of masks and make some reasonable assumptions that things start to get... weird. [EDIT] Added relevant screenshot below.



So, let's take a look at the masks. I don't know how MMask generates the colors from the vectors, but we can see that the forward mask is lighter than average and the backward mask is darker than average. Since we are showing the horizontal component of the motion vector and the movement is horizontal, it's safe to assume that the lighter and darker colors represent horizontal motion vectors with opposite directions, and that middle gray represents no motion. Since frame n+delta has the forward vectors (pointing from frame n to frame n+delta), the vector direction should be towards the left, and therefore the lighter color means vectors pointing left. Crudely depicted here:
Code:
frame N+delta FORWARD vectors shown along with pixel colors (0 = black, X = white)
vector direction is towards left (<)

0  0  0  X< X< X< X< X  X  X  X  X  X  X

0  0  0  X< X< X< X< X  X  X  X  X  X  X

0  0  0  X< X< X< X< X  X  X  X  X  X  X

One pixel and its motion vector
+-----+
|     |
|  <--------+
|     |
+-----+
Now, how can these vectors and pixels be used to generate an interpolated frame? The pixels at frame n+delta should be moved towards the right, therefore we must move the pixel "backwards" from the arrowhead towards the back end of the arrow.

Things get more interesting when we look at the backward vectors at frame n. The vector direction is now towards the right:
Code:
frame N BACKWARD vectors

0  0  0  0> 0> 0> 0> X  X  X  X  X  X  X

0  0  0  0> 0> 0> 0> X  X  X  X  X  X  X

0  0  0  0> 0> 0> 0> X  X  X  X  X  X  X

      +-----+
      |     |
+-------->  |
      |     |
      +-----+
How can we use these vectors and pixels to generate the interpolated frame? The pixels should be moving towards the left, so we take the pixel at the arrowhead and move it... wait. There is no pixel at the arrowhead. Or, more accurately, there is a pixel but it has the wrong color. We should move at least some of the white pixels in both frames, but frame n doesn't have any motion over the white pixels.

Ok, this is not how it works so let's try again. This time we swap the pixels and the motion vectors.

Code:
frame N+delta FORWARD vectors shown along with pixel colors of frame N

0  0  0  0< 0< 0< 0< X  X  X  X  X  X  X

0  0  0  0< 0< 0< 0< X  X  X  X  X  X  X

0  0  0  0< 0< 0< 0< X  X  X  X  X  X  X

One pixel and its motion vector
+-----+
|     |
|  <--------+
|     |
+-----+
What if we follow the arrow backwards, take the pixel located at the back end of the arrow, and move it towards the arrowhead, but take the pixel from frame N? That should work, there are white pixels there.

Does it also work with the backward vectors?

Code:
frame N BACKWARD vectors shown along with pixel colors of frame N+delta

0  0  0  X> X> X> X> X  X  X  X  X  X  X

0  0  0  X> X> X> X> X  X  X  X  X  X  X

0  0  0  X> X> X> X> X  X  X  X  X  X  X

      +-----+
      |     |
+-------->  |
      |     |
      +-----+
Hmm... no, it doesn't look like this will work either. The vector points in the wrong direction.

So how can we make it work? There is at least one way: we must draw the arrow so that the back end is at the center of the pixel:

Code:
+-----+
|     |
|  +-------->
|     |
+-----+
Now we can simply take the pixel at the back end and move it towards the arrowhead. But this is inconsistent with the way it was done with the forward vectors, and the same method will not work for both. This implies that the backward vectors are stored at the back end of the arrow, whereas the forward vectors are stored at the pixel pointed to by the arrowhead. Or there's another, simpler explanation, but then we would have to break some of the rules you laid out in your post.

Last edited by zorr; 24th April 2018 at 00:12. Reason: Added screenshot
Old 24th April 2018, 02:12   #22  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
I'm not allowing myself to get too confused (despite the assistance of your post).
The arrow only shows the frame where the vectors are stored (arrowhead) and the frame used as the pixel source (arrow flight/feathers/rear end);
the actual horizontal distance/direction is in the color (above or below mid grey, which might be 127, 128 or even 126 [rounded 125.5 mid point for TV levels]; I'm guessin' that it might be 128).

I'm quite happy to accept that it works, now stop trying to confuse me
[But if you figure it out 100.0%, and find different then do let us know]

EDIT: Although this kinda makes things a little more 'interesting'
From MFlowInter Docs, last arg tclip,
Quote:
tclip
If set, the time parameter is ignored. Then the time for the motion interpolation is applied pixel-wise. Each component of each pixel of tclip gives the time to be applied to the corresponding source clip pixel. The time scale is 256, meaning that 0 corresponds to the current frame and 255 is almost the next frame (128 is exactly half-way). A single occlusion mask is calculated with the luma time only, therefore it is recommended to keep the chroma time synchronized with the luma.
Not sure, I think I'm confused again
(not really, it just allows specification of the time arg for every pixel individually;
the how does not really matter unless you want to use the tclip arg, and if so just do as instructed in the doc above).
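To make the tclip scale concrete, here is a tiny Python sketch (the helper name is mine, not part of MVTools) mapping a fractional time to the 0-255 tclip value described in the quoted docs:

```python
def time_to_tclip(t):
    """Map a fractional time t in [0.0, 1.0] to a tclip pixel value,
    per the MFlowInter docs quoted above: the scale is 256, 0 means
    the current frame, 128 is exactly half-way, and the value is
    capped at 255 ("almost the next frame")."""
    return max(0, min(255, int(round(t * 256))))

print(time_to_tclip(0.0))  # 0   -> current frame
print(time_to_tclip(0.5))  # 128 -> exactly half-way
print(time_to_tclip(1.0))  # 255 -> almost the next frame (capped)
```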

The arrow in the title bar above the graphic[at RHS] shows the direction of the white block travelling across the frame, right to left (when HFlip=False).
For the forward vector, pixels at n moved left for the n to n+delta transition, so the color at 'FVEC[n+delta=62] aligned' is lighter than mid grey and so must mean: move pixels at n left to match the pixels at n+delta. For the backward vector it is just the other way around (from frame n+delta to n), and so it is opposite and darker than mid grey.
So to recap, lighter than mid grey means Move_Left, darker, means Move_Right, absolute difference from mid grey is the distance to move.
If you change to HFlip=True, white block will move left to right, and n->n+Delta_Aligned will be darker than mid grey instead of lighter.
In a perfect world the results would always be exact opposites (relative to mid grey); both vector sets are just a check on each other, confirming that both lots of calculations agree.
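To recap that convention in code form, a small Python sketch (the function name and the divide by pel are my own reading of the post, not MVTools internals):

```python
MID_GREY = 128  # assumed midpoint, per the guess above

def decode_h_vector(grey, pel=2):
    """Decode a horizontal-mask grey value into (direction, distance).

    Per the recap: lighter than mid grey = Move_Left, darker =
    Move_Right; the absolute difference from mid grey is the distance,
    here divided by pel to express it in full pixels."""
    diff = grey - MID_GREY
    direction = "left" if diff > 0 else ("right" if diff < 0 else "none")
    return direction, abs(diff) / pel

print(decode_h_vector(160))  # ('left', 16.0)  lighter -> move left
print(decode_h_vector(96))   # ('right', 16.0) darker -> move right
```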
__________________
I sometimes post sober.
StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace

"Some infinities are bigger than other infinities", but how many of them are infinitely bigger ???

Last edited by StainlessS; 24th April 2018 at 06:54.
Old 24th April 2018, 17:36   #23  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Post #16 script updated.

Shows the colors of the vector greys, with a little white (single-pixel) dot overlaid where the pixel was taken from (Align MUST be true, otherwise it is not shown).

eg
Code:
CS="Y8"
#CS="YV24"
VectorTest(Delta=2,time=50.0,HFlip=False,CS=CS)

Fwd/Bak is mostly opposite (relative to mid grey, 128), but sometimes goes awry (usually at the edge of the grey, where we try to take the mid pixel).

EDIT: With CS="YV24"


EDIT: The movement in the above graphics should be DELTA(2) * XSTEP(8) = 16, but in MSuper, Pel=2, ie 2*8*2 = 32,
so the vector greys being at +/- 32 seems about right.
[XSTEP is the horizontal pixel movement per frame, Delta is the number of frames apart, and Pel=2 is half-pixel resolution].
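The arithmetic in that EDIT can be checked directly (constants as stated in the post):

```python
DELTA = 2   # frames apart
XSTEP = 8   # horizontal pixels the block moves per frame
PEL = 2     # MSuper sub-pixel precision (half-pixel)

# Expected swing of the vector greys away from mid grey:
print(DELTA * XSTEP * PEL)  # 32, matching the observed +/- 32
```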

Last edited by StainlessS; 24th April 2018 at 18:51.
Old 24th April 2018, 23:23   #24  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 447
Quote:
Originally Posted by StainlessS View Post
I'm quite happy to accept that it works, now stop trying to confuse me
Oh, I'm not quite done messing with your mind yet.

Quote:
Originally Posted by StainlessS View Post
the arrow only shows the frame where vectors are stored (arrow head), and the frame used as the pixel source (arrow flight/feathers/rear end),
the actual horizontal distance/direction is in the color (above or below Mid Grey, which might be 127, 128 or even 126 [rounded 125.5 mid point for TV levels], I'm guessin' that it might be 128).
Yes, I agree completely; the arrows in my previous post were trying to show how the pixels should move based on the vectors, not where they are stored. Also, I found this in MVMask.cpp:

Quote:
else if (kind == 3) // vector x mask
{
for (int j = 0; j < nBlkCount; j++)
smallMask[j] = std::max(int(0), std::min(255, int(mvClip.GetBlock(0, j).GetMV().x * fMaskNormFactor * 100 + 128))); // shited by 128 for signed support
I kinda like that it's 'shited' by 128. So yes, middle gray is hardcoded at value 128.
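That quoted line is easy to transliterate into Python; the normalisation factor below is purely illustrative (the real fMaskNormFactor depends on MMask's scale settings):

```python
def vector_x_mask(mv_x, fMaskNormFactor):
    """Python version of the quoted MVMask.cpp line (kind == 3,
    'vector x mask'): the signed x component is scaled, shifted by
    128 so mid grey means no motion, then clamped to [0, 255]."""
    return max(0, min(255, int(mv_x * fMaskNormFactor * 100 + 128)))

print(vector_x_mask(0, 1.0))   # 128: no motion is exactly mid grey
print(vector_x_mask(1, 1.0))   # 228: rightward motion -> lighter
print(vector_x_mask(-1, 1.0))  # 28:  leftward motion -> darker
print(vector_x_mask(3, 1.0))   # 255: large vectors clamp at white
```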

Some clarification for this explanation:
Quote:
Since frame n+delta has the forward vectors (pointing from frame n to frame n+delta), the vector direction should be towards left and therefore lighter color means vectors towards left.
The justification for the direction (left) of the forward vector is that the forward vector really means forward in time, and stepping forward in time the white/black edge moves towards the left. So it's simply the direction things are moving when time flows forward.

Quote:
Originally Posted by StainlessS View Post
For the forward vector, pixels at n moved left for the n to n+delta transition, so the color at 'FVEC[n+delta=62] aligned' is lighter than mid grey and so must mean move pixels at n left to match the pixel at n+delta.
Great, we agree on this too!

Code:
FORWARD VECTORS (MMask lighter than average, positive x value)
		
   0     1     2     3     4     5     6     7     8		
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|(8,0)|     |     |     |     |     |     |     |(0,0)|		 motion vector (delta x, delta y)
|  <-----------------------------------------------+  |
|     |     |     |     |     |     |     |     |     |		 
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
		 
+-----+			+-----+			+-----+
|     |			|     |		 	|     |		 
|  A  |			|  X  |			|  B  |		 
|     |			|     |		 	|     |		 
+-----+			+-----+			+-----+
So here's a row of pixels along with the forward vectors. There's a motion vector value displayed for pixels A and B. I used offset 8 here; 32 would have made this drawing too large...

As we agreed, the vector points towards the left. The vector is stored at pixel A (there is a vector at pixel B too, but it has delta zero). The goal here is to move PIXEL B along the vector towards PIXEL A, into the location of PIXEL X, which is at 50% of the vector length.

This means we have to draw the vector so that the arrowhead is at PIXEL A and the rear end is at PIXEL B. So we have an algorithm like this:

Code:
1) place the arrowhead at the pixel which contains the motion vector (PIXEL A)
2) calculate position delta by adding motion vector to current pixel position (x=0+8, y=0+0) = (8,0)
3) place the rear end of arrow to this location (PIXEL B)
4) move the pixel at the rear end towards the arrowhead by 50% of arrow length (PIXEL X)
   (in reality: render the pixel color of PIXEL B to the location of PIXEL X)
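Those four steps can be sketched in Python for a one-dimensional row (the names, the generalised time parameter, and the zero-vector handling are my additions; this is not MVTools' actual code):

```python
def render_forward(row, vectors, time=0.5):
    """Apply the four-step procedure above to a 1-D row of pixels.

    vectors[x] is the forward motion vector (delta x) stored at pixel
    x (PIXEL A, the arrowhead); the rear end (PIXEL B) sits at
    x + vectors[x]. Zero vectors leave their pixel in place, so we
    start from a copy of the row; out-of-range reads are skipped."""
    out = list(row)
    for x, dx in enumerate(vectors):
        if dx == 0:
            continue
        src = x + dx                      # steps 2-3: rear end, PIXEL B
        dst = x + round(dx * (1 - time))  # step 4: PIXEL X along the arrow
        if 0 <= src < len(row) and 0 <= dst < len(out):
            out[dst] = row[src]           # render B's colour at X
    return out

# PIXEL A at x=0 holds vector (8,0); PIXEL B (white) is at x=8.
# At time=0.5, B's white lands at PIXEL X (x=4):
print(render_forward([0] * 8 + [255], [8] + [0] * 8))
# [0, 0, 0, 0, 255, 0, 0, 0, 255]
```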
Ok, this totally works. So let's look at the backward vectors...
Code:
BACKWARD VECTORS (MMask darker than average, negative x value)
		
   0     1     2     3     4     5     6     7     8		
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|(-8,0)     |     |     |     |     |     |     |(0,0)|		 motion vector (delta x, delta y)
|     |     |     |     |     |     |     |     |     |
|     |     |     |     |     |     |     |     |     |		 
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Now, using the same algorithm, we find that PIXEL B is at coordinate (0+(-8), 0+0) == (-8,0), which is completely wrong. Besides that, we should be moving PIXEL A with the backward vectors (we already moved PIXEL B, so it's not useful to do it again). This means we need a different algorithm for the backward vectors.

Quote:
Originally Posted by StainlessS View Post
[But if you figure it out 100.0%, and find different then do let us know]
I haven't really figured it out, I'm just making observations which show that the current theory leads to some complications. I believe there is a simpler way to use the vectors. To really figure out what MVTools2 is doing would mean diving into the source code, and I don't really have the time nor the competence to do that.
Old 25th April 2018, 13:00   #25  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Quote:
Originally Posted by zorr View Post
Oh, I'm not quite done messing with your mind yet.
You are an evil and repulsive person who delights in the suffering of others.

Don't think I had my programming head on when I wrote the vector color direction stuff.
Light grey does indeed mean move the pixel left for (forward) frames n to n+delta, but that is stating it the wrong way
around programmatically. If you iterate over frame n for every x, putting pixels into frame n+delta at x + Some_Offset,
you could end up eg setting a single pixel many times (and all the rest would hold rubbish).

For the forward component pixel, you should actually iterate over the destination for every x, and use the vector grey
to take the source pixel from frame n (the back end of the forward vector arrow) at x+((Fgrey-128)/pel). So
instead of moving a pixel left in forward prediction, you actually get it from the right (light grey: get it from the
right; dark grey: get it from the left).
[It also corrects the weird sign issue we had, so (relative to 128) +ve grey means take from the right, -ve from the left,
and so it matches the raster layout in memory.]

For the backward component pixel, iterate over the destination for every x, and use the vector grey to take the
source pixel from frame n+delta (the back end of the backward vector arrow) at x+((Bgrey-128)/pel);
so instead of moving a pixel right in backward prediction, you actually get it from the left. [When considering VectorTest(HFlip=False).]

The "x+((Fgrey-128)/pel)" and "x+((Bgrey-128)/pel)" above should of course be limited to valid pixel coords (min, max),
and the Time arg also needs to be applied to both.

Code:
# Something like (where Time=33.33%),

# EDIT: Added lines below
FGrey = FGrey_at_nPlusDelta[x]                                            # @ head of Forward vector arrow, the horiz mask from fvec
BGrey = BGrey_at_n[x]                                                     # @ head of Backward vector arrow, the horiz mask from bvec
pel=2.0                                                                   # whatever pel is in use, as float

FCoord_X = x + Round(((Fgrey-128)/pel) * 33.33 / 100.0)                   # 33.33% of forward distance (from n)

BCoord_X = x + Round(((Bgrey-128)/pel) * (100.0 - 33.33) / 100.0)         # 66.66% of backward distance (from n+delta)

FPixel = Frame_n[Min(Max(FCoord_X,0),Width-1)]
BPixel = Frame_nPlusDelta[Min(Max(BCoord_X,0),Width-1)]

# Self Check blurry stuff

ResultClipFrame_n[x] = Round((FPixel + BPixel)/2.0)                       # This frame needs shift to fix BAD frame @ Time=33.00%
So, how does that work for you ?
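A runnable transliteration of that pseudo for a single row (in Python; the simple (F+B)/2 average mirrors the "self check blurry stuff" guess and is not claimed to be MFlowInter's actual blend):

```python
def mflowinter_row(frame_n, frame_nd, fgrey, bgrey, time_pct=33.33, pel=2.0):
    """fgrey[x]: horizontal mask grey from the forward vectors (stored
    at n+delta); bgrey[x]: grey from the backward vectors (stored at n).
    Both sit around mid grey (128); pel is MSuper's sub-pixel precision."""
    w = len(frame_n)
    out = []
    for x in range(w):
        fx = x + round((fgrey[x] - 128) / pel * time_pct / 100.0)
        bx = x + round((bgrey[x] - 128) / pel * (100.0 - time_pct) / 100.0)
        fpix = frame_n[min(max(fx, 0), w - 1)]
        bpix = frame_nd[min(max(bx, 0), w - 1)]
        out.append(round((fpix + bpix) / 2.0))
    return out

# Example: an edge 9 px further left in frame n+delta, pel=2, so the
# moving band's greys are 128+18 (forward) and 128-18 (backward):
w = 36
n_row  = [0] * 18 + [255] * 18
nd_row = [0] * 9 + [255] * 27
fgrey  = [146 if 9 <= x <= 17 else 128 for x in range(w)]
bgrey  = [110 if 9 <= x <= 17 else 128 for x in range(w)]
out = mflowinter_row(n_row, nd_row, fgrey, bgrey)
# out's edge sits 3 px left of frame n's: [0]*15 + [255]*21
```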

EDIT: I have not looked at the source, no idea how the self check blurry stuff works, just a wild guess.
(maybe [EDIT: likely] the other masks play a part)

Last edited by StainlessS; 25th April 2018 at 22:17.
Old 25th April 2018, 23:09   #26  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 447
Quote:
Originally Posted by StainlessS View Post
You are an evil and repulsive person who delights in the suffering of others.
You understand me so well. This could be the start of a long and beautiful friendship.

Quote:
Originally Posted by StainlessS View Post
If you iterate over frame n for every x, putting pixels in frame n+delta at x + Some_Offset, you could end up eg setting only a single pixel many times (and all the rest would hold rubbish).

For forwards component pixel, You should actually iterate over the destination for every x, and use the vector grey
stuff to take the source pixel from frame n ...
Yes, that's true. I think what you described above works beautifully with MCompensate, where you can read the pixel color pointed to by the vector for every pixel in the destination. I'm not sure you can use that method when dealing with MFlowInter, where the vectors are scaled by the time. But certainly this "missing pixels" problem is something the algorithm has to deal with.

Quote:
Originally Posted by StainlessS View Post
[It also corrects the weird sign issue we had ...].
We will see about that.

Quote:
Originally Posted by StainlessS View Post
Code:
# Something like (where Time=33.33%),

# EDIT: Added lines below
FGrey = FGrey_at_nPlusDelta[x]                                            # @ head of Forward vector arrow, the horiz mask from fvec
BGrey = BGrey_at_n[x]                                                     # @ head of Backward vector arrow, the horiz mask from bvec
pel=2.0                                                                   # whatever pel is in use, as float

FCoord_X = x + Round(((Fgrey-128)/pel) * 33.33 / 100.0)                   # 33.33% of forward distance (from n)

BCoord_X = x + Round(((Bgrey-128)/pel) * (100.0 - 33.33) / 100.0)         # 66.66% of backward distance (from n+delta)

FPixel = Frame_n[Min(Max(FCoord_X,0),Width-1)]
BPixel = Frame_nPlusDelta[Min(Max(BCoord_X,0),Width-1)]

# Self Check blurry stuff

ResultClipFrame_n[x] = Round((FPixel + BPixel)/2.0)                       # This frame needs shift to fix BAD frame @ Time=33.00%
So, how does that work for you ?
I just ran your code through my ASCII SIMULATOR 6000 (tm) and this was the output:

Code:
PROCESSING...

  -6    -5    -4    -3    -2    -1     0     1     2     3     4     5     6     7     8     9     
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|     |     |     |     |     |     |(9,0)|     |     |     |     |     |     |     |     |(0,0)|  forward vector
|  <-----------------B (66.6%)---------+-----F (33.3%)--->  |     |     |     |     |     |     |
|     |     |     |     |     |     |(-9,0)     |     |     |     |     |     |     |     |(0,0)|  backward vector
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+     
         
                                    +-----+                             +-----+           +-----+
                                    |     |                             |     |           |     |        
                                    |  A  |                             |  X  |           |  B  |        
                                    |     |                             |     |           |     |        
                                    +-----+                             +-----+           +-----+
                                      N+delta                            33.3%               N

         ...<<<<<<<<<......... backward vectors (-9,0) at frame N
         ...>>>>>>>>>......... forward vectors (+9,0) at frame N+delta
         000000000000XXXXXXXXX frame N                                    
         000000000XXXXXXXXXXXX interpolated at 33.3%
         000XXXXXXXXXXXXXXXXXX frame N+delta

         -->    forward offset (33.3%), read pixel from frame N
   <-----       backward offset (66.6%), read pixel from frame N+delta
         
      000000000000XXXXXXXXX-->       frame N with offset -3
         <-----000XXXXXXXXXXXXXXXXXX frame N+delta with offset +6
      000000000000XXXXXXXXXXXX average color

CONCLUSION: STAINLESSS IS A GENIUS
Oh wow, that actually works! This example uses offset 9 to make the 33.3% and 66.6% marks nice integer locations. The vector length is originally 9, but it is scaled to 3 (forward vectors) and 6 (backward vectors) when time is 33.3%. To find the color for each pixel I moved the frame in the opposite direction of the vector, so that the pixels to read from in each frame are aligned. And what do you know, they perfectly replicate the result we are hoping to achieve! Note that the rightmost portion of the frame has zero-length vectors, so that part doesn't need to be moved, but it doesn't change the result. Well done mate.
Old 25th April 2018, 23:45   #27  |  Link
StainlessS
HeartlessS Usurer
 
StainlessS's Avatar
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Quote:
You understand me so well. This could be the start of a long and beautiful friendship.
HeHeHehe

Quote:
We will see about that.
That made me nervous, again.

Quote:
CONCLUSION: STAINLESSS IS A GENIUS
Nah, I might be if I understood anything from your ASCII gram, tis a total puzzle.

Last edited by StainlessS; 26th April 2018 at 15:13.
Old 26th April 2018, 23:31   #28  |  Link
zorr
Registered User
 
Join Date: Mar 2018
Posts: 447
Quote:
Originally Posted by StainlessS View Post
Quote:
We will see about that.
That made me nervous, again.
Actually, when I wrote that I was confident that I could crush your hopes and dreams about that algorithm, but when I went through it I realized it works... I decided to leave that comment there in order not to spoil the ending.

Quote:
Originally Posted by StainlessS View Post
Nah, I might be if I understood anything from your ASCII gram, tis a total puzzle.
Fair enough, I was in a bit of a hurry and didn't provide enough explanation. So let's go through it again, but take it slow. Perhaps this will help someone else who is trying to figure out wtf is going on with MVTools2. Or, in the year 3000, when the archaeologists dig up the internet from the ruins, they'll find this message and perfectly understand our primitive technology.

Let's start with a screenshot of the two frames and the motion vector masks used when creating the interpolated frame. The interpolated frame is created at 33.3% between frames N and N+delta (like in the previous example).



I'm changing the XSTEP to 4.5 in order to make the math simpler; this results in 9 pixels of movement over two frames (delta is 2). It looks like this in ASCII:
Code:
PIXEL COLOR: 0 = BLACK, X = WHITE     
VECTOR DIRECTION: < = LEFT, > = RIGHT, . = ZERO LENGTH
	 
000000000000000000XXXXXXXXXXXXXXXXXX	FRAME N
000000000000000000XXXXXXXXXXXXXXXXXX
000000000000000000XXXXXXXXXXXXXXXXXX
000000000000000000XXXXXXXXXXXXXXXXXX

000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX	FRAME N+delta
000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX
000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX
000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX

.........>>>>>>>>>..................	FRAME N+delta FORWARD VECTORS
.........>>>>>>>>>..................
.........>>>>>>>>>..................
.........>>>>>>>>>..................

.........<<<<<<<<<..................	FRAME N BACKWARD VECTORS
.........<<<<<<<<<..................
.........<<<<<<<<<..................
.........<<<<<<<<<..................

000000000000000XXXXXXXXXXXXXXXXXXXXX	INTERPOLATED FRAME AT 33.33% BETWEEN N AND N+DELTA
000000000000000XXXXXXXXXXXXXXXXXXXXX
000000000000000XXXXXXXXXXXXXXXXXXXXX
000000000000000XXXXXXXXXXXXXXXXXXXXX
Let's take a closer look at the vectors. The magnitude (length) of the vectors is 9 (actually 9*pel, which in this case is 18, but let's keep it simple); that's the distance the black/white edge moves between frames N and N+delta in this example. The direction of the vectors can be determined from the brightness of the gray: lighter than average means positive and pointing right, darker than average means negative and pointing left. The vectors are 2D, so there are X and Y components, but in this example there is only horizontal motion, therefore the Y component is zero.

Code:
FORWARD VECTORS CLOSEUP
... (0,0) (0,0) (0,0) (9,0) (9,0) (9,0) (9,0) ...

BACKWARD VECTORS CLOSEUP
... (0,0) (0,0) (0,0) (-9,0) (-9,0) (-9,0) (-9,0) ...
When the vectors are used in interpolation, their length is scaled by the time. Forward vectors are multiplied by time/100 (33.3/100 == 0.333) and backward vectors are scaled by (100-time)/100 ((100-33.3)/100 == 0.667). Doing the calculations we get new forward vectors (3,0) and new backward vectors (-6,0).
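That scaling step in Python (rounding to whole pixels is my assumption for this integer example):

```python
def scale_vectors(forward, backward, time_pct):
    """Scale raw (dx, dy) vectors by the interpolation time, as
    described above: forward by time/100, backward by (100-time)/100."""
    t = time_pct / 100.0
    fwd = [(round(dx * t), round(dy * t)) for dx, dy in forward]
    bwd = [(round(dx * (1 - t)), round(dy * (1 - t))) for dx, dy in backward]
    return fwd, bwd

print(scale_vectors([(9, 0)], [(-9, 0)], 33.3))
# ([(3, 0)], [(-6, 0)])
```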

These vectors can now be used to point at the coordinates where the colors for the interpolated frame should be read. We loop over every pixel on the screen and read a pixel from the location pointed to by the forward and backward vectors.

Code:
FORWARD VECTOR AT PIXEL (6,0)

   0     1     2     3     4     5     6     7     8     9     
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|     |     |     |     |     |     |(3,0)|     |     |     |     |
|     |     |     |     |     |     |  +----------------->  |     | 
|     |     |     |     |     |     |     |     |     |     |     |     
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+


BACKWARD VECTOR AT PIXEL (6,0)

   0     1     2     3     4     5     6     7     8     9     
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|     |     |     |     |     |     |(-6,0)     |     |     |     |
|  <-----------------------------------+  |     |     |     |     |
|     |     |     |     |     |     |     |     |     |     |     |     
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
The pixel pointed to by the forward vector is read from frame N, and the pixel pointed to by the backward vector is read from frame N+delta. These two color samples are then combined (a simple average will do, but MVTools is perhaps using something more advanced).

This is all the information we need to create the interpolated frame, but it's not very straightforward to do manually. We would have to look at every pixel and its vectors, where the vectors are pointing, and what the colors are at those locations.

Luckily there's a shortcut we can use. Since all the forward vectors are identical, we can think of the operation as shifting frame N in the direction opposite to the vector. So in this case the vector (3,0) means we should shift the frame left by 3 pixels. Once we have done that, we can read the pixel color at the same coordinate as the target pixel. We do the same thing with the backward vectors, this time shifting frame N+delta in the opposite direction, in this case 6 pixels to the right.

Code:
   000000000000000000XXXXXXXXXXXXXXXXXX		FRAME N
   000000000000000000XXXXXXXXXXXXXXXXXX
   000000000000000000XXXXXXXXXXXXXXXXXX
   000000000000000000XXXXXXXXXXXXXXXXXX

   000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX		FRAME N+delta
   000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX
   000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX
   000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX

000000000000000000XXXXXXXXXXXXXXXXXX...		FRAME N shifted left by 3 pixels
000000000000000000XXXXXXXXXXXXXXXXXX...
000000000000000000XXXXXXXXXXXXXXXXXX...
000000000000000000XXXXXXXXXXXXXXXXXX...
   
   ......000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX	FRAME N+delta shifted right by 6 pixels
   ......000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX
   ......000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX
   ......000000000XXXXXXXXXXXXXXXXXXXXXXXXXXX

Note: new pixels not visible before marked with '.'
Already we can see that the black/white edge is now aligned in both frames. Now we take the average color of both frames (a special case is the new pixels which became visible; let's leave the result blank for those).

Code:
      000000000XXXXXXXXXXXXXXXXXX		AVERAGE COLOR OF SHIFTED FRAMES N AND N+DELTA
      000000000XXXXXXXXXXXXXXXXXX
      000000000XXXXXXXXXXXXXXXXXX
      000000000XXXXXXXXXXXXXXXXXX
	  
000000000000000XXXXXXXXXXXXXXXXXXXXX		INTERPOLATED FRAME AT 33.33% BETWEEN N AND N+DELTA (THE GOAL)
000000000000000XXXXXXXXXXXXXXXXXXXXX
000000000000000XXXXXXXXXXXXXXXXXXXXX
000000000000000XXXXXXXXXXXXXXXXXXXXX
And there's the result we were hoping to get: the black/white edge is in the correct position. One final note about the shifting trick we used: it should only be applied at pixels where the vector is non-zero. When the vector is zero we can read the source frame at the exact same location as the target pixel. This would help fill the edges we skipped.

Here's what it looks like if we also process the zero vectors (lowercase is used where the final pixel color was read with a zero vector):

Code:
.........>>>>>>>>>..................	FRAME N+delta FORWARD VECTORS
.........>>>>>>>>>..................
.........>>>>>>>>>..................
.........>>>>>>>>>..................

ooooooooo000000XXXxxxxxxxxxxxxxxxxxx	
ooooooooo000000XXXxxxxxxxxxxxxxxxxxx
ooooooooo000000XXXxxxxxxxxxxxxxxxxxx
ooooooooo000000XXXxxxxxxxxxxxxxxxxxx
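The whole shortcut (shift each frame against its scaled vector, then average, leaving uncovered edge pixels blank) can be condensed into a short Python sketch; None stands in for the blank '.' pixels, and a real implementation would fall back to the zero-vector pixels there, as described above:

```python
def shift_and_average(frame_n, frame_nd, fwd_dx, bwd_dx):
    """One row of the shortcut: reading frame N at x + fwd_dx shifts it
    left by fwd_dx; reading frame N+delta at x + bwd_dx (negative here)
    shifts it right. The read offset equals the scaled vector's x
    component. Pixels whose source falls off the row become None."""
    w = len(frame_n)
    interp = []
    for x in range(w):
        a = frame_n[x + fwd_dx] if 0 <= x + fwd_dx < w else None
        b = frame_nd[x + bwd_dx] if 0 <= x + bwd_dx < w else None
        interp.append(None if a is None or b is None else (a + b) // 2)
    return interp

# The 36-pixel example above: forward vectors scaled to (3,0),
# backward to (-6,0):
frame_n  = [0] * 18 + [255] * 18
frame_nd = [0] * 9 + [255] * 27
interp = shift_and_average(frame_n, frame_nd, 3, -6)
# Interior matches the goal (edge at x=15); both ends stay blank.
```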