Old 22nd May 2004, 15:37   #41  |  Link
avih
Capture, Deinterlace
 
avih's Avatar
 
Join Date: Feb 2002
Location: Right there
Posts: 1,971
Manao, this is sick

As some of you might know, I much prefer 50fps playback over 25fps when it comes to video (i.e. live sports events, etc.). I usually capture <something>x576 interlaced with ffdshow, with some processing thrown in, and I don't re-process my caps since I mainly use them as a PVR (playback with ffdshow, deinterlacing with a DScaler filter).

Now, I tried taking one of my encoded clips (400x576), keeping one field only, and running MVConvertFPS (0.9.2.2) to 50fps, and it's awesome. I reduced the threshold a bit (quite a bit), and I truly get a 'video-like' experience. There are still a few glitches here and there, but generally, it rocks.

I'm now thinking of capturing with ffdshow plus a deinterlacing filter (encoding 25fps 'progressive'), and then using MVConvertFPS during playback, i.e. recording full-res at 25fps and playing back interpolated 50fps. That might save quite a few bits during encoding, I think.

Although, so far, MVConvertFPS doesn't play well in ZoomPlayer (both as a direct AVS and through ffdshow with an embedded AviSynth script). VirtualDub plays it fine. I don't know the cause yet; I'll update the thread if I reach new conclusions.

cheers Manao, great work

Last edited by avih; 22nd May 2004 at 15:41.
avih is offline   Reply With Quote
Old 24th May 2004, 10:57   #42  |  Link
violao
Registered User
 
Join Date: Feb 2004
Posts: 252
While I was playing with motion vectors calculated by SearchMVs, I noticed a problem that sometimes makes the vectors found in a certain area useless. Suppose you have an object moving in front of, and near the edge of, a very dark static background area. Now suppose that beside that edge there is another static area with a brightness level similar to that of the moving object, and that the distance between the object and this similar area is less than the object's displacement between frames n-1 and n. It appears that the vector search algorithm finds this closer background area more similar to the original object than the object itself in frame n-1, so the resulting vectors point from the surrounding background to the object instead of from the object in frame n-1 to the object in frame n. Is it possible to do something about it? Perhaps by increasing the vector search space?

Another question: how is it possible to fetch vectors from higher levels? I'm now familiar with using the BlockData class after SearchMVs, but I fail to see how to get the field of blocks for higher levels. Manao?

EDIT: Fetching higher level vectors solved (I think).

Last edited by violao; 24th May 2004 at 11:35.
violao is offline   Reply With Quote
Old 24th May 2004, 12:00   #43  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
First, the easy question :
Quote:
Another question: how is it possible to fetch vectors from higher levels? I'm now familiar with using the BlockData class after SearchMVs, but I fail to see how to get the field of blocks for higher levels.
You access them by using a higher PlaneOfBlocks. If GOP is a GroupOfPlane, GOP[x] returns the xth PlaneOfBlocks, counting from the lowest, and GOP[x][i] then returns the ith BlockData of that PlaneOfBlocks.
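
As an illustration, here is a minimal standalone C++ mock of that hierarchy. Only the GOP[x][i] indexing convention is taken from the description above; every type and member below is a simplified stand-in, not the real MVTools classes.

Code:
#include <vector>
#include <cstdio>

// Simplified stand-ins, just to show the indexing convention.
struct BlockData { int vx, vy; };                  // one block's motion vector
struct PlaneOfBlocks {
    std::vector<BlockData> blocks;
    BlockData &operator[](int i) { return blocks[i]; }
};
struct GroupOfPlane {
    std::vector<PlaneOfBlocks> planes;             // index 0 = lowest level
    PlaneOfBlocks &operator[](int x) { return planes[x]; }
};

int main() {
    GroupOfPlane GOP;
    GOP.planes.resize(3);                          // three hierarchy levels
    GOP.planes[2].blocks.push_back({4, -2});       // one block on the top level
    BlockData &b = GOP[2][0];                      // GOP[x][i]: ith block of the xth plane
    std::printf("level 2, block 0: (%d, %d)\n", b.vx, b.vy);
    return 0;
}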

Now, for the other one. There are several reasons that may lead to such results; could you provide a screenshot? It would help determine whether it's a failure of the search algorithm or something inherent to block matching.

As for the principle of the search algorithm: there is no such thing as a "search space". The multilevel analysis allows fetching long vectors, while the non-exhaustive recursive search (One Time Search, Diamond Search) tries to find the local minimum around the best predictor (either the one from the multilevel analysis, or the motion vector from the surrounding blocks).

If you want an exhaustive search, in PlaneOfBlocks::SearchMVs(), replace
Quote:
blocks[i]->SearchMV(predictedMotionVectors[i],1,1,ONETIME, CURRENT);
by
Quote:
blocks[i]->SearchMV(predictedMotionVectors[i],1,radius,EXHAUSTIVE, CURRENT);
Where radius is the half-width of the search square. Also, set 'fth' to zero. But it will be very slow then. You'll then be able to know if the 'wrong' vector you're getting is due to the search algorithm or to the block matching itself.

avih : thank you for the kind words. I don't know what's wrong with ZoomPlayer, but if I had to make a guess, I would say it's the DirectShow interface.
Manao is offline   Reply With Quote
Old 24th May 2004, 14:42   #44  |  Link
violao
Registered User
 
Join Date: Feb 2004
Posts: 252
Quote:
Originally posted by Manao
If you want an exhaustive search, in PlaneOfBlocks::SearchMVs(), replace by... Where radius is the half-width of the search square. Also, set 'fth' to zero. But it will be very slow then. You'll then be able to know if the 'wrong' vector you're getting is due to the search algorithm or to the block matching itself.
Thanks. I'll try various search algorithms and will let you know what happens. Radius is given in block units, I suppose?
violao is offline   Reply With Quote
Old 24th May 2004, 14:52   #45  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
No, it's given in pixel units. The search is made over all the vectors whose coordinates don't differ from the best predictor's by more than 'radius'.

But be warned: even a radius as small as 5 implies (2·5+1)² = 121 SAD computations per block, so it will be slow.
Manao is offline   Reply With Quote
Old 25th May 2004, 09:04   #46  |  Link
violao
Registered User
 
Join Date: Feb 2004
Posts: 252
Quote:
Originally posted by Manao
But be warned: even a radius as small as 5 implies (2·5+1)² = 121 SAD computations per block, so it will be slow.
OK, I tried with a radius of up to 16 with no success. Other search methods give similar results. I suppose this clip I'm playing with violates the basic assumption, the 'intensity consistency hypothesis'. It seems that whenever an object moves between different illumination conditions (various shades and the like) there are some funny vectors around. Another complication seems to be the blurring of faster-moving objects: blurring is itself a blending of a moving object with the static background, which changes both the object's shape and its luminance.

Therefore all I need/can do is to suppress the filter processing in the areas with 'invalid' vectors. The only problem is: how to tell which vectors are invalid?
violao is offline   Reply With Quote
Old 25th May 2004, 09:24   #47  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
Quote:
The only problem is: how to tell which vectors are invalid?
If only that answer were simple...

I have implemented two measurements of validity for a vector: the SAD, and what I call 'DifferenceFromNeighbours', which is basically how much the motion vector differs from the motion vectors of the surrounding blocks.

So you have to use both of them, but it won't be easy.
Manao is offline   Reply With Quote
Old 25th May 2004, 09:36   #48  |  Link
violao
Registered User
 
Join Date: Feb 2004
Posts: 252
Quote:
Originally posted by Manao
'DifferenceFromNeighbours'...
"Sum of quadratic differences between the motion vector of the block and all its surrounding blocks' motion vectors."

The 8 surrounding blocks? Is this a difference between MV lengths?
violao is offline   Reply With Quote
Old 25th May 2004, 10:01   #49  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
No, not exactly. If the value is high, it means that the motion vector is a singularity (it doesn't belong there), because it isn't homogeneous with its neighbours. If it were a mere difference of MV lengths, the vectors could be very different and the value would still be low.

To compute it, you take the difference between the motion vector and one of its neighbours (which gives you a vector), consider its length, and sum the squares of the lengths of the differences obtained for all the surrounding blocks.
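
A hedged, self-contained C++ sketch of that computation; the MV struct and the way the neighbouring blocks are gathered are placeholders, not the actual MVTools data layout.

Code:
#include <vector>

struct MV { int x, y; };

// Sum, over the surrounding blocks, of the squared length of the difference
// between this block's motion vector and each neighbour's motion vector.
int DifferenceFromNeighbours(const MV &v, const std::vector<MV> &neighbours)
{
    int sum = 0;
    for (const MV &n : neighbours) {
        int dx = v.x - n.x;                // difference vector...
        int dy = v.y - n.y;
        sum += dx * dx + dy * dy;          // ...squared length, accumulated
    }
    return sum;                            // large value => the vector is a singularity
}

Unlike a plain difference of lengths, this also penalises vectors of similar length that point in different directions.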

For the moment, I detect scene changes using only that value, and it works (not perfectly; later I will use a combination of the SAD and this value).

Manao is offline   Reply With Quote
Old 25th May 2004, 10:55   #50  |  Link
violao
Registered User
 
Join Date: Feb 2004
Posts: 252
Quote:
Originally posted by Manao
To compute it, you take the difference between the motion vector and one of its neighbours (which gives you a vector), consider its length, and sum the squares of the lengths of the differences obtained for all the surrounding blocks.
Are you assuming here that all neighbours are 'valid'? Wouldn't a 'distance from median' be more appropriate here? It would protect against possible outlier neighbours.
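
For illustration, a hedged sketch of the median-based variant being suggested (placeholder names only, not MVTools code): the block's vector is compared against the component-wise median of its neighbours, so a single outlier neighbour cannot inflate the score.

Code:
#include <vector>
#include <algorithm>

struct MV { int x, y; };

// Median of a list of components (copies the vector on purpose).
static int Median(std::vector<int> v)
{
    if (v.empty()) return 0;
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}

// Squared distance between the block's vector and the component-wise
// median of its neighbours' vectors.
int DistanceFromMedianNeighbour(const MV &v, const std::vector<MV> &neighbours)
{
    std::vector<int> xs, ys;
    for (const MV &n : neighbours) { xs.push_back(n.x); ys.push_back(n.y); }
    int dx = v.x - Median(xs);
    int dy = v.y - Median(ys);
    return dx * dx + dy * dy;
}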
violao is offline   Reply With Quote
Old 25th May 2004, 11:24   #51  |  Link
scharfis_brain
brainless
 
scharfis_brain's Avatar
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
Manao, is it possible to create a function that returns a float value of the currently present motion?

i.e. summing up the vector lengths of the largest moving object?

This would help with detecting frame drops, skips and duplications.

It might also help with setting the field order automatically...
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
scharfis_brain is offline   Reply With Quote
Old 25th May 2004, 11:30   #52  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
violao: you're right, but since it takes more time to code, I preferred to implement the easy version, i.e. the mean. I'll put that on my TODO list.

scharfi: yes, it's easy, but what exactly do you want? The mean of the lengths of the motion vectors? The median? If not, what else?
Manao is offline   Reply With Quote
Old 25th May 2004, 11:41   #53  |  Link
scharfis_brain
brainless
 
scharfis_brain's Avatar
 
Join Date: Mar 2003
Location: Germany
Posts: 3,653
I'll try to explain

Imagine a video with constant horizontal scrolling.

Original:
Frames: A B C D E F G H I J K L

Now, frame F gets dropped:

Frames: A B C D E E G H I J K L

This means the (simplified) output of the desired function should be:

Frames: A B C D E E G H I J K L
output: 1 1 1 1 1 0 1 1 1 1 1 1

Another, weirder example:

A frame gets dropped, but isn't replaced with its predecessor; it is replaced with some earlier frame:

Frames: A B C D E A G H I J K L
output: 1 1 1 1 1-5 1 1 1 1 1 1

As you can see, the dropped frame F gets replaced with A; this means the output of my desired function should be (at least) the inverse of the previous value.

The exact method of calculating the length of those vectors seems to be irrelevant for this kind of detection, I think...

What do you think?
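
To make the idea concrete, here is a hedged standalone C++ sketch of the kind of check such a function would enable. It assumes some future feature (not present in MVTools 0.9.x) has already produced one global motion vector per frame, and the threshold is made up.

Code:
#include <vector>
#include <cstdio>

struct MV { int x, y; };

// Flag frames whose global motion deviates strongly from a steady scroll:
// ~zero motion suggests a duplicated (dropped-and-repeated) frame, while a
// much longer vector suggests a jump back to some earlier frame.
void FlagSuspectFrames(const std::vector<MV> &motion, int typicalLength)
{
    for (size_t i = 0; i < motion.size(); ++i) {
        int len2 = motion[i].x * motion[i].x + motion[i].y * motion[i].y;
        if (len2 == 0)
            std::printf("frame %zu: duplicate (no motion)\n", i);
        else if (len2 > 4 * typicalLength * typicalLength)    // crude threshold
            std::printf("frame %zu: jump / replaced frame\n", i);
    }
}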
__________________
Don't forget the 'c'!

Don't PM me for technical support, please.
scharfis_brain is offline   Reply With Quote
Old 25th May 2004, 12:00   #54  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
Alright, so you need the vector, not its length. It's not hard to do; it will be in the next release. But I wonder whether this can't already be done with the Global Motion Compensation tools made by Fizick.
Manao is offline   Reply With Quote
Old 27th May 2004, 14:52   #55  |  Link
vinetu
Registered User
 
Join Date: Oct 2001
Posts: 195
An idea for a 2-pass motion compensation process.

Let's assume the source is progressive.

1. Upsize the source 2x using some advanced resizer (Smart_resize_filter, Didee's iiP or similar).

2. Apply the best denoiser to the upsized source (I don't know which exactly)
- the target is an almost perfectly denoised and stabilized (flicker-free) picture.

3. Apply some advanced sharpening filter (the VirtualDub MSU Smart Sharpen filter, for example)
- the target is to produce an extremely high-contrast picture (looking like a synthetic one).

4. Run MVTools over that "synthetic" source, calculate the needed data and save it in a "1-pass.log" file.

5. Second pass: run it again over the original (not enhanced) source, taking the 2x scaling into account.

Is it possible in theory?


P.S. I'm not familiar with MVTools yet...

Best Regards!
vinetu is offline   Reply With Quote
Old 27th May 2004, 14:57   #56  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
It's possible in theory. No need to upscale, I think (anyway, MVTools can work at subpel precision). But it would need to be able to export motion vectors to a file, which isn't yet on the TODO list (though since it interests a lot of people, it will soon get there, I think).

Now, I don't think the results will be much better than without preprocessing.
Manao is offline   Reply With Quote
Old 27th May 2004, 15:06   #57  |  Link
vinetu
Registered User
 
Join Date: Oct 2001
Posts: 195
Quote:
Now, I don't think the results will be much better than without preprocessing.
Let's hope they will be

Thank You!
vinetu is offline   Reply With Quote
Old 27th May 2004, 22:00   #58  |  Link
Fizick
AviSynth plugger
 
Fizick's Avatar
 
Join Date: Nov 2003
Location: Russia
Posts: 2,183
Vinetu:
For a noisy source (also luma flicker, blotches, etc.), pre-cleaning should give more regular vectors. But why write the vectors to a file?

Manao:
I still hope that you will release not only forward but also full backward compensation as a general-purpose function.
Fizick is offline   Reply With Quote
Old 27th May 2004, 22:16   #59  |  Link
Manao
Registered User
 
Join Date: Jan 2002
Location: France
Posts: 2,856
I think you can have it by using the following trick :
Code:
source.duplicateframe(1).reverse().mvshow(cm = true).reverse()
(I didn't invent it; I first saw it in one of scharfi's scripts.)

That way, you can manage until the next release, in which it will be possible directly.

Edit: oh, and the "file" idea: most codecs still need two passes to work; that's why I was speaking of outputting it to a file. Anyway, as soon as I have defined a structure to store the motion vector data, it won't matter whether I output it to a file or to an array for other filters to use (i.e. client/server, as you're doing with DePan).

Last edited by Manao; 27th May 2004 at 22:20.
Manao is offline   Reply With Quote
Old 27th May 2004, 23:22   #60  |  Link
vinetu
Registered User
 
Join Date: Oct 2001
Posts: 195
Quote:
...But why write the vectors to a file?
Finally, I think the only thing necessary to improve motion vector detection is to split the input video stream in two: one for processing and another for motion vector analysis. This second stream could be resized, denoised, sharpened, luma/chroma stabilized and even slightly blurred... I'll soon try whether this "preprocessing" can help.

So generally there is no need for two passes...

But now I see another useful trick: in the first pass you could load a completely different "reference" file and save MVectors.log, and then in the second pass load the "video" file and process it in relation to the "reference" file...

I don't know yet where this could be used, but it sounds intriguing for special-FX compositing...

Best Regards!
vinetu is offline   Reply With Quote