11th May 2004, 12:52 | #1 | Link |
Join Date: Jan 2002
Posts: 1,729
|
Motion Compensation
Manao has taken the bold step of starting on motion compensation filters. I would like to use this thread for two things: for users like me to present ideas, and for developers like Manao to explain the practical side, what is possible, how hard it is, etc. And of course, for developers to share ideas on the implementation of these filters.
This morning I started to play with MVTools, and am currently encoding a supersampled (sorry, couldn't help myself ) motion blur, to see how it looks in realtime. I am impressed by the results so far. |
11th May 2004, 13:08 | #2 | Link | |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,370
|
11th May 2004, 13:15 | #3 | Link |
Registered User
Join Date: Jun 2002
Posts: 416
|
The problem was discussed before, maybe in the usage forum. Found it: http://forum.doom9.org/showthread.php?threadid=72895
If I remember correctly the aim was different, but the approach was similar... |
11th May 2004, 17:25 | #4 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
i'm attempting a motion-compensated deinterlacer à la TomsMoComp.
not getting good results yet, but there's a glimmer of nice coming (pans are leaving no combing or stairstepping after just 4 mins of playing) [edit] argh! where's my brain? this is tough.
__________________
sucking the life out of your videos since 2004 Last edited by Mug Funky; 11th May 2004 at 17:51. |
11th May 2004, 23:07 | #5 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,654
|
Do you mean this:
Code:
function mvdeint(clip x, float "moblur") {
  mbl = default(moblur, 0.1)
  x.separatefields()
  evn0 = selecteven().converttoyv12()
  odd0 = selectodd().converttoyv12()
  # compensate the even field to the temporal position of the odd field
  evn1a = evn0.mvinterpolate(nb=5, bl=0.5-mbl, el=0.5+mbl, wf="hat")
  evn1b = evn0.duplicateframe(1).reverse().mvinterpolate(nb=5, bl=0.5-mbl, el=0.5+mbl, wf="hat").reverse()
  evn1  = overlay(evn1a, evn1b, opacity=0.5)
  # compensate the odd field to the temporal position of the even field
  odd1a = odd0.mvinterpolate(nb=5, bl=0.5-mbl, el=0.5+mbl, wf="hat")
  odd1b = odd0.duplicateframe(1).reverse().mvinterpolate(nb=5, bl=0.5-mbl, el=0.5+mbl, wf="hat").reverse()
  odd1  = overlay(odd1a, odd1b, opacity=0.5)
  # chain original and compensated fields
  evn2 = interleave(evn1, evn0).trim(1, 0)
  odd2 = interleave(odd1, odd0)
  # reinterlace the stream
  interleave(evn2, odd2)
  weave()
}
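The interleave/trim step is the subtle part of the script above. As a sanity check, here is a small Python simulation of the frame ordering it produces; `interleave` and `trim` are plain Python stand-ins for the AviSynth functions of the same names, and the frame labels are invented for illustration:

```python
# Frame-index simulation of the interleave/trim step in mvdeint.
# evn0/odd0 are the original even/odd fields; evn1/odd1 are their
# motion-compensated counterparts shifted half a frame in time.

def interleave(a, b):
    """AviSynth-style Interleave: a[0], b[0], a[1], b[1], ..."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

def trim(clip, first):
    """AviSynth-style Trim(first, 0): drop the first `first` frames."""
    return clip[first:]

n = 4
evn0 = [f"E{i}"  for i in range(n)]   # original even fields
evn1 = [f"E{i}'" for i in range(n)]   # even fields compensated in time
odd0 = [f"O{i}"  for i in range(n)]   # original odd fields
odd1 = [f"O{i}'" for i in range(n)]   # odd fields compensated in time

evn2 = trim(interleave(evn1, evn0), 1)  # E0, E1', E1, E2', ...
odd2 = interleave(odd1, odd0)           # O0', O0, O1', O1, ...

print(evn2)
print(odd2)
```

The trim by one frame is what offsets the even chain against the odd chain, so the final `interleave`/`weave` pairs each original field with a compensated partner at the same temporal position.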
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
11th May 2004, 23:48 | #6 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
|
Wilbert : it can be done, but it's not very practical with Avisynth ( no easy way to define where the face is at the beginning, since there is no GUI )
@all : I'm waiting for your ideas, mainly on how to treat the uncovering of areas, and how to use motion vectors to decide whether there is a scene change ( I have access to their lengths and the SAD associated with them ). Right now, vectors have to point inside the frame ( which means that a block that moves toward the border gets a wrong motion vector ). I'll try to change that, but it's not an easy thing. |
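Since the per-block vector lengths and SADs are available, one possible scene-change heuristic is simply "most blocks match poorly". A toy Python sketch (thresholds and data are invented illustrative values, not anything from MVTools):

```python
# Sketch of a scene-change decision from per-block motion data.
# Each block carries (vector_length, sad); if most blocks match
# poorly, declare a scene change.

def is_scene_change(blocks, sad_thresh=1000, bad_frac=0.7):
    """blocks: list of (vector_length, sad) tuples for one frame pair."""
    bad = sum(1 for _, sad in blocks if sad > sad_thresh)
    return bad / len(blocks) > bad_frac

# a well-predicted frame: small SADs everywhere
still = [(1.0, 120), (0.5, 90), (2.0, 300), (0.0, 60)]
# a cut: every block matches poorly regardless of the vector found
cut   = [(14.0, 4800), (9.0, 5200), (11.0, 6100), (7.0, 4500)]

print(is_scene_change(still))  # False
print(is_scene_change(cut))    # True
```

A real filter would likely normalize the SAD threshold by block size and luma range, but the yes/no structure would stay the same.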
12th May 2004, 00:03 | #7 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,654
|
I'm waiting for your ideas, mainly on how to treat uncovering of areas
Ideas for working out hidden areas. Try to imagine a static background, maybe a landscape. Now a small object drives between camera and background from left to right. Every frame, the object moves from left to right, and you have to replace the vacated space with the background (this is the way our brain works). You could do that with a motion mask that says: "this area was static a frame before / is static the next frame".
Now imagine the background is panning (direction irrelevant, for now). A normal motion mask will fail, BUT a motion-compensated motion mask will work! Compensate, let's say, 5 frames to the same temporal position, build a motion mask, and figure out the areas that have gone static within those 5 frames. Those pseudostatic areas can then be used to fill the uncovered areas.
I hope that wasn't too irritating, trying to make the best out of my crappy english knowledge
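The idea above can be sketched in a few lines of Python on a toy 1-D "frame" (real filters work on 2-D planes; the function names and luma values here are invented for illustration):

```python
# Toy 1-D sketch of the motion-compensated motion mask: after
# compensating several frames to the same temporal position, pixels
# that stay (nearly) equal across all of them form a "pseudostatic"
# mask, usable to fill uncovered areas from the background.

def static_mask(compensated_frames, thresh=3):
    """1 where a pixel is stable across all compensated frames, else 0."""
    n = len(compensated_frames[0])
    mask = []
    for i in range(n):
        vals = [f[i] for f in compensated_frames]
        mask.append(1 if max(vals) - min(vals) <= thresh else 0)
    return mask

def fill_uncovered(frame, background, mask):
    """Replace pixels flagged by `mask` with the accumulated background."""
    return [b if m else f for f, b, m in zip(frame, background, mask)]

# five frames compensated to the same instant: a static background of
# ~50, except pixel 2, where a moving object passes (wild values)
frames = [
    [50, 50, 200, 50],
    [50, 51,  10, 50],
    [50, 50, 130, 50],
    [49, 50,  75, 50],
    [50, 50, 220, 50],
]
print(static_mask(frames))  # pixel 2 is the unstable, covered/uncovered one
```

Inverting that mask marks exactly the area the object is sweeping over, which is where the pseudostatic background would be pasted in.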
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
12th May 2004, 06:50 | #8 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
|
Scharfi:
that's very cool indeed. bit of a dud on scenechanges i'm afraid: i get 2 combed frames for 1 uncombed scenechange.
__________________
sucking the life out of your videos since 2004 |
12th May 2004, 09:21 | #9 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,654
|
this mvdeint puts out the naked, uncorrected motion-compensated video.
correcting that will require using masks. A lot of masks. But I do not really like the output: not even a whole frame is compensated correctly. there is still stairstepping, much more than with kerneldeint.
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
12th May 2004, 09:43 | #10 | Link | |
Moderator
Join Date: Nov 2001
Location: Netherlands
Posts: 6,370
|
12th May 2004, 10:06 | #11 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
|
Stair stepping is to be expected: the motion across three consecutive frames isn't exactly constant, so the interpolation created from frames 1 and 3 won't be exactly aligned with frame 2.
The method should be the following:
- separate the fields
- align the fields ( moving the second field half a pixel up or down, if needed )
- compute the motion
- motion-compensate odd frames onto even frames, and merge a spatial interpolation with the result of the motion compensation
It can be done using only Avisynth scripting, but I can't implement it today.
Wilbert : I do not need the edges, a circle may be enough. But it will have to fit rather well ( ratio of the area of the face to the area of the disk close to 1 ), in order to follow the face closely ( it will even be able to discard blocks inside the circle that don't belong to the face ). I provide the source code with the filter; there is a class GenericMotionFilter from which all my filters inherit. You could code it yourself, if you can get my code to compile properly. Fetching the motion vectors is quite easy ( look at MVBlur, for example ), and then the algorithm would be to take the mean of the motion vectors inside the circle, and then decide that all blocks whose vector is close to the mean are inside the face. You know where these blocks went, since you have their motion vectors. So now you have a set of blocks which should overlay the face on the next frame. You repeat the algorithm and it should work. |
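One step of the face-tracking algorithm described above (mean of the vectors inside the circle, then keep the blocks close to the mean) can be sketched as follows; the function name, tolerance, and block data are invented for illustration:

```python
# One tracking step: average the motion vectors of the blocks inside
# the circle, then keep only blocks whose vector is close to that mean.
# Vectors are (dx, dy) per block, keyed by block position.

def track_step(block_vectors, tol=2.0):
    """block_vectors: {block_pos: (dx, dy)} for blocks inside the circle.
    Returns (mean_vector, set of positions judged to belong to the face)."""
    n = len(block_vectors)
    mx = sum(v[0] for v in block_vectors.values()) / n
    my = sum(v[1] for v in block_vectors.values()) / n
    face = {p for p, (dx, dy) in block_vectors.items()
            if abs(dx - mx) <= tol and abs(dy - my) <= tol}
    return (mx, my), face

# four blocks inside the circle: three move with the face, one is background
vecs = {(0, 0): (5.0, 1.0), (1, 0): (5.5, 0.5),
        (0, 1): (4.5, 1.5), (1, 1): (0.0, 0.0)}
(mx, my), face = track_step(vecs)
print((mx, my), sorted(face))  # the (1, 1) background block is rejected
```

The surviving blocks, displaced by their own vectors, give the set of blocks to examine on the next frame, and the step repeats.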
12th May 2004, 10:35 | #12 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,654
|
Oh, I think scripting this within avisynth on my 500MHz machine will be pain...
I do not have access to the faster one in the next few days. but: what about a motioncompensated 60i to 24p conversion? Code:
function mvconvert60ito24p(clip x, int "mode") {
  mode = default(mode, 2)
  mbl = 0.1
  ya = x.mvinterpolate(nb=4, bl=0.5-mbl, el=0.5+mbl, wf="hat")
  yb = x.duplicateframe(1).reverse().mvinterpolate(nb=4, bl=0.5-mbl, el=0.5+mbl, wf="hat").reverse()
  y  = overlay(ya, yb, opacity=0.5)
  interleave(y, x)
  mode0 = selectevery(5, 2)
  mode1 = overlay(selectevery(5, 3), selectevery(5, 2), opacity=0.5)
  mode2 = overlay(overlay(selectevery(5, 1), selectevery(5, 3), opacity=0.5), selectevery(5, 2), opacity=0.3)
  mode3 = overlay(overlay(selectevery(5, 0), selectevery(5, 3), opacity=0.5), overlay(selectevery(5, 1), selectevery(5, 2), opacity=0.5), opacity=0.5)
  (mode==0) ? mode0 : (mode==1) ? mode1 : (mode==2) ? mode2 : mode3
}
Simulated shutter speeds:
mode0: 1/60 sec
mode1: 1/40 sec
mode2: 1/30 sec
mode3: 1/24 sec
If a 1/120 sec shutter has been used while shooting, you'll get these simulated shutter speeds:
mode0: 1/120 sec
mode1: 1/60 sec
mode2: 1/40 sec
mode3: 1/30 sec
the higher the mode (max 3), the fewer the mv-artifacts
have fun.
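The per-frame weights behind those simulated shutter speeds can be worked out from the nested Overlay calls. A small Python check, modelling Overlay as the plain linear blend `a*(1-opacity) + b*opacity` (which is what it computes for full-frame overlays); the frame keys are the SelectEvery(5, n) offsets from the script:

```python
# Work out the effective per-frame blend weights of each mode.
# A "clip" here is just {frame_offset: weight}.

def overlay(a, b, opacity):
    """Overlay(a, b, opacity): result = a*(1-opacity) + b*opacity."""
    out = {}
    for k, w in a.items():
        out[k] = out.get(k, 0) + w * (1 - opacity)
    for k, w in b.items():
        out[k] = out.get(k, 0) + w * opacity
    return out

f = {n: {n: 1.0} for n in range(4)}  # pure frames at offsets 0..3

mode1 = overlay(f[3], f[2], 0.5)
mode2 = overlay(overlay(f[1], f[3], 0.5), f[2], 0.3)
mode3 = overlay(overlay(f[0], f[3], 0.5), overlay(f[1], f[2], 0.5), 0.5)

print(mode1)  # offsets 2 and 3 at 0.5 each
print(mode2)  # offsets 1 and 3 at 0.35, offset 2 at 0.3
print(mode3)  # all four offsets at 0.25
```

So mode3 really does average four frames equally, spanning the longest time window and hence the slowest simulated shutter; mode0 passes one frame through untouched.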
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
12th May 2004, 15:23 | #13 | Link |
Retired, but still around
Join Date: Oct 2001
Location: Lone Star
Posts: 3,058
|
scharfis_brain, the script above would require kernelbob(7), correct? Like -
avisource("G:\final.avi").converttoyv12(interlaced=true)
kernelbob(7)
mvconvert60ito24p()
/Add: Maybe you could start a thread in Avisynth Usage to discuss?
__________________
How to Optimize Bitrate for CCE multipass Last edited by DDogg; 12th May 2004 at 15:53. |
13th May 2004, 16:06 | #15 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
|
Scharfi : concerning the processing of uncovered areas, I would prefer it not to involve more than two frames ( because searching for motion vectors is still a rather slow process, and I would prefer the filter to run at a few fps, not some mfps ).
Now, for the MVConvertFPS, I got an idea which works quite nicely. As you have noticed, the interpolated frames have quite a lot of artifacts, because the moving blocks don't tile the frame well. However, if I compute motion vectors on 8x8 blocks, nothing forbids me from moving 12x12 blocks, or even 16x16 ones. That trick performs very well, and allows some artifact removal. It now needs the SAD decision, as well as vectors able to point outside of the frame, and then it will become usable.
Wilbert : I gave face tracking a try. It doesn't work well on close-ups, because the face then doesn't have a uniform movement. I'll try it on other things; it may prove better.
@all : I'll try to make good documentation on how to use the API I made for fetching the motion vectors. I don't know what you need in order to make a filter without having to compile my code : obj files, headers, and what else ? |
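The overlapped-block trick (estimate vectors on small blocks, but move larger overlapping blocks and average where they overlap) can be illustrated on a toy 1-D signal; all names and sizes here are invented for the sketch, not MVTools internals:

```python
# 1-D sketch of overlapped-block motion compensation: vectors are
# estimated per `block`-sized block, but `big`-sized blocks centered
# on them are copied, and overlapping contributions are averaged,
# which hides block-edge artifacts.

def compensate(src, vectors, block=4, big=6):
    """vectors[i] is the integer shift of small block i."""
    n = len(src)
    acc = [0.0] * n
    cnt = [0] * n
    pad = (big - block) // 2
    for i, v in enumerate(vectors):
        dst0 = i * block - pad             # where the big block lands
        for j in range(big):
            dst = dst0 + j
            srcpos = dst0 + j + v          # source pixel, shifted by the vector
            if 0 <= dst < n and 0 <= srcpos < n:
                acc[dst] += src[srcpos]
                cnt[dst] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]

src = [float(x) for x in range(16)]        # a simple ramp
out = compensate(src, vectors=[2, 2, 2, 2])  # uniform shift of 2
print(out)
```

With a uniform shift the overlaps agree and averaging changes nothing; when neighbouring vectors disagree, the averaged overlap region smooths the seam instead of leaving a hard block edge. Note the zeros at the end: the vector points outside the frame there, which is exactly the border problem mentioned above.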
13th May 2004, 22:31 | #16 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,654
|
Manao: what about a server/client structure?
a filter analyzes the stream and gives back a data stream with all the motion vectors and block sizes needed to do a compensation. the other filter will then do the compensation. this would allow us to write our own postprocessing routines using avisynth's masking... I am thinking about a structure similar to Fizick's depan().
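A hypothetical sketch of what the "server" side's data stream could look like; every field name here is invented for illustration, and the flat-int encoding just mimics how depan() smuggles data through a helper clip:

```python
# Hypothetical per-block record emitted by an analysis filter and
# consumed by a separate compensation/masking filter.

from dataclasses import dataclass

@dataclass
class BlockVector:
    x: int        # block position in the frame, in block units
    y: int
    dx: int       # motion vector, in pixels
    dy: int
    sad: int      # matching error for this block
    size: int     # block size in pixels (e.g. 8 or 16)

def serialize(vectors):
    """Pack one frame's vectors into a flat int list."""
    out = []
    for v in vectors:
        out.extend([v.x, v.y, v.dx, v.dy, v.sad, v.size])
    return out

def deserialize(data):
    return [BlockVector(*data[i:i + 6]) for i in range(0, len(data), 6)]

frame = [BlockVector(0, 0, 3, -1, 240, 8), BlockVector(1, 0, 3, 0, 180, 8)]
assert deserialize(serialize(frame)) == frame
```

The appeal of such a split is that the expensive search runs once, while any number of cheap client filters (compensation, masks, scene-change logic) reuse the same vector stream.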
__________________
Don't forget the 'c'! Don't PM me for technical support, please. |
13th May 2004, 23:03 | #17 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
|
Scharfi : whatever the method ( client / server or using the API ), you still have to write a filter in order to use the motion vectors. Then, the postprocessing needs to know what the filter did, so it has to be included with the filter.
It would require the same amount of work for an outside programmer to use the API or to fetch the stream of vectors. So for the moment, it will stay like this. Later, if I see that some filters could be chained without interfering, and that they would both use the motion vectors from the original clip, I will implement the client / server solution. |