#81 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,648
btw: for standards conversion of interlaced SDTV (50i and 60i) a deinterlacer needs to be applied first, mainly for these reasons:
1) scaling the fields between 480 and 576 lines while preserving static areas
2) doing a good motion vector search
3) getting material to replace MoComp mismatches
4) making your life easier
__________________
Don't forget the 'c'! Don't PM me for technical support, please.
#82 | Link | |
Registered User
Join Date: Sep 2005
Posts: 90
Quote:
#83 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
hmm. motion-compensating a new field for deinterlacing doesn't _necessarily_ need a bob to start with - whatever is performing the motion compensation simply needs to be aware of the half-pixel shift between fields and take that into account. MVtools doesn't do this (why should it... it's not a deinterlacer after all), so we have to feed it with a smart bobbed clip where the half-pixel shifts are minimised as much as possible. and as scharfi says, it makes things easier because you can just re-use the smartbobbed clip to fill areas where motion compensation has produced artefacts.
[edit] @ castellandw: mvflow still works with blocks, but it moves each pixel when compensating. it just interpolates the vector field made from the block-matching process. this can really help with angular motion and even rotation though, so it's still got an edge over moving full blocks. though for interpolation, i think forward+backward with OBMC and a blocksize of 4 would be nearly perfect as well.
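to make the "feed it a smart-bobbed clip" idea concrete, here's a minimal sketch using the FindMotion/Compensate filters from this thread. TDeint is just one possible smart bob (any bobber you trust will do), and the source line is a placeholder:

Code:
AVISource("clip.avi")      # placeholder source
AssumeTFF()                # set your real field order
TDeint(mode = 1)           # smart bob: 50i/60i -> 50p/60p, minimises the half-pixel field shift
Compensate(FindMotion())   # the motion search now sees a progressive clip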
__________________
sucking the life out of your videos since 2004
Last edited by Mug Funky; 3rd November 2005 at 03:58.
#84 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,648
when compensating fields directly, one doesn't achieve good subpixel accuracy.
When I use an ELA based smart bob instead, I get much better subpixel precision thanks to the ELA interpolation (less jaggedness).
__________________
Don't forget the 'c'! Don't PM me for technical support, please.
#85 | Link | |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Quote:
As for 1.: Once the motion compensation is in play, you simply have a vertical slice with (approximately) known values at irregular intervals, and you want the values at particular points... AFAICS, the number and spacing of those points doesn't matter... so I think you can do the resampling at the same time as the interpolation. (And use your favourite resampling method for the interpolation.)

4. Yours or mine?

Edit: I don't know how much you will be able to rely on the subpixel accuracy of the true motion algorithm, even on progressive content, anyway... if it looks like being a problem, do tell me, because there are quality/speed trade-offs that can be made.

Last edited by mg262; 3rd November 2005 at 10:50.
#86 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
Motion, 3 November 2005
Most of the changes are under the hood to speed things up. Half-pixel accuracy is now in assembly, and works about as fast as the integer version did before (~200 FPS). Half-pixel and assembly are now used by default.

Bidirectional estimation/compensation is available like this:

FindMotion(from = previous) or FindMotion(from = next)
Compensate(source = previous) or Compensate(source = next)

Both default to previous. So standard compensation (moving parts of frame n-1 to replicate frame n) is still done like this:

Code:
#AVISource...
Compensate(FindMotion())

Backwards compensation (moving parts of frame n+1 to replicate frame n) looks like this:

Code:
#AVISource...
Compensate(FindMotion(from = next), source = next)

Here is an example interleave script for motion compensated denoising:

Code:
#AVISource...
Interleave(\
 Compensate(FindMotion(from = previous, initialise = 4), source = previous),\
 last,\
 Compensate(FindMotion(from = next, initialise = 4), source = next))
#temporal smoother, radius 1
SelectEvery(3, 1)
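If you want something that runs as-is, one way to fill in the smoother placeholder is AviSynth's built-in TemporalSoften with radius 1 (the source line and the threshold values below are just placeholders, not recommendations):

Code:
AVISource("clip.avi")   # placeholder source
Interleave(\
 Compensate(FindMotion(from = previous, initialise = 4), source = previous),\
 last,\
 Compensate(FindMotion(from = next, initialise = 4), source = next))
TemporalSoften(1, 4, 8, 15, 2)   # radius-1 temporal smoother stand-in
SelectEvery(3, 1)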
#87 | Link |
Registered User
Join Date: Sep 2005
Posts: 90
mg262, it would be nice if you packaged a help file for all the functions along with the DLL next time around, because it's starting to get messy digging back through this long thread for the details.
#88 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
yeeee! will try it immediately!
[edit] backward prediction seems to be compensating the wrong frame? try:

back = compensate(last.deleteframe(0).findmotion(reset=100, from=next))
subtract(back, last)
__________________
sucking the life out of your videos since 2004
Last edited by Mug Funky; 3rd November 2005 at 14:23.
#89 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
@castellandw,
I thought about recapping all the function arguments, but realised there weren't any important arguments that weren't mentioned in the above post. So:

FindMotion(clip, ...)
  from = previous/next
  int reset (default 0)
  int initialise (default 1)*

Motion estimation usually relies on estimated motion vectors for the previous frame, but if e.g. reset = 50 then motion vectors are calculated from scratch every 50 frames. Increasing initialise increases the accuracy of vectors calculated from scratch.

*There are more arguments, but they are IMO no longer useful except possibly for debugging... just there to avoid breaking scripts for the moment.

Compensate(clip, clip motion, [source = previous/next])
DrawMotion(clip, clip motion)

Example scripts for the first two are above; view motion like this:

#source
DrawMotion(FindMotion())

#source
DrawMotion(FindMotion(from = next))

Edit: I have put together a filter recap, below, and I will update and relink with each substantial change. But I don't want to suggest that reading this is enough to jump into the discussion... the whole point of starting this thread was to obtain feedback to direct the development (and I have been given plenty of expert feedback -- thanks, guys!)

Last edited by mg262; 3rd November 2005 at 15:54.
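For example, a quick sketch of reset and initialise used together (the source line and the numbers are placeholders, not recommendations):

Code:
src = AVISource("clip.avi")                       # placeholder source
vec = src.FindMotion(reset = 50, initialise = 4)  # vectors re-seeded from scratch every 50 frames
src.Compensate(vec)                               # same as Compensate(src, vec)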
#90 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,393
Regarding artefacts in compensated frames:

>> Repair( compensated, reference, 4 )

is not the solution. But it's part of it.
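A rough sketch of how that could slot in, assuming Repair from the RemoveGrain package is loaded (the source line is a placeholder):

Code:
src  = AVISource("clip.avi")               # placeholder source
comp = src.Compensate(src.FindMotion())    # motion-compensated frame
Repair(comp, src, 4)                       # clamp compensation artefacts against the original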
__________________
- We're at the beginning of the end of mankind's childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
#91 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
@Mug Funky,
I'm a bit confused... your script produces something that shows a big difference between the two clips, but on the other hand I can't tell what it is meant to do. You can't, AFAIK, use backwards-estimated information for forward compensation, for the following reason: there may be blocks in frame n+1 that don't map to any block in frame n, so it's not clear what you should put in for those blocks.

To make that more precise: findmotion(reset=100, from=next) will find, for each block in frame 30, the block in frame 31 which is most similar to it (give or take mumbling about true motion). Now compensate(findmotion(reset=100, from=next), source = next) will replace each block in frame 30 with the block from frame 31 which is most similar to it. But, given the information from findmotion(reset=100, from=next), there is no way to replace blocks in frame 31 using blocks from frame 30... because we just don't have the right information. Given a block in frame 31, we have no easy way to calculate which block from frame 30 to use.

Does that answer your question? Or have I got the wrong end of the stick completely?

Edit: there was a silly bug in the initialisation code... you won't see any difference in the results, but please grab this again anyway: Motion, 3 November 2005 (upload at 14:06)

Last edited by mg262; 3rd November 2005 at 15:10.
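To spell the mismatch out in script form (a simplified illustration of the point above, not exactly the script from post #88):

Code:
src = AVISource("clip.avi")                                            # placeholder source

# direction and source agree: vectors towards frame n+1, blocks pulled from frame n+1
back_ok  = src.Compensate(src.FindMotion(from = next), source = next)

# direction and source disagree: vectors towards frame n+1, but blocks pulled from the
# previous frame (the default source) -- the information simply doesn't line up
back_bad = src.Compensate(src.FindMotion(from = next))

Subtract(back_ok, src)   # inspect how well the consistent version matches the original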
#92 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
aaah, i missed the "source=next" thing. sorry. works a treat now.

thanks for your work. this is looking great.

[edit] btw, how do i turn on hpel (half-pel)? i can't seem to see it in the options posted here (am i blind?)
__________________
sucking the life out of your videos since 2004
Last edited by Mug Funky; 3rd November 2005 at 15:11.
#93 | Link |
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
It's on by default. You can use FindMotion(subpixel = false) to turn it off... but it's fast enough that I can't think of much reason to turn it off? (Similarly, if for some reason you want to switch off the assembly you can use useassembly = false.)
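For instance (the source line is a placeholder; useassembly = false can be passed the same way if you want the plain C path):

Code:
AVISource("clip.avi")                      # placeholder source
Compensate(FindMotion(subpixel = false))   # integer-pel search (half-pel is the default)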
Edit: This is not documentation, in the sense that reading it isn't a proper substitute for the discussion in this thread, but it is a recap of all the filter options to save you having to look back:

Basic Usage and Filter Options
http://people.pwf.cam.ac.uk/mg262/po...on%20recap.txt

I agree this thread has become too long to digest... plus, looking back, large parts no longer apply, e.g. thoughts on implementing backwards compensation. So if no one has any objections, I may start a new thread at some point when there is a new version, and repeat the main requests, etc.?

Incidentally, to let you write standards conversion scripts (your way), what do I need to implement? Just a progressive ConvertFPS function?

Last edited by mg262; 3rd November 2005 at 19:23.
#94 | Link | |
Registered User
Join Date: Sep 2005
Posts: 90
Quote:
Last edited by castellandw; 3rd November 2005 at 20:03.
#95 | Link |
brainless
Join Date: Mar 2003
Location: Germany
Posts: 3,648
@mg262: Hmm. I seem to have contradicted myself. (I only had a few minutes before I had to leave for school.)

What I have in mind, because it 'might' be a more reliable way to find motion vectors: search for motion on the separated fields, but create a vector stream (stable for static areas) that can then be applied to the smart-bobbed clip. But only implement it if it gives benefits over the current approach (smart-bobbing and then searching for motion).

The fps conversion itself only needs to be progressive; re-interlacing can be done easily afterwards.
__________________
Don't forget the 'c'! Don't PM me for technical support, please.
#96 | Link | |
Registered User
Join Date: Sep 2005
Posts: 90
Quote:
source("blah.xxx") assume?ff() vectors=separatefields().analyse(interlaced=true) compensated=somesmartbob().mvcompensate(vectors) |
#98 | Link | |||
Clouded
Join Date: Jul 2003
Location: Cambridge, UK
Posts: 1,148
castellandw,
Quote:
Quote:
Quote:
I don't mean to be rude... but I think you need to go and absorb a lot more about a) the nature and strengths of AVISynth and b) the uses of motion compensation (which is not an end in itself, but a means to other, specific, ends). I'm not saying this based on a single comment, but rather on the general nature of your posts; there is an implied context behind messages in this thread which you are often missing. Also, second-hand comments (aka argument from authority) are not terribly useful, not because they are right/wrong, but because we need the reasoning, intuition and often technical details that underlie them.

Please don't take this the wrong way... I don't mean to have a go at you. But I do think you need to learn to walk before you can run.

scharfis_brain,

Your script was very clear and I understood it; rather, I should have said that I didn't understand point 2 (higher-quality analysis possible after bobbing) coming after the first request (analyse and then bob). The situation is now clear. I think that, as you imply, we will have to implement both and see which is better.

Incidentally, it seems perfectly legal to upsample any clip (interlaced or progressive) to improve the quality of the motion vectors found. (This is just Mug Funky's supersampling again.)
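For what it's worth, a minimal sketch of that supersampling idea, assuming you analyse and compensate at the larger size and scale back down afterwards (the source, the resizer and the factor of 2 are placeholders):

Code:
src = AVISource("clip.avi")                                  # placeholder source
big = src.LanczosResize(src.Width() * 2, src.Height() * 2)   # 2x supersample before analysis
big.Compensate(big.FindMotion())                             # search/compensate on the upsampled clip
LanczosResize(src.Width(), src.Height())                     # back to the original size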
#99 | Link | |||
Registered User
Join Date: Sep 2005
Posts: 90
Quote:
Quote:
Quote:
#100 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
motion estimation and interpolation are meant as a replacement for convertfps. running convertfps after compensation (compensating what?) would be counter-productive. much better to create new frames using motion interpolation than with the blending in convertfps (which for lack of a better word could be called temporal interpolation, but that's an ambiguous term).

i'm totally looking forward to fast motion-compensated standards conversion in avisynth - right now i'm doing a straight smart-bob > blend > resize > re-interlace chain. this is as good as most regular standards converters and i'm happy with it, but if there's a way to improve something i'm always up for it.
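for reference, a sketch of that smart-bob > blend > resize > re-interlace chain in the PAL-to-NTSC direction (the source line and the particular bob are placeholders; use whatever you normally use):

Code:
MPEG2Source("source.d2v")        # placeholder 576i50 source
AssumeTFF()                      # set your real field order
TDeint(mode = 1)                 # smart bob: 50i -> 50p
ConvertFPS(59.94)                # blend to 59.94p
LanczosResize(720, 480)          # PAL -> NTSC frame size
AssumeTFF()
SeparateFields().SelectEvery(4, 0, 3).Weave()   # re-interlace to 29.97i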
__________________
sucking the life out of your videos since 2004