4th April 2011, 04:21   #6
Mini-Me
Registered User

Join Date: Jan 2011
Posts: 121
Quote:
Originally Posted by TheProfileth View Post
I look forward to your finished product
I'm glad to hear it!

The biggest problem right now is that it's TOO good at lining up fields. In its current form, it can't differentiate between VCR jitter and a vertically panning camera, so whenever the camera pans vertically, it lines up the fields so they match. This eliminates a lot of combing in the woven image, but running it through a bob deinterlacer (or simply separating fields) shows jerkier vertical panning. Instead of panning smoothly up or down every field, it only pans on even fields, because the odd fields get vertically aligned with the even ones. (Doing clip.Stabilize().SeparateFields().DoubleWeave() shows almost no combing on even frames and tons of combing on odd frames when the camera pans vertically.)
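To put numbers on that bias, here's a rough way to score it outside Avisynth, assuming the luma planes of the double-woven clip have already been dumped to NumPy arrays (the threshold is just a guess, and none of this is part of the script itself):

Code:
import numpy as np

def combing_score(luma, threshold=100):
    # Classic comb test: a pixel counts as combed when it differs from both
    # vertical neighbours in the same direction by more than the threshold.
    p = luma.astype(np.int32)
    up   = p[1:-1, :] - p[:-2, :]
    down = p[1:-1, :] - p[2:, :]
    return int(((up * down) > threshold * threshold).sum())

def parity_bias(frames):
    # Average combing on even vs. odd frame numbers of the double-woven clip.
    even = [combing_score(f) for f in frames[0::2]]
    odd  = [combing_score(f) for f in frames[1::2]]
    return sum(even) / len(even), sum(odd) / len(odd)

If the even average comes out near zero while the odd average stays high during vertical pans, that's the field-pairing bias showing up in numbers.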

I'm thinking I should base the algorithm on a doubleweave and, for each field, select the shift that gives the least combined combing across the two doubleweave frames that field appears in. Coding it is going to get hairy, but it probably needs to be done.
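Roughly what I have in mind, sketched in Python rather than Avisynth (field parity and edge handling are glossed over, np.roll wraps instead of padding, and combing_score() is the metric from the sketch above):

Code:
def shift_field(field, s):
    # Vertical shift by s lines; simplistic, real code would pad rather than wrap.
    return np.roll(field, s, axis=0)

def weave(top, bottom):
    # Interleave two fields into one frame.
    frame = np.empty((2 * top.shape[0], top.shape[1]), dtype=top.dtype)
    frame[0::2], frame[1::2] = top, bottom
    return frame

def best_shift(fields, i, candidates=range(-3, 4)):
    # Keep the shift that minimises combing in BOTH woven frames field i
    # appears in, i.e. paired with its previous and its next neighbour.
    def cost(s):
        f = shift_field(fields[i], s)
        c = combing_score(weave(fields[i - 1], f)) if i > 0 else 0
        if i + 1 < len(fields):
            c += combing_score(weave(f, fields[i + 1]))
        return c
    return min(candidates, key=cost)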

UPDATE: I'm working on the doubleweave solution to eliminate the field-pairing bias, but it's pretty difficult. Since each field's shift decision affects two frames in the doubleweave, it also influences the shift decisions of the other fields in those frames, which in turn affect their neighbours, and so on. Technically, I guess this amounts to something like a sparse linear system with only diagonal and near-diagonal nonzero elements in a gigantic numfields x numfields matrix. Obviously that isn't solvable in Avisynth, but hopefully I can hack together something sane.
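Something like a few relaxation sweeps might be sane enough: re-pick each field's shift against its neighbours' current shifts, Gauss-Seidel style, until nothing changes. Again just a sketch on top of the helpers above, not what I've actually coded:

Code:
def relax_shifts(fields, sweeps=3, candidates=range(-3, 4)):
    shifts = [0] * len(fields)
    for _ in range(sweeps):
        changed = False
        for i in range(len(fields)):
            def cost(s):
                # Cost of shifting field i by s, given the neighbours'
                # current shift estimates.
                f = shift_field(fields[i], s)
                c = 0
                if i > 0:
                    c += combing_score(weave(shift_field(fields[i - 1], shifts[i - 1]), f))
                if i + 1 < len(fields):
                    c += combing_score(weave(f, shift_field(fields[i + 1], shifts[i + 1])))
                return c
            s = min(candidates, key=cost)
            if s != shifts[i]:
                shifts[i], changed = s, True
        if not changed:
            break
    return shifts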

Last edited by Mini-Me; 4th April 2011 at 06:39.