Registered User
Join Date: May 2005
Location: Germany
Posts: 495
Oh, sorry, here is the text:
Code:
Before I explain how these features should work, here are four
points describing what the material must look like for the
feature to work:
1. If 1, 2, 3... are the frames, a1, a2, a3... are the
top fields and b1, b2, b3... are the bottom fields, there
shouldn't be a match between b2 and a1 (for top-field first),
because b2 would step backwards in motion (no floating pictures).
2. A blended field should lie between two clean fields
(the two source fields it was blended from), because
otherwise it wouldn't be possible to detect the blend.
3. If a 0-step means the step between two matching fields,
a 1-step is the step of a whole frame and a 0.5-step is
the step between a clean and a blended field, there
shouldn't be a 1.5-step in the source (because of point 2).
4. Between two matches (0-steps) there should be at most one
full step (1-step) between two fields, otherwise
audio and video would drift out of sync.
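To make these constraints concrete, here is a small toy model (my own illustration, not part of any filter): each field is tagged with the time of the picture it shows, a blend gets a half position (e.g. 1.5 for a blend of pictures 1 and 2), and the rules become checks on the step sizes between consecutive fields. Rule 2 needs content analysis, so it is not modelled here; the function name `check_material` is made up.

```python
def check_material(times):
    """Return a list of violated rules for a sequence of field times."""
    problems = []
    steps = [b - a for a, b in zip(times, times[1:])]
    for i, s in enumerate(steps):
        if s < 0:                       # point 1: no stepping backwards
            problems.append(f"field {i+1} steps back (floating picture)")
        if s == 1.5:                    # point 3: no 1.5-steps
            problems.append(f"1.5-step between fields {i} and {i+1}")
    # point 4: between two 0-steps there may be at most one full 1-step
    ones = 0
    for s in steps:
        if s == 0:
            ones = 0
        elif s == 1:
            ones += 1
            if ones > 1:
                problems.append("more than one 1-step between matches")
    return problems

# a clean run: pairs of matching fields, each one picture apart
print(check_material([1, 1, 2, 2, 3, 3]))   # -> []
# a blend (2.5) with no clean field next to it produces 1.5-steps
print(check_material([1, 1, 2.5, 4, 4]))
```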
Ok, somebody could say that these four points don't all hold
for his source, but for such a source restore24 wouldn't work
correctly either.
Now I will try to explain how this noblend feature could work
(for top-field first).
All possible matches in a source can be found just by trying
to match the top field (or bottom field, depending on the
parity) with the previous and the next field. All other match
searches should be unnecessary (so TFM(mode=0) should be
enough; I don't know what the 4th and 5th ways are for, maybe
as a debugging aid for frames with wrong parity, but they
should normally be useless). But mode=0 is not enough for
badly telecined material. Let me explain:
If frame 1 gives a c-match, a2 could be a blended field. In
this case it would be good to also look for a match between
b2 and the next field.
    a1  a2(b)  a3  a4 ...
         |    /
         |   /
    b1  b2   b3  b4 ...
    match:  c    u
But this is only useful after a c-match (not after a p-match).
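A rough sketch of this matching logic (a toy model of my own, not TFM's actual code; the names `comb` and `best_match` are made up): fields are 1-D lists of samples, a candidate match is scored by the combing of the woven frame, and the u-candidate is only tried after a c-match.

```python
def comb(top, bottom):
    """Combing score of weaving two fields together (lower = better match)."""
    woven = [v for pair in zip(top, bottom) for v in pair]
    return sum(abs(woven[i + 1] - woven[i]) for i in range(len(woven) - 1))

def best_match(prev_bot, cur_top, cur_bot, next_top, last_was_c):
    """Try the c- and p-matches; after a c-match also try the u-match
    (weaving the current bottom field with the next top field)."""
    candidates = {
        "c": comb(cur_top, cur_bot),
        "p": comb(cur_top, prev_bot),
    }
    if last_was_c:                      # u is only useful after a c-match
        candidates["u"] = comb(next_top, cur_bot)
    return min(candidates, key=candidates.get)

# fields sampled from a smooth ramp: the true pair gives the lowest score
a = [10, 30, 50, 70]        # top field of one picture
b = [20, 40, 60, 80]        # its matching bottom field
c = [90, 10, 70, 30]        # an unrelated field
print(best_match(prev_bot=c, cur_top=a, cur_bot=b, next_top=c,
                 last_was_c=False))    # -> c
```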
And now to the bigger problem: a frame that has no matching
fields. It's time for the postprocessor! But which field
should be chosen for postprocessing? We have to find out
which fields are clean and which are blends. There are many
methods for detecting blends: we could compare the number of
edges, the complexity, the number of colours or, for example,
the sharpness. We could also do it like unblend, but all
these methods either aren't accurate enough or simply take
too much time.
I thought of a method that uses the capabilities of a matcher
and that should solve this problem. As simple as it sounds,
that is how well it should work.
To detect whether a field is blended, we simply blend the two
fields around it together and try to match this new field
with the tested field. If the two fields really match, the
tested field is a blend.
That is the theory.
To make it more practical, we only have to test the two
fields after the last possible match. The clean field is the
one whose match results in more combing than the other's.
Let me make this clearer with some examples:
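Here is a minimal sketch of this blend test, assuming a simple 1-D model of fields and a sum-of-differences combing score (all names are made up for illustration; this is not any existing filter's code):

```python
def comb(top, bottom):
    """Combing score of weaving two fields (lower = better match)."""
    woven = [v for pair in zip(top, bottom) for v in pair]
    return sum(abs(woven[i + 1] - woven[i]) for i in range(len(woven) - 1))

def blend(f1, f2):
    """Average two fields together into a synthetic blended field."""
    return [(x + y) / 2 for x, y in zip(f1, f2)]

def blend_test(field, before, after):
    """Low score = `field` looks like a blend of its two neighbours."""
    return comb(field, blend(before, after))

def pick_clean(a2, a3, b2, b3):
    """Situation of examples A/B: frame 3 has no match.  Of the two
    fields after the last match (b2 and a3), the CLEAN one is the one
    whose blend test combs MORE, because a blended field matches the
    blend of its own source fields well."""
    return "a3" if blend_test(a3, b2, b3) > blend_test(b2, a2, a3) else "b2"

def picture(t, n=6):
    """1-D stand-in for a picture: an edge that moves right over time."""
    return [0] * t + [100] * (n - t)

# example A: b2 and b3 are blends of their top-field neighbours, a3 is clean
a2, a3, a4 = picture(2), picture(4), picture(6)
b2, b3 = blend(a2, a3), blend(a3, a4)
print(pick_clean(a2, a3, b2, b3))   # -> a3
```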
A.  a1  a2     a3(c)  a4  a5 ...
             /   |
            /    |
    b1  b2(b)  b3(b)  b4  b5 ...
    match:   p      c
    1. Frame 3 gives no c- or p-match
       (u is unnecessary because of the p-match).
    2. Matching b2 with the blend of a2 and a3
       gives less combing.
    3. Matching a3 with the blend of b2 and b3
       gives more combing, so a3 should be reproduced.
B.  a1  a2     a3(b)  a4  a5 ...
             /      /
            /      /
    b1  b2(c)  b3     b4  b5 ...
    match:   p      p
    1. Frame 3 gives no c- or p-match
       (u is unnecessary because of the p-match).
    2. Matching b2 with the blend of a2 and a3
       gives more combing.
    3. Matching a3 with the blend of b2 and b3
       gives less combing, so b2 should be reproduced,
       even though it is a field of frame 2.
C.  a1  a2     a3(c)  a4(b)  a5 ...
             /      /
            /      /
    b1  b2(b)  b3(c)  b4  b5 ...
    match:   p      p
    1. Frame 3 gives no c- or p-match
       (u is unnecessary because of the p-match).
    2. Matching b2 with the blend of a2 and a3
       gives less combing.
    3. Matching a3 with the blend of b2 and b3
       gives more combing, so a3 should be reproduced.
    4. Frame 4 also gives no c- or p-match.
    5. Matching b3 with the blend of a3 and a4
       gives more combing.
    6. Matching a4 with the blend of b3 and b4
       gives less combing, so b3 should be reproduced.
D.  a1  a2(b)  a3(b)  a4(c)  a5 ...
         |      |
         |      |
    b1  b2(c)  b3(c)  b4     b5 ...
    match:   c      c
    1. Frame 2 gives no match (not even a u-match).
    2. Matching a2 with the blend of b1 and b2
       gives less combing.
    3. Matching b2 with the blend of a2 and a3
       gives more combing, so b2 should be reproduced.
    4. Frame 3 also gives no match (not even a u-match).
    5. Matching a3 with the blend of b2 and b3
       gives less combing.
    6. Matching b3 with the blend of a3 and a4
       gives more combing, so b3 should be reproduced.
There can be many strange patterns in badly telecined
material, but with such a feature TFM should be able to
reproduce even such weird material. I'm no coder, and my
knowledge of AviSynth's functions and filters is too poor
to create a function that would enable this feature. It
would be great if you could implement such a thing in your
filter, tritical.
The advantages over restore24, matchbob, doubletelecide and
so on are clear: it would be much faster and should be a
bit more accurate against blends.
And there is also another feature for your TDecimate.
TFM could have the possibility to tell TDecimate which
matched frames use the same field, for example when a
p-match follows a c-match. This could be done, for example,
with the hints (a use_hints option for TDecimate).
With such an option TDecimate could work a lot faster if
enough frames are available for decimating. TDecimate should
also work better, because it would prefer these frames over
others that may be duplicates only because there is no motion.
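As a sketch of why such hints would help, here is a toy model of hint-assisted decimation (illustrative only; `use_hints` is just the proposed option name, and `choose_drop` is a made-up function, not TDecimate's logic): in each cycle one frame must be dropped, and a frame hinted as a true field duplicate by the matcher is preferred over a frame that merely has the lowest motion metric.

```python
def choose_drop(metrics, hinted):
    """Pick the frame index to drop in one decimation cycle.
    metrics: per-frame difference to the previous frame (lower = more
    likely a duplicate); hinted: indices the matcher flagged as reusing
    a field of the neighbouring frame."""
    if hinted:
        # trust the matcher: drop the most static of the hinted frames
        return min(hinted, key=lambda i: metrics[i])
    # no hints: fall back to the lowest-motion frame in the cycle
    return min(range(len(metrics)), key=lambda i: metrics[i])

# frame 1 is nearly static (low metric), but only frame 3 is a hinted
# field duplicate, so frame 3 is dropped instead
print(choose_drop([40, 2, 35, 5, 38], hinted=[3]))   # -> 3
```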
It would be great if this text helps you to tweak your
fantastic filters.
Last edited by MOmonster; 21st May 2005 at 12:09.