2nd July 2008, 19:02 | #28
florinandrei
OK, I'm starting to understand a few things.

Quote:
Originally Posted by IanB View Post
Interlaced resizing is fast, but you pay a price for generating each new field based only on the original field. Any new pixel spatially between 2 original field lines will effectively be a weighted average of only the pixels above and below it in that field, i.e. a blur. Effectively all the pixels in the new fields are vertically blurred slightly.

Using the SmartBob/Resize/ReInterlace method, although slower, will give vastly superior results in static areas because each new field can be based on a full frame. In static areas there is no "spatially between 2 original field lines". Those new pixels are rendered from complete frame data, i.e. no blur in static areas.
In this message, the interlaced resizing is the second script, right? If that's correct, what I don't understand is the use of SeparateFields(): if the source is interlaced, what fields are there to separate? Aren't they already separated?
And the bob method is the first script? If that's the case, your prediction appears correct: the second script seems to produce output that is slightly blurred vertically compared to the first one (at least on a computer and on the PS3 + flat panel).
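
Just to check my understanding, here is roughly how I picture the two approaches (only a sketch; the source name, field order and sizes are placeholders, not my actual scripts):

Code:
# (a) plain interlaced resize: each half-height field is resized on its own,
#     so new pixels are interpolated only from lines of that same field
AviSource("source.avi")   # placeholder source
AssumeTFF()
SeparateFields()
LanczosResize(720,240)    # half the target height, because these are fields
Weave()

# (b) bob / resize / re-interlace: expand each field to a full frame first,
#     resize the full frames, then drop the extra fields again
# AviSource("source.avi")
# AssumeTFF()
# LeakKernelBob()          # or any other bob
# LanczosResize(720,480)
# SeparateFields()
# SelectEvery(4,0,3)
# Weave()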

Encoding speed is of no importance to me. I prefer something that's very slow but very accurate. Encode once, watch forever - so better to encode it right.

So, in that context, I guess you're saying: "use the first script but with the best bob you can find."

Quote:
Of course in motion areas any difference can be attributed to how well the SmartBob interpolates the missing pixels. If using linear interpolators, as in KernelBob or DGBob, there will be no difference from interlaced resizing, i.e. a blur again. If using edge-directed and/or motion-compensated interpolators, then the results can be a significant step up from bog-standard interlaced resizing.
So, again, to obtain that result I just need to use the best bob available with something similar to my first script, e.g. MCBob() or something like that? I guess it's possible to simply remove LeakKernelBob() and drop in something else? Something like this?

Code:
# source: interlaced HD from the camera; set the field order before deinterlacing
DirectShowSource("E:\video\birthday\STREAM\00000.MTS")
AssumeTFF()
# motion-compensated bob: every field becomes a full progressive frame
# (MCBob works on YV12; convert before this line if the source isn't YV12)
MCBob()
# scale the progressive frames down to DVD size
LanczosResize(720,480)
# re-interlace: keep only the field that corresponds to each original field
AssumeTFF()
SeparateFields()
SelectEvery(4,0,3)
Weave()
# interlaced=true so chroma is subsampled per field, not across both fields
ConvertToYV12(interlaced=true)
Can that be called a very accurate HD-to-DVD interlaced resizer for real-life video (not computer-generated) with no scene changes?

Quote:
So it is a little unfair to look at individual fields on a PC screen, you really should evaluate the results on an interlaced display device at normal speed.
I know. I guess I need to revive the old CRT and DVD player. That's a much bigger "project" than it seems.
OTOH, the DVD will be watched on progressive displays too, not just on the old CRT, so I can't neglect either.

Quote:
Originally Posted by 2Bdecided
Of course MCBob or TGMC will do a better job and are almost always artefact-free - but with these the slowdown is considerable.
Not a problem. Accuracy comes first.
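
If swapping the bob really is all it takes, I assume the TGMC variant of the script above would look something like this (just a sketch; I'm assuming the function is called TempGaussMC_beta1(), which may differ between versions of the TGMC script):

Code:
DirectShowSource("E:\video\birthday\STREAM\00000.MTS")
AssumeTFF()
TempGaussMC_beta1()      # TGMC bob in place of MCBob()
LanczosResize(720,480)
AssumeTFF()
SeparateFields()
SelectEvery(4,0,3)
Weave()
ConvertToYV12(interlaced=true)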

Quote:
All "smart" deinterlacing is based on assumptions - basically that the content changes little field by field - the camera is still pointing at roughly the same scene.
That seems to be the case with the material I'm scaling. The source is an HD camera and there are no scene changes (a scene change is always the beginning of a new file).

Quote:
Added LimitedSharpenFaster since the OP seems to want a sharp output
My fault, I should have phrased it better. I guess what I'm after is "accuracy". If the original is sharp, I want that sharpness preserved, as much as the scale-down allows. But the result should not be subjectively "sharper" than the original.
What I want is a reduced-size copy that looks, statically and dynamically, as close to the original as possible, aside from the smaller resolution. The master is interlaced HD video captured with a camera (not computer-generated) and has no scene changes. The result will be watched on a variety of screens, from plain old CRT to new smart upscalers + progressive HD panels.

I am aware that there might be conflicting requirements hidden here (such as being accurate both statically and dynamically). If that's true, that's one of the things I'm hoping to learn from this discussion.