film -> video device


FredThompson
16th May 2005, 06:02
I'm developing a device to allow conversion of 8mm and 16mm film to video without the need for a projector. You WILL need a camcorder and an available printer port (gad, do I miss the Atari 8-bit game controller port I/O). There will be a little bit of device-specific software and some AviSynth scripting for post-processing. There will also be some script and filter chains for post-processing wrt image quality (noise reduction, scratch removal, etc.)

Hardware details are still being worked out and I'm not going to release the plans just yet. There's a chance this project might be attractive to O'Reilly's Make magazine, engadget, etc. so I'll start with a write-up similar to those from Radio-Electronics magazine. That means all the details will be shared, including the software, parts list, construction details, etc. Article will also promote doom9, neuron2's board, AviSynth, VirtualDub and freeware MPEG2/MPEG-4 encoders. If no one wants to print it, I'll release it to doom9, neuron2's board, etc. under some form of donation-ware (i.e., you use it, you like it, please send me a few bucks.)

If possible, I'll make all the parts available in kit form and, maybe, assembled units, but I'd really rather not do assembly if I can avoid it.

This is something I'm developing, regardless of anything else which might happen. There are between 600 and 700 reels of old family movies in my house. A couple of years ago I paid to have them converted to Hi8 and the people who did it screwed up the color balance. That was about $2000 down the drain. Tonight the gods of inspiration smiled on me and a "solution" for homebrew jumped into my head.

I know the obvious answer to this question is: "As cheap as possible" but I'd like to get an idea of what people might pay for such a hardware device. The goal is NOT speed, it's accuracy. Decent parts, especially optics, cost money and I'm wondering what people might be willing to pay for a kit. If the parts cost $200 and no one is willing to pay that much, I won't waste my time figuring out how to put together kits. Kits would just be a timesaver, really, not a way to make money. Having said that, though, it'd be nice to be able to gather a buck or 2 for each full kit and send that to Avery, DG, the AviSynth team and doom9 to help with bandwidth and show thanks.

SeeMoreDigital
16th May 2005, 19:01
Sounds interesting...

What type of optics/lenses do you require?


Cheers

FredThompson
16th May 2005, 19:25
Well, hey there, stranger!

Optics aren't that tough to find, Edmund sells them and I'm hoping to use generic lenses like the type Best Buy sells for camcorders. The real challenge is controlling the motion of the film itself, physical alignment and focal points.

mustardman
21st May 2005, 23:46
Perhaps use an old projector; then all the mechanics of moving the film will be taken care of.

Of course, the motor would have to be replaced with a servo/stepper and the bulb would not need to be as bright.

But I get what you mean - feed the image direct into the camcorder, not via a screen, and control the frame rate of the film from the PC.

Nice idea - I'm currently getting about 3000ft of 16mm converted, and it is costing me about a grand. A "home remedy" would have been an attractive option, even at half the price.

MM.

FredThompson
22nd May 2005, 00:27
There will be a screen but not like projecting on a wall. Right now I'm having a challenge finding appropriate cool light. There are commercial methods but $1000 for a light is a bit much. We don't need to throw it across a room, just a few inches.

I've thought about cannibalizing old projectors. That would be an inconsistent supply, though. The only parts which would really be usable for this idea are the reels and possibly the sprockets. The whole projection assembly is wrong: it's designed to throw the image a distance, and we'd have to counter the optics more extremely than I'm hoping. Still, we'll see...

SeeMoreDigital
22nd May 2005, 10:14
Originally posted by FredThompson
There will be a screen but not like projecting on a wall. Right now I'm having a challenge finding appropriate cool light. There are commercial methods but $1000 for a light is a bit much. We don't need to throw it across a room, just a few inches.

Are any of these high-brightness light emitting diodes any good now?

Some of them are bright enough to be used domestically as direct replacements for 20 watt halogens... but I don't know how "white" the light is?

I guess using a bunch of optical fibres to carry the light is not an option?


Cheers

mustardman
22nd May 2005, 14:10
Another option to reduce the amount of heat that a bulb gives off (the real problem) is an infra-red cut filter. They look like a simple piece of glass, usually found inside slide projectors. But just try burning something with a magnifying glass and put one of those in the way!

"White" LEDs are not really white, they just look it. There is a serious deficency of red (unless they've improved in the last 12 months).

Low wattage halogens would work best, perhaps even those new(ish) torch lights that are halogen. That'd keep the watts down without changing the colour temperature...

MM

Arachnotron
22nd May 2005, 17:06
Right now I'm having a challenge finding appropriate cool light. There are commercial methods but $1000 for a light is a bit much. We don't need to throw it across a room, just a few inches.
What about the lamps used for those home-built LCD projectors (http://www.lumenlab.com/)? Probably way overkill for your purposes, but they are cheap.

tedkunich
24th May 2005, 01:49
Fred,

I'd be interested in what you have... $200-$500 is not out of the question... I have so many rolls that doing them "professionally" would cost an arm and a leg.


T

tedkunich
24th May 2005, 01:54
Originally posted by Arachnotron
What about the lamps used for those home-built LCD projectors (http://www.lumenlab.com/)? Probably way overkill for your purposes, but they are cheap.


Ummm... yea, a bit overkill ;) FYI, a 400W metal halide is HOT... enough to cause a severe sunburn after 10-20 minutes within a foot or two of the bulb. They put out a lot of UV.


T

FredThompson
24th May 2005, 04:03
I hadn't thought of fiber optics. That's an interesting idea but maybe something else will show up. I guess if that's the only option, they might work with a true-white bulb but the heat problem still exists.

There are diode arrays which claim to be proper white and adjustable. The camera-mounted rectangular ones are $1K. You'll see those ads in DV and other magazines. I'm really hoping to find something just a smidge cheaper, you know?

I'm looking at various microscope lights now. Maybe one of those technologies will be usable. Don't know how "pure" they are, though.

SeeMoreDigital
24th May 2005, 10:36
Have you had a look at the type of "back-light" used in scanners?

With regards to the "low-heat" LED approach, if the colour is off... maybe it could be "fixed in post" ;)


Cheers

FredThompson
24th May 2005, 14:32
Yes, those are both good ideas. A custom filter chain is something which is already part of the mix. Color Mill (VirtualDub) is part of the plan. I wonder if those cold cathode tubes sold for case mods might be useful. Hadn't thought of that. Maybe put the tube on the other side of some sort of diffuser like a translucent plastic.

SeeMoreDigital
24th May 2005, 15:11
Originally posted by FredThompson
Maybe put the tube on the other side of some sort of diffuser like a translucent plastic.

Two or three years ago I visited a signage and photo software exhibition with a DOP friend of mine...

Anyway, long story short. There was a guy there from Australia who had developed a new way of back-illuminating certain types of signage. Normally such signs require mains voltage fluorescent tubes inserted into a metal or plastic light box. But his design used a thin (6mm/10mm) translucent board illuminated by dozens of low voltage white lamps positioned along the long edges of the board!

What was quite impressive was how his design had managed to maintain an evenly distributed quantity of light throughout the entire surface area. And some of these signs were as big as 1800 x 1200.

His portable light boxes for viewing "transparencies" were equally impressive ie: they were very bright and cool (cold even) to the touch!

I attempted to quiz him on how he had achieved this but he was understandably tight lipped (...something about a world-wide patent).

Sadly I can't find his details anymore but maybe his product is formally available now.

I had wondered whether he may have used some form of "prismatic" diffuser to transfer the light from the edge to the surface of the board...

Anyway... I hope this helps?


Cheers

FredThompson
24th May 2005, 15:45
Wonder what happened with that? Do you recall a business name?

I'll ask a buddy who collects arcade games if he has ideas. Those folks know a lot about decals and translucent panels.

SeeMoreDigital
24th May 2005, 16:29
Originally posted by FredThompson
Wonder what happened with that? Do you recall a business name?
Sorry no I don't :(

While looking through one of my electronics catalogues I found the following information about an "RGB Full Colour LED": -

http://www.maplin.co.uk/Search.aspx?criteria=n63ax&doy=24m5&source=15. Together with this PDF file (http://www.maplin.co.uk/Media/PDFs/N63AX.pdf).


Cheers

Sergei_Esenin
26th May 2005, 11:05
The goal is NOT speed, it's accuracy. Decent parts, especially optics, cost money and I'm wondering what people might be willing to pay for a kit. If the parts cost $200 and no one is willing to pay that much, I won't waste my time figuring out how to put together kits. Kits would just be a timesaver, really, not a way to make money. Having said that, though, it'd be nice to be able to gather a buck or 2 for each full kit and send that to Avery, DG, the AviSynth team and doom9 to help with bandwidth and show thanks.

I'd be interested, and willing to spend $200-$500 depending on the quality of results. I have a large collection of rare 1960's-1980's 8mm and 16mm erotica that I'm too darned embarrassed to get professionally converted. :D

FredThompson
27th May 2005, 23:45
OK, guys. I think I've solved "half" the optical issues and eliminated the heat. The idea won't use a screen of any sort and the only optics issue is how to get a camcorder to properly focus at a short distance. Servo motors would be the best way to move the film; the movement needs to be precise. I'm more than willing to take a second to move from one frame to the next if that is what is required. There's no sense in trying to "snap" the film movement over such a short distance.

In the interest of time, I'm going to contact a projector tech who sells modified projectors for video conversion at ebay. Maybe it would make the most sense to find someone like that who can make the physical devices.

I'm very tempted to explain the entire idea here because I don't want to dedicate time to making the devices for other people. However, the initial comment about trying to make a few bucks and sending them to doom9, Donald Graft, the AviSynth team, etc. is something I'd prefer not to risk at this time. If we talk too openly about the method, there's always the risk it would end up on hackaday or some similar site, which could weaken the ability to sell it as an article through Make magazine or something like that.

rfmmars
28th May 2005, 05:53
In the interest of time, I'm going to contact a projector tech who sells modified projectors for video conversion at ebay. Maybe it would make the most sense to find someone like that who can make the physical devices.

I was hoping that you were going to do the whole thing from scratch; you don't learn much by using somebody's research and you sure don't break new ground.

I have built all my telecine units and learned a lot. If you want to save some effort and get a projector, then get the Sears 8mm and Sears Super 8mm models built by B&H. Do not get dual format projectors. The top models of these have a mechanical speed control and a 4-blade shutter, so the speed needed is 15 fps for NTSC. Either shoot at the gate with one optic or a simple flat cardboard screen with two.

Scrap the camcorder idea and get either a machine vision camera or a Mintron starlight color camera.

I was hoping you would stumble on something new by doing your own thinking but I guess that's not to be. I am planning to market a CD & DVD covering modification and processing for telecine ~ $19.95

richard
photorecall.net

FredThompson
28th May 2005, 06:13
You're either trying to throw people off the path of my idea or you're being presumptuous and condescending. I'll assume the former though it sure looks like the latter.

eb
28th May 2005, 07:44
What about the idea of using a stroboscope lamp (the kind used by car service shops)? Easy to control, cold and "cool", and an acceptable price.

eb

EDIT
...and this connected to a simple mechanism based on an "ellipse" wheel > up the glass > move film > down the glass > light shot > up the glass ...

FredThompson
28th May 2005, 07:54
Hmm...why use a shutter if you have a precisely controlled strobe? That's an interesting idea. It doesn't fit my idea but it would be an interesting way to do projection.

The only thing I'm still wondering about is properly controlled movement of the film itself. Servo would be best but I'm trying to avoid gear boxes. It will all make sense when the project is complete.

These would be nice: http://www.foveon.com/article.php?a=121

eb
28th May 2005, 07:58
when the film is moving down it is centered by stable pyramid tips
... and precise picture alignment against the black edge of the picture can be done later by computer

Arachnotron
28th May 2005, 11:39
I don't know the first thing about film conversion, but I do like tinkering :)

I was wondering: if high quality is your aim, why not go for a digital camera connected to a computer instead of a camcorder? You could attain higher resolutions for a low price, which in turn might come in handy if you want to do any frame-jitter (is that the correct term?) corrections later. And synchronisation would be easier. Also, you could use a macro lens. (I assume you will get some barrel-shaped distortion using a standard camcorder to film from a small screen in close-up)

rfmmars
28th May 2005, 17:02
Why are you reading into my post something that's not there? I would like to see your ideas, something new. It's OK to reinvent the wheel by parallel development, but not to try? I know that's not you.

As far as my "Do it yourself Package" goes, it has been in the works for about two years, and it is not a threat to you or anyone. In fact I have shared a lot of it on this and other forums.

Read my post again with a different viewpoint. I was just again trying to point you in the right direction.

richard

FredThompson
28th May 2005, 18:08
Originally posted by eb
when the film is moving down it is centered by stable pyramid tips
... and precise picture alignment against the black edge of the picture can be done later by computer

In theory, yes. I'm not sure the edges of the frames are sufficiently precise for this since the sprocket holes are what is used for alignment. There have been similar ideas for stabilizing shaky video. IOW, maybe the edges of the frames are more accurately considered analogous to overscan.

I'll look into it, though. Thanks for the idea.

edit: What I typed doesn't match what I was trying to say. The shutter repeat is hardware timed. Alignment is based on the rate of flow of the film at the projection point. Accurate control of the film movement is based on the sprocket holes.

FredThompson
28th May 2005, 18:19
Originally posted by rfmmars
Why are you reading into my post something that's not there? I would like to see your ideas, something new. It's OK to reinvent the wheel by parallel development, but not to try? I know that's not you.

As far as my "Do it yourself Package" goes, it has been in the works for about two years, and it is not a threat to you or anyone. In fact I have shared a lot of it on this and other forums.

Read my post again with a different viewpoint. I was just again trying to point you in the right direction.

richard

The first and next-to-last sentences of your post are the ones which irked me. I realized then, as I do now, that it was late for both of us and so an inappropriate time to reach a final conclusion, so to speak... er... type. It didn't seem characteristic of you, either.

I don't feel threatened at all. There is a lot of unique stuff in my idea for getting the images into the computer.

My interest isn't in building a machine shop and becoming a small parts fabrication expert. Basically, I think someone who already has a business selling tweaked projectors would be better equipped to make a couple of custom pieces, especially the film movement mechanism. Cannibalizing parts is far easier for someone who already has the sources for them.

Your idea of dedicated cameras has merit. A proper housing and fixed optics is definitely more attractive than fiddling with a camcorder. There's also the distinct possibility of RGB instead of lossy compression. The trade-off is price, though. Still, I'll look into it. (edit: well, $1500 or more for a dedicated camera doesn't fit with the project goals. I'll have to stick with a camcorder for now. The idea is flexible enough, however, assuming whatever camera is used can make a file which AviSynth can read.)

I considered things like the Ambico prismed things and threw the idea away. The quality would be too poor.

rfmmars
28th May 2005, 22:48
The trade-off is price, though. Still, I'll look into it. (edit: well, $1500 or more for a dedicated camera doesn't fit with the project goals



Do you really think there is a cottage industry making telecine projectors? I know of only one company and they aren't going to stop production of a $1,200 to $2,400 product to do some custom work at a reduced price.

Instead of attacking me, you should have asked about the content of my post. You could have found, by asking, that you can buy a Revere 718 8mm projector for as little as $0.99 plus shipping, in which the whole mechanism comes out in one piece (gate, pressure plate, shutter, feed and exit sprockets) and is so simple you can do anything you wish, gate capture or projection.

Here's the info on other projectors

The Sears / Bell & Howell projectors are the best for easy modification and built like a tank. Again, so easy to modify that I have added audio tone burst generators, so when editing I can find the wrong exposure sequences in the final edit and delete them. All this information was there for the asking, but no, all you did was attack me for trying to help you.

$1500 for a camera? No, see links below.

http://www.polarisusa.com/cgi-bin/product_listings.pl?listing_category_id=45

http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=5777317123&rd=1&sspagename=STRK%3AMEWA%3AIT&rd=1

http://www.polarisusa.com/cgi-bin/view_product_detail.pl?listing_category_id=&product_category_id=1&template_id=144

Your attitude is typical of many at this and other forums. You think you know it all even when you never walked the walk!

If you want to do it right, do it “VideoFred's” way if you have all the time in the world.

Good luck,

richard
photorecall.net

FredThompson
28th May 2005, 23:18
Richard,

I did not attack you. I said your post was received as insulting and condescending and I would give you the benefit of the doubt that such was not your intention. You later clarified there was no insult intended and asked for clarification as to what aspects of the message would have been received that way. I told you which two sentences provoked such a reaction, that they seemed out of character, and that it was late for both of us. I do not take any ownership for your choice of words. I gave you the benefit of the doubt that what you meant to communicate is not what was communicated by these words:
I was hoping that you were going to do the whole thing from scratch; you don't learn much by using somebody's research and you sure don't break new ground. I was hoping you would stumble on something new by doing your own thinking but I guess that's not to be.
Apparently, you refuse to comprehend how those comments would be received as insulting and will not accept the fact they were.

My comments were about your choice of words in those two sentences, period.

You are now blatantly attacking me and mixing your subjects.

I never said anything about cottage industries or telecine projectors. I used the word "business." "Business" does not necessarily equate to "cottage industry", it is simply a person selling the fruits of their labor. People who make candles as a hobby and sell them have a business.

I've already stated the approach is not the common method and I have no intention of explaining it in an open forum. I've also explained why this is the case. No amount of goading or verbiage by anyone will change my mind. If the project hits a no-go point or I find I cannot continue it, I will share the details, but we have not reached that point.

I've already acknowledged the value of your comments concerning cameras and did some research. In no way have I prevented you from doing anything you want to do. Frankly, I was quite surprised you thought such a thing.

Take some time out, cool off, and look at what has happened. You are taking things way out of context and accomplishing nothing positive.

vhelp
1st June 2005, 01:42
I had an idea, (perhaps building on Arachnotron's) ...

How about using a USB cam (as a test platform / for debugging)? They usually do full frames. My old QuickCam has 640x480 resolution and could be controlled to take pictures. The only problem with this is that I don't know how low (fps) you can go between pictures. But then it would share the same fate as Arachnotron's digital camera idea.

And, at least you have progressive frames and not interlaced to taint your theories and things. I'm assuming that your home-made telecine engine will be taking pictures of full-frame sources anyways. So, a good progressive camera or QuickCam USB device or something would work. I don't know. I'm just trying to throw ideas around :)

Well, just trying ta help a fellow user :)

Good luck in your fun endeavor.

-vhelp

FredThompson
5th June 2005, 05:08
Just a quick note to say this is still in development. I'm just swamped with work and will be traveling for the next couple of weeks.

JeremyIrons
21st June 2005, 14:53
I would pay in the $500 range, no question

Delphin
21st July 2005, 09:11
Since this thread has been revived, here are some observations . . .

For the Film Transport -
A good quality stepper motor should give quality approximately equivalent to the pull-down claw in a standard 8mm projector in advancing frame by frame. Standard small frame stepper motors on NEMA23 frames can be had on the surplus market for only about 15 dollars. Of course designing around a surplus motor could lead to concerns about the 'reproducibility' of the design. This need not be a concern. Low cost surplus motors can be used for prototyping without fear because these are 'commodity' type units and another type23 frame motor can be substituted with equivalent performance from any of a dozen vendors. These motors usually run 1.8 degrees per step (200 steps / revolution) and can easily be 'half-stepped' by a simple change to the drive sequence.

To be compatible with the above stepper motor, the film would be advanced using a simple roller having a number of sprockets around the circumference which evenly divides into 200 (full step) or 400 (half step).

This lets us get to the next frame by just indexing the stepper forward by an integral number of steps (or half steps).

Let's look at a practical example. Check out this link . . .

http://www.lavezzi.com/PDFs/SprocketPDFs/SprocketCat.pdf

The 0.15 inch sprockets are for 8mm film and the .1667 are for Super8.

The 0.15 and 0.1667 inch sprockets with 16 teeth around the circumference and with 0.25 inch shaft holes (with included set screw) will quite easily attach right to the 1/4 inch shaft of a typical NEMA23 stepper motor.

No custom machine work so far!

Now let's look at how that 16-sprocket number would work with a standard stepper motor running in 400-step half-step mode . . .

400 half-steps per revolution / 16 sprockets (frames) per revolution = 25 half-steps/frame

The steppers are not perfect, so there could be an error equivalent to a small fraction of one half-step, but with 25 half-steps per frame the errors should be small and manageable.
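Here's a quick Python sanity check of that arithmetic, for anyone who wants to try other sprocket tooth counts (just a throwaway sketch; the 400 half-steps/rev figure assumes the 1.8 degree motor above):

# How many half-steps advance exactly one frame, for a few tooth counts?
HALF_STEPS_PER_REV = 400   # 1.8 deg/step motor, half-stepped

for teeth in (8, 10, 16, 20, 25):
    per_frame, remainder = divmod(HALF_STEPS_PER_REV, teeth)
    note = "integral - usable" if remainder == 0 else "non-integral - avoid"
    print(f"{teeth:2d} teeth: {HALF_STEPS_PER_REV / teeth:6.2f} half-steps/frame ({note})")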

Several methods can be used to improve frame registration, such as a “periodic error correction” scheme based on 'micro-stepping' the stepper motors and a software calibration procedure which uses the detected edges from a dark-frame calibration film.

With a high-resolution pickup, software frame registration correction can also be applied simplifying the design of the stepper drive circuits (by eliminating any requirement for micro-stepping).

For the Light source -
Way too much emphasis has been given to exotic light sources. A single high brightness White LED or Tri-COLOR (RGB) LED working through a diffuser is enough! The Tri-color solution would allow for better color-balance compensation. One LED might not seem like much light but it is more than enough. This is because it is not necessary to 'project' the image. The best quality would be to back illuminate the frame (like in a slide viewer) and then directly image it with a high resolution CCD/CMOS sensor.

The Pick-Up device -
A high quality 1-Megapixel or higher (real pickup resolution, not BS 'interpolated' resolution) USB/USB2 Webcam. This webcam should preferably employ a CCD pickup and offer full 24bit frame dumps (some use CMOS and only offer 'already compressed' intel video formatted output).

Optics -
On many Webcams the built-in lens can be unscrewed so far out that the camera will work in 'macro' mode at very close range. If the lens is extended far enough the camera may be able to directly image 8mm/Super8 frames with NO other optics!
If that is not possible, only a simple lens equivalent to an un-mounted Hastings triplet will be required. Remember: we are not 'projecting' the frame, just helping the webcam to directly image the backlit frame like a 35mm slide in a slide-viewer.

Software -
Custom software would trigger a single frame capture through the Webcam's API (VFW or TWAIN). Then the software would change the 4 pins on the parallel port used to drive the stepper in half-step mode. Standard capture through the VFW interface would be slow enough to allow real time capture and compression (Huffyuv, DivX, XviD codec). The software could also optionally support a frame-source plugin for AviSynth in addition to direct AVI captures, allowing access to AviSynth processing filters.
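To make the parallel port side concrete, here's a minimal sketch of the half-step drive in Python (write_port() is only a placeholder for whatever port driver ends up being used - for example pyparallel's setData() or inpout32 on Windows - and the D0-D3 pin mapping is illustrative, not a final design):

import time

# Two-phase half-step sequence for a 4-phase stepper, one bit per data pin D0-D3.
HALF_STEP_SEQ = [0b0001, 0b0011, 0b0010, 0b0110,
                 0b0100, 0b1100, 0b1000, 0b1001]

HALF_STEPS_PER_FRAME = 25   # 400 half-steps/rev divided by the 16-tooth sprocket

def write_port(value):
    # Placeholder: replace with a real byte write to the parallel port data pins.
    pass

def advance_one_frame(phase, settle=0.01):
    # Step the motor forward by exactly one film frame; returns the new phase index.
    for _ in range(HALF_STEPS_PER_FRAME):
        phase = (phase + 1) % len(HALF_STEP_SEQ)
        write_port(HALF_STEP_SEQ[phase])
        time.sleep(settle)   # let the rotor settle before the next half-step
    return phase

The single-frame capture call would simply slot in between calls to advance_one_frame(), however the chosen webcam API exposes it.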

So, to sum up (let's not make this more complicated than it needs to be)

1. Simple stepper motor driven by PC parallel port drives the sprocket, which pulls the film through a simple friction-feed film gate (the film gate holds film flat at focus plane and prevents slack from causing uneven film advance).

2. Film is back illuminated by a LED light source through a diffuser.

3. Webcam captures the frame using a custom software driver, which then advances the film by stepping the motor 25 half-steps and returns to step 1 (for the next film frame).

So this does not need to be that complicated folks, it’s as simple as 1-2-3.

Note: In regard to the capture device in step three, using a high-quality digicam could give very high quality, but most of these use hybrid electro-mechanical shutters (in addition to pure electronic shutters). It is doubtful that these mechanical parts were designed for the kind of duty they would see in video capture (tens-of-thousands of single frame exposures). Any high quality digital camera having a purely non-mechanical electronic shutter would be fine.

Hope this helps your project, let me know if you have any questions.

FredThompson
21st July 2005, 11:31
You're pretty close to the ideas I've had. I did some experiments with a filmstrip backlight from an old flatbed scanner.

There are some folks who are doing this sort of thing with flatbeds. I'm not convinced the frame alignment is good enough but there are some definite advantages wrt resolution, full RGB per pixel and economy of scale.

It really doesn't matter HOW we get the sequence of images, scanner or camera, as long as the light is consistent. A big challenge I saw with a camera was focal distance and ensuring the camera was aligned properly. A relatively simple film transport control would be small, but the tolerances for manufacture are quite tight.

There are some other things I considered to reduce noise and variability. (Forget the magazine article idea, better to get the project going.) I was originally thinking of backlight and a standard video camera. Non-linear feeding of film would allow multiple images per frame which would be averaged together to reduce rounding variability. AviSynth script would do that as well as throwing out the video frames in which the strip is advancing.

I'm now quite partial to the idea of a modified flatbed scanner to produce a series of images because it's an economical way to get full RGB, proper focal distance and alignment with the film. A few weeks ago I contacted some of the people who have been doing this to see if we could combine efforts as I started approaching the problem from the film processing aspect and worked towards the physical transport and they're doing it from the other end.

http://www.jiminger.com/s8/index.html
http://www.public.iastate.edu/~elvis/8mm/film_scanner.html
http://truetex.com/telecine.htm

Having typed all of that, and re-reading your post, I like the idea of a webcam. That would allow multiple reads per frame and a fairly small package. The big challenge is the machining and precise control of the filmstrip. Yes, a parallel-port driven stepper motor was another aspect of the original idea. I thought a short tube with a rectangular opening for the filmstrip and a backlight behind it, combined with stepper-driven film reels, would work just great. Then the big problem became alignment, optics and control of the filmstrip. The film needs to be moved a set linear distance at the "projection" point so some form of distance-traveled sensing must be used. I'm hoping there are parts which could be scavenged from old projectors.

Delphin
21st July 2005, 19:27
You're pretty close to the ideas I've had. I did some experiments with a filmstrip backlight from an old flatbed scanner.

There are some folks who are doing this sort of thing with flatbeds. I'm not convinced the frame alignment is good enough but there are some definite advantages wrt resolution, full RGB per pixel and economy of scale.


Without a transfer lens system of some sort (tricky to design to work with the moving linear CCD pickup) a flatbed scanner will be a little marginal on resolution.

This is because it is rare to find REAL resolution higher than 2400dpi on a flatbed.

Though some claim 3200 or even 4800 few really live up to those numbers.

The Epson scanner used by the first fellow claimed 4800 dpi but the images still looked fuzzy (like the scanner was delivering an effective resolution of 1200 or less)

The guy at the second link is using an Epson with a claimed Resolution of 3200 and his captures are also a little fuzzy.

The third guy concludes that his flatbed is not doing the job and uses a film scanner and STILL gets fuzzy scans.

My brother and I converted the family’s old 8mm films to video by just projecting them on a wall and shooting them with a hi-8 camcorder and got better results than the ones I saw. True, it was very tricky finding a setting that would reduce the flicker problem you get in this setup, but my projector has continuously variable speed which helps a lot (and now there are Virtual Dub and Avisynth filters to help fix flicker).

I think the reason these guys got fuzzy frames is that trying to grab 8mm frames from a flatbed scanner requires it to ACTUALLY perform to a pretty high DPI spec (which is rare given all the mechanical variables).

I think that the 2400dpi number quoted on these web sites is a bit too low for a ultra high quality capture if you want to do the software frame alignment talked about.

Even 35mm camera lenses were frequently rated at as much as 50 line-pairs/mm.

To resolve a black/white line pair actually requires 2 pixels, so that's about 100 pixels per mm, or about 2540 pixels/inch, JUST FOR A SHARP IMAGE AT 35mm.

At 8mm you want EVEN SHARPER OPTICS, say at least twice that number (equivalent to 5000dpi REAL scanning resolution).

This would be what was needed to guarantee that film grain is the only limiting factor, assuming a source from a camera with good 8mm optics and finer grained film stock like Kodachrome 25, or Ektachrome 64.

A scanner uses a linear CCD pickup and associated linear 'lens' optics to capture a (theoretically) very finely focused image, but where are the precision adjustments to guarantee perfect focus?

The slop in the mechanical scanning mechanism, and all the other mechanical tolerance issues, conspire to make REAL resolutions of even 2400 DPI hard to achieve.

From the information I found on the web, the active part of a super8 frame is only about 0.22 x 0.16 inches at the most, so a good solid 2400dpi (honest resolution) would only give about 528 x 384 pixels in the sampled frame

This is actually marginally acceptable, but a little low if we also want to do software frame alignment, because we need to do sub-pixel adjustments to align the image, and, in that process, we will lose a little sharpness (in shifting and re-sampling the image).


Having typed all of that, and re-reading your post, I like the idea of a Webcam. That would allow multiple reads per frame and a fairly small package. The big challenge is the machining and precise control of the filmstrip. Yes, parallel-port driven stepper motor was another aspect of the original idea. I thought a short tube with a rectangular opening for the filmstrip and a backlight behind it combined with stepper-driven film reels would work just great. Then the big problem became alignment, optics and control of the filmstrip. The film needs to be moved a set linear distance at the "projection" point so some form of distance-traveled sensing must be used. I'm hoping there are parts which could be scavenged from old projectors.

Re-read my post and follow the link to the PDF catalog (and check the math) related to sprockets with 16 teeth for use with a .25 shaft motor. Advancing the motor by EXACTLY 25 half-steps of 0.9 degrees (for a standard 400 half-steps/rev motor) will move the sprocket wheel around by exactly ONE FRAME (within the accuracy limits of the steppers which should be pretty good). The half-steps are just that STEPS so position sensing should not be necessary.

So you don't need position sensing, you can just hit the stepper with a 25 half-step phase stepping sequence, between each frame to advance the film to the NEXT frame and everything should be hunky-dory.

One reason I specified a 1Megapixel Webcam or higher (again REAL sensor resolution NOT 'interpolated') was to guarantee that we could resolve to the film-grain limit, and also to provide a margin for 'software frame alignment' done later in post processing.

You had a good idea to do the capture at a higher frame rate and then decimate down with beneficial averaging.

I would design the PC parallel port with a 5th line in addition to the 4 lines providing phase drive to the stepper. This line would be able to shut down the LED backlight source, if you want to do this trick. This could be used to force a 'dark frame' between valid AVI frame capture groups.

Say we capture with a USB2 webcam at a full 30fps . . .

Then we budget capture frames like this ;

2 frames @ 30fps dark (guarantees at least one fully dark frame with async frame advance timing)

8 frames @ 30fps sampled (guarantees at least 7 frames for averaging with async frame sampling)

2 frames @ 30fps dark (guarantees at least one fully dark frame with async frame advance timing)

8 frames @ 30fps sampled (guarantees at least 7 frames for averaging with async frame sampling)

Sounds slow, but this is still 3fps or about 1/4 to 1/5 ‘real time’ for the slow 12-15fps 8mm frame rate (not too bad).

Then the valid frames could be extracted and averaged with a fairly simple AVISYNTH script.
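For anyone who wants to prototype that extraction step outside AviSynth first, here's a rough Python/numpy sketch of the same idea (reading the capture with OpenCV is just an assumption, and the dark-frame threshold would need tuning to the actual LED-off level):

import cv2
import numpy as np

DARK_THRESHOLD = 16.0   # mean 8-bit level below which a frame counts as a dark sync frame

def average_film_frames(avi_path):
    # Group the capture into runs of lit frames separated by dark (LED off) frames,
    # then average each run down to one output frame.
    cap = cv2.VideoCapture(avi_path)
    run, outputs = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame.mean() < DARK_THRESHOLD:        # LED was off: film-frame boundary
            if run:
                outputs.append(np.mean(run, axis=0).astype(np.uint8))
                run = []
        else:
            run.append(frame.astype(np.float32))  # lit frame: queue it for averaging
    if run:                                       # flush the final group
        outputs.append(np.mean(run, axis=0).astype(np.uint8))
    cap.release()
    return outputs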

Some Webcams (like my Phillips Vesta) also give the option to capture using a TWAIN driver (Thought I saw a Twain plugin for AVISYNTH somewhere). This can give better quality by transferring a full 24 bit frame, so this would be a good alternative if a plugin can be found or custom driver written.

The Webcam would definitely give a less cumbersome and more compact ‘professional’ looking assembly.

Given today’s low prices on computer related items a suitable Webcam should certainly be in the price range from 50-100 dollars.

As I mentioned before the surplus stepper should be only about 15 (to perhaps 30 dollars worst case) on the surplus market.

I don't know how much LaVezzi gets for its sprockets (where is Spacely Sprockets when you need them?) ;)

FredThompson
21st July 2005, 21:34
Yes, I agree with you except I was planning for a full minute per frame. There will be some variability due to A/D conversion and I figured it would be a good idea to have plenty of frames. I was hoping to twist Donald Graft's arm to get a version of his frame dupe replacer which doesn't have a max number of frames. The older version didn't have a forced reset period. The idea was to let a camcorder run, then remove the junk frames (film advance), then run the remainder through dropout/scratch removal, then averaging, and keep only 1 of x frames which would be the "original" frames that would go through the final part of the filter for color restoration. The goal should be to do it "properly" first, then worry about optimizing speed. 8 frames might not be enough for averaging. I was thinking 60 frames so aberrant pixels would have very little influence on the result. I'm not the most talented statistician and that was just a safe number.

8mm and 16mm should be supported, not just 8mm.

Audio, well, that's a whole other problem...

One of the guys I've talked with who did some of the scanner method told me he has a new machine shop setup. What it consists of, I don't know. Someone with a CNC setup who is willing to donate their time to our obsession would be greatly appreciated, wouldn't they? :P

A pure image webcam would be fantastic. I assume it would have to shoot more than the image portion of the strip and the user would mark the crop lines. Maybe the case of the webcam would be removed so the CCD could more accurately be positioned inside the device.

Delphin
22nd July 2005, 01:39
Yes, I agree with you except I was planning for a full minute per frame.


Uhhh ... 1 minute could beeeee jeeeusst a leettle bit on the steep side.

I am an engineer so here's the skinny on the 'averaging' thing . . .

When we apply multiple noisy (but correlated) signals to a summing node (like by adding them together mathematically and averaging) the noise power doubles for each two signals we combine (no advantage so far, twice the noise is twice the noise)

The advantage comes in because the correlated signals add more efficiently and so the 'signal' power QUADRUPLES. So we end up 4:2, or 2:1, or 3 dB ahead. This has to do with the fact that voltages of correlated signals add in phase and the resulting signal power is proportional to VOLTAGE SQUARED. (Noise signals only add linearly.)

Double the number of frames again and you can only get another 3 dB.

So here’s the breakdown (Where N equals the number of combined signals and S/N is the signal to noise improvement).

N = S/N
------------
2 = 3 dB
4 = 6 dB
8 = 9 dB
16 = 12 dB
32 = 15 dB
64 = 18 dB
128 = 21 dB
256 = 24 dB

Or generalized : LogBase2(N) x 3dB

Going to your example:
1 minute at 30fps = 1800 combined frames

LogBase2(1800) = 10.813, and 10.813 x 3 dB = 32.44 dB
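(And a quick Python check of those numbers, for anyone following along:)

import math
for n in (2, 4, 8, 16, 32, 64, 128, 256, 1800):
    print(f"N = {n:4d}  ->  S/N gain = {3 * math.log2(n):5.1f} dB")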

This would just about double the S/N of a typical cheap Webcam except . . .

Sadly, it just isn’t going to happen because we will just start to see nasty 'correlated noise’.

This will happen because parts of the noise (for example due to interference inside the camera)
are also correlated with each other (like the signal) and therefore don’t cancel.

So, if we suppress random noise well enough, pretty soon this gets to be the limiting factor. :(

At a guess, the advantage of combining frames could ‘max-out’ between 32 and 64 frames (maybe a little more for a REALLY good webcam (but a REALLY good web cam will have less noise to start with):)

Some Webcams are great (look like DV video with enough light), others are junk. I would definitely read Web reviews then buy from a store that will let you make a return if not happy ;)


One of the guys I've talked with who did some of the scanner method told me he has a new machine shop setup. What it consists of, I don't know. Someone with a CNC setup who is willing to donate their time to our obsession would be greatly appreciated, wouldn't they? :P

A pure image webcam would be fantastic. I assume it would have to shoot more than the image portion of the strip and the user would mark the crop lines. Maybe the case of the webcam would be removed so the CCD could more accurately be positioned inside the device.

With some care, probably the only part that would be so critical that it would need to be 'machined' would be the sprocket roller, and there seems to be a company that sells the ready-made part (wonder what their price is?). I think other parts could be fabricated with careful work using common hand tools.

To support multiple movie formats, multiple rollers could be put on the same stepper shaft but some provision would have to be made for changing the film path and image scale (this could be as simple as repositioning an external triplet magnifier used with the webcams lens).

With the right optics the film frame will fill the digital frame with only a tiny 'justification' border (which could be used for digital frame alignment correction). On the other hand, with say a 2-megapixel camera there would be enough resolution that the frame could just be sized (magnification wise) for the 16mm format. Then we would just put up with the smaller frames used for 8mm and Super8 using proportionally LESS of the frame (this would certainly be the simplest).

Arachnotron
22nd July 2005, 21:33
A pure image webcam would be fantastic. I assume it would have to shoot more than the image portion of the strip and the user would mark the crop lines. Maybe the case of the webcam would be removed so the CCD could more accurately be positioned inside the device.

How about modifying a 35mm slide viewer and pointing a camera at that in a fixed setup? Something like this (http://stores.tomshardware.com/search_getprod.php/masterid=743798//).

Delphin
23rd July 2005, 03:18
How about modifying a 35mm slide viewer and pointing a camera at that in a fixed setup? Something like this (http://stores.tomshardware.com/search_getprod.php/masterid=743798//).
I used a similar rear projection unit with my movie projector to capture 8mm, but the idea here was to avoid the problems of having to project an image.

The 8mm or Super8 frame is tiny, and so is the CCD in a typical web cam, so why enlarge the image just to shrink it again?

With only a small external lens, a webcam (or other ccd camera) should be able to directly image a backlit 8mm frame with no problems.

We just need a reasonably precise 'film gate' and a little hardware to hold the frame in place and keep the webcam optics in alignment and a good way to advance the film by exactly one frame at a time (which is what I was proposing the stepper motor for).

Your idea is closer to the tried and true method that some folks have used for home movies, but what Fred and I have been working on is closer to how a professional Telecine machine works and therefore would hopefully give better quality.

Thanks for your interest, and the suggestion :thanks:

Arachnotron
23rd July 2005, 13:50
I used a similar rear projection unit with my movie projector to capture 8mm, but the idea here was to avoid the problems of having to project an image.

Fair enough :)

I just have two last suggestions, and then I'll butt out :)

First, check out amateur astronomer websites. The typical rig involves a home made computer controlled setup with the telescope being positioned by stepper motors, while a cheap CCD (often a webcam) is used to collect multiple exposures to be averaged for optimum picture quality. So most of the software modules and interfaces you are going to need should already be available over there.

The second is to use a printer or pen-plotter as a source for the stepper motors and interfaces. Maybe not for the final product, but for development they have the advantage of being accurate while you can position them using basic printer commands so you don't need to develop any interfaces or software just yet.

Good luck with your project!

Delphin
23rd July 2005, 15:42
Fair enough :)

I just have two last suggestions, and then I'll butt out :)

First, check out amateur astronomer websites. The typical rig involves a home made computer controlled setup with the telescope being positioned by stepper motors, while a cheap CCD (often a webcam) is used to collect multiple exposures to be averaged for optimum picture quality. So most of the software modules and interfaces you are going to need should already be available over there.

The second is to use a printer or pen-plotter as a source for the stepper motors and interfaces. Maybe not for the final product, but for development they have the advantage of being accurate while you can position them using basic printer commands so you don't need to develop any interfaces or software just yet.

Good luck with your project!

Yes, those are very good suggestions . . .

It looks like you and I were thinking almost along exactly the same lines.

I was one of the first, several years ago to take the lens off of my Phillips Vesta webcam and hook it to my Takahashi Refractor. There were a few issues to work out to get really good quality (for example you need an IR filter with a Refractor) but it’s amazing how well this works.

Like some others, I found out you could get better image quality by 'stacking' images. This just made sense based on my work in electronics, where adding correlated signals to reduce noise is commonplace (as I described above). At first I just used Photoshop to combine the images, but then some clever folks came up with about a half dozen programs (like Astrostack) to automate the process.

Then a guy named Steve Chambers figured out how to modify the Vesta webcam so you could take VERY long exposures (several minutes), and things REALLY started to take off. That's one of the great things about the web: someone tosses out an idea, and someone else really runs with it!!! Steve deserves the credit for turning the webcam into a serious imaging tool for amateur astronomers. Now someone working with very low cost equipment can get images rivaling the beauty (if not the scientific accuracy) of some of the early Palomar astro-photography.

I also fiddled with the Mel Bartels Stepper controller (one of the telescope controllers I presume you were talking about), though I now use the commercial SkySensor 2000.

Mel's site was a great source of links with information about steppers. By comparison this project should be fairly trivial (though Bartels' code might come in handy if it does turn out that we need micro-stepping and PEC to get perfect frame alignment).

As far as obtaining the steppers goes, I found a big pile of very nice 'new surplus' NEMA 23 frame 6-wire Vexta motors at a local surplus yard, so I am pretty well situated in that regard, but you are right, old plotters, printers (and some VERY OLD full-size 5-inch 10-80MB hard drives) are all excellent sources of FREE stepper motors.

Another great source of Stepper driver info is to do a search on amateur CNC (Computer Numerical Control) milling machine links. I learned this when I helped someone do a Sherline mill conversion.

Again, thanks for your input, it will be interesting to see where this goes.

FredThompson
26th August 2005, 08:05
This is very similar to what I had envisioned except there would need to be motors and gears and such behind the device. There's also the issue of aligning the camera perfectly parallel with the film path.

FredThompson
16th October 2005, 23:11
Got an email from someone who rekindled my interest in this project and saw this on Slashdot (http://games.slashdot.org/article.pl?sid=05/10/16/1544219&tid=100&tid=207&tid=137)

Craig writes "PSP Owners have long been interested in watching the UMD films and playing games on the TV, well now according to a report from Lik Sang they can, the new PSPTV being produced eventually by Gametech will be a no modification addon. From the article: 'The TV Adapter for PSP lets you hook up your PSP to your home television (NTSC and PAL) via Composite or S-Video and Stereo connectors. This adapter requires no modification of your PSP console. This new peripheral takes a completely different approach and clips on top of your PSP screen, with two screws to fit at the back of the handheld (in these two holes you can see on the top of the UMD drive). Some sort of pyramid grows from the base, with a precision lens and mirror system at the top, capturing the image and light, in a similar way a scanner or camera would. It then converts it into a video signal that is sent through video leads going from the adapter to your TV set.'"
Could a prism be used to connect the "light table" portion to an optical sensor? Diffused light would go through the film strip, through the prism and into the sensor. No need for a bulky tunnel and, hopefully, easier to make with proper alignment.

There's still the issue of stepper motor control to move the film strip.

Arachnotron
17th October 2005, 21:31
Maybe you could use the (penta)prism from an old SLR?

FredThompson
17th October 2005, 22:05
Hmmm, maybe the body could be used as a mount, too. Might make it easier to get the prism/camera element alignment proper. For that matter, maybe some kind of macro lens. Hmmm... you just might be onto something there, webhead!

FredThompson
20th October 2005, 03:35
Here's stepper motor control through parallel port code: http://www.codeproject.com/vbscript/Stepper_Motor_Control.asp

dokworm
17th November 2005, 14:27
I was just wondering what the problem is with stripping down a $30 second hand projector?
You already have the gate, the drive system, (effectively) the trigger mechanism, the pressure plate, the sprocket feed and the reel holders - Just widen the gate out a bit.

All you need to do is replace the motor with a slow powerful motor like one cannibalised from an old cassette deck.

A piece of opal glass and a high brightness white LED (coupla bucks) provides the light source, and a webcam like the ToUcam for 640x480, or one of the new logitech true 1024x768 webcams (<$100) on the other side focussed directly on the film in the gate.
Run the projector at a few fps, have the PC triggered either by a cam and microswitch on the driveshaft, or an optosensor firing at the shutter and use cinecap (free) and you are done.

Half a days work, and maybe $200 and you have a true 1024x768 telecine rig. Done.

Spend a bit more and you can have a USB2 or FireWire CCD camera that can do true HDTV resolutions at up to 6-8fps.

Seems easier to use an already precision machined projector as your feed rather than try and build a feed mechanism from scratch.

To summarise.

Get a projector, slow it down, widen the gate, attach a trigger, stick an LED light source and diffuser in it, point a webcam with a decent lens on it at the film gate and run cinecap.

Now I know that 'old projectors' don't make for an easy 'kit' type setup, but there are plenty available cheap, and you could probably amass plenty of a particular type if you went hunting.

You could still make up a kit of almost all the parts, and people would just buy their own projector to finish it off.
It could consist of the light source, the webcam and appropriate lens, the opal glass diffuser (also a standard size for most projectors), the microswitch or optical switch, a low rev high torque motor, and the software etc.

The light source (an LED on a board with plugs etc. to be a slot-in replacement for standard 12v projector lamps) would be easy enough to churn out. Most of the other stuff is off the shelf, but many people would prefer to buy it in a kit rather than trying to track stuff down from everywhere.
Basically people could then just get a suitable projector and then retrofit it with the kit.

FredThompson
24th November 2005, 09:13
I've been thinking about your post. Seems like a good idea. I wouldn't mess with a trigger for the capture, though. Just use a lossless capture and an AviSynth script to throw out the junk frames which should have a consistent pattern. The original idea was to use hard drives/lossless compression/scripts to eliminate some of the physical components and allow averaging of multiple images.

A good idea, indeed. I notice VideoFred hangs out here. Think I'll point him at this thread and see if he's interested.

Fizick
24th November 2005, 23:32
FredThompson,
Do you know about my GetDups plugin (for AviSynth) to select the needed captured frames?
Only a small 8mm projector modification is needed (removing the obturator).

FredThompson
25th November 2005, 06:48
No, I hadn't thought about using that filter. I thought it would be something like keep 10, discard 10, repeat and the "kept" frames would be denoised and averaged. I do love your filters, though.

If we got VideoFred involved, we'd have 3 continents involved :P I guess we could add an AviUtl part of the filter chain to make it 4 continents...

There is a man who has been doing a number of experiments with flatbed scanners who returned my email a few months ago. He said he has some new machine shop tools. Maybe we could get his help for the physical portion and we could work on the filter chain. I'd be more than willing to send you parts if it is safe now. A few years ago it wasn't safe to send things into Russia. I guess an American return address was too tempting for theft. I could carry them to Italy and send from there.

Fizick
25th November 2005, 18:10
FredThompson,
If you understand my doc, please give me some corrections to it. It was hard for me to write some of the terms in English.

Final comment: I am not Fred :)

FredThompson
27th November 2005, 16:06
Ah, I think we have a language challenge.

My original thought for the filter chain was to have the script keep some frames and discard some frames, repeating to match the pattern of the film capture. The first idea was to use a camcorder and slow film movement, pausing for each frame of film. I still think this is the best option but don't know exactly how to do it. Servo motor would be necessary.

Fizick
27th November 2005, 18:47
You may try various options.
I also first tried adding an electrical switch (a "gercon", i.e. a reed switch) to the projector for per-frame capture.
But my plugin is not for such capture.

dokworm
30th November 2005, 05:08
Using an optosensor on the driveshaft of the shutter is just so easy and is only $3 worth of parts, I can post a little schematic and how to here.

It gets around complex software - you are only capturing useful frames, so no need for complex algorithms. Most shutters are 3-blade, so capturing 3 frames for each frame of film is easy, or you could just capture as many frames as your webcam is capable of in realtime - just slow the motor down if you want more frames.

The beauty of a switch to tell the PC when *not* to capture is that it negates the need for stepper motors, or complex software, you only capture frames while the film is stationary in the gate.

You could also cover one of the shutter holes with a Neutral Density filter material, and do a HDR capture (i.e. effectively bracket the exposure) and average the results to capture more of the dynamic range of film than you could normally.

FredThompson
30th November 2005, 06:47
Yes, please throw together a schematic and how-to.

I don't understand the comment about dynamic range and ND filters. ND lowers the amount of light without altering the color, right? How would that improve dynamic range?

trevlac
1st December 2005, 22:09
Yes, please throw together a schematic and how-to.

I don't understand the comment about dynamic range and ND filters. ND lowers the amount of light without altering the color, right? How would that improve dynamic range?


There are more steps in the range from black to white on film.
There are 256 in 8-bit video.

He suggests 2 captures grabbing the low and then the high range. (I think)
Overexpose the full range to get more detail in the low end.
ND filters would then underexpose and grab more of the high end.

I'm not sure you would get better results with this, especially with a simple average and if you just convert to 8-bit anyway.
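For what it's worth, the simple average is a one-liner in AviSynth, assuming the two passes are frame-aligned (file names are hypothetical):

normal = AviSource("pass_normal.avi").ConvertToYV12()
nd     = AviSource("pass_nd.avi").ConvertToYV12()   # pass shot through the ND-covered shutter hole
Merge(normal, nd)   # 50/50 average of the two exposures

Whether that buys anything over a single exposure is exactly the question raised above; a masked merge (discussed later in the thread) keeps the two ranges separate instead of averaging them.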

dokworm
2nd December 2005, 10:34
You get massively better results as you can capture the full range and then compress back down to 8bit. So rather than either missing out on the shadow detail (crushing the blacks) or losing detail in the whites (blowing out the whites) you capture the detail at both ends, then with some relatively simple processing, you can keep both in the one 8bit image.
Or better still, use openEXR and keep it a HDR image.
There are a lot of photography sites dedicated to HDR imaging if you want to do further reading.
http://www.cybergrain.com/tech/hdr/
http://www.hdrsoft.com/resources/dri.html


I'll put together a simple opto circuit for you guys then as well.

trevlac
2nd December 2005, 17:56
@dokworm

I said ... "I'm not sure ..."

The reason I'm not sure is that there are at least 3 things involved ...

1) The contrast of the scene: In this case the projection/reflection of Fred's device
2) The bit depth of the sampling CCDs (and the processing to put it into 8bit)
3) The limits of an 8bit (and eventually 4:2:0 DCT compression) target gamut

If 1 is low, 2 is good, and 3 is low ... I think "massively better" will be hard to achieve ... but I still don't know and I'd be happy to see how it works out for Fred. :)

-----------------------
On a similar issue ... oversampling the image resolution ...

I have an HDV camera and I do see some improvements in the resolution even when processed down to DV size on a STD tv. So, oversampling the analog is generally a good thing.

dokworm
5th December 2005, 11:43
It is more about remapping the film gamut back down to 8 bit in a sensible way. Take a look at the openEXR home page to get an idea of what I mean: basically, unless there is a very small range on the film, it will need remapping, otherwise one end or the other (or both) will lose a lot of detail.
It doesn't really matter about 3; even at 4:2:0 compressed, the idea is to put the detail into that scene rather than losing it *before* you get to the compression phase.

A great example is here
http://www.cybergrain.com/tech/hdr/images/tone_1.jpg

The image is blown out at the stained glass windows and the top round window.
By doing a multiple-pass exposure and remapping, you can get the entire tonal range remapped to something that can be displayed in the 8 bit space, like this:
http://www.cybergrain.com/tech/hdr/images/tone_2.jpg

The problem with getting film onto DVD is the wide latitude that film has; it is very hard with a standard CCD to capture the full range in one pass. You tend to get effects like in the first image, with some details either blown out or too dark. By doing two passes (at least), one to capture the dark details and the other to capture the light details, you can merge the two and end up getting both represented.

I think you would agree that you would be better off starting with the second image and then dumping it down to 8 bit and then 4:2:0 etc. than starting with the first image.

From my experience it would be hard to capture the full range of film well without doing something like this.

rfmmars
5th December 2005, 19:08

It should be made clear at this point that the dual exposure process is only valid for use with a work printer, not a movie projector, which will never run at the same speed twice unless it has a crystal-controlled drive.

The new Sony EX-View PAL block cameras come very close to the film's dynamic range. I use the "SuperHAD" block series cameras, which do very well.

Very little low light detail is lost.

richard
photorecall.net

dokworm
6th December 2005, 00:33
No, it doesn't have to be a workprinter per se, just any projector slowed down to, say, less than 6fps (depending on your capture device speed of course), and fitted with a trigger mechanism (i.e. a projector that is modified to work on the same principle as the workprinter, without the expense).
Then you can capture two or three images (at least) in a single 'pass'.

Although the new CCDs come close to the film range, getting it to work in practice isn't so easy.

Regardless of the capture device, your illumination source is often the problem, and it is difficult to get the dark areas lit enough without flaring and other problems happening in the lighter areas (especially in scenes with pure white as well as a lot of shadow detail). The bright areas can cause a washout over the entire frame, dropping the ANSI contrast ratio of the shot considerably; a multipass solution can avoid that. (Good masking and light control on any surfaces is also a must, to stop light reflection washing out the scene.)

It isn't of course necessary to get a 'good' transfer, but if you want an excellent transfer, it can make a big difference. I'm looking at it from a professional's perspective, rather than a system for transferring 'home movies' so my design constraints would be different to others.

Depending on your sources and aims, it may not be worth your while to develop a tool any better than the current workprinter/sniper solutions out there; any system with the CCD looking directly at the film frame yields a better result than the other consumer-level devices that just project the image onto a 'screen' and have the camera shoot that.

But, if you want to have something that gets close to a Rank, then a multipass process (perhaps incorporating a wet system, or IR pass) will be what you want.

rfmmars
6th December 2005, 01:21

Well, yes, a trigger device would pretty much make it a work printer. I modify all my projectors with new 150 watt halogen bulbs, soft-focused to lessen the hot spot, connected to a separate 21.5 volt transformer. A wall dimmer is connected to the transformer primary so the lens stays set at its best sharpness. I control the exposure with the dimmer. The projector has a tone generator that puts a burst on the audio channel, so if I have to re-expose a section of film, I have the tone code to show where to delete those spots.

The 21.5V transformer for the bulb lets me unload the motor, so the speed is not affected by the change in the bulb's resistance as it warms up.

After software dynamic and color correction, the end result is a 2-5:1 enhancement over projected film with just one pass through the projector.

This seems to be an easier way to go about it.

Look into the specs. of the Sony EVI 371 PAL or later cameras. You can control the camera and setup with software via RS 232 port.

EDIT: You can get OLD Revere 718 8mm and Revere AZS 830 Super 8 projectors on E-Bay for nothing. I just got the Super 8mm today for $0.99

http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&ssPageName=ADME:B:EOIBSAA:US:11&Item=7563732127

The big deal with these is that the entire transport, shutter, and gate come out as one assembly, letting you put the light source where the lens once was and the capture device where the lamp was. It's backward, but it makes for super easy access. You also get a speed control pot and a super motor, plus one of the best lenses out there if you want to go the other way.

richard

hepi
7th January 2006, 04:45
Hi there, Happy New Year!
I am following this with interest. Thanks, Fred, for starting it.
Would any of these cameras work for this? Some have built-in infrared LEDs; I wonder how that would work for telecine. http://www.protectiondepot.com/bulletcameras_menu.asp
Herb

morsa
8th January 2006, 14:38
Two stupid suggestions from a stupid guy who works with stupid film stuff every day.

Light source: just use a calibrated fluorescent tube (bulb) with a flicker-free power source (flicker-free in fact means 100 Hz and above).
Philips and Osram manufacture them. Just look for their codes. I guess they cost around 4 dollars here.
Small variations in color balance from the light source can be easily compensated with the (in)famous "White Balance".

Camera: get any USB or FireWire one with the resolution you want. Then you can make as many exposures as you want from the same frame.
For blacks = longer exposure time. For highlights = shorter exposure time.
Then use masktools or similar to merge both exposures and compress the dynamic range to fit it in 8 bit.

Frame steadiness: use a register pin. Film perforations have two uses:
1) Film transport
2) Frame registering

hope this helps

videoFred
20th January 2006, 09:08
Oh my God!!

I missed this very interesting thread!:scared:
But here I am :D

Working on this for two years, now.
Some things to consider:

1) capturing 1:1 (frame accurate, one film frame = one AVI frame) or not?
I work 1:1, but then we need frame rate conversion afterwards.
Fizick's MVFlowFps does miracles here.

2) the camera: I started with the Philips ToUcam Pro: a real CCD webcam with a resolution of 640 x 480. The Philips is USB 1 and comes with fine software. This is a very cheap step-in solution: 50 euros for the cam. It MUST be the ToUcam Pro; all the others are CMOS!

All still frames on my website are taken with the Philips.
This was about two years ago...
In the mean time, I learned a lot...
Now I can take much better frames with this Philips.
Time for an upgrade of my site.

I had to re-align the CCD of the Philips, and made a new box for it.
Everything is on my website. I'm working on a new Regular-8 transfer device, and I'm going to use the Philips for it.

Then I bought an Imaging Source machine vision camera with a resolution of 1024 x 768 pixels. This is a FireWire camera and it gives great results. The clips on my site are taken with this cam. I made these clips before I discovered MVFlowFps(), so the quality of the clips can be discussed.. movement is not 100%. I have better ones now. I can use some tips for optimal WMV compression, too.

3) the lens: I look straight at the film frame.. I use a 25mm lens for the Philips (1/4" CCD diagonal), a 35mm lens for the machine cam (1/3" CCD diagonal). Both lenses with extension tubes, of course. Otherwise it is impossible to focus at this distance on this small (6x4mm) film frame.

You can find an original, unmanipulated 1024 x 768 bitmap frame here,
taken with the machine camera and the 35mm lens with extension tubes.
This is the real sharpness of film; it can be artificially sharpened with the sharpness setting of the camera, and with LimitedSharpen() afterwards.

http://users.telenet.be/ho-slotcars/testmap/sailing.bmp

In the beginning, I thought there was something wrong with my system, but I'm very sure by now: this is how film looks. Of course it depends on the film camera used: the 1970-1980 Canon models had very good optics. But this specific 'sailing' 1976 frame was taken with a Eumig Vienette camera.


4) the backlight: I have the best results with a standard 12V 35W halogen spot, at full brightness, powered by an old computer power supply. Because the machine camera runs at 15fps, I need this stabilised power supply for the spot; otherwise I get 50-100 Hz flickering. For diffusing I use opal (white) glass. The best! Heat is no problem with the 35W spot.

5) dynamic range: film has more dynamic range than digital can capture. But in most cases, with well exposed film, it is possible to capture most of it in one pass. A histogram while capturing is very useful; the software of the machine camera has a histogram option. However, for difficult cases, I have developed a double capture system. Someone wrote a small application for me to use with the machine cam. Every time the projector gives a trigger, the cam takes two bitmaps of the same film frame, with different electronic shutter speeds. As you all know, importing the stills and overlaying them is very well possible with AviSynth.

For normal capturing I use Cinecap; this program creates an AVI file and fills it with frames. The trigger is a switch on the projector, connected to a modified mouse. Capture speed is slow: 4fps.

Richard goes another way, but he has lots of experience, too, and he is always very helpful and ready to share everything with others. I agree his system works faster.

So, Fred, if you want to discuss anything, here I am.;)
I still have to learn so many things myself.
Mostly about color space, resizing etc...

The script I use for converting my (huge) Huffyuv 1024 x 768 AVI files:

-resizing - MVDenoise - MVFlowFps - Depan - LimitedSharpenFaster.

All in one script, it runs at 3fps. I save the 720 x 576 result with Huffyuv again, then I do some color correction with Virtual Dub and save this result with the Panasonic DV codec. These are my masters for my NLE program.
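For anyone wanting to try a chain of the same shape without hunting down all the plugins first, here is a minimal core-only sketch (file name and sizes are only examples; ChangeFPS() simply duplicates frames and merely stands in for the motion-interpolated MVFlowFps() step, and the denoising/stabilising steps are omitted):

AviSource("capture_1024x768.avi")   # hypothetical 1:1 Huffyuv capture
ConvertToYV12()
LanczosResize(720, 576)             # downscale the oversampled capture to PAL frame size
ChangeFPS(25)                       # crude stand-in: duplicates frames instead of interpolating
Sharpen(0.3)                        # mild sharpening in place of LimitedSharpenFaster()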

But, again, I have lots of questions myself to find the best possible workflow and also to (maybe) automate levels, white balance etc...

Here are some frames, taken from the Mpeg2 file, created from the DV masters with Magix video de Luxe.

1972, kodak film:
http://users.telenet.be/ho-slotcars/Frames/1972_kodak_003.jpg

http://users.telenet.be/ho-slotcars/Frames/1976_Kodak_002.jpg

I still have a long way to go.. I need help with upsizing, downsizing, colorspace conversion, overlay etc...

Working with real film is FUN! It's like a time machine...
Quality of more recent (1985) and even today's film (yes, some guys are still filming the old way) is incredible, and even beats digital! (OK, it's softer.. but the colors and the latitude!)

More WMV clips here: (temporal)
http://users.telenet.be/ho-slotcars/WMV/

The problem with WMV is: I must tweak contrast and sharpness before converting to WMV, otherwise the result looks very flat. But I have tweaked sharpness a little too much on these clips. Any help is very welcome.

One more last thing to consider:
Do we want to make film look like digital?
I don't think so...


Fred.

videoFred
20th January 2006, 12:58
Now, about the double capture principle:

dark:
http://users.telenet.be/ho-slotcars/Double_Cap/WIM_DARK.jpg

bright:
http://users.telenet.be/ho-slotcars/Double_Cap/WIM_BRIGHT.jpg

used mask:
http://users.telenet.be/ho-slotcars/Double_Cap/WIM_MASK.jpg

overlay:
http://users.telenet.be/ho-slotcars/Double_Cap/WIM_OVERLAY.jpg

I used the normal overlay() Avisynth command.
To create the mask, I used Invert() and Greyscale().

Something like this:
mask = greyscale(invert(clip1)).tweak(bright=-90,cont=1.2).blur(1.0).blur(1.0)

I bet with masktools we can do a much better job!
(but I do not know how to use it :confused: )
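For reference, before moving to masktools, the whole double-capture merge can also be written with core functions only. A rough sketch (file names are hypothetical, and the mask convention is simply Overlay's: white = take the overlay clip, black = keep the base clip):

bright = AviSource("wim_bright.avi").ConvertToYV12()  # long exposure: shadow detail, blown highlights
dark   = AviSource("wim_dark.avi").ConvertToYV12()    # short exposure: highlight detail, crushed shadows
# use the luma of the bright capture as the mask: it is near white exactly where it is blown out
mask = bright.Greyscale().Blur(1.0).Blur(1.0)
Overlay(bright, dark, mask=mask, mode="blend")        # highlights from "dark", shadows from "bright"

The Tweak(bright=..., cont=...) step from the snippet above can be added to the mask line to bias how much of each capture is used.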

Fred.

rfmmars
20th January 2006, 15:15
Excellent post, Fred. How much time would you say it takes to capture a 3" reel and process it, if it is good quality film stock?

Fred and I are super users of Magix's "VideoDeluxe NLE" for our film work. Whenever I have to back up the projector for re-exposure, my modified projector puts a tone code on one of the audio channels for quick post editing. Fred, did you know you can multi-frameserve from VirtualDub(xxx) to Magix?

Richard
photorecall.net

videoFred
20th January 2006, 15:44
Hi Richard,

Well, the projector runs at 4fps, the script at 3fps. It takes hours...
If I would have to live from it, I'd go for a setup like yours.
To transfer films for home use, there is no need for 'super' resolution either.
400 - 500 TV lines looks great on a stand alone TV-DV player.

Or I (my banker:scared: ) would buy a Flashscan8...(25000 Euros)
This new developed machine has everything we can think of.
It works with a continuous run, and flashes the frames with an LED matrix.
It can run at 25fps 1:1 or slower with automatic pulldown patterns.

But nothing beats the pleasure of making something of your own, with your own hands!

No, I did not know one could frameserve to Magix.
This would save several compression steps!

But then going forwards in Magix is very slow, I guess.

But I must go, now.
I'll be back next Monday.

Have a nice weekend!
Fred.

rfmmars
20th January 2006, 16:00

Not slow at all, in fact loading is very fast. I have used 12 Virtualmpeg2 frameservers at one time feeding one Magix NLE. On our forum, the record is 80-plus at one time. It would be great if we could frameserve out of VideoDeluxe. You can "Smartrender" export MPEG-2 at 15,000 bps lossless, just like "Direct Stream Copy" in VD(xxx).

See this long post.

http://support.magix.net/boards/magix/index.php?showtopic=12122

I am working a show this weekend for my telecine work, so I will crash Monday.

Best to you,

Richard
photorecall.net

burnselk
21st January 2006, 02:43
Perhaps the use of an old projector, then all the mechanicals of moving the film will be taken care of.

Of course, the motor would have to be replaced with a servo/stepper and the bulb would not need to be as bright.

But I get what you mean - feed the image direct into the camcorder, not via a screen, and control the frame rate of the film from the PC.

Nice idea - I'm currently getting about 3000ft of 16mm converted, and it is costing me about a grand. A "home remedy" would have been an attractive option, even at half the price.

MM.
Mustardman..... how did the DVDs turn out? Was the quality any better than the film projected on a screen?

I'm thinking about having my regular 8mm film put on DVDs. Do you have any suggestions as to where I should go to get that done?

I like Fred's idea. But I bet the cost is going to be pretty high.... guess we'll have to wait and see. I'm certainly interested.... I must have 2,000 ft. of regular 8 I want to put on DVD.

I'd like to hear about your experience with the company that put your film on DVD.... would you share that with me?

videoFred
23rd January 2006, 07:57
I like Fred's idea. But, I bet the cost is going to be pretty high....guess we'll have to wait and see.

You do not have to wait...
My transfer unit is running very fine.

About the cost (if you make it yourself):

Old Eumig: 25 Euro
Philips Webcam: 50 Euro
Old 25mm lens: free
Slow drive motor: free (old Telefunken cassette player motor)
All together with some additional other parts: about 100 Euro.

If you want a better system: the cost of the machine cam and lens was about 800 Euro..

If you want a company to do the transfer for you:
Try Richard ;)

Fred.

johnmeyer
26th January 2006, 20:34
I own an 8mm/Super8 Workprinter. Great device, but slow. I now have to do 16mm conversions, so I bought my own projector. With very little work, I now have a device that produces frame-by-frame film to video transfer (like Workprinter) but at full 24 fps, and with no need for the cam/switch/mouse approach that uses Stop Motion animation capture. Benefits include the ability to simultaneously capture fully synced sound, much faster throughput (capture is real-time), and the ability to capture to HDV (which cannot be created with stop-motion).

The projector modifications only require removing the projector shutter, and, of course, getting a diffuse light source (if you are going to shoot back into the projector using an aerial lens or similar arrangement). I just replaced the MR16 ELC projector bulb in my Eiki SSL-O with a 30W 24V MR16 (similar color value, costs $2.50), and stuck some frosted glass on the end.

The secret to the whole thing lies in the knowledge accumulated in the AVISynth section of this forum, and in a few critical insights I gained while trying to understand why real-time transfers using a 5-bladed shutter (something I didn't want to do because of the lousy quality) were better than with a 3-bladed shutter. After careful study, it became clear that most of the reasons usually offered were incorrect and misleading. I started to sketch out on a timeline what happens at each moment in time when a film is projected (on-off-on-off-on-off-on-off film moves etc.), and when an unsynchronized CCD video camera photographs that mess, and does so with various video camera "shutter speeds."

I have to go to a meeting, so I'll leave you hanging for a moment, but if you're interested, I'll give you the rest of the story. There has to be a downside to what I'm doing, but so far I haven't found it. Some of the post processing ideas for getting closer to film gamma are still going to be extremely useful, although I can do some of this in my FX1 when I capture.

rfmmars
26th January 2006, 23:48

Waiting for your next post with details of your system.

Richard
photorecall.net

johnmeyer
27th January 2006, 02:39
I'm a little shy about posting all the details until I run a lot of film to verify that all is going well. I will share, however, a few of the insights that led to my "invention" (I am doing nothing that hasn't been done before, but no one -- I don't think -- has put it all together this way).

1. Why is a 5-bladed projector shutter better for doing full-speed film to video transfers?

I had no desire to go this route because of the fuzziness that comes from frame blending, but I was fascinated by all the discussion about needing a 5-bladed shutter if you wanted better results. Most discussions referenced the flicker you perceive when watching the actual film projected onto a screen with a 5-bladed shutter. This clearly isn't something that matters to the video camera.

I also noted that all the "cookbooks" on capturing video directly from a projected image all made it clear that you had to use a shutter speed of 1/60 of a second on your video camera.

That got me to thinking, and before I could formulate a complete answer to my first question, I ended up asking a second question:

2. Why do you need to use a camera shutter speed of 1/60 of a second?

This one I figured out pretty quickly. If you use a really fast camera shutter speed, then you might capture a moment when the projector shutter is completely closed. That means that field would be completely black.

3. But wait, what does it even mean to have a shutter speed on a video camera? Isn't this thing scanning?

I was hung up on this for a long time. Obviously a traditional TV set plays the video one scan line at a time for each field, and then does the same for the other field, 1/60 of a second later. Each scan line occurs at a slightly later instant in time than the line above. Older TV cameras used to do the same thing when capturing. For them, the idea of a "shutter speed" was meaningless. It took me awhile to have that big "duh" moment and realize that the CCD in modern video cameras is no different from the CCD in still cameras (which is why video cameras usually have some sort of still photo capture capability). Thus, each field is captured in the same manner as an image is captured on traditional film, with the entire image area EXPOSED AT THE SAME MOMENT IN TIME. The light actually "imaged" onto the sensor is the average of the light that falls on that sensor during the time each pixel in the CCD array is enabled.

This then, at last, gave me the answer to the first question about why a 5-bladed shutter is better.

It also led me to the discovery that is at the heart of the invention.

First, when taking a picture of a series of still images that are flashed on the screen for brief periods, the actual image that gets transferred to the sensor is the AVERAGE of the intensity for the time the sensor is enabled. If, during the capture, the shutter closes for awhile, and if the sensor is enabled for the entire period of one field of video (1/60 of a second for NTSC), then rather than getting a black image on some of the scan lines (like an old camera would have recorded -- this is what hung me up for awhile), instead, the black pixels get averaged with whatever light appears during the time the projector shutter is open. The longer the duty cycle (i.e., the larger the percentage of time the shutter is open), the brighter the picture. Since the video camera and the projector cannot be synchronized, there is no way to know at what point in the projection of each frame the video camera will actually make its 1/60 second snapshot. Depending on where it starts, and on how many times the projector shutter opens and closes for each frame of video, the CCD may see a very large percentage of all black (shutter closed) or a very small percentage.

Here's the thing: the more blades, the less the possible variation in that duty cycle, and therefore the less the flicker.

Great, so where does this all lead? I don't want any flicker, and I want to capture individual frames, just like my 8mm/Super8 Workprinter. I have this new (to me) Eiki 16mm projector, why don't I just slow it way down and proceed to pretty much mimic what was done on the Workprinter?

But I have cans and cans of 16mm film and this project is going to take forever if I do it that way. What's more, I find out that this projector uses a synchronous AC motor, and there is no way I can electrically slow it down like was done with the DC motor in the 8mm projector. I can try to use belts and pulleys, but that is going to be hard because the gear ratio results in pulley sizes that won't fit on the machine. I can use another motor (I briefly ran the projector using my electric drill), but this is getting ugly fast.

There has to be a better way.

Well, if I remove the shutter, I can get rid of all that time when the frame is black, and I won't have to slow things down that much. I start sketching things out on a timeline, and indeed, I can run a lot faster. However, I keep running into timing issues, because with a 1/60 of a second camera shutter and 24 fps coming out of the projector, depending on how long it takes for the film to advance, I have to always have enough time left to snap two successive video fields. Without going into all the math, I conclude that I can run the projector no faster than 12 fps. When I look at other factors, about how the stop motion capture software works, I realize why the Workprinter is only rated at about 6 fps.

And then it dawns on me.

Why the heck do I need to use a 1/60 shutter? The only reason for that shutter speed was to keep the shutter open for the entire time it takes to capture one field of video in order to provide the maximum averaging for the projector shutter opening and closing during the capture (this is why I gave you that long explanation above).

But I've removed the projector shutter!

Why not, instead, use a 1/1000 camera shutter? If I do that, then I can get two samples only 1/60 of a second apart. Think about it: the camera captures one field right now: blink 1/1000 of a second and it's got the picture. From that moment, exactly 1/60 of a second later: blink 1/1000 of a second, another field is captured. Even if the projector is running at full 24 fps, no matter when the video camera snaps a frame relative to the projector I can be absolutely positively guaranteed to get two fields (i.e., one complete frame) of video of a motionless frame of film. After those two fields, depending on timing, I will either get another good field of the same film frame, or I might get a snap of the film as it is moving to the next frame (which will now be a clear snap rather than a blur, although it will be out of registration). What I will get is 3 good fields followed by 2 suspect fields followed by ...

Can you hear it coming?

Yup. It's 3:2 pulldown. What I've captured is ALMOST the same as what 24 fps video looks like when pulldown has been applied so it can be shown at 29.97 interlaced.

And what do we do, class, when we want to recover the 24 fps video from 29.97 interlaced video?

IVTC.

So, folks, that is my invention. It is so simple that I guess maybe it shouldn't be called an invention. Others have implemented various parts of it before, but none of them -- I think -- were familiar enough with IVTC software for the solution to seem so obvious.

This technique can be used whether you project onto a screen, use a telecine box, or capture looking directly into the projector lens. It can be used with any projector, sound or silent, once the projector shutter is removed (although most IVTC software is "tuned" for 24 fps operation).
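In AviSynth terms, the plain-vanilla version of that recovery step is just the standard field-match + decimate pair from the TIVTC plugin used later in this thread - not the exact two-pass recipe posted further down, only a minimal sketch of the idea (file name hypothetical; field order and thresholds will need tuning for a real capture):

LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\TIVTC.dll")
AviSource("shutterless_capture.avi")   # NTSC capture of the shutterless projector
AssumeBFF()                            # set to whatever field order your capture device really delivers
TFM()                                  # field matching: rebuild whole frames from the fields that belong together
TDecimate()                            # drop the leftover duplicate/junk frame in each 5-frame cycle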

The post processing techniques and exposure issues still all need to be addressed, as discussed earlier in this thread. The key thing this approach provides is:

1. Throughput. You capture at the full speed of the projector.

2. Frame-by-frame capture. You get the same benefits of a Rank Cinetel or Workprinter capture (although the Rank has lots of features you can't get with a projector).

3. HDV capture (maybe). If you want to capture 16mm using HDV, the 5-bladed projector approach is a bad idea because the blended frames will make the MPEG encoding go nuts. I am not sure (because I haven't yet tried it) whether the spurious frame captured while the film is moving (which will eventually be discarded by the IVTC software) will screw up the HDV encoding or not. Hopefully not, because HD capture of 16mm is a holy grail.

4. Capture of synchronized sound. With the Workprinter, if you have sound film you have to capture the sound separately and then sync up in your editing software. With this approach, you are capturing the sound at the same time as the picture, so they are -- by definition -- in sync (assuming your projector is in sync). After applying IVTC, if you set the flag in the AVI file to 24 fps, everything will playback in perfect sync. You can then encode directly to DVD, setting the 24 fps flag, and you are finished.

Let me know what you think.

videoFred
27th January 2006, 08:18
Thank you for this very interesting explanation and for sharing it.
The big question: what do you do with the out of registration fields/frames?

Second question: if I do this with my 15fps progressive machine camera, I still get blurred frames.....

Maybe I should try even higher shutter speeds?
I can go up to 1/4000. Of course the projector must run at 15fps also.

Fred.

rfmmars
27th January 2006, 13:48

Yes, those are GREAT details, and fast coming.

Haven't tried this so far; my concern would be reduced light sensitivity with a fast shutter. Using a 3CCD camera, I am at 15 lux, F1.2.

Richard
photorecall.net

FredThompson
27th January 2006, 19:24
Boy, this thread is turning into a real dream come true :P

@VideoFred, I've been meaning to drop you a note after a buddy and I stumbled across your discussions in another forum. Glad to see you're here, too. Great name, btw. Us Freds gotta stick together.

@all recent posters, thanks. Great ideas. I'm still enamored with the idea of multiple captures of each frame but realtime audio capture is also important.

johnmeyer
27th January 2006, 20:43
Actually, the shutter speed really doesn't need to be all that high. I used 1/1000 just to illustrate the point. The main reason for reducing it to less than 1/60 is to reduce the total capture time so that you can always be assured of getting two good captures (one for the top field of video and one for the bottom field) every time each frame of film is at rest in the gate. Actually, even 1/120 would probably be sufficient. Remember that the only captures of interest are those where the film ISN'T moving, so having a higher shutter speed has nothing to do with "freezing" the action. I probably misled some by making the off-hand remark about being able to freeze the film if you happen to capture a frame while it is moving.

As for IVTC, I didn't go into much detail (that's the part I'm withholding for now until I make sure it works 100% of the time), but if you've ever used any of it, you know that it compares fields, and keeps those that have very little difference.

videoFred
31st January 2006, 12:00
As for IVTC, I didn't go into much detail (that's the part I'm withholding for now until I make sure it works 100% of the time), but if you've ever used any of it, you know that it compares fields, and keeps those that have very little difference.

I begin to understand...
So IVTC removes bad fields/frames?
Those with too much difference? (moving)

EDIT: no, I do not complete understand.. What's IVTC?

Fred.

FredThompson
31st January 2006, 15:55
No, IVTC is related to removing the padding frames which are added to a film source when it is converted from 24 fps to 29.97 fps for the NTSC TV format. It has nothing to do with using an NTSC camera to record a film being projected in real time. Real-time playback of the film should give proper audio, but the frame rates are different, so you get corrupted frames in the video.

videoFred
31st January 2006, 16:15
No, IVTC is related to removing the padding frames which are added to film source when it is converted from 24 fps to 29.97 for NTSC TV format.

I see... But why remove them if you are making a NTSC DVD of it?

(experimenting with 4200K halogene light source, now.)
(also comparing Workprinter Sony 3CCD transfer with my system)
(hope my system is not too bad)
(also using special testfilm to test sharpness etc..)

Fred.

FredThompson
31st January 2006, 18:18
If you make an NTSC DVD of a film source you can encode at 24 fps and set the pulldown flag. The playback will look proper on NTSC equipment or a software player. The padding frames are inserted when a 24 fps film source is converted to an actual 29.97 fps NTSC stream so it looks like the proper playback speed. (I can't, for the life of me, comprehend why history shows don't properly pad early film so the movement isn't jerky...) PAL has a progressive mode and most of the time film is broadcast as the original frames but at 25 fps instead of the original 24 fps. Progressive images are going to be easier on the eyes and you don't lose bandwidth storing padding frames.

If you use an NTSC camera to record film at real-time speeds, you'll have video frames which don't exactly correspond to film frames. The hope is to get rid of the junk extra frames and end up at the proper frame rate. This is why I am interested in slow capture or synchronized computer camera capture instead of NTSC video capture. Make sense?

johnmeyer
31st January 2006, 19:21
Answer to the IVTC question.

Film is 24 frames per second. Video is 60 fields per second, where two fields make up one frame. Since you only have 24 pieces of motion each second, yet you need to display 60 fields every second, you have to come up with a scheme that duplicates each of the 24 frames. Since 24 doesn't go into 60 evenly, what is done is to repeat one frame of film for three fields of video and then repeat the next frame of film for two fields of video. You then repeat this over and over with subsequent frames of film. Thus, you get two frames of film spread out over five fields of video. Since 60/24 = 5/2, this makes the film project at the proper speed. Your eye doesn't detect the fact that some frames are shown for a fraction of a second longer than others.

Of course video is really 59.94 fields per second, so there is a slight adjustment that gets made, but this is the basic idea.

Inverse Telecine software merely removes the duplication and "recovers" the 24 original frames. This is extremely important software to use if you want to encode a movie for DVD, but your only source for that movie is a tape you made over the air or a VHS tape you purchased. If you merely encode that, the encoder goes nuts with the staggered fields. This is especially true because every frame of video is made up of two fields. Because five fields of video are used to encode two frames of film, one of the frames of video that results from blending two of those fields together must contain video from two frames of film. When projected, this looks just fine, but the encoder sees this "blended" frame as having abnormally high motion. It takes more bits to encode. What's more, the encoder must encode 30 frames every second, instead of 24. If you only have to encode 24 objects a second, you'll have more bits available for each frame, which results in a higher quality encode.

Back to my film project. When you use a video camera to capture film in the manner I described, you end up with exactly the same 3:2 pattern. The only difference is that on some captures, depending on the timing between the camera and the film, that one blended frame (which is the one you want to throw away) may include a field that was captured while the film was advancing. All that does is make the frame even easier to detect and throw away.

tedkunich
31st January 2006, 21:33
@videoFred

I was looking at your site for making your own work printer and am interested in the objective lens you used - the site translation was difficult to understand. Would you know of any sources for such a lens?

Thanks


T

videoFred
1st February 2006, 07:45

Sure, here:
http://www.1394imaging.com/en/products/optics/megapixel/

But if I find the money, I might try a macro lens, too.

Fred.

videoFred
1st February 2006, 07:52
@ Fred and John:

Thank you both for explaining!
Living in Pal Europe, I never have to deal with this.

(Experiment with 4200K Halogen spot:
Yes, it's better. More easy to white balance, and better colors, too.)

Fred.

smok3
1st February 2006, 09:23
Well, thanks all for an interesting thread. (I'm still on the "how to use my 35mm Leica lens on a DV cam to decrease the DOF" thought, but this thread did give me some ideas, I think...)

hepi
20th March 2006, 05:11
Just to keep this alive:
a link I found. Too bad it is only for 35mm, and I could not find how much this camera costs. http://kmpi.konicaminolta.us/eprise/main/kmpi/content/cam/cam_category_pages/DigitalFilmScanners
Herb

hanfrunz
20th March 2006, 23:47
I think there is a super8 gate for this baby (http://www.thomsongrassvalley.com/products/film/spirit_4k/):D

videoFred
2nd June 2006, 07:38
Hi, FilmFolks ;)

Any new developments?
I changed my URL: www.super-8.be
Added some new filmframes on my site, too.
If the English intro has bad errors, please let me know.:scared:


I'm working on a new regular-8 unit:
Same capture camera, but this time I'm gonna use a Sony 45mm macro lens.
Pointed straight at the film frame.
Gives a very sharp picture, also in the corners.

The camera will be mounted on a precision machine slide, for optimal focussing.

This projector will be able to run stepless between 1 and 20 fps.
And I'm going to make removable shutter blades.
Because I still want to try to capture at high speed.
This way, I can test 1 - 3 - 5 blade shutter setup.

With a one blade shutter, there are no moving or blurred frames.
But a darker frame, now and then.
Depends on the shutter speed of the capture cam, of course.
And the speed from the projector.

Within a few weeks, I can begin the experiments.
I do not have to worry about fields, because my cam is progressive.
I hope AviSynth can help to remove any duplicate/dark frames.

And if it's not good enough, I still can capture frame by frame.
This has proven to be the best system for now...
But slow...:eek:

Fred.

johnmeyer
2nd June 2006, 16:18
New developments? Well sort of.

I have worked out all the kinks in my film to video transfer. It was a good thing I didn't rush right in and try to do the whole thing back in January, because some newer versions of software, and also a little reflection on my design and technique, have allowed me to perfect the whole process. The results truly are to die for.

First, the newer versions of TIVTC (which includes TFM) really made a difference in the process of removing the pulldown and extra frames. That process now works very well. I also was able to solve the problem of adjacent fields being offset because of residual "squirming" of the film as it comes to rest. With other types of film to video transfer you would not see this, but because I am "snapping" the film image using a 1/1000 shutter speed, the residual movement as the film comes to rest is not blurred out over time. I solved the problem by using motion estimation software, which almost perfectly aligns the two fields horizontally and vertically. I then vertically shift the second field to put it spatially back in the correct place (one scan line lower).

Finally, I decided to eliminate the front mirror and aerial lens that I took from my Workprinter and was using for this project. What were we all thinking? What a horrible thing this device is. It introduces all sorts of chromatic aberrations (colored fringing) as well as keystoning and other distortions. I now just point the camera at the projector, about six inches away from the lens. The picture is upside down, but that requires exactly one single line of AviSynth code to fix. Boy, the picture sure is sharp.

I also spent a lot of time learning all the controls and settings on my new Sony FX1 HDV camera, and found various settings that really make a difference for film transfer. I auto white balance on the white light itself. I use the ND filters to get into the mid range of exposure. I make sure that the auto-gain is set to 0dB, so that all exposure is made with the aperture and the ND filter. I use a 1/1000 shutter speed. Most important, I use a combination of the "spotlight" feature, coupled with the autogain reduction. With the zebra pattern turned on, I adjust the autogain reduction until the zebra pattern just disappears. This makes sure that I always get proper exposure in the highlights. I then adjust the shadows and midtones using the correction tools in my video editing program (Vegas). I set up six different levels of gain reduction in the "PPV" menus, so I can instantly dial in exactly the amount of reduction required to just make the zebras disappear.

As I said, the results are unbelievable. And, I can capture at full 24 fps, and yet get Workprinter/Cintel frame-by-frame results.

rfmmars
2nd June 2006, 19:39
@ Fred and John:

Thank you both for explaining!
Living in Pal Europe, I never have to deal with this.

(Experiment with 4200K Halogen spot:
Yes, it's better. More easy to white balance, and better colors, too.)

Fred.

What I do is install 300 watt 82V halogen bulbs in all of my modified projectors, with a separate mains transformer whose primary side is on a wall dimmer. Yes, they say you can't use a dimmer with a motor or transformer. Yeah, right.

This allows me to "light blast" the dark scenes, and also allows the projector light level to be set for the sweet spot of the pickup camera, for which I am using a 2/3" 3-CCD Sony DCX 755 with analog color balance, black clamp and iris control.

What I see happening here is that people are getting caught up in thinking that they are dealing with professionally shot home movies, which they are not.

These old cameras had wind-up spring motors or battery-operated drives. Trying to worry about pulldown and other such things is a waste of time; you are dealing with non-standard, varying frame rates. Wait until you have to deal with sound Super 8 or 16mm. All those details go out the window when you are dealing with sound film shot with weak batteries. Now you're dealing with as low as 6 fps, varying across the entire clip.

Richard

photorecall.net

dokworm
6th June 2006, 02:45
No, I think you misunderstand; we are talking about using pulldown-style software to let us capture the film at full speed and yet end up with one video frame for each film frame (without having to resort to single-frame advance and capture).

John, are you just pointing your Sony video camera at the *lens* of the projector?!?
I have been removing the lens altogether and pointing the camera directly at the *film*.
I'm surprised it works shooting through the projector lens as well; what sort of zoom level are you running on the video camera?

dokworm
6th June 2006, 02:49
Yeah, I wish the FX1 had a true progressive mode. I hit the same problem with film movement between fields when experimenting with an old TRV900E about a year ago, when I tried coming in at full speed and shutterless, much in the way you describe.
I'm surprised the motion estimation works so well; can you share your setup?

johnmeyer
6th June 2006, 03:26
John, are you just pointing your Sony video camera at the *lens* of the projector?!?

Yup. I set the camera about 6-10 inches from the lens. I focus the projector lens about midway in its travel and then zoom in with the camera. It doesn't require much zoom. Like I said in the earlier post, I have no idea what I was thinking, using that aerial lens, other than that was what Roger Evans used with the Workprinter I bought from him. I think the reason he used it is that some of his products are designed to be used without any computer involvement, and therefore flipping the image right-side-up would be a big deal.

I am surprised you can zoom in far enough to get the 16mm frame full size. I am quite certain that my camera would not be able to get something smaller than a postage stamp to fill the frame all the way.

Yeah I wish the FX1 had a true progressive mode

Actually, if it did, I couldn't use it. You'd have to draw out the diagram to see what the timing looks like, but if the video camera is not synced in some way to the projector (which it is not in my arrangement), then there is no way you could ever get a 24p camera to get a single frame of film, without the pulldown (if you remove the shutter) or without getting part of another frame (if you include the shutter).

I'm surprised the motion estimation works so well, can you share your setup?

Well, I haven't cleaned up the code, so what follows is REALLY ugly. However, it works. This is the code for the second pass. In the first pass, where all the fun stuff happens, I actually figure out which fields I can discard. I put this information in a text file, and then massage that text file externally to make sure all the decimation decisions are correct. I then read that file back into TFM in this pass and do the actual decimation. The decimated frames get read into the motion estimation. Since the motion estimation unfortunately corrects in the vertical direction as well, I have to shift the fields (ComplementParity). The thing that I was stumped on, and which someone in these forums (in Germany) helped me with, was to figure out how to ONLY have the second field compared with the previous field, and then have the motion estimation reset. Otherwise, you end up with motion stabilization, which is not a desired result at all.

OK, here it is. Don't blame me if it doesn't make sense. When I'm finished with all this, perhaps I'll post the entire "recipe." Note that I use "autogain" to get better contrast for the TFM and motion estimation algorithms, but I actually then operate on the original, unadjusted footage. Neat trick (at least I thought so) to improve the quality of these filters.

I put this project on hold for most of last month, and I can't remember at the moment whether the normal operation of the script is to return "k" or to return "i". I was playing around with a lot of things. I think "k" is the final result. I think I was trying to fix a residual problem where the result is shifted by one frame. Not a big deal, but I wanted to fix it and then got side-tracked.

# Script to recover film frames from film projected on shutterless 16mm projector.
# Second Pass
# Copyright 2006 John H. Meyer
# Revision May 9, 2006
#-----------------------------

loadPlugin("c:\Program Files\AviSynth 2.5\plugins\TIVTC.dll")
loadPlugin("c:\Program Files\AviSynth 2.5\plugins\Depan.dll")
loadplugin("c:\Program Files\AviSynth 2.5\plugins\MultiDecimate.dll")

AVISource("D:\OPRFHS\Intersquad 1st reel)0002.avi")

FlipVertical()
converttoYUY2(interlaced=true)
AssumeBFF()
tfm(display=false,micout=2,mode=0,cthresh=45,mi=1600,\
    pp=6,metric=0,field=1,micmatching=2,slow=2,\
    blockx=256,blocky=256,sco=-1,\
    debug=false,input="d:\tfm.txt")
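# the MultiDecimate call below is pass 2: it actually drops the frames flagged during the first-pass analysis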

a=MultiDecimate(pass=2)
b=AssumeFrameBased(a)
c=b.separatefields()
d=c.colorYUV(autogain=true)

e=interleave(blankclip(c),selectodd(c),selecteven(c),blankclip(c))
f=interleave(blankclip(d),selectodd(d),selecteven(d),blankclip(d))

mdata2 = DePanEstimate(f,range=1,dxmax=32,dymax=32,pixaspect=0.9091,info=false)
g=e.Depan(data=mdata2,matchfields=false,offset=1,info=false)
h=g.selectevery(4,2,3)
i=h.ComplementParity().weave().AssumeFrameBased.AssumeFPS(24)

j=i.ComplementParity().DoubleWeave().SelectEvery(6, 0, 0, 2, 3, 4)
k=j.AssumeFPS(29.97) # not a problem, since you have no audio
k
#i

rfmmars
6th June 2006, 05:57
No I think you misunderstand, we are talking about using a pulldown style software side to allow us to capture the film in at full speed, and yet end up with one video frame for each film frame (without having to resort to single frame advance and capture)



What I am saying is: don't get caught up in the details when your source has a varying frame rate, because it was shot with crude equipment by today's standards.

Richard

videoFred
6th June 2006, 10:35
to capture the film in at full speed, and yet end up with one video frame for each film frame


Maybe you are talking about 16mm only?
This was made at 24fps...

Regular 8 and Super-8 can be 15-16-18-24 fps, and even something in between, and even all these speeds together :) , like Richard says.

But I agree, a frame accurate (1 film frame= 1 avi frame) and progressive transfer is the best way to start.

But then, for DVD use, it must be converted again.
MVFlowFps is the tool I use for this.
It smooths out zooming and panning very accurately.
I use Depan, too.

OK, sometimes MVFlowFps gives you weird frames on scenes with fast motion, but you barely see these artefacts on TV at 25fps. The same should be true for NTSC users.

A really perfect solution would be an electronic synchronisation between the projector and the digital camera. It exists, but I hear it is not working 100%.....

Fred.

dokworm
6th June 2006, 12:57
Yeah I am talking 16mm mainly and 24fps Super8.
For DVD I am happy to just use the PAL methodology and speed it up to 25 fps.
Thanks John for the detail, it explains what I was missing, I was ending up with stabilised frames and couldn't work out why (Duh!)
Now it all finally falls into place in my thick skull.

I can zoom in far enough because I have a removable lens setup, (not on the FX1 obviously) so extension tubes and manual focus get me there easily.

videoFred
6th June 2006, 13:39
Hey guys,

It's time to show us some results!
A few frames would be fine!

Fred.

dokworm
7th June 2006, 20:57
Wouldn't a sync setup just be a matter of stripping the sync pulse out of the camera's output and using it as the trigger to drive a stepper motor that advances the projector one frame?
You could then have the projector driven by the stepper motor at 25fps (for a PAL camera) and the camera simply grabbing frames while the stepper was stationary.
You could then capture at full speed without the need to discard any frames.

johnmeyer
7th June 2006, 21:26
A stepper motor would be cool, but a lot of work to engineer and install. Also, if driven through belts, the mechanism would still slip.

As for " source has a varying frame rate, because is was shot with crude equipment by today's standards ..." what I am doing doesn't matter one bit whether the film was shot at 16, 18, 24, or whether it varies over time. I don't care if it was hand-cranked. Nothing in my process depends on the original frame rate. That can be adjusted later on by whatever pulldown I decide to apply. All I care about is getting one film frame onto each frame of video. When played back, without pulldown, it will look really clean, but also playback way too fast. I then apply the proper pulldown and the speed is then correct. If there is sound, that can be stretched to fit whatever pulldown I use.

videoFred
8th June 2006, 07:20
No need for a stepper motor, but we need an electronically regulated projector, of course with a DC motor.
And feedback from the main axle... the shutter is fine for this.

The sync system is available here:
http://www.laendchen.de/

It works with the timing pulses from the camera; these pulses regulate the speed of the projector. This system works only with certain electronically speed-controlled projectors, like Bauers etc.

I do not have it, so I cannot judge it.
Some people say it works perfectly, others are complaining.....

Of course, with this system the result is interlaced, with a pulldown pattern on it.... And the play speed is fixed at 16,... fps (the frame rate is 25fps, of course).

But back to John Meyer's system:
In my case, with my 15fps machine cam, I must capture at a lower projector speed, right? To be sure every frame is captured at least once. Then remove any duplicates and/or blurred frames with AviSynth, right?
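
I mean something along these lines (just a sketch, assuming the usual two-pass MultiDecimate workflow, the same plugin John's script uses for its pass=2 stage; the file name is only an example, and how the dropped frames are chosen between the two passes depends on the MultiDecimate version):

loadplugin("c:\Program Files\AviSynth 2.5\plugins\MultiDecimate.dll")
AVISource("d:\oversampled_capture.avi")  # example: capture that contains duplicate frames
MultiDecimate(pass=1)                    # first run: collects per-frame difference metrics to a file

# ...decide which frames to drop (a threshold on the metrics), then run again with:
# MultiDecimate(pass=2)                  # second run: actually removes the flagged frames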

Fred.

johnmeyer
10th June 2006, 00:09
I still feel like I am sometimes talking at cross-purposes when it comes to the two independent subjects of capture speed and playback speed.

The capture speed depends on the technique you use. You can capture at full speed (16, 18, 24 fps) if you just point the video camera at the projector screen and take what comes. Some adjacent frames get averaged together, but with a 1/60 s (NTSC) shutter speed you minimize the flicker. In this process, the shutter speed needs to be exactly equal to the field duration.

However, if you use a capture technique that puts one individual frame of film into one video frame, then there is no overlap, no flicker, and the result is very sharp. If you play this back using standard NTSC equipment, without doing anything else, it will play back too fast by a factor of 30/15 or 30/18 or 30/24 (OK, 29.97/15 etc.). However, if you want to play it back on your computer, all you have to do is set the frame rate flag in the AVI file, and it will play back at exactly the correct speed. Even if the projector speed varied from 1 fps to 24 fps and back again during the capture, as long as your capture scheme puts each frame of film onto one frame of video, and doesn't miss any and doesn't duplicate any, the playback speed will be PERFECT, controlled entirely by the clock in your PC.

Since the video is progressive (regardless of whether the file header says interlaced or progressive, the actual material is progressive because there is no temporal displacement between the upper and lower field, if you captured one frame of film to each frame of video), it will look FABULOUS when played on your computer. Exactly the correct speed.

So, whether you captured the film at one frame a second, or at 24 frames a second, none of that matters during playback. The only thing that matters is the playback flag, and that is easy to set, using either AVISynth, or various header patch utilities (to change it after the fact).
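
For example, a two-line AviSynth script is enough to set the flag (the file name here is just an example; pick whatever rate the film was actually shot at):

AVISource("d:\one_frame_per_frame.avi")  # example: capture with one film frame per video frame
AssumeFPS(18)                            # or 16, 24, ... - changes only the frame rate flag; no frames are touched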

videoFred
12th June 2006, 06:57
Hello Johnmeyer again,

I know all this:D

I have been capturing for two years with the mouse-microswitch method, just the way the Workprinter works... But with my 1024x768 cam, the capture speed is very low: 4fps.

I was asking about recording straight to AVI.
Then my camera records whatever it sees, at 15fps.

As far as I can see, the only good way to work like this is to use an industrial servo motor with a special power supply. Because these motors give feedback, it is possible to make them run at exactly a given speed.

Fred.

johnmeyer
12th June 2006, 15:51
I was asking about recording straight to AVI.
Then my camera records whatever it sees, at 15fps.
This thinking is a common mistake. Everyone assumes that because 15 is exactly half of 30 (the nominal NTSC video frame rate), you will somehow get a better capture than if the projector is running at 16, 18, or 24 fps. It really doesn't matter. The reason is that, no matter how precisely you can control the speed of the projector, it will never be exactly the speed you think it is; it will be off by a few fractions of a percent. In addition, there is no way to sync the video camera to the projector, so you may occasionally capture exactly two video frames for every frame of film, but most likely you will get one video frame that contains exactly one frame of film, followed by a video frame that contains that same frame of film blended with the next film frame. I fail to see the advantage of running the projector at less than its rated speed (with the slight exception noted below). You are still going to capture blended frames.

If you want to capture video from a projector without using the "Workprinter" technique (which DOES sync the projector and camera, as you know, since your device does this as well), or without using the technique that I have developed (which achieves the sync using an adaptation of IVTC software in post production), you simply set the shutter speed of the camera so that it EXACTLY equals the field duration. For NTSC, this is 1/60 of a second (1/30 of a second per frame, two fields per frame --> 1/60 of a second per field). This way, the shutter is "open" (the shutter is actually electronic) for the longest time possible. This lets the camera capture an average of the projected image.

Now, this is the part where all sorts of confusion exists. People have often recommended using a specially modified projector that has a five-bladed shutter, rather than a three-bladed shutter, in order to reduce flicker. This is true, but it has nothing to do with the original reason for adding more blades to the shutter. The original reason was to INCREASE the flicker rate from 18 or 24 flashes per second, which caused headaches, to two, three, four, or five times that number. This was done for the sole reason of reducing the perception of flicker when watching a movie in a theater. However, the extra blades have a completely different positive effect when capturing video with a 1/60-second video camera shutter.

If you have very few blades, you have to draw out the timing diagram to see it, but what happens is that some captures have an exposure that includes the entire time the projector shutter is closed, with the remaining time exposing the next frame of film. That capture will be much darker, because the total light hitting the sensor is accumulated during the time the video camera shutter is open; if, during most of that time, there is no light, and the video camera sees the frame of film for only a short moment, that capture will be quite dark. However, the next capture might not include a projector shutter closure at all, which will result in a much brighter image. This light/dark change from frame to frame results in pretty nasty flicker. However, if you had a theoretical (doesn't actually exist) 100-bladed projector, then almost all the captures would include 30 or 40 shutter closures; one capture might include a closure or two more than the next, but the percentage difference in exposure would be imperceptible. For those familiar with the term, the percentage difference in "duty cycle" becomes far smaller as you add more blades to the shutter.
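
To put rough numbers on that (assuming an 18 fps projector and a 1/60 s camera exposure per field; the figures are only illustrative):

3 blades:    18 x 3   =   54 closures/sec  ->   54/60 = ~0.9 closures per exposure (some exposures see 0, some see 1: big swing)
5 blades:    18 x 5   =   90 closures/sec  ->   90/60 =  1.5 closures per exposure (1 or 2 per exposure: smaller swing)
100 blades:  18 x 100 = 1800 closures/sec  -> 1800/60 =   30 closures per exposure (29 vs 31: only a few percent difference)

(At 24 fps the same arithmetic gives 40 closures per exposure for the 100-blade case.)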

While 100 blades is impractical, 3- and 5-bladed shutters are fairly easy to find, and these greatly reduce the difference in exposure from one frame to the next.

You can remove some of the remaining flicker using software in post production. It also helps if you can vary the speed of the projector slightly, so that the "beat" pattern of when the exposure is greatest and when it is least drifts slowly over time rather than repeating every few frames.
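
For the software step, a minimal sketch (assuming Fizick's Deflicker plugin; the path is only an example and the default settings may need tuning):

LoadPlugin("c:\Program Files\AviSynth 2.5\plugins\Deflicker.dll")  # assumed path
AVISource("d:\screen_capture.avi")   # example: real-time capture off the projector
ConvertToYUY2()                      # Deflicker wants a YUV clip
Deflicker()                          # defaults; evens out frame-to-frame luma changes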

So a 15 fps projector isn't going to help much, and in fact it is going to be a pain in the neck, because you are going to have to adjust the speed in post production. Unlike Workprinter-style captures, where you are dealing with discrete film frames and can therefore use standard pulldown to change the speed, the results here are not going to be so good, because your captured frames will sometimes be blends of adjacent film frames.

Finally, if you do capture discrete frames, here is a pulldown script. I tried over half a dozen scripts, some using DoubleWeave and some using ChangeFPS; they all produce the same thing in the end. I like SelectEvery, because it lets me see exactly what I'm doing.
AVISource("d:\frameserver.avi")
AssumeFrameBased

separatefields()

# Pulldown for 24 fps
# SelectEvery(8, 0,1, 2,3, 2,5, 4,7, 6,7)

# Pulldown for 18 fps using: normal-repeat-normal-weave-normal
# Might be a little sharper, but also a little "jerkier"
#selectevery(6, 0,1, 0,1, 2,3, 2,5, 4,5)

# Pulldown for 18 fps using: normal-weave-normal-weave-normal
SelectEvery(6, 0,1, 0,3, 2,3, 2,5, 4,5)

weave()
AssumeFPS(29.97, true)
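
For comparison, the ChangeFPS route would look something like this (a sketch only; it duplicates whole frames instead of repeating fields, so the cadence differs slightly, but the playback speed comes out the same):

AVISource("d:\frameserver.avi")
AssumeFrameBased
AssumeFPS(18)        # assumed: flag the true film rate first
ChangeFPS(29.97)     # duplicate frames as needed to reach the NTSC rate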

dokworm
13th June 2006, 05:50
I think we all need to clear this up.
Videofred and I understand everything you said; we both know that the speed of the original film is irrelevant, and we want to end up with one frame of film on one frame of video (or one computer image).

Fred has a camera that can *only* grab 15fps in progressive scan.
I think his question was how he can use that camera with your method, i.e. how he might be able to capture faster than his current system, which tops out at about 4fps.

He uses the Workprinter method, where the projector triggers the camera to grab a frame, so he ends up with exactly one frame in the computer for each frame of film.

This is also what I currently do.

I had tried real-time methods, but the film movement between fields caused me headaches; you seem to have solved this.

My projector has a drive shaft that protrudes through the back of the projector, so it is easy to attach an external motor.

Stepper motors are very easy to control; even if you have no electronics skills, you can buy a kit to interface one to your PC for less than $50.

I figured that if I had the *camera* trigger the projector's stepper motor, I would end up driving the projector at exactly the speed of the camera, i.e. one frame of film per video frame (25fps for PAL).

Then you could bypass any need for frame culling later.

Also, if you remove the shutter on the projector and pick the right shutter speed on the camera, you should get only the clean, unmoving film frame, with no flicker/exposure issues (if using manual exposure settings).

videoFred
13th June 2006, 06:33
Fred has a camera that can *only* grab 15fps in progressive scan.

You got it ;)

Fred.

johnmeyer
13th June 2006, 06:51
Thanks, I understand now. I was stuck thinking he was using an NTSC DV camera for capture.

videoFred
13th June 2006, 07:27
Now, about the stepper motor:
A stepper motor does not give feedback.
If it misses a few steps, you have a problem.

A servo motor gives feedback: you can use this feedback to fine-tune the actual speed it is running at.

But I do not know how accurate this all can be.

There is also the frame-change problem. Suppose the projector is running at exactly the frame rate of your camera, but the camera happens to capture each frame at the very moment the film is advancing... then you get a completely blurred video file.

So we need some kind of regulation for this, too.

I think I'm gonna try the 640x480 mode of my 1024x768 camera. Then I might be able to capture frame by frame at higher speeds, maybe 8fps or so.

My Avisynth scripts will run faster with 640x480 files, too, and I will get much smaller files.

Fred.

dokworm
20th June 2006, 05:05
The accuracy isn't a problem if you remove the shutter. The film is stationary for a long time and only moving for a short time; also, if you use the timing pulse from the camera itself, it will never take an image of moving film, only of film that is standing still.

The lack of feedback from the stepper isn't really a problem when you are rotating it in 360-degree increments.

I already built this exact setup, with a stepper motor synchronised to a video camera, for a 3D TV project many years ago, so I know it works perfectly as far as synchronisation goes.

I just need now to get a bigger motor and build it into the projector.

johnmeyer
20th June 2006, 05:43
also, if you use the timing pulse from the camera itself, it will never take an image of moving film, only of film that is standing still.
I can see how this would work if you can get a timing pulse from the camera. But how do you get such a pulse? Hmmm... I guess one could make a simple circuit that triggers off one of the sync signals in the composite video output and turns that into a pulse. Is that what you are planning to do?

videoFred
20th June 2006, 10:01
I already built this exact setup, with a stepper motor synchronised to a video camera, for a 3D TV project many years ago, so I know it works perfectly as far as synchronisation goes.

I just need now to get a bigger motor and build it into the projector.

This is very, very interesting!
Could this work with my FireWire machine camera, too?
This kind of cam uses a different protocol....

Fred.

dokworm
24th July 2006, 07:55
Don't know, you would need to be able to get a sync pulse from it somewhere.