Old 3rd February 2013, 20:30   #1  |  Link
dirtyharry2
Registered User
 
Join Date: Jan 2013
Posts: 4
StackHorizontal - detect/remove overlap and make doublewide clip?

I have a 100' course that I want to video, and will do so from 2 fixed cameras alongside.

I know I can use StackHorizontal. I should be able to do it with no borders and align them smoothly.

But I want it to look seamless. That means I'll probably have a marginal amount of L/R overlap on the cameras. It'll be a setup I use a lot, so I'm trying to avoid something manual where I input the amount to crop off one camera, and have it detect in some fashion.

TL;DR: I want to take the 2 vids and make a doublewide, single-height clip. I'll then only display the "appropriate" width at any given time, and pan across it. I think I've got that part figured out.

It's the joining of the 2 videos, and deleting "overlap" that I need help with.
Old 5th February 2013, 08:07   #2  |  Link
vcmohan
Registered User
 
Join Date: Jul 2003
Location: India
Posts: 890
Do the cameras run along the course in parallel, and are the frames synchronised? Is the overlap only along the width of the two clips' frames, with the heights matching exactly?
Old 5th February 2013, 10:00   #3  |  Link
Mystery Keeper
Beyond Kawaii
 
 
Join Date: Feb 2008
Location: Russia
Posts: 724
dirtyharry2, your cameras are fixed, so just stack the two clips together and manually crop one until they blend seamlessly.
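
Something like this, as a minimal sketch - the filenames and the 40-pixel overlap are just placeholders you tune by eye until the seam disappears:

Code:
# Two fixed cameras; cam1 covers the left half of the course, cam2 the right half.
left  = AviSource("cam1.avi")
right = AviSource("cam2.avi")

overlap = 40                            # pixels the two views share (placeholder; keep it even for YV12)
left    = left.Crop(0, 0, -overlap, 0)  # drop the shared strip from cam1's right edge
StackHorizontal(left, right)            # doublewide, single-height result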
Old 5th February 2013, 16:18   #4  |  Link
dirtyharry2
Registered User
 
Join Date: Jan 2013
Posts: 4
Quote:
Originally Posted by vcmohan View Post
Do the cameras run along the course in parallel, and are the frames synchronised? Is the overlap only along the width of the two clips' frames, with the heights matching exactly?
I'll probably have a similar problem with height. They will look synchronized, but I doubt any manual setup can get it pixel-perfect regardless of what "calibration" steps I take. I'll be utilizing this setup dozens of times in dozens of locations, so a software solution is what I'm looking for.

The two cameras are fixed, looking parallel.

Mystery: you have the solution I want, minus the word "manual". The purpose of the thread is to explore any way to do it in a script.
Old 5th February 2013, 16:42   #5  |  Link
Mystery Keeper
Beyond Kawaii
 
 
Join Date: Feb 2008
Location: Russia
Posts: 724
You just need to find the number of pixels to crop once. That's as much manual work as you need to do.
Old 5th February 2013, 16:57   #6  |  Link
dirtyharry2
Registered User
 
Join Date: Jan 2013
Posts: 4
Quote:
Originally Posted by Mystery Keeper View Post
You just need to find the number of pixels to crop once. That's as much manual work as you need to do.
Yes. And I'd rather not stack them with a 20-pixel crop, see that it's off by a bit, restack with 9, 4, 2, and then 1 until it looks seamless. And do that again and again, dozens or hundreds of times. Because each time the video is set up, the offset/overlap will be marginally different.

From the OP:
Quote:
It'll be a setup I use a lot, so I'm trying to avoid something manual where I input the amount to crop off one camera, and have it detect in some fashion.
The entire purpose of the thread is to get avisynth to "find the number of pixels to crop once". I know I can do it manually.
Old 5th February 2013, 17:51   #7  |  Link
wonkey_monkey
Formerly davidh*****
 
 
Join Date: Jan 2004
Posts: 2,496
Your problem doesn't really seem like one that fits AviSynth philosophically, to me - I fully expect to be proven wrong in the next few posts - but if you passed two videos to a filter/function, how would you expect it to find their alignment? You presumably don't want or need something that will analyse the whole video - if the cameras are well fixed, a single pair of images will be enough to get the alignment. But then, which images?

But ignoring that, have you got a couple of sample images? I'm a big fan of image stitching and the like, and I may have some tangential ideas on getting the best out of AviSynth. For example, do you do anything about the perspective shift between the two images?

David
Old 5th February 2013, 17:52   #8  |  Link
StainlessS
HeartlessS Usurer
 
 
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
Can you put up a sample pair of frames (or a couple of pairs)?
Can possibly do a fix so long as the frames are parallel.
I'm a bit busy but shall try to take a look.
Old 5th February 2013, 18:06   #9  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,867
The major effects are: lens distortion, 2D image registration, histogram matching, vignetting, and time synchronization. Some minor effects are: variable image sharpness, minor geometric mismatches, color differences, and dynamic image changes (like vibration).

0 Physical setup. Keep the cameras on a parallel plane as much as possible, and use similar lenses. Keep the same orientation, with only a horizontal shift between them. There must be at least 10-20% overlap in the images for the image processing to work. Use the same aperture (choosing one that reduces vignetting) and zoom, and synchronize the shutters. Isolate vibration. Fix or sync camera parameters.

1 Geometric matching. You first have to remove lens distortion from each camera. This is calibrated once per camera, with debarrel.
Now you have to align the images. This is done with depan: it gives the x/y offset between similar images (see the sketch at the end of this post). There is also the question of zoom and rotation. You can reduce these with the physical setup, but also process them in software. There can be further problems like perspective or volume anamorphosis.

2 You have to blend the images. First you have to adjust levels. Then you can blend with a straight cut, but there will seem to be a line between them. With an overlap you might see some blurriness when objects pass through. A more sophisticated way to blend is with image fusion. There is also the question of color matching, and vignetting, especially at the edges.

3 You have to synchronize in time. You can sort of do this with a variable motion compensation, but much easier to synchronize the cameras. If you use Canon cameras, see if the Magic Lantern firmware supports 3d synching through usb. Otherwise find some other sync solution (usually a pair is synched for 3d purposes, so search for that).

4 Testing. Film a checkerboard printout, stationary in the field of both cameras; this tests static geometric matching. Then film it with the paper moving; this tests time synchronization. Then an object with various shades (like a greyscale, vertically oriented); this tests the levels. Also film a color checker in both cameras; this can sync colors. Finally, test with real objects.

5 Problems. Speed, and whether to correct per-setup or per-frame in some parameters. Depan can have mismatches, so that is the risk of doing it per-frame. Etc. Need samples.

plugins:
depan
avisynth.org.ru/depan/depan.html

debarrel
avisynth.org/vcmohan/DeBarrel/DeBarrel.html

fusion
http://forum.doom9.org/showthread.php?t=152109

magic lantern
http://www.magiclantern.fm/

color checker
http://www.amazon.com/X-Rite-MSCCPP-ColorChecker-Passport-Software/dp/B002NU5UW8

I wrote a script to match brightness and contrast based on two points, or you can use histogram matching
avisynth.org/vcmohan/HistogramAdjust/HistogramAdjust.html
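
Here's a rough sketch of how depan could measure that offset once. Treat it as an outline only: the filenames, frame number and overlap guess are placeholders, and I'm writing the DePanEstimate parameters from memory, so check the plugin docs before relying on them.

Code:
# Measure the residual cam1 -> cam2 misalignment from one synchronized frame pair.
a = AviSource("cam1.avi")
b = AviSource("cam2.avi")

guess = 64                             # rough width of the shared strip, in pixels (placeholder)
ra = a.Crop(a.Width - guess, 0, 0, 0)  # rightmost strip of cam1
lb = b.Crop(0, 0, guess, 0)            # leftmost strip of cam2

# One frame from each camera, taken at the same instant (frame 100 is a placeholder).
pair = Interleave(ra.Trim(100, -1), lb.Trim(100, -1))

# DePanEstimate writes its global motion estimate (dx, dy) to a log file;
# that dx/dy is the correction to feed into Crop before StackHorizontal.
DePanEstimate(pair, dxmax=32, dymax=32, log="offset.log")

Once you have the numbers, the stitching itself is just Crop plus StackHorizontal as suggested above.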

Last edited by jmac698; 5th February 2013 at 18:42.
Old 5th February 2013, 19:54   #10  |  Link
dirtyharry2
Registered User
 
Join Date: Jan 2013
Posts: 4
Wow, lots of info.

First, I had hoped to avoid really caring about the perspective. One's shooting the course from 1-25' (located around 13') and one's shooting 25-50' (located around 38') (with a very minor overlap between the 2). As I'm not doing a "normal" panorama (same camera location, 2 different angled shots), I didn't think there'd be much issue in that respect. Lighting etc. should be very similar between the shots. I had some brief thoughts about distortion/vignetting at the edges, but wasn't sure how serious of an issue it will be in practice.

I'm going to be using 2 808hd #16 cameras with lens D (120 degrees; horizontal FOV of approximately 90) (which haven't arrived yet; damn internet shipping). http://www.chucklohr.com/808/C16/

I will shoot the course on Friday to generate a sample run for you guys. I'll probably just use 2 identical iphones to generate the sample.

I agree that as the cameras are fixed, I should be able to get the crop location from any given pair of synchronized frames. Synchronization shouldn't be a problem, as I'm starting them automatically and electronically from a microcontroller (Arduino). Millisecond synchronization should equal 30fps synchronization.

Jmac: Wow. Lots of info.

The cameras will be identical, shooting identical distances/views (just offset by 25') so hardware should be ok.

Using depan to get the x/y offset is exactly what I'm looking for, I think. I could potentially put a "logo" on the course around the 25' mark and have that visible in each camera, to allow for the comparison for x/y.

In blending, how much level adjustment is likely to be needed? If I'm left with a single vertical line as a result of the crop-in, that's likely not too bad (and maybe something that can also be dealt with in separate post-processing). Blurriness is worse.

What I'm videoing is dog racing (flyball). Here is a sample race: http://www.youtube.com/watch?v=6VseDVRqXpQ. The camera is slightly elevated, is tighter than needed, and is focussed on the "inner" lane rather than the outer, but it gives an idea. Backing it up and focussing on the across lane will allow me to cover the whole course with just the 2 cameras. The video shows probably 9 ft of runback (5 are marked), plus 7' or so of run (6' to the first jump), in the near lane. The far lane would get me more coverage. So even with the shown coverage I can cover my 25' each with 2 cameras easily (assuming I cut most of the runback).

The "interesting" parts of flyball are the nose-to-nose pass at the start/finish line, and the perfect "box-turn" (http://www.youtube.com/watch?v=OSjucfi4MXs; watch at 2:30-3:00 or so for some cool slomo of turns).

(Independent topic: I'm also wanting to automatically generate insets, using data I'm capturing with a bunch of infrared linebreaks on the Arduino I've got. I can generate the data (dog's start time, total race time, acceleration at various points, the specific millisecond a pass occurred on) using the Arduino, and I think I can autogenerate a text file formatted like an .avs and then import those variables to tell me what/where to inset. That's part 2 of this project. It's also irrelevant unless I can get a good "overall" shot of the race (and this side-on view looks SO much better than an overhead or "down the line" view). If I can pull sets of numbers from a text file like this, I can "ESPN" the whole thing, automatically.)

I know I'll get a doublewide video. I'll then crop it so at vidstart it's 99% cam1 / 1% cam2, at vid50% it's 99% cam2 / 1% cam1, and back to cam1 (rough sketch below). That will give the illusion of a normal-resolution camera being slid down and back a track following the action, and I think that'd look awesome.

(edit: corrected distances. Total course run is 102', so it's only 51' long)
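
For anyone curious, here's roughly how I picture the pan-and-back once I have the doublewide clip. It's only a sketch: the filenames, the 40-pixel overlap and the 1280-pixel window are placeholders, and for YV12 sources the animated left offset would have to stay even.

Code:
a  = AviSource("cam1.avi")
b  = AviSource("cam2.avi")
dw = StackHorizontal(a.Crop(0, 0, -40, 0), b)   # doublewide; 40-pixel overlap is a placeholder

w     = 1280                   # width of the displayed window (placeholder)
h     = dw.Height
mid   = dw.FrameCount / 2
lastf = dw.FrameCount - 1

# Slide the window from the left edge to the right edge over the first half,
# then back again over the second half, by animating Crop's left offset.
fwd  = Animate(dw, 0,   mid,   "Crop", 0, 0, w, h,            dw.Width - w, 0, w, h)
back = Animate(dw, mid, lastf, "Crop", dw.Width - w, 0, w, h, 0, 0, w, h)
Trim(fwd, 0, mid) ++ Trim(back, mid + 1, lastf)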

Last edited by dirtyharry2; 5th February 2013 at 21:57.
Old 6th February 2013, 00:43   #11  |  Link
jmac698
Registered User
 
Join Date: Jan 2006
Posts: 1,867
Hi,
Don't let that scare you, I was kinda brainstorming, but I do have experience with bits and pieces of it. I have a script where I used depan to find the full frame version of a movie within the widescreen version. I highlighted that portion and you can sit back and watch as the director panned the image, which is pretty cool.

I've also worked on a script where someone was trying to combine a laserdisc and DVD release because the DVD cut off stuff, and in that case the colors were quite different, besides finding the exact overlap between them.

Finally, I've been working with raw images from my camera and seeing how much vignetting there actually is; it's corrected in-camera in the JPEGs. I've also been using Adobe Lens Profiles to correct distortion. So that kinda worried me.

We just need samples; it all depends how much you want to work on the final image.