Doom9's Forum > Capturing and Editing Video > Avisynth Development

Old 19th February 2009, 11:05   #1  |  Link
Registered User
Join Date: Dec 2008
Location: reviving a dead thread
Posts: 31
[W.I.P.] De-Anaglyph with MVTools?

AnnaFan777's question from the fauxD thread has led me down this path.

As previously mentioned, recovering left/right stereo pairs from an anaglyph image isn't entirely trivial. Most of the links in that post referred to recovering a 'new' right frame given the original left frame and its stereo anaglyph. Only one software package claimed to recover stereo pairs from the anaglyph image alone, but it's not free and the demo expired before I ever cared to test it.

I tinkered with edge detectors as I thought about what would be required to synthesize new color channels (views) for each stereo anaglyph image. (Uncomment the return line of your choice.)
# Sobel.AVS : Sobel edge/feature detector (straight from the AVISynth docs)
# Other detectors can be found at: homepages.inf.ed.ac.uk/rbf/HIPR2/featops.htm
# ----------------------------------------------------------------------------------------------------------------
function SobelH(clip s) { # Horizontal (Sobel) edge detection:
return s.GeneralConvolution(128, "
      1  2  1 
      0  0  0 
     -1 -2 -1 ", 8) }

function SobelV(clip s) { # Vertical (Sobel) Edge Detection:
return s.GeneralConvolution(128, "
      1  0 -1 
      2  0 -2 
      1  0 -1 ", 8) }

function Sobel(clip s) { return Overlay(s.SobelH, s.SobelV, mode="Blend") }
# ----------------------------------------------------------------------------------------------------------------
f = "D:\somepath\stereodepth_A_S.mpg"  #~6.5MB, found at www.archive.org/details/stereodepth
c = DirectshowSource(f).ConvertToRgb32(matrix="PC.601")
#return Sobel(c)

red = MergeRGB(c.ShowRed, c.blankclip, c.blankclip ).ConvertToYUY2(matrix="PC.601").Tweak(bright=25, cont=2.0, coring=false).ConvertToRgb32(matrix="PC.601").subtitle("Red(rightview)",300)  #increase brightness/contrast to better match the other eye's greyscale
cyan = MergeRGB(c.blankclip, c.ShowGreen, c.ShowBlue).subtitle("Cyan(leftview)")
#return StackVertical( red.Sobel.GreyScale , cyan.Sobel.GreyScale )

#return Interleave(red.greyscale, cyan.greyscale).AssumeFrameBased
return Interleave(red.Sobel.greyscale, cyan.Sobel.greyscale).AssumeFrameBased
Eventually I realized it would take a lot of block matching and interpolating to transform the color channels correctly, and I seemed to remember there was already a plugin that could do all those things. Slowly it dawned on me that MVTools has the required facilities built in (just as AnnaFan777 originally suggested).

So here's my first attempt at such a script:
# DeAnaglyph.AVS: De-Anaglyph proof-of-concept test script (for RED-CYAN videos)
# eslave(2009) - based on an idea from AnnaFan777
LoadPlugin("D:\somepath\MVTools.dll")   #MVTools v2.31
f = "D:\somepath\stereodepth_A_S.mpg"  #~6.5MB, found at www.archive.org/details/stereodepth
#f = "D:\somepath\samplehoshinobg4.jpg"  #uncomment the appropriate red/cyan 'fix' lines for this image ( forum.doom9.org/showthread.php?p=1243776#post1243776 )
# ----------------------------------------------------------------------------------------------------------------
global mat = "PC.601"
#global mat = "Rec601"  #causes MAnalyse to fail on some test clips
function myRGB(clip c) { return c.ConvertToRgb32(matrix=mat) }
function myYUV(clip c) { return c.ConvertToYUY2(matrix=mat) }
function DeAnaglyph(clip c) {
c = c.IsRGB32 ? c : c.myRGB
c = (c.height%2==0) ? c : c.CropBottom(1)   #make height even for YUV operations
c = (c.width%2==0) ? c : c.Crop(0,0,-1,-0)   #make width even for YUV operations

#Separate views & create manual or auto-leveled clips (hopefully with similar relative intensity for MAnalyse)
red = MergeRGB(c.ShowRed, c.blankclip, c.blankclip )    #seen by RIGHT eye through CYAN filter (single channel)
red2 = MergeRGB(c.ShowRed, c.ShowRed, c.blankclip )    #try increasing red's 'energy' by duplicating the channel (and making it similar to cyan's intensity)
redfix = red2.myYUV.ColorYUV(autogain=true, autowhite=true).myRGB #.Subtitle("RED(rightview)", x=300)  #most videos
#redfix = red2.Sharpen(0.5).myYUV.Tweak(bright=-80, cont=1.0, coring=false).myRGB #.Subtitle("RED(rightview)", x=300)  #(JPG sample)

cyan = MergeRGB(c.blankclip, c.ShowGreen, c.ShowBlue)       #seen by LEFT eye through RED filter
cyanfix = cyan.myYUV.ColorYUV(autogain=true, autowhite=true).myRGB #.Subtitle("CYAN(leftview)")   #most videos
#cyanfix = cyan.Sharpen(0.5) #.Subtitle("CYAN(leftview)")   #(JPG sample)
#return StackVertical(redfix.greyscale, cyanfix.greyscale)  #uncomment to see stacked views (adjust red/cyan FIX parameters until they appear very similar)

#Make double frame-length clips of alternating L/R views for MAnalyse
mixRC = Interleave(redfix.greyscale, cyanfix.greyscale).AssumeFrameBased
mixCR = Interleave(cyanfix.greyscale, redfix.greyscale).AssumeFrameBased #reversed frame-order for vector testing
#return mixRC  #uncomment to see alternating views (keep adjusting red/cyan FIX parameters until they appear very similar)

#Get per-block vectors for compensation
vecR = MAnalyse(mixRC.myYUV.MSuper, isb = false, blksize = 4, overlap=2).SelectOdd       #vectors for matching RED to CYAN
vecC = MAnalyse(mixCR.myYUV.MSuper, isb = false, blksize = 4, overlap=2).SelectOdd       #vectors for matching CYAN to RED
#vecC = MAnalyse(mixRC.myYUV.MSuper, isb = true, blksize = 4, overlap=2).SelectOdd  #for MFlowInter
#return MShow(mixRC.myYUV.MSuper, vecR).SelectOdd #uncomment to see motion field (test red/cyan FIX parameters)

#Synthesize new channels (views)
newred = MFlow(red.myYUV, red.myYUV.MSuper, vecR)       #looks almost OK
newcyan = MFlow(cyan.myYUV, cyan.myYUV.MSuper, vecC)    #looks less than OK
#newcyan = MFlowInter(cyan.myYUV, cyan.myYUV.MSuper, vecb, vecf, 100.0)     #looks even stranger

#Fuse synthesized and original channels into 'full-color' left/right views
global mat = "Rec601"  #seems to reduce some color shimmering (global, so myRGB/myYUV see the change)
left = MergeRGB(newred.myRGB.ShowRed, cyan.ShowGreen, cyan.ShowBlue)  #new LEFT frames
right = MergeRGB(red.ShowRed, newcyan.myRGB.ShowGreen, newcyan.myRGB.ShowBlue)  #new RIGHT frames

#uncomment your preferred output method (or Import fauxD.avsi for other stereo frame types, e.g. AnaglyphRedCyan, AnaglyphYellowBlue)
return StackVertical( left , right )  #over-under
#return StackHorizontal( left , right )  #side-by-side
#return Interleave(left, right).AssumeFieldBased.Weave.BilinearResize(left.width*2, left.height*2)  #for some shutterglasses (do not vertically resize output)
}
# ----------------------------------------------------------------------------------------------------------------
src = (FindStr(UCase(f), ".JPG") > 0) ? ImageSource(f, end=24*30, fps=24) : DirectshowSource(f) #.Trim(Int(5*30000/1001), 0) #cut straight to the stereo part
return src.DeAnaglyph
Sample results: (the URLs for these sources are in the script)

Although the motion vectors failed to appear in image #2, you can still see that the final result is correct, which would not have happened without motion vectors.

I certainly know it could use improvement; I'm just uncertain exactly how. It works well enough with my shutter glasses, although visible errors such as bad vectors, occlusions, and post-anaglyph ghosts remain. Also, some images have little or no information in large areas of one color channel (source view). The most troublesome part is fixing the RED and CYAN images to look similar enough that MAnalyse can do its thing, and such fixes would probably need adjusting every time the scene or lighting changes.
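For reference, the per-block matching that MAnalyse performs can be sketched in a few lines. This is a hypothetical pure-Python toy (a SAD-based disparity search over 1-D greyscale rows), not MVTools code:

```python
# Toy sketch of per-block horizontal matching - the core of what MAnalyse
# does when aligning the red view to the cyan view. Hypothetical helpers,
# not MVTools code: greyscale rows are plain lists of ints.

def block_sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(ref_row, src_row, x, blksize=4, search=8):
    """Find the horizontal shift (disparity) that best matches the
    block of src_row starting at x against ref_row."""
    block = src_row[x:x + blksize]
    best_d, best_cost = 0, float("inf")
    for d in range(-search, search + 1):
        xx = x + d
        if xx < 0 or xx + blksize > len(ref_row):
            continue
        cost = block_sad(ref_row[xx:xx + blksize], block)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# A 'cyan' row, and the same pattern shifted right by 3 in the 'red' row:
cyan = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0, 0, 0]
red  = [0, 0, 0, 0, 0, 10, 50, 90, 50, 10, 0, 0]
print(best_disparity(red, cyan, 2))  # the block at x=2 is found 3 px to the right
```

This is also why the red/cyan 'fix' steps matter: SAD comparisons are meaningless unless the two channels have similar intensities.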

Any ideas?


Last edited by eslave; 23rd February 2009 at 07:28. Reason: added sample images as requested
eslave
Old 21st February 2009, 18:38   #2  |  Link
Registered User
Join Date: Apr 2008
Posts: 51
Thanks, I haven't read/tested your script, but

Supposedly anaglyph images should only contain horizontal displacements:
left image <-> right image

Maybe we can find a way to ignore/discard false/large vertical motions; this
would make matching easier and more precise.

I don't know if Fizick cares to implement this (limiting the vertical search range,
perhaps with independent X,Y search range parameters?) for MAnalyse.

If not, maybe we can do this by cropping both images into several
horizontal strips, processing each separately, and then stacking them
back up when it's all done.

Ex: to match R,C at 640x480, crop into 60 strips of 640x8 (maybe with some overlap?).
If the motion block is 16 or 32 pixels, there would be no (or fewer) vertical vectors
produced by MVTools, and this could also make MAnalyse more precise.
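The crop-into-strips idea can be sketched in pure Python (rows as lists; hypothetical helper names - in an actual script this would be a chain of AviSynth Crop/StackVertical calls):

```python
# Hypothetical sketch of strip-based processing: split a frame into
# horizontal strips (with optional overlap for the matcher), then
# reassemble the processed strips, keeping only each strip's core rows.

def split_strips(frame, strip_h, overlap=0):
    """Split a frame (list of rows) into strips of strip_h rows,
    extended by `overlap` rows on each side where available."""
    strips = []
    h = len(frame)
    for top in range(0, h, strip_h):
        lo = max(0, top - overlap)
        hi = min(h, top + strip_h + overlap)
        strips.append(frame[lo:hi])
    return strips

def stack_strips(strips, strip_h, overlap=0):
    """Reassemble, discarding each strip's leading overlap rows."""
    out = []
    for i, s in enumerate(strips):
        skip = 0 if i == 0 else overlap
        out.extend(s[skip:skip + strip_h])
    return out

frame = [[y] * 4 for y in range(480)]       # 480 rows; row y filled with y
strips = split_strips(frame, 8, overlap=2)  # the 640x8 x60 example above
print(len(strips))                          # 60 strips for a 480-row frame
```

Round-tripping through split/stack returns the original frame, so the strips could be motion-analysed independently and stacked back without seams.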


On R,C edge matching -> how about applying some warpsharp/smoothing to both R,C edges?

source -> * edge-preserving smoother (&deblock?) -> edge image -> smooth again
( * -> to keep jpg/video compression artifacts from turning into edges )


on fixing R,C(edge) luminance variations (for better analysis):

Adjust R/C luma level to make sum-of-luma_R_edge = sum-of-luma_C_edge

(I remember you can calculate the sum (average) of luma using AviSynth or a plugin; I forget which)

(supposedly R,C have similar edge definitions)

Maybe we can do this:

1. (sum-of-luma_C_edge) / (sum-of-luma_R_edge) = luma_gain_for_R

2. Adjust level of R_edge_image according to luma_gain_for_R

(I think using GeneralConvolution instead of Levels would be simpler)

(to avoid clipping (R>255), both edge images should be dimmed beforehand)
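Steps 1 and 2 above can be sketched in pure Python (hypothetical helper names; in a real script something like AviSynth's runtime AverageLuma would supply the sums):

```python
# Hypothetical sketch of the luma-gain matching idea: compute the gain
# that equalizes the summed luma of the R and C edge images, dim first
# to leave headroom, then scale and clip to 8-bit range.

def apply_gain(pixels, gain, dim=0.9):
    """Dim first (to avoid clipping R > 255), then scale; clamp to 0..255."""
    return [min(255, max(0, round(p * dim * gain))) for p in pixels]

def match_luma(r_edge, c_edge):
    """Step 1: gain = sum-of-luma_C_edge / sum-of-luma_R_edge.
       Step 2: apply that gain to the R edge image."""
    gain = sum(c_edge) / sum(r_edge)
    return apply_gain(r_edge, gain), gain

r = [10, 20, 40, 10]   # dim red edge image
c = [20, 40, 80, 20]   # cyan edge image, twice as bright
fixed, gain = match_luma(r, c)
print(gain)            # 2.0
print(fixed)           # [18, 36, 72, 18] - close to c after the 0.9 dim
```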


Just my thoughts.

Last edited by AnnaFan777; 22nd February 2009 at 05:42.
AnnaFan777
Old 21st February 2009, 21:33   #3  |  Link
Registered User
Join Date: Apr 2008
Posts: 51
Another Note:

Edge detection using GeneralConvolution, Sobel or whatever, is flawed,
very flawed. I don't know how it works for your script,
but I think you are going to miss out on a lot of edges.

From my experience, I recommend tritical's TEdgeMask

http://web.missouri.edu/~kes25c/ (it uses gradient vectors)

Make a comparison, you'll be surprised.
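A toy comparison shows one reason gradient-vector detectors (like TEdgeMask) beat blending the two Sobel responses: combining them as a gradient magnitude keeps full edge strength where a 50/50 Overlay blend halves it (pure Python sketch, hypothetical helpers):

```python
import math

def grad_magnitude(gx, gy):
    """Edge strength from gradient vectors: sqrt(gx^2 + gy^2) per pixel."""
    return [math.hypot(x, y) for x, y in zip(gx, gy)]

def blend(gx, gy):
    """What Overlay(mode="Blend") does: a plain 50/50 average."""
    return [(x + y) / 2 for x, y in zip(gx, gy)]

# A purely horizontal edge: all response in gy, none in gx.
gx = [0, 0, 0]
gy = [100, 100, 100]
print(grad_magnitude(gx, gy))  # [100.0, 100.0, 100.0] - full strength kept
print(blend(gx, gy))           # [50.0, 50.0, 50.0]    - halved by blending
```

The blend also lets opposite-signed responses cancel, which the magnitude never does.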

Last edited by AnnaFan777; 22nd February 2009 at 06:11.
AnnaFan777
Old 22nd February 2009, 05:45   #4  |  Link
Registered User
Join Date: Dec 2008
Location: reviving a dead thread
Posts: 31
Thanks for the input. I've been looking at other edge detectors, but I'm still unsure exactly how I'd be able to use such output with MVTools. When testing, MVTools produced better results with the 'fixed' clips than with edge-only clips. I'll see if I can implement those luma gains you mentioned, but I'm still more than a little YUV-stupid.

Also, I'm not sure all apparent vertical motions should be discarded, but a threshold setting would be nice. I thought I once read a post about a vertical threshold option when converting to DePan vectors, but not for MAnalyse itself - however, I can't find it now. Sometimes occlusions or poor registration can cause MAnalyse to yield vertical vectors. The description page of Peter Wimmer's Deanaglyph states:
...horizontal parallax must be within 20 pixel and the vertical parallax within 5, else the algorithm will produce artefacts. Only pixels within a 16 pixel wide border can be processed. This limitations...
I'm guessing that's because perfect two-camera registration is nearly impossible (except for CG renderings). So your 8-pel strips with overlaps concept sounds pretty close to what he's talking about. However, at the moment I have MAnalyse running at top precision (I think) on adjusted greyscale clips and I'm still getting more error than anticipated. (4x4 blocks with 2 pel overlap)
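Until MAnalyse grows a per-axis search limit, suspect vectors could be discarded after the fact. A pure-Python sketch over hypothetical (dx, dy) block vectors, using Wimmer's stated 5-pixel vertical limit:

```python
# Hypothetical post-filter for block motion vectors: anaglyph parallax
# should be almost purely horizontal, so any vector with a large vertical
# component is probably a bad match and gets zeroed out.

def clamp_vertical(vectors, vmax=5):
    """Replace any (dx, dy) whose |dy| exceeds vmax with the zero vector,
    mirroring the 5-pixel vertical-parallax limit quoted above."""
    return [(dx, dy) if abs(dy) <= vmax else (0, 0)
            for dx, dy in vectors]

vecs = [(12, 1), (-8, 0), (3, -9), (20, 4)]
print(clamp_vertical(vecs))  # (3, -9) is zeroed: [(12, 1), (-8, 0), (0, 0), (20, 4)]
```

In practice this would mean filtering the vectors before handing them to MFlow, which is exactly the per-block manipulation step I haven't located in the MVTools source.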

I've browsed through the source for MVTools more than once and damned if I can spot exactly how or where the per-block vectors are manipulated. I checked before settling on DePan vectors when I wrote fauxD. I know such operations should be obvious, but I'm overlooking them somehow.

eslave
Old 23rd February 2009, 00:40   #5  |  Link
Registered User
Join Date: Apr 2008
Posts: 51
I don't have a lot of material to test on..

Can you put up some video caps? before & after , good result & bad results..

Last edited by AnnaFan777; 23rd February 2009 at 04:55.
AnnaFan777
Old 23rd February 2009, 04:46   #6  |  Link
Registered User
Join Date: Dec 2008
Location: reviving a dead thread
Posts: 31
I don't have much to test with either; that's why I chose a publicly available source, as well as your JPG. I could probably check NASA, flickr, and...um, er, that other thing... Google (Images), too.

Most U-Toob anaglyph videos I tried were too soft/blocky for decent results; others had very little detail in the red channel. I've also seen some poor anaglyph encodes with severe crosstalk/ghosting, and even a few interlaced anaglyph videos (yuk!). (It's been a long time since I've visited this site, and I think he's reduced the frame size and re-encoded his ever-diminishing online collection yet again. I don't see any more interlaced samples, but I still get a kick out of the videos.)


Last edited by eslave; 23rd February 2009 at 08:19. Reason: Added sample images to post #1.
eslave

anaglyph, deanaglyph, stereo-3d, stereoscopic
