Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.

Domains: forum.doom9.org / forum.doom9.net / forum.doom9.se

 

Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 24th December 2011, 19:34   #1  |  Link
Yellow_
Registered User
 
Join Date: Sep 2009
Posts: 378
Progressive stream -> two exposures -> Flicker -> HDR

My query is with reference to this on my blog:

http://blendervse.wordpress.com/2011...lossless-h264/

Basically, a camera firmware that captures an H.264 progressive stream in which the frames alternate between two exposures captured in camera, with the aim of producing a wider dynamic range than the same camera can manage at a single exposure: effectively two half-frame-rate videos with differing exposures interleaved in one stream.

Current methods use InterFrame to interpolate the 'missing' frames for each exposure, export the two now full-frame-rate videos as two RGB image sequences, and fuse them with an HDR tool such as enfuse (in Hugin) to make a pseudo-HDR, then encode back to 8-bit video. Time consuming, multiple steps, ghosted output, and colour-space conversions at 8 bit for the fusing process.
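To illustrate the first stage of that workflow, here's a rough AviSynth sketch. The source filename is hypothetical, the assumption that even frames hold the darker exposure is mine, and the InterFrame parameter names are per InterFrame 2.x:

```
# Sketch only: split the interleaved 25p stream into two 12.5fps
# streams and interpolate each back to full rate.
src = FFVideoSource("dual_exposure.avi")   # hypothetical filename
dark  = src.SelectEven()                   # 12.5fps, darker exposure (assumed)
light = src.SelectOdd()                    # 12.5fps, brighter exposure
# Interpolate each half back to full frame rate
dark25  = dark.InterFrame(NewNum=25, NewDen=1)
light25 = light.InterFrame(NewNum=25, NewDen=1)
# Each full-rate clip is then exported in turn as an RGB image
# sequence and fused externally (e.g. enfuse in Hugin).
return dark25
```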

However, all the motion has already been captured (though the low exposure may affect its quality); it's the exposure 'flicker' and colour data that vary. It seems to make more sense to interpolate / normalise / equalise those rather than the motion, and to make use of the exposure range of the two frame sequences in a 16-bit file. The exposure difference is user defined as 2, 3 or 4 EV about the ISO value set.

So in theory, from my limited perspective and over-simplified :-), would an alternative approach be to interpolate the luma and chroma to generate the wider dynamic range, use Dither tools / avs2yuv to handle the '16-bit' frames, encode out to 10-bit H.264, and stay YCbCr all the way?
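A rough sketch of what I mean, using Dither tools' stacked-16-bit functions; the mask construction and parameter choices here are guesses for illustration, not a tested recipe:

```
# Illustrative only: merge the two exposures per frame-pair in
# stacked 16-bit with Dither tools, staying YCbCr throughout.
src   = FFVideoSource("dual_exposure.avi")  # hypothetical filename
dark  = src.SelectEven().Dither_convert_8_to_16()
light = src.SelectOdd().Dither_convert_8_to_16()
# Use the brighter frame itself as an 8-bit luma blend mask:
# take highlights from the darker exposure, shadows from the brighter.
mask  = src.SelectOdd()
fused = Dither_merge16_8(light, dark, mask, luma=true)
fused.Dither_out()   # pipe via avs2yuv into 10-bit x264
```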

Any thoughts on the approach, applicable AviSynth plugins and functions, or fundamental flaws? :-)

Last edited by Yellow_; 24th December 2011 at 20:59.
Old 24th December 2011, 22:50   #2  |  Link
Didée
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 5,407
You've lost me with a link to a page full of links to pages full of links. If one of those "interleaved double exposure" clips is available for download somewhere, please post an obvious direct link to it.

Using flow interpolation between the half-sets is quite error-prone: in many frames there will be no exact match between the flow-interpolated frame and the (differently exposed) reference.
It should be investigated whether direct compensation can be made possible, perhaps using highpass tricks or the like.
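For what it's worth, one way such a highpass trick might look; this is a sketch under my own assumptions (masktools2 available, and the 1/25s temporal offset between neighbouring frames ignored), not a tested method:

```
# Keep the dark frame's detail (high frequencies) but take the
# neighbouring bright frame's overall levels (low frequencies),
# avoiding flow interpolation entirely.
src  = FFVideoSource("dual_exposure.avi")  # hypothetical filename
a    = src.SelectEven()          # darker frames (assumed)
b    = src.SelectOdd()           # brighter neighbours, 1/25s later
lowA = a.Blur(1.58).Blur(1.58)   # crude lowpass
lowB = b.Blur(1.58).Blur(1.58)
hp   = mt_makediff(a, lowA)      # highpass of the dark frame
out  = mt_adddiff(lowB, hp)      # recombine onto the bright levels
return out
```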
__________________
- We're at the beginning of the end of mankind's childhood -

My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!)
Old 25th December 2011, 00:27   #3  |  Link
Yellow_
Registered User
 
Join Date: Sep 2009
Posts: 378
Hi, sorry, yes, many links. It was here:

http://vimeo.com/33987353

But as it requires Vimeo login and is 70mb I've done a Fast Recompress of the first 100 frames here:

http://www.yellowspace.webspace.virg...com/sample.avi

And a link to a paper on exposure blending implemented in applications like enfuse:

http://research.edm.uhasselt.be/~tme...on_reduced.pdf

Last edited by Yellow_; 26th December 2011 at 09:41.
Old 26th December 2011, 12:48   #4  |  Link
wonkey_monkey
Formerly davidh*****
 
wonkey_monkey's Avatar
 
Join Date: Jan 2004
Posts: 2,817
Quote:
However, all the motion has already been captured (though the low exposure may affect its quality); it's the exposure 'flicker' and colour data that vary. It seems to make more sense to interpolate / normalise / equalise those rather than the motion, and to make use of the exposure range of the two frame sequences in a 16-bit file. The exposure difference is user defined as 2, 3 or 4 EV about the ISO value set.
Maybe I'm not quite following you... but it seems like the whole point of this camera mode is to capture more dynamic range over two images than is possible within one image. I don't see how you can do that without interpolating and combining neighbouring frames. If all you're going to do is normalise exposure, wouldn't you just get alternating crushed-white/crushed-black frames?

Incidentally, this may be of some interest

http://forum.doom9.org/showthread.php?t=152109

(it's an enfuse-like Avisynth plugin)

David
Old 26th December 2011, 14:56   #5  |  Link
Yellow_
Registered User
 
Join Date: Sep 2009
Posts: 378
David thanks for the link to Fusion, I'll give it a try.

Yes, interpolation is needed, but my understanding is that elsewhere the progressive stream is generally discussed and approached as 2x 12.5fps streams in one 'interleaved' file, with the assumption that interpolation must be done with a tool like InterFrame to generate the in-between frames completely, including the motion; hence the talk of varying shutter speeds between exposures, motion-blur calculations and optical flow.

But in reality the stream is 25p with the motion intact, just with a flickering exposure. That exposure is controlled in camera at known values (2, 3 or 4 EV about the ISO value set), not a sporadic shift that has to be analysed. So an approach that interpolates / merges / masks the luma and chroma shifts between exposures, then applies that shift to the necessary frames to generate a final wider-dynamic-range output into a 16-bit file, seems more suitable.
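For example, something like this; a sketch only, where both the Dither tools calls and the assumption that the camera's transfer curve is close enough to a pure gamma for a constant gain to work are mine:

```
# Since the offset is a known, fixed EV step, the darker frames
# could in principle be matched by a constant gain in linear
# light: 2 EV = 2^2 = 4x (values clip at 16-bit white).
src     = FFVideoSource("dual_exposure.avi")  # hypothetical filename
dark16  = src.SelectEven().Dither_convert_8_to_16()
lin     = dark16.Dither_y_gamma_to_linear()
gained  = lin.Dither_lut16("x 4 *", y=3, u=2, v=2)  # 4x gain on luma
matched = gained.Dither_y_linear_to_gamma()
return matched
```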

What does seem important is that there is no 'underexposed' frame: both exposures need to be sufficient to ensure the motion is captured. One exposure for shadow detail, the alternate exposure for highlights; minimal crushed blacks and blown highlights.

There is also the option of capturing LOG-based files in camera with controlled alternating exposures, if that were a more suitable basis for interpolating exposure values: the LOG capture lifts all shadow detail to a starting point of about 16, to be pulled down later in grading, and rolls off the highlights a bit too.

Does this make sense, or is my thinking flawed? Fusion looks like a very good starting point, although I notice the input 'must' be RGB? I'm hoping to find a solution that works with Dither tools' LSB/MSB stacked 16-bit functions and merges in linear light.

Last edited by Yellow_; 26th December 2011 at 15:13.