20th July 2006, 12:05 | #101 | Link | |
Registered User
Join Date: Jan 2006
Posts: 3
|
Quote:
Here's an example: http://rapidshare.de/files/26290105/Beispiel.m2v.html Could it be a typical moiré effect? Last edited by saibo; 20th July 2006 at 16:32. |
|
8th September 2009, 20:50 | #102 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
Fizick,
would it be possible to expose just the forward / inverse transformations as functions (like the left part of 'show')? That would make it possible for me to experiment with manipulations in the frequency domain - like overlaying a mask... |
11th September 2009, 18:05 | #103 | Link |
AviSynth plugger
Join Date: Nov 2003
Location: Russia
Posts: 2,183
|
martin53, sorry for the late response.
It is (almost) not possible right now. 'show' does not represent the raw transformation, but only its real part, logarithm-scaled. FFTW and I use complex float as the transformation format.
1. You must store the transformed image data somehow - well, it may be packed into some fictive format (e.g. RGB32).
2. You must provide means to manipulate this data - that is not possible in a fictive format.
Try asking the AviSynth core developers to implement a float (complex float) image format. It will not be in AviSynth 2.6 - I already asked. Maybe in the mythological v3.0. It is possible to use an integer storage format for DCT (not for the FFT used in DeFreq) with some scaling, but 8 bit is too low anyway, IMO. Try writing C code to experiment.
__________________
My Avisynth plugins are now at http://avisynth.org.ru and mirror at http://avisynth.nl/users/fizick I usually do not provide technical support in private messages. |
11th September 2009, 19:27 | #104 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
Fizick,
a big thanks for your response! That's really bad news, but I had a feeling there was a problem like this.

Do you think it would be possible to multiply (or add, with sign) a mask in the frequency domain? I am almost sure it would be useful to subtract a small amount of the high-frequency area - similar to what is done in subtle audio noise reduction in the frequency domain. And multiplication with a mask could be a universal generalization of your current maximum search and automated peak suppression in the frequency domain, because one could simply paint the suppression filter.

One more aspect: the transformation maps something that is 'everywhere' in the 'time' or 'xy' domain to one point in the frequency domain, and vice versa. An image blurred by camera shake contains the shake path (of the camera movement while the shutter was open) as an infinity of parallel paths across the whole image. In effect, the image lacks high frequency along the axis of the shake path. I wonder whether this kind of information is also transformed to a certain location or localized figure in the frequency domain that could be fixed - it should appear as a dark area there. If so, FFT/DCT could provide an approach to sharpen images which suffer from this. Of course it is impossible to reconstruct all the information - it was lost during capture. But it might be possible to transform the ugly trails of the lit points into more standard shapes, leaving just some soft focus.

I know I can at least do some research in that direction with the help of 'show'. I'll add my results to this thread.
martin |
13th September 2009, 12:16 | #105 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
Follow-Up
I made an analysis script so everyone can examine the effects of simple blurring on spectral density for themselves.
Code:
pi = 3.1415926536

#====== USER INPUT =====
# Motion blur length (integer) [0 ...]
# the value indicates the length of the motion trail
# 0 produces an uncorrelated random noise clip
# variation changes the number of notch bands in the spectral density
blur = 7

# direction of blur in radians
# variation changes the angle of the notch bands in the spectral density
global phi = 0 #pi/20

# deltaPhi adds to the angle of the blur along its path and results in a bent motion
# values != 0 change the linear notch bands to elliptic(?) figures
global deltaPhi = 0 #1.0/blur * pi/4

# oversample factor (integer) [1 ...] for the blur, compensates the Overlay(...) integer offset error.
# 8 seems the best compromise between good correction and expense.
# no need for oversampling with phi=0, phi=pi/2, etc.
global oversample = 8

# clip size (integer) [32, 64, 128, 256 ...]
# dimension of the quadratic original clip.
# 64 is fine to see the principles
dim = 64

#====== SUBFUNCTIONS =====
# mBlur performs the motion blur by recursively overlaying the semi-transparent, slightly shifted original
function mBlur(clip shifted, clip original, int i, int blurLength)
{
  Overlay(shifted, original, x=round(cos(phi+i*deltaPhi)*i*oversample), y=round(sin(phi+i*deltaPhi)*i*oversample), opacity=1.0/(i+1), pc_range=true)
  return i >= blurLength ? last : mBlur(last, original, i+1, blurLength)
}

#====== MAIN SCRIPT =====
# create a random noise clip
BlankClip(width=dim+2*blur, height=dim+2*blur, color=$808080).AddGrain(2500)

# apply a motion blur to the clip - oversampled operation for better math precision
oversample > 1 ? Lanczos4Resize(oversample*(dim+2*blur), oversample*(dim+2*blur)) : last
blur > 0 ? mBlur(last, last, 1, blur) : last
oversample > 1 ? Lanczos4Resize(dim+2*blur, dim+2*blur) : last

# cut off the border of the original clip where the motion blur could not be applied properly
crop(blur, blur, -blur, -blur)

# make DeFreq() calculate the spectral density of the blurred noise
# and display the clip and its spectral density distribution
ConvertToYUY2()
stackhorizontal(last, crop(DeFreq(show=2), 0, 0, -dim/2, 0))
- Movies with perfect quality have an even spectral density, i.e. a constant gray in "show" mode (of course not in one single picture, but averaged over a complete movie).
- For real pictures with good quality, the limited bandwidth makes the average spectral density dim toward the right. In acoustic terms, this would sound a bit dull. I verified this with some movies in place of the random noise.
- A noticeable motion blur has an effect that reminds my acoustic mind of interference with a slightly delayed echo: some frequencies are boosted because the original and the echo add with the same polarity, while the frequencies in between are extinguished because the echo has the inverse polarity and always sums to zero with the original. martin Last edited by martin53; 13th September 2009 at 19:59. |
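For anyone without AviSynth at hand, the notch-band effect the script displays can be reproduced with a few lines of numpy. This is my own sketch, not a translation of the script: a circular-shift average stands in for mBlur() with phi=0 and no oversampling, and the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, blur = 64, 8
noise = rng.standard_normal((dim, dim))

# Horizontal motion blur: average `blur` shifted copies of the noise
# (circular shift keeps the math exact, unlike Overlay's cropping).
blurred = np.mean([np.roll(noise, s, axis=1) for s in range(blur)], axis=0)

# Power spectrum of the blurred noise: vertical notch bands appear at
# horizontal frequencies k = m * dim/blur, matching the "show" display.
spec = np.abs(np.fft.fft2(blurred)) ** 2
notch_cols = spec[:, np.arange(1, blur) * (dim // blur)].mean()
other_cols = spec[:, 1:dim // blur].mean()
assert notch_cols < other_cols * 1e-3
```

The "delayed echo" analogy in the observations above is exactly this comb-like pattern: shifted copies add in phase at some frequencies and cancel at the ones in between.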
13th September 2009, 14:51 | #106 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
2nd Follow-up
Fizick and Mohan,
I have some ambitions to enhance some of my photos and movies with spectrum-based concepts. I have photos in my archive where I could not keep the camera steady. And when I shoot video clips, it sometimes happens that during a pan the frames are blurred, because it is dark and the camera uses a long exposure time.

When I experimented with the script from my 1st follow-up, I was reminded of equalizers. While those equalizers (those with many sliders, or the parametric ones which allow more precise control over fewer filters) only need to compensate a one-dimensional frequency response, the task is more complex in the two-dimensional domain. At the least, besides the ability to attenuate some frequencies, all equalizers also need to amplify other frequency ranges. So I kindly ask both of you: could DeFreq() and/or FFTQuiver() be developed further to allow amplification of frequency areas? As far as I understand, both are designed only to kill some frequencies up to now.

The question remains: where to derive the filter settings from? I would like to present my thoughts here; it is not yet a perfect description ready for 1:1 implementation.

When working with photos, there is no chance to scroll through frames and use DeFreq()'s show=2 averaging mode. Of course, the specific information in the image makes up a complex spectral density, and simply normalizing to that would return random noise, not a processed image. But specifically with motion blur, there is one transformation which is neutral to the problem and could help: it should be possible to rotate the original image horizontally or vertically through the frame, and thus get a vast number of frames for analysis:
Code:
overlay(stackhorizontal(last, last), x=...) # x = -1 ... -(width-1)
After that, a blurring should provide what I'd like to call the "defect mask", even from one source image. I'm still unsure whether F2QMask could already use the inverse of this, with some gamma change, as an "equalizer mask", because of the black-to-white range and the fact that F2Quiver multiplies (or convolves?) with the mask instead of dividing by (i.e. normalizing to) it. The proportional vs. inversely proportional characteristic of the multiplication is essential here!

When working with movies, any manual process - and maybe even the slow approach above - is impractical. But I think it is not too complicated for a filter chain to generate the "equalizer mask" automatically on a frame-by-frame basis for DVD or HD resolutions. Again, on a frame-by-frame basis there is no averaging approach for obtaining the spectral density. But from all the above we learned that the gradients in the frequency domain are extremely small, i.e. it is not necessary at all to have a mask with sharp edges. The mask is much more like a blurry cloud.

And there is one more important fact: the far right of the frequency domain deals with the hard edges of the image. It is not reasonable to correct too much there. But the more left-hand areas of the frequency domain represent larger distances between pixels - this is where motion blur affects the image most. In a few words: it should be sufficient to blur the obtained frequency representation of a frame massively (and I really mean massively) and to restrict it to the left-hand part in order to get the "equalizer mask". Again, the "equalizer mask" would need to be a point-by-point divisor in the frequency domain.

Of course I'm also still eager to see the "globally subtract a little bit from the luma spectrum" feature in FFTQuiver() or DeFreq(). I experimented a bit with the script from the previous post and a chain of blur()s instead of the motion blur, and it seems to me that it would be useful to have DeFreq() search for the point in the frequency domain with the least spectral density instead of the most, and then subtract this amount from all frequencies before retransforming. It should help reduce the noise from those tiny digital camera sensors.

(To others who might come across this topic: don't forget to read the thread about the other capable FFT filter, FFTQuiver from vcmohan: http://forum.doom9.org/showthread.php?t=107725)

martin Last edited by martin53; 13th September 2009 at 15:20. Reason: omission |
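The "subtract the least spectral density from all frequencies" idea is essentially spectral subtraction with the noise floor estimated as the spectrum minimum. A minimal 1-D numpy sketch of that idea, assuming nothing about DeFreq's internals (the signal, noise level, and names are all mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)          # clean tone
noisy = signal + 0.1 * rng.standard_normal(n)   # plus broadband noise

# Crude spectral subtraction: take the smallest magnitude in the
# spectrum as the noise-floor estimate, subtract it from every bin
# (clamping at zero), and resynthesize with the original phases.
spec = np.fft.fft(noisy)
mag, phase = np.abs(spec), np.angle(spec)
floor = mag.min()
cleaned = np.fft.ifft(np.maximum(mag - floor, 0.0) * np.exp(1j * phase)).real

# At least one bin is now exactly zero: the floor has been removed.
assert np.abs(np.fft.fft(cleaned)).min() < 1e-6
```

Using the single smallest bin is a very conservative floor estimate; audio noise reducers usually average over many frames instead, which is exactly the averaging problem martin notes is unavailable frame-by-frame.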
14th November 2009, 18:31 | #108 | Link |
AviSynth plugger
Join Date: Nov 2003
Location: Russia
Posts: 2,183
|
Mounir,
are your full script and "error" top secret?
__________________