#21 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
Indeed, I use padding during the search, but I still don't use it in the filters. I'll correct MVCompensate tonight (I didn't experience a crash while testing it; I've been lucky).

Padding is used in almost every motion-compensation-based codec (at least MPEG-4 and H.264), because it gives far better behavior of the motion vectors at the frame boundaries.
#22 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
Alright, the new version is up: http://jourdan.madism.org/~manao/MVTools-v0.9.5.3.zip

Mainly a bugfix release (several filters were affected by a silly bug). There is also a new feature for MVCompensate, as requested by Fizick: "scbehavior", a boolean set to true by default; set it to false to keep the previous frame over a scenechange. Enjoy (and report back, because chroma MC means more bugs).
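A minimal usage sketch (the filename is a placeholder, MVAnalyse is left at its defaults, and the call form follows the other filters of the package; only "scbehavior" itself is taken from this post):

Code:
LoadPlugin("MVTools.dll")
AviSource("movie.avi")
vectors = MVAnalyse(last)                       # motion vectors, default settings
MVCompensate(last, vectors, scbehavior=false)   # keep the previous frame over a scenechange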
#23 | Link |
AviSynth plugger
Join Date: Nov 2003
Location: Russia
Posts: 2,183
Manao, it works! Thanks.

Now I want to ask you about future deblocking. Do you know about "overlapped block motion compensation"? The base article is: M. T. Orchard and G. J. Sullivan, "Overlapped block motion compensation: An estimation-theoretic approach", IEEE Transactions on Image Processing, vol. 3, no. 5. There are many citations of it (I don't have its content): http://citeseer.ist.psu.edu/context/43092/0
__________________
My Avisynth plugins are now at http://avisynth.org.ru and mirror at http://avisynth.nl/users/fizick I usually do not provide a technical support in private messages. |
#24 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
Yes, OBMC is implemented in MVConvertFPS, MVBlur and MVInterpolate, but not in MVCompensate, since the purpose there isn't the same. However, it seems I do OBMC in a slightly different way (for example, I was moving a 12x12 block instead of an 8x8 one).

MVCompensate is still buggy; I'll release a version that should work during the day (hopefully it will come with some other goodies). I'll see whether OBMC is useful for plain motion compensation, but other things have to be added first.
#25 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,393
Quote:
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) Last edited by Didée; 19th September 2004 at 14:40. |
#26 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
New version is up: MVTools 0.9.6.1

Lots of bugfixes for the existing filters. No OBMC for MVCompensate, but I'll keep the idea in mind. Of the existing filters, those that should work are MVMask, MVShow, MVCompensate, MVDenoise, MVSCDetection and MVAnalyse. The three others (the same as always) still don't check whether the MV goes out of the picture and so may crash unexpectedly. It shouldn't happen too often, though, since the kind of OBMC implemented in them somewhat checks for that (but not properly, hence the warning).

Now, for the three new filters. Two have nothing to do with motion compensation, but I didn't want to put them in separate binaries, since they'll mainly be used with filters from this package. The third one uses vectors, and somehow integrates the two others. But let's cut to the point:

* QDeQuant(clip c, int quant, int level): takes a clip and quantizes it, using an approximation of the H.264 DCT. It filters the three planes (4x4 blocks for each of them, so the chroma isn't processed exactly as in H.264). It's not exactly the H.264 DCT because at q1 it's lossless, and at q51 it's not that bad, but you can raise quant over 51. Level is the reference level of the picture. By default it's zero, but it can be set, for example, to 128. The picture is then treated as if pixels ranged from -128 to 127, hence avoiding errors around 128.

* Deblock(clip c, int quant, int aOffset, int bOffset): takes a clip and deblocks it using H.264 deblocking, as if the picture were made only of inter blocks. This time quant ranges from 0 to 51, as in H.264, and has the same impact. aOffset and bOffset let you raise / lower the quant used when deciding some internal thresholds; both default to 0. Be warned that the filter should do nothing at quant < 16 if aOffset and bOffset are both zero. This is intended behavior (it thus partially respects the norm).

* EncDenoise(clip c, clip vectors, bool scbehavior, int quant, int aOffset, int bOffset, int thSCD1, int thSCD2): merges Deblock, QDeQuant and MVCompensate, taking from them the names and behavior of their parameters. It basically does an H.264 encode as if all blocks were 8x8 inter blocks. The reference frame is the previous frame output by the filter (if it is the correct one, else it's the previous frame of the source), and the MVs are those given by MVAnalyse on the source. The reference frame is compensated by the vectors, then the residual difference is quantized / dequantized and added to the result of the motion compensation. Finally, the frame is deblocked, and serves as the reference for the next one. I only made this filter because in-loop filtering could not be done with AviSynth; otherwise the three others taken alone would have done the job. So, of course, it's bound to be buggy (don't set quant to zero, for example, or you'll end up with a division by zero).

Have fun...

Last edited by Manao; 20th September 2004 at 22:14.
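A minimal usage sketch of the three new filters, based only on the signatures above (the filename and the quant values are arbitrary examples, and MVAnalyse is left at its defaults):

Code:
LoadPlugin("MVTools.dll")
AviSource("movie.avi")

# stand-alone quantize / dequantize with the approximated H.264 DCT, pixels treated as centered on 128
q = QDeQuant(last, quant=26, level=128)

# stand-alone H.264-style deblocking, as if every block were an inter block
d = Deblock(last, quant=30)

# the in-loop combination: compensate, quantize / dequantize the residual, add it back, deblock
vectors = MVAnalyse(last)
EncDenoise(last, vectors, quant=26)   # don't set quant to zero (division by zero, see above)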
#27 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,393
Hey, where are the ME-denoising people shouting hooray? This is at least worth a comment, isn't it?

Big thanks for these interesting new tools, Manao! I won't get to testing before this evening, though. For now, let me throw in some thoughts & questions:

- Your direct link above doesn't work for me - it seems to need the referer from that site itself. [edit] No, a typo in the URL - Leak proved me blind. [/edit]

- I think I noticed in v0.9.5.3 that chroma was compensated incorrectly [edit] by MVCompensate [/edit], giving a sort of colored border around objects. Is that fixed, or was it another error on my side?

- Sorry, but I have to take the steam down a little. Even if this method of quant/dequanting the residual error after motion compensation yields good results at first glance, it is conceptually wrong IMO. (You know, there's still a promise pending.)

- Is it somehow possible to get a short-but-more-in-depth explanation of the motion estimation parameters? I'm a little lost about "one-step search" vs. "n-step search" vs. "diamond search", and how the step size affects each of these. I could imagine I'm not alone with that... (a good link would do it as well)
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) Last edited by Didée; 20th September 2004 at 11:57. |
#28 | Link |
ffdshow/AviSynth wrangler
Join Date: Feb 2003
Location: Austria
Posts: 2,441
Quote:

Just a simple typo, not referrer checking...

np: Mikkel Metal - Nepal (Kompakt Total 5)

Last edited by Leak; 20th September 2004 at 11:36.
#29 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
@All: there seems to be a crash with EncDenoise, perhaps because of scenechanges. I'll try to correct that tonight.

@Didée: as Leak said, I swapped a '-' and a dot. The chroma behavior is one of the bugs that should have been corrected in MVCompensate. I agree that quant / dequant for denoising isn't theoretically sound. However, while that's especially true of MPEG-4's DCT, the H.264 one behaves better in this regard, because it doesn't add ringing. Moreover, the quant / dequant isn't used alone, there is also the deblocking (if you want, you can try without deblocking: just set aOffset and bOffset to (10 - quant)).

For the ME settings, especially the search type: the algorithm used is EPZ. Its principle is rather simple. We scan the image horizontally, and for each block we test 5 different vectors, called predictors. These 5 vectors are found like this:

- the vector of the left block in the current frame (it has already been computed, since we scan the frame left to right, top to bottom)
- the vector of the top block, current frame
- the vector of the top-right block, current frame
- the median vector (i.e. each of its coordinates is the median of the coordinates of the first three vectors)
- the vector computed by the hierarchical analysis: the ME is done first on 128x128 blocks, then 64x64, then 32x32 and so on. From the vectors of the 128x128 blocks, we can interpolate a predictor for the 64x64 blocks (a simple bilinear interpolation is enough).

Once the best vector in this set of predictors is found ("best" meaning the one that minimizes the cost: sad + lambda * length), we use it as the starting point of a refinement algorithm. That algorithm is selected by the SearchType parameter. You'll find a description of the different algorithms available in this article. XviD and EPZ use a diamond (i.e. logarithmic) search, and I advise this one as well. I'll try to make the documentation clearer on this aspect, or to find a better article to link to.
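A script-level sketch of the above (the SearchType parameter is named in this post, but the value selecting the diamond / logarithmic refinement is an assumption, so check the bundled documentation):

Code:
vectors = MVAnalyse(last, searchtype=2)   # 2 is assumed to select the diamond (logarithmic) refinement
MVCompensate(last, vectors)               # compensate using the refined vectors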
#30 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,393
Thanks for the quick run-through. (Well, those are the very basics, aren't they?)
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
#31 | Link |
interlace this!
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
weeeee!!! thanks for the new filters!

i tried hacking together the encoder-style MVDenoise using DCTfilter and MVCompensate, but the results weren't as good as what i got with 4 offset XviD encodes.

@ didee: can you explain why the quantization approach isn't that good for denoising? i know simple transcoding is a dumb idea, but my experiments with overlaying several encodes (with different offsets) didn't give blocking, kept a good image and (not surprisingly) made compressibility a huge amount better. it didn't strike me as all that elegant, but i'm warming to the idea.

i'd love to see frequency-based noise reduction in images for removing harmonics like those in TV interference (there's WNR for VirtualDub, but that tends to take out way too much).
__________________
sucking the life out of your videos since 2004 |
#32 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
If the reference frame you're using is noisy, and the frame you want to denoise is also noisy, then quantizing the difference between the frame and the motion-compensated reference will suppress noise from that difference. But the noise comes back when you add the result of the quantization to the motion-compensated frame, which is itself noisy.

That's why the reference frame has to be denoised. Then the difference contains only the noise of the current frame, and the quantization / dequantization will partly remove it.
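In loose notation (mine, not the documentation's), the loop EncDenoise performs is roughly:

Code:
out[n] = Deblock( MC(out[n-1]) + DeQuant( Quant( src[n] - MC(out[n-1]) ) ) )

If a noisy src[n-1] were used as the reference instead of out[n-1], the MC(reference) term would re-inject that noise untouched; only the residual passes through the quantizer.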
#33 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 5,393
Manao explained one half of the story. Without integrated denoising, some noise will escape the process - therefore denoising is an important aspect of this technique. But then, through the denoising some detail will also be lost, which brings up the danger of unintentionally altering some detail. Most probably only weak detail, but still.

But I was looking at it from another side: even the best imaginable block-based ME can only work on orthogonal squares or rectangles, yet the textures it sees are mostly not fully identical from frame to frame. Imagine an object that, as it travels through the frame, is also, say, slowly rotating around some axis in space. This leads to small texture changes between frames, and these small changes are likely to appear just as noise after compensation, and are in danger of being filtered away together with the noise. (I imagine something like non-constant texture smearing, or flickering, on moving objects.)

(On top of that, the whole thing definitely needs to work with at least halfpel precision, since all texture information actually appears in an anti-aliased representation of a higher-resolution source. Therefore, each moving texture actually IS shimmering, if we look at it on a pixel level.)

Now, I don't say that this "new" method is just bad. On the contrary, I can imagine it yields quite pleasing results if everything is done right. (But hey, wait for me ... first I need to test it, after all.) I only want to say that the method is not fully optimal. What I call an "optimal denoiser" is ... the human eye, or better said, the brain: we have little problem detecting even very weak detail way below the noise level, if there is any at all. And I think the only way to get as close to this as possible is good old temporal averaging - because that's mostly what our brains do with noisy input. Of course, it needs some more adjustments than "simply" doing MVCompensate().TemporalSoften(). I've been trying for some months to find the time to elaborate a little on that :|
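(For reference, the "simple" variant would be something like this - only a sketch, with arbitrary thresholds, and with the core TemporalSoften doing plain temporal averaging on the compensated clip:)

Code:
vectors = MVAnalyse(last)
MVCompensate(last, vectors)
TemporalSoften(2, 4, 8)   # radius 2, luma threshold 4, chroma threshold 8 - placeholder values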
__________________
- We´re at the beginning of the end of mankind´s childhood - My little flickr gallery. (Yes indeed, I do have hobbies other than digital video!) |
#34 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
I found a bug (which was plaguing both MVCompensate and EncDenoise). Fixing it should at last solve Mug Funky's issue. There will be a release tonight, hopefully with some speed improvements, because right now it's sloooow (plain C code for everything).
#35 | Link |
AviSynth plugger
Join Date: Nov 2003
Location: Russia
Posts: 2,183
BTW, there are also methods that are not block-matched estimation / compensation but flow methods (pel-based). But that will be another plugin...
__________________
My Avisynth plugins are now at http://avisynth.org.ru and mirror at http://avisynth.nl/users/fizick I usually do not provide a technical support in private messages. |
#36 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
Alright, a silent update with bugfixes for MVCompensate and EncDenoise (no change of version, because the only change was a 'width' becoming a 'height', as it should have been in the first place).

No speed optimizations yet, because my favorite bug hunter (aka FuPP) found something rather disturbing (a huge slowdown) with masktools + Deen. But anyway, since I'm in the same office as fenrir, who made x264, I think optimizations for fdct / quant / dequant / idct will come soon enough.

I forgot to correct the link in the previous post; it's done now.

And finally, EncDenoise is a temporal denoiser, far more temporal than TemporalSoften, because it uses frames all the way back to the previous keyframe.
#37 | Link |
Registered User
Join Date: Mar 2004
Posts: 95
Hi,

Is it possible, with one of the MVTools filters or with a combination of them, to obtain a P-frame or a B-frame (or a close approximation of one)?

To make myself clearer: there is a thread in the XviD section about obtaining a good quantization matrix by running DCTune on several frames of the movie and averaging the results. One problem with that approach was that, for the inter matrix, it also used unprocessed frames, so the result couldn't be accurate, because that is not how inter frames (more precisely, delta frames) look. Now, if we could obtain such a delta frame after motion estimation has been performed (or a close representation of it) and run DCTune on it, that would make the method more reliable.

Bye
#38 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
Quote:

* The fdct / idct on a P-frame is applied to the difference between the motion-compensated frame and the source frame. Here you get (difference / 2 + 128), because you can't output a frame with 'negative' pixels, nor with pixels over 255.
* Keyframes are (or should be) output totally grey.
* The vectors won't be exactly the same as those XviD will use.
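(If you just want to look at such a residual, a rough approximation - without the halving described above - can be built from the core Subtract filter; a sketch, with MVAnalyse left at its defaults:)

Code:
source = AviSource("movie.avi")
vectors = MVAnalyse(source)
compensated = MVCompensate(source, vectors)
Subtract(source, compensated)   # difference frame, offset around mid-grey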
#39 | Link |
Registered User
Join Date: Mar 2004
Posts: 95
|
Cool, many thanks!
Quote:
Quote:
Many thanks again! Last edited by marcellus; 22nd September 2004 at 13:48. |
#40 | Link |
Registered User
Join Date: Jan 2002
Location: France
Posts: 2,856
The information is stored on 16 bits, because it will be fed directly to the fdct, which also works on 16 bits. And halving the coefficients after the idct is a way of quantizing.

The default settings are close enough to libavcodec's behavior (diamond, SAD, EPZ). However, you won't be able to put an I-frame every 15 frames with my filters; you'll have to do it with AviSynth (discarding one frame out of every 15). B-frames will be hard to model (it could be done with forward and backward motion compensation, averaging, and differencing).
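(Discarding one frame out of every 15 can be done with the core SelectEvery filter - a sketch, assuming the frame that would have been the I-frame sits at position 0 of each group of 15:)

Code:
# keep 14 of every 15 frames, dropping the position a real encode would code as an I-frame
SelectEvery(last, 15, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)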