Old 2nd January 2015, 15:32   #1  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: void
Posts: 2,633
Motion estimation for advanced denoising filters

I'm curious whether motion estimation (mvtools) would help the denoising filters currently closest to ideal, like NLMeans or BM4D, reach even better quality. Would motion estimation here be useful, useless, or even harmful?
I searched Google and the answers are quite varied: some say it's good, some say it's bad, and now my head is spinning.
I'd like to hear what you all think about it. And please don't get too deeply professional, since I might be too young to understand that.

Edit by manono: I moved this here as I don't see how it belongs in General Discussion. I hope that's okay with you.

Last edited by manono; 4th January 2015 at 00:27.
Old 4th January 2015, 07:32   #2  |  Link
feisty2
I'm Siri
 
@manono, that's okay. I posted in General Discussion because I wanted advice about denoising algorithms in general, not about plugins in particular; mvtools is not the only motion estimation program (even mvtools has both an avs version and a vs version), and a BM4D plugin for avs/vs does not exist yet :P
Anyway, it suits both subforums. Thanks for leaving a note and letting me know what's going on.
Old 4th January 2015, 21:10   #3  |  Link
LoRd_MuldeR
Software Developer
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Since nobody has posted an answer yet: I don't know how NLMeans (Non-Local Means) is generally implemented in video filters, but I know that the algorithm was originally proposed for image processing. The straightforward way to apply an image algorithm to video is to treat each frame as a separate image. If the NLMeans video filters are actually implemented that way (somebody more familiar with this topic may confirm or dispute it), then they obviously have no temporal component. And without a temporal component, they probably cannot benefit from motion estimation.

(If motion estimation is used in a denoising filter, it is generally done to distinguish real noise from motion between consecutive frames. Put simply, in areas without motion we can safely blend pixel values from consecutive frames to smooth out the noise. If we did the same in areas with motion, it would cause ghosting. But if you do not apply such temporal processing at all, then motion estimation is probably neither required nor helpful.)
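That logic can be illustrated with a minimal toy sketch in plain Python, working on a single scanline of pixel values. The threshold and the three-frame average are arbitrary illustrative choices, not any particular filter's implementation:

```python
# Toy sketch: blend a pixel across frames only where no motion is detected.
# Frames are plain lists of ints (one scanline); the threshold is arbitrary.

def temporal_blend(prev, cur, nxt, motion_threshold=10):
    out = []
    for p, c, n in zip(prev, cur, nxt):
        # Treat a large frame-to-frame difference as motion: leave the pixel alone.
        if abs(c - p) < motion_threshold and abs(c - n) < motion_threshold:
            out.append(round((p + c + n) / 3))  # static area: average the noise out
        else:
            out.append(c)  # moving area: blending here would cause ghosting
    return out

# Static pixel (100/104/98) gets smoothed; moving pixel (50/200/50) is kept.
print(temporal_blend([100, 50], [104, 200], [98, 50]))  # [101, 200]
```

This is exactly where motion compensation helps: instead of skipping moving areas entirely, the previous and next frames are warped along the motion vectors first, so that more of the image counts as "static" and can be safely blended.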

Last edited by LoRd_MuldeR; 4th January 2015 at 21:24.
Old 5th January 2015, 02:59   #4  |  Link
foxyshadis
Angel of Night
 
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,559
TNLMeans can be spatio-temporal (though by default it is spatial only), while NLMeansCL2 is spatial only, so motion compensation brings zero benefit in that case. But in general, yes: that's why existing heavy-duty scripts like MCTemporalDenoise, TemporalDegrain, and QTGMC in progressive mode all use motion compensation. It benefits almost any temporal filter.

BM4D sounds like something Tritical would code up for us. Neat. It's still spatial-only, but I could see a temporal extension similar to what TNLMeans allows.
Old 5th January 2015, 04:01   #5  |  Link
feisty2
I'm Siri
 
I read some more on Google over the last two days. NLMeans has a 3D version for video (az > 0 in TNLMeans), which is what I always use for denoising edges (I use dfttest to denoise flat areas and merge the two together with an edge mask afterward; both TNLMeans and dfttest work on motion-compensated clips to get the best quality).
If I understand correctly, BM4D is a spatio-temporal filter (BM3D is spatial), so I guess motion estimation is no longer required in BM4D, as it's already part of the algorithm. BM4D (Block-Matching 4D) uses the same kind of block-matching algorithm as mvtools, but in both the spatial and temporal directions (hence the two extra dimensions), while mvtools only block-matches in the temporal direction. It's like an upgraded mvtools, and it would require 4D filters if there were a BM4DCompensate (analogous to MCompensate).

Last edited by feisty2; 5th January 2015 at 04:06.
Old 5th January 2015, 12:33   #6  |  Link
mawen1250
Registered User
 
Join Date: Aug 2011
Posts: 103
Motion estimation in x26x and mvtools is based on block matching. BM3D/BM4D is also based on block matching, so in theory the BM algorithms used in mvtools could be modified to implement the BM part of BM3D.
The main difference is that, for each reference block, ME searches for a single matched block in a neighboring frame, while BM3D searches for multiple similar blocks in the current frame to form a highly redundant group, which is then used for collaborative filtering.
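That contrast can be sketched in a toy 1-D example. The helper names are made up for illustration; real implementations work on 2-D blocks with heavily optimized SAD search:

```python
# Toy 1-D sketch of the difference: motion estimation keeps the single best
# SAD match, while BM3D-style grouping keeps every block similar enough to
# the reference, so the whole group can be filtered collaboratively.

def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_match(ref, frame, block=3):
    # ME: the one best-matching block position in the search frame.
    positions = range(len(frame) - block + 1)
    return min(positions, key=lambda i: sad(ref, frame[i:i + block]))

def group_similar(ref, frame, block=3, max_dist=5):
    # BM3D-style: all block positions whose SAD is under a threshold.
    return [i for i in range(len(frame) - block + 1)
            if sad(ref, frame[i:i + block]) <= max_dist]

signal = [10, 11, 10, 50, 9, 11, 10]
ref = [10, 11, 10]
print(best_match(ref, signal))     # 0 (the single closest block)
print(group_similar(ref, signal))  # [0, 4] (every similar block, for grouping)
```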
As for NLMeans, the weight of each pixel is determined by the similarity between its neighborhood and the reference pixel's neighborhood, so it is also possible to adapt BM algorithms to this case. Here the BM machinery would only be used to compute the weights (similarities), not to actually pick matches.
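That weighting scheme looks roughly like the following toy 1-D sketch. The function name and parameters are invented for illustration; h plays the role of the filtering strength:

```python
import math

# Toy NLMeans weighting on a 1-D signal: every candidate pixel contributes
# with a weight based on how similar its surrounding patch is to the
# reference patch; h controls how fast the weight decays with dissimilarity.

def nlmeans_pixel(signal, idx, patch=1, h=10.0):
    ref = signal[idx - patch: idx + patch + 1]
    num = den = 0.0
    for j in range(patch, len(signal) - patch):
        cand = signal[j - patch: j + patch + 1]
        dist2 = sum((a - b) ** 2 for a, b in zip(ref, cand))
        w = math.exp(-dist2 / (h * h))  # similar patch -> weight near 1
        num += w * signal[j]
        den += w
    return num / den  # weighted average over all candidate pixels
```

A block-matching step, as suggested above, could prune this loop to only the most similar patches instead of weighting every position in the search window.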

Last edited by mawen1250; 5th January 2015 at 12:52.
Old 5th January 2015, 14:27   #7  |  Link
MrPete
Registered User
 
Join Date: Dec 2014
Location: @2100m in Colorado USA
Posts: 56
FWIW, something I deal with along these lines:

Sometimes it is helpful if not crucial to consider exactly which "layer" is being denoised, and also motion-compensated.

For example, I am converting old film to digital. The following "layers" exist in my workflow:

- Original real-world scene: may have had moving people or background (eg shot from a train)
- Original film camera: used grainy film, each frame might have moved slightly w/ respect to others - shows as motion relative to sprocket holes
- Playback projector: film frame positioning in gate can vary slightly - shows as sprocket hole motion but also moves the image in the frame
(Playback-time dust also can be thought of as additional "noise")
- Capture camera: sensor has ISO-based noise, and camera can vibrate, introducing a fourth motion - shows as motion of gate-edge, sprocket, and image in frame

So for me, I have four potential motion sources, and two or three sources of noise!

Practically speaking:
* As long as the motion compensation is NOT doing variable-zoom resizing, I find it best to "deshake" all the way back to the original scene before denoising. This lets the temporal aspect do the best job possible, because static features in the original image can be fully aligned.
* One implication: I fully crop the image before deshaking. I'm NOT just trying to align the captured frames with one another.
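A toy version of that ordering, with hypothetical names: integer shifts on a 1-D scanline stand in for real 2-D, sub-pixel stabilization, and the temporal average stands in for the denoiser:

```python
# Toy sketch of "deshake before denoise": estimate each frame's integer
# offset against the first frame, shift it back into alignment, then
# temporally average. Edges of width max_shift are cropped first, echoing
# the "crop before deshake" point above.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def estimate_shift(ref, frame, max_shift=2):
    # Pick the shift that best aligns the frame's core with the reference core.
    shifts = range(-max_shift, max_shift + 1)
    return min(shifts, key=lambda s: sad(ref[max_shift:-max_shift],
                                         frame[max_shift + s:len(frame) - max_shift + s]))

def align_and_average(frames, max_shift=2):
    ref = frames[0]
    acc = [0.0] * (len(ref) - 2 * max_shift)
    for f in frames:
        s = estimate_shift(ref, f, max_shift)
        aligned = f[max_shift + s: len(f) - max_shift + s]
        acc = [a + x for a, x in zip(acc, aligned)]
    return [a / len(frames) for a in acc]

ref = [0, 0, 5, 9, 5, 0, 0, 0]       # a bright feature on a scanline
shaken = [0, 0, 0, 5, 9, 5, 0, 0]    # same scanline, shifted by one pixel
print(estimate_shift(ref, shaken))           # 1
print(align_and_average([ref, shaken]))      # [5.0, 9.0, 5.0, 0.0]
```

Without the alignment step, averaging these two frames would smear the feature across neighboring pixels, which is exactly the ghosting that deshaking first avoids.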