24th January 2013, 10:38   #1
Mystery Keeper
Need help with temporal motion-compensated denoising

When dfttestMC came out, I really loved the idea of pure temporal denoising. In fact, it seems like the best way to denoise with minimal loss of detail. Later I modded dfttestMC to work with the new SVP plugin, which is at the very least much faster. I use it like this:
Code:
#================================================
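# Split into even/odd fields and denoise each field separately;
# the fields are re-interleaved and woven back together at the end.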
c = separatefields()
a = c.selecteven()
b = c.selectodd()

prea=dfttest(a, sigma=16, dither=1)
preb=dfttest(b, sigma=16, dither=1)

thSAD = 150
thSCD1 = 100
thSCD2 = 84
ssxstring = """ "0.0:1.0 1.0:1.0" """
ssystring = """ "0.0:1.0 1.0:1.0" """
sststring = """ "0.0:1.0 0.01:0.0 1.0:0.0" """
sdftparams = ", smode=0, ftype=2, ssx=" + ssxstring + ", ssy=" + ssystring + ", sst=" + sststring
sanalyzeparams = ", block:{w:16, h:16, overlap:3}"

a = a.dfttestSV(pp=prea, mc=5, sbsize=1, sosize=0, pel=4, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2, dfttest_params=sdftparams, gpu=true, analyzeparams=sanalyzeparams)
b = b.dfttestSV(pp=preb, mc=5, sbsize=1, sosize=0, pel=4, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2, dfttest_params=sdftparams, gpu=true, analyzeparams=sanalyzeparams)

#around 530 - strings, fingers, gradient
#3496 - fine curtain
#8954 - shirt blur
#around 12970 - artifacts

interleave(a,b)
weave()
#================================================

QTGMC(Preset="Placebo", EdiMode="NNEDI3", ChromaEdi="", EdiQual=2, NNeurons=4, NNSize=3, NoiseProcess=0, SubPel=4, SubPelInterp=2, TrueMotion=true, Precise=true, ChromaMotion=true, GlobalMotion=true, search=5, searchparam=32).SelectEven()

GradFun3()
The idea is to eliminate all temporal frequencies except the lowest one. It worked, but it still couldn't beat the strong noise on dark backgrounds. It seems the lowest frequency is far from zero, so I added a second pass with TemporalSoften to the denoising function:
Code:
# Based on motion-compensated dfttest by twc
# Aka: Really Really Really Slow
#
# Requirements:
# dfttest
# MVTools2
# SVP >= 1.0.5
# 
# Suggested:
# Dither (for stack16 processing)
#
# Description of function parameters:
#
# pp = Clip to calculate vectors from (default input)
# mc = Number of frames in each direction to compensate (default 2, max 5)
# lsb = stack16 output and processing (default false)
#
# dfttest Y, U, V, sigma, sbsize, sosize, tbsize, and dither are supported.
# Extra dfttest parameters may be passed via dfttest_params
# (the string must start with a comma, since it is appended to the argument list).
# MVTools2 pel, thSAD, thSCD1, and thSCD2 are also supported.
#
# analyzeparams = extra SVAnalyse parameters in JSON format, starting with
# a comma. Exclude "vectors", "gpu" and "delta" (set by this function).
#
# sigma is the main control of dfttest strength.
# tbsize should not be set higher than mc * 2 + 1.

function dfttestSV(clip input, clip "pp", int "mc", bool "mdg", bool "Y", bool "U", bool "V", float "sigma", int "sbsize", int "sosize", int "tbsize", int "dither", bool "lsb", string "dfttest_params", int "thSAD", int "thSCD1", int "thSCD2", int "pel", bool "gpu", string "analyzeparams")
{
	# Set default options. Most external parameters are passed valueless.
	mc = default(mc, 2).min(5)
	lsb = default(lsb, false)
	Y = default(Y, true)
	U = default(U, true)
	V = default(V, true)
	tbsize = default(tbsize, mc * 2 + 1)
	dfttest_params = default(dfttest_params, "")
	gpu = default(gpu, false)
	sgpu = gpu ? "1" : "0"
	pel = default(pel, 2)	# pel is optional but used below; give it a default
	spel = pel==4 ? "4" : pel==2 ? "2" : "1"
	analyzeparams = default(analyzeparams, "")

	# Set chroma parameters.
	chroma = U || V

	# Prepare supersampled clips.
	pp_enabled = defined(pp)
	sv_super = pp_enabled ? SVSuper(pp,"{gpu:" + sgpu + ", pel:" + spel + "}") : SVSuper(input,"{gpu:" + sgpu + ", pel:" + spel + "}")
	pp_super = pp_enabled ? MSuper(pp, pel=pel, chroma=chroma, hpad=0, vpad=0) : MSuper(input, pel=pel, chroma=chroma, hpad=0, vpad=0)

	# Motion vector search.	
	vec1 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:1}" + analyzeparams + "}")
	vec2 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:2}" + analyzeparams + "}")
	vec3 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:3}" + analyzeparams + "}")
	vec4 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:4}" + analyzeparams + "}")
	vec5 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:5}" + analyzeparams + "}")
	
	b5vec = vec5.SVConvert(isb=true)
	b4vec = vec4.SVConvert(isb=true)
	b3vec = vec3.SVConvert(isb=true)
	b2vec = vec2.SVConvert(isb=true)
	b1vec = vec1.SVConvert(isb=true)
	f1vec = vec1.SVConvert(isb=false)
	f2vec = vec2.SVConvert(isb=false)
	f3vec = vec3.SVConvert(isb=false)
	f4vec = vec4.SVConvert(isb=false)
	f5vec = vec5.SVConvert(isb=false)
			
	# Motion Compensation.
	b5clip = MCompensate(input, pp_super, b5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b4clip = MCompensate(input, pp_super, b4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b3clip = MCompensate(input, pp_super, b3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b2clip = MCompensate(input, pp_super, b2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b1clip = MCompensate(input, pp_super, b1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f1clip = MCompensate(input, pp_super, f1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f2clip = MCompensate(input, pp_super, f2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f3clip = MCompensate(input, pp_super, f3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f4clip = MCompensate(input, pp_super, f4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f5clip = MCompensate(input, pp_super, f5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)

	# Create compensated clip.
	interleaved = mc >= 5 ? Interleave(b5clip, b4clip, b3clip, b2clip, b1clip, input, f1clip, f2clip, f3clip, f4clip, f5clip) :
		\ mc == 4 ? Interleave(b4clip, b3clip, b2clip, b1clip, input, f1clip, f2clip, f3clip, f4clip) :
		\ mc == 3 ? Interleave(b3clip, b2clip, b1clip, input, f1clip, f2clip, f3clip) :
		\ mc == 2 ? Interleave(b2clip, b1clip, input, f1clip, f2clip) :
		\ Interleave(b1clip, input, f1clip)

	# Perform dfttest. Exception handling required for official dfttest.
	try {
		filtered = Eval("dfttest(interleaved, Y=Y, U=U, V=V, sigma=sigma, sbsize=sbsize, sosize=sosize, tbsize=tbsize, lsb=lsb" + dfttest_params + ")")
		}
	catch(err_msg)
		{
		filtered = Eval("dfttest(interleaved, Y=Y, U=U, V=V, sigma=sigma, sbsize=sbsize, sosize=sosize, tbsize=tbsize" + dfttest_params + ")")
		}
	
	# Second pass.
	filtered = SelectEvery(filtered, mc * 2 + 1, mc)	
	
	b5clip = MCompensate(filtered, pp_super, b5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b4clip = MCompensate(filtered, pp_super, b4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b3clip = MCompensate(filtered, pp_super, b3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b2clip = MCompensate(filtered, pp_super, b2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b1clip = MCompensate(filtered, pp_super, b1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f1clip = MCompensate(filtered, pp_super, f1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f2clip = MCompensate(filtered, pp_super, f2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f3clip = MCompensate(filtered, pp_super, f3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f4clip = MCompensate(filtered, pp_super, f4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f5clip = MCompensate(filtered, pp_super, f5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	
	interleaved = mc >= 5 ? Interleave(b5clip, b4clip, b3clip, b2clip, b1clip, filtered, f1clip, f2clip, f3clip, f4clip, f5clip) :
		\ mc == 4 ? Interleave(b4clip, b3clip, b2clip, b1clip, filtered, f1clip, f2clip, f3clip, f4clip) :
		\ mc == 3 ? Interleave(b3clip, b2clip, b1clip, filtered, f1clip, f2clip, f3clip) :
		\ mc == 2 ? Interleave(b2clip, b1clip, filtered, f1clip, f2clip) :
		\ Interleave(b1clip, filtered, f1clip)
		
	filtered = TemporalSoften(interleaved, mc, 7, 7, 15, 2)
	
	return SelectEvery(filtered, mc * 2 + 1, mc)
}
I tried to encode with it. The result was very clean. But another problem came up: somehow the script holds back small movements, so the video becomes jerky. It happens both with and without the second pass. What am I doing wrong?
24th January 2013, 23:41   #2
Mystery Keeper
I tried changing the prefiltering to:
Code:
prea=a.Tweak(cont=3.0).dfttest(sigma=8, sbsize=11, smode=0, dither=1)
preb=b.Tweak(cont=3.0).dfttest(sigma=8, sbsize=11, smode=0, dither=1)
And the result was overbright! Why is this happening at all? Prefiltering is only used to make the superclip. Does MCompensate actually produce its result from the superclip rather than the raw input?
25th January 2013, 00:04   #3
Didée
Quote:
Originally Posted by Mystery Keeper
Does MCompensate actually produce the result based on superclip and not raw input?
Yes, of course the super clip is used to produce the compensated frames. Did you notice that for MCompensate you're not only specifying a vector clip, but also a super clip?
If you want preprocessing only for the motion search, but not for making the result, you need to use two different superclips in parallel: one for searching, one for rendering.

vid = whatever
sup1 = vid.preprocess().msuper()
sup2 = vid.msuper(levels=1)
vec = manalyse(sup1)
comp = vid.mcompensate(sup2,vec)

The two superclips must share the same geometry (i.e. the same pel value and the same padding values), but can otherwise have different settings (sharpness, rfilter, etc.). The superclip for rendering can be reduced to levels=1, because the hierarchical scaling levels are only needed for the motion search, not for rendering.

The exception is pel=1. In that case the superclip is not actually used for rendering anyway (but it still needs to be given in the MCompensate call).
25th January 2013, 00:38   #4
Mystery Keeper
Thank you, Didée. I'm getting quite a different result now. I haven't yet determined whether it's better or worse, but it's definitely keeping more detail. The held-back movements issue remains, though. Also, I had the clip order wrong: I had changed it from the original script because I didn't know how MVTools works and it seemed weird. I've changed it back, though I guess it shouldn't have affected anything. The current function is:
Code:
# Based on Motion-compensated dfttest by twc
# Aka: Really Really Really Slow
#
# Requirements:
# dfttest
# MVTools2
# SVP >= 1.0.5
# 
# Suggested:
# Dither (for stack16 processing)
#
# Description of function parameters:
#
# pp = Clip to calculate vectors from (default input)
# mc = Number of frames in each direction to compensate (default 2, max 5)
# lsb = stack16 output and processing (default false)
#
# dfttest Y, U, V, sigma, sbsize, sosize, tbsize, and dither are supported.
# Extra dfttest parameters may be passed via dfttest_params
# (the string must start with a comma, since it is appended to the argument list).
# MVTools2 pel, thSAD, thSCD1, and thSCD2 are also supported.
#
# analyzeparams = extra SVAnalyse parameters in JSON format, starting with
# a comma. Exclude "vectors", "gpu" and "delta" (set by this function).
#
# sigma is the main control of dfttest strength.
# tbsize should not be set higher than mc * 2 + 1.

function dfttestSV(clip input, clip "pp", int "mc", bool "mdg", bool "Y", bool "U", bool "V", float "sigma", int "sbsize", int "sosize", int "tbsize", int "dither", bool "lsb", string "dfttest_params", int "thSAD", int "thSCD1", int "thSCD2", int "pel", bool "gpu", string "analyzeparams")
{
	# Set default options. Most external parameters are passed valueless.
	mc = default(mc, 2).min(5)
	lsb = default(lsb, false)
	Y = default(Y, true)
	U = default(U, true)
	V = default(V, true)
	tbsize = default(tbsize, mc * 2 + 1)
	dfttest_params = default(dfttest_params, "")
	gpu = default(gpu, false)
	sgpu = gpu ? "1" : "0"
	pel = default(pel, 2)	# pel is optional but used below; give it a default
	spel = pel==4 ? "4" : pel==2 ? "2" : "1"
	analyzeparams = default(analyzeparams, "")

	# Set chroma parameters.
	chroma = U || V

	# Prepare supersampled clips.
	pp_enabled = defined(pp)
	sv_super = pp_enabled ? SVSuper(pp,"{gpu:" + sgpu + ", pel:" + spel + "}") : SVSuper(input,"{gpu:" + sgpu + ", pel:" + spel + "}")
	in_super = MSuper(input, pel=pel, chroma=chroma, hpad=0, vpad=0, levels=1)

	# Motion vector search.	
	vec1 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:1}" + analyzeparams + "}")
	vec2 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:2}" + analyzeparams + "}")
	vec3 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:3}" + analyzeparams + "}")
	vec4 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:4}" + analyzeparams + "}")
	vec5 = SVAnalyse(sv_super, "{vectors:3, gpu:" + sgpu + ", special:{delta:5}" + analyzeparams + "}")
	
	b5vec = vec5.SVConvert(isb=true)
	b4vec = vec4.SVConvert(isb=true)
	b3vec = vec3.SVConvert(isb=true)
	b2vec = vec2.SVConvert(isb=true)
	b1vec = vec1.SVConvert(isb=true)
	f1vec = vec1.SVConvert(isb=false)
	f2vec = vec2.SVConvert(isb=false)
	f3vec = vec3.SVConvert(isb=false)
	f4vec = vec4.SVConvert(isb=false)
	f5vec = vec5.SVConvert(isb=false)
			
	# Motion Compensation.
	b5clip = MCompensate(input, in_super, b5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b4clip = MCompensate(input, in_super, b4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b3clip = MCompensate(input, in_super, b3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b2clip = MCompensate(input, in_super, b2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b1clip = MCompensate(input, in_super, b1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f1clip = MCompensate(input, in_super, f1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f2clip = MCompensate(input, in_super, f2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f3clip = MCompensate(input, in_super, f3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f4clip = MCompensate(input, in_super, f4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f5clip = MCompensate(input, in_super, f5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)

	# Create compensated clip.
	interleaved = mc >= 5 ? Interleave(f5clip, f4clip, f3clip, f2clip, f1clip, input, b1clip, b2clip, b3clip, b4clip, b5clip) :
		\ mc == 4 ? Interleave(f4clip, f3clip, f2clip, f1clip, input, b1clip, b2clip, b3clip, b4clip) :
		\ mc == 3 ? Interleave(f3clip, f2clip, f1clip, input, b1clip, b2clip, b3clip) :
		\ mc == 2 ? Interleave(f2clip, f1clip, input, b1clip, b2clip) :
		\ Interleave(f1clip, input, b1clip)

	# Perform dfttest. Exception handling required for official dfttest.
	try {
		filtered = Eval("dfttest(interleaved, Y=Y, U=U, V=V, sigma=sigma, sbsize=sbsize, sosize=sosize, tbsize=tbsize, lsb=lsb" + dfttest_params + ")")
		}
	catch(err_msg)
		{
		filtered = Eval("dfttest(interleaved, Y=Y, U=U, V=V, sigma=sigma, sbsize=sbsize, sosize=sosize, tbsize=tbsize" + dfttest_params + ")")
		}
	
	# Second pass.
	filtered = SelectEvery(filtered, mc * 2 + 1, mc)	
	
	f_super = MSuper(filtered, pel=pel, chroma=chroma, hpad=0, vpad=0, levels=1)
	
	b5clip = MCompensate(filtered, f_super, b5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b4clip = MCompensate(filtered, f_super, b4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b3clip = MCompensate(filtered, f_super, b3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b2clip = MCompensate(filtered, f_super, b2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	b1clip = MCompensate(filtered, f_super, b1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f1clip = MCompensate(filtered, f_super, f1vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f2clip = MCompensate(filtered, f_super, f2vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f3clip = MCompensate(filtered, f_super, f3vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f4clip = MCompensate(filtered, f_super, f4vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	f5clip = MCompensate(filtered, f_super, f5vec, thSAD=thSAD, thSCD1=thSCD1, thSCD2=thSCD2)
	
	interleaved = mc >= 5 ? Interleave(f5clip, f4clip, f3clip, f2clip, f1clip, filtered, b1clip, b2clip, b3clip, b4clip, b5clip) :
		\ mc == 4 ? Interleave(f4clip, f3clip, f2clip, f1clip, filtered, b1clip, b2clip, b3clip, b4clip) :
		\ mc == 3 ? Interleave(f3clip, f2clip, f1clip, filtered, b1clip, b2clip, b3clip) :
		\ mc == 2 ? Interleave(f2clip, f1clip, filtered, b1clip, b2clip) :
		\ Interleave(f1clip, filtered, b1clip)
		
	filtered = TemporalSoften(interleaved, mc, 7, 7, 15, 2)
	
	return SelectEvery(filtered, mc * 2 + 1, mc)
}
Another question: instead of using TemporalSoften, can I fit a linear approximation to each pixel's intensity along the timeline and take its value in the middle?

Also, Didée, what prefiltering would you use?
25th January 2013, 01:03   #5
Didée
Are you speaking about a "temporal median" filter? Value-in-the-middle does sound like that. But I can't fit the "linear approximation" part into the picture.

A median filter is easy, but problematic. There is medianblur.dll, which incorporates a temporal median and is easy to use. But for reasons unknown, it tends to get ridiculously slow when used together with MVTools' motion compensation. (And here, "ridiculous" is a synonym for "close to unusable".)
A small workaround is the median2 script by g-force, which implements a pretty fast temporal median, but it only has a temporal radius of +2/-2 from the center, i.e. a total window of 5 frames.
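For small radii there is also Clense from the RemoveGrain/RgTools package, a per-pixel median of previous/current/next. A minimal motion-compensated sketch (my assumptions: MVTools2 and RgTools loaded, "src" is the clip to filter, and vectors are searched on the source itself rather than on a prefiltered clip):
Code:
# Sketch only: 3-frame motion-compensated temporal median using Clense.
sup   = src.MSuper(pel=2)
fvec  = MAnalyse(sup, isb=false, delta=1)  # vectors toward the previous frame
bvec  = MAnalyse(sup, isb=true,  delta=1)  # vectors toward the next frame
fclip = src.MCompensate(sup, fvec)         # compensated previous frame
bclip = src.MCompensate(sup, bvec)         # compensated next frame
Clense(src, previous=fclip, next=bclip)    # per-pixel median of the three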


edit - for me, the prefilter depends entirely on each individual source, and on my mood of the day. There's no general suggestion to make.
25th January 2013, 01:08   #6
Mystery Keeper
No, Didée. A median filter uses sorting. Linear approximation means finding the equation of a straight line that does not necessarily go through all of the given points, but is "nearest" to all of them.

Weird. I tried searching for English articles on linear approximation, and most of them were completely unrelated. Here's what I mean:
https://en.wikipedia.org/wiki/Linear...mathematics%29
25th January 2013, 02:57   #7
Guest
I think Didée knows what linear approximation means. There seems to be a misunderstanding.
25th January 2013, 04:08   #8
Mystery Keeper
I'm still trying to come up with a good prefilter method. HistogramAdjust seems to help, but it also amplifies the noise greatly. And I still can't make the finger in a certain sequence move as it should; somehow its movement is already being negated at the prefilter stage. Still, the filter already gives a certain awesome effect.

28th January 2013, 13:16   #9
joka
Mystery Keeper,

If I understand your linear approximation correctly, and I'm not completely wrong, then the solution at the middle of an odd number of points in time should be the "temporal average".

However, I expect the same problems as with a temporal median: look at areas that are not fully compensated, and at warping objects.
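(Indeed, that follows directly: for a symmetric window of $2m+1$ frames at times $t=-m,\dots,m$, the least-squares line $y=a+bt$ has

$$a=\frac{1}{2m+1}\sum_{t=-m}^{m}y_t,\qquad b=\frac{\sum_{t=-m}^{m}t\,y_t}{\sum_{t=-m}^{m}t^{2}},$$

because the times sum to zero, so the fitted value at the center $t=0$ is exactly the plain temporal average. The slope $b$ only matters if you evaluate off-center.)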
28th January 2013, 13:39   #10
Mystery Keeper
Joka, I've already made my own filter for linear approximation; see the AviSynth Development section. But that's not the problem of this thread. The problem is motion compensation accuracy. I need to find a prefiltering method that aids motion compensation: it should both denoise and pull out blurred and dim details. Also, right now I'm trying to understand the phase correlation motion estimation method, so that I could use it for compensation instead of MVTools and SVP.
28th January 2013, 14:23   #11
Didée
Phase correlation is the dinosaur of motion search. Seriously, I don't claim to have understood all of its possibilities (around the 2nd and 3rd corner), but as far as I understand it, it is best suited to full-frame motion. If a frame contains lots of different motion (i.e. different objects moving), then phase correlation only tells you whether there ARE any motion matches in the frame, not WHERE those matches are.

It looks to me like you are fighting the same old basic problem of the whole "denoising" matter: when there is a difference, how do you tell whether that difference is unimportant (noise), or whether it is in fact important and should be kept? No matter what filter you crack up, you'll always end up with the finding that there are some pixel differences, full stop. The evaluation of what those differences mean, that is the problem...
28th January 2013, 16:05   #12
Mystery Keeper
Didée, I don't know what you mean by "dinosaur" (too old?), but phase correlation actually tells you where the best match is. And it does so without doing any search at all (if we're comparing the whole picture for global motion): the coordinates of the maximum peak in the correlation are the desired motion vector. Still, I don't know how to approach searching smaller blocks, and I've got no idea how to search for the motion of individual pixels.
28th January 2013, 20:20   #13
Guest
Quote:
Originally Posted by Mystery Keeper
The coordinates of the maximum peak in correlation is the desired motion vector.
How do you find that peak? Sounds like a global optimization, which is notoriously difficult.
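(For reference: finding the peak involves no iterative optimization. Phase correlation computes an entire correlation surface with two forward FFTs and one inverse FFT, and the peak is a plain argmax over that surface:

$$R(u,v)=\frac{F_1(u,v)\,\overline{F_2(u,v)}}{\left|F_1(u,v)\,\overline{F_2(u,v)}\right|},\qquad r=\mathcal{F}^{-1}\{R\},\qquad (\Delta x,\Delta y)=\arg\max_{(x,y)}\,r(x,y).$$

A single global translation gives one sharp peak at the displacement; several independently moving regions give several peaks, whose positions encode only the shifts, not where in the frame they occur.)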
28th January 2013, 20:23   #14
Didée
Phase correlation was already in use before the block-based search algorithms came to light. The method is older, hence the "dinosaur".

Quote:
phase correlation actually tells you where the best match is.
Nope. It tells you whether, how much, and in which direction. But it doesn't tell you WHERE. Phase correlation works in the frequency domain; hence, by definition, it is non-localized.
Read: yes, you get vectors. But you don't get locations. Imagine a scene where five balls are rolling in five different directions. Phase correlation will show you five different peaks in the correlation surface, well, call it five different motion vectors. But it doesn't tell you where in the frame the corresponding objects are located. Hence, you have five motion vectors, but you don't know where they belong, or where to apply them.

And if the actual problem is a waving finger, with the finger consisting of only a few pixels in total, I don't see phase correlation being well suited to manage the problem.

The moving finger is just a small difference, no matter whether you use phase correlation or a block-based search. It is a small difference, and the same old question is: which small differences are noise, and which small differences are detail?
28th January 2013, 21:17   #15
Mystery Keeper
neuron2:
http://docs.opencv.org/modules/imgproc/doc/motion_analysis_and_object_tracking.html#phasecorrelate

Didée:
It wouldn't be a problem if my eyes didn't clearly tell me that this small difference is motion and not just noise. It is noticeable, and thus a problem. And what I'm trying to understand is how to make phase correlation localized. Though I guess I am misunderstanding the method.
28th January 2013, 21:44   #16
Didée
That link describes how to use phase correlation to find a global translational shift between two frames. GLOBAL. One vector for the whole image, like when a camera is panning across a still scene.
Again: you DON'T track several individual objects within one frame via phase correlation. The frame as a whole, yes. But not individual objects within it. For such a thing, you would need to split the frame into smaller sections and then compare the different sections against each other, and so on. Which in turn leads you, in the end, to a block-based motion search scheme. Tadaa.

Long story short: block-based motion search was invented because phase correlation is not (or not very well) suited to finding the local motion of individual objects.


Quote:
It wouldn't be a problem if my eyes didn't clearly tell me that small difference is a motion and not just noise.
Yes, of course! I'm fully with you here!

It's just that exactly THAT is THE problem in denoising:
"Hey, look at that, it is blatantly obvious that {this} is an object and {that} is just noise! Isn't it! Why can't the darn algorithm see that, too?"

That ever was, ever is, and will always remain, the problem.
28th January 2013, 22:03   #17
Mystery Keeper
I never said it shouldn't be block-based. But maybe it shouldn't be SAD-based either? It also sounds like you've given up. Well, I say my result is unacceptable, and I want this video (and many others) processed. I cannot give up. Either I find a good prefilter that makes motion compensation more accurate, or I find a generally more accurate motion compensation approach, or I have to find a whole new approach to denoising. I'm not really considering the last option; pure temporal denoising is awesome.

As for the prefilter: I need a way to filter out the noise but keep edges sharp and accentuated. Is there a good way to do that? The real trouble is blurred edges in dark areas; HDR and histogram-adjust methods amplify the noise as well.
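One classic pattern for "denoise but keep edges" (a sketch only, not something prescribed in this thread; "src" is the source clip and the modes are ordinary RgTools RemoveGrain/Repair modes): smooth hard, then let Repair clamp the result back toward the source so real edges survive.
Code:
# Sketch: edge-preserving prefilter. Blur away the noise, then Repair
# clamps each pixel back toward the source's local min/max, saving edges.
soft = src.RemoveGrain(19)                  # spatial smoothing (3x3 neighbor average)
soft = soft.TemporalSoften(1, 4, 8, 15, 2)  # light temporal smoothing
pre  = Repair(soft, src, 1)                 # rescue edges from the source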

Maybe I should also look into this:
http://docs.opencv.org/modules/video...lflowfarneback
28th January 2013, 22:31   #18
Didée
There is no "general" prefilter. It's always source dependent ... and even more so, when the user is so merciless against tiny losses as you seem to be

Also, a prefilter cannot be perfect in killing all noise and keeping all edges. If it were perfect in those regards, you wouldn't need anything behind the prefilter anymore.

Finding a suited prefilter is an essential part of the art. Indeed.
28th January 2013, 23:18   #19
Mystery Keeper
Quote:
Originally Posted by Didée View Post
If it were perfect in those regards, you wouldn't need anything behind the prefilter anymore.
Very wrong. I'm not looking for a prefilter that is "halfway to good filtering". I'm looking for a reference clip that makes motion compensation more accurate: an analysis-only clip. It may look very, very different from the source.

Didée, there are two things I learned during my first year of using AviSynth:
1) there are no miracles;
2) there is no universal solution.
Please don't repeat them to me. Some suggestions would be much appreciated.
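(To make "analysis-only clip" concrete, along the lines of Didée's two-superclip advice earlier in the thread; the preprocessing chain here is arbitrary and just for illustration:)
Code:
# Sketch: the analysis-only clip feeds just the search superclip; the
# render superclip is built from the untouched source, so output pixels
# always come from the source. Both supers share pel and padding.
pre        = src.RemoveGrain(19).Tweak(cont=2.0)  # free to distort
sup_search = pre.MSuper(pel=2, hpad=0, vpad=0)
sup_render = src.MSuper(pel=2, hpad=0, vpad=0, levels=1)
vec        = MAnalyse(sup_search, isb=false, delta=1)
comp       = src.MCompensate(sup_render, vec)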
28th January 2013, 23:48   #20
wonkey_monkey
Quote:
Originally Posted by Didée
Long story short: blockbased motion search has been invented because phase correlation is not (or not-very-well) suited to find local motion of individual objects.
The Wikipedia page on Standards Conversion ("doesn't easily get confused by rotating or twirling objects", "Phase Correlation is elegant") makes phase correlation sound like the pinnacle of the art. But it's not true?