I think the banding you have comes from the final YV12 to RGB conversion. I tried to process your second clip with the following script:
Code:
FFmpegSource2 ("clip2.demuxed.m2v")
TFM ()
TDecimate (mode=1)
zzz_denoise ()
DitherPost (mode=2)
Function zzz_denoise (clip src, float "sigma", int "thr", bool "mask", int "sad")
{
    sigma = Default (sigma,    16)
    thr   = Default (thr,       5)
    mask  = Default (mask,  false)
    sad   = Default (sad,     200)

    w = src.Width ()
    h = src.Height ()

    # Motion analysis
    super   = MSuper (src)
    super_a = MSuper (src.TTempSmooth ().RemoveGrain (12))
    fwd_vect_3 = super_a.MAnalyse (isb=false, delta=3, overlap=4)
    fwd_vect_2 = super_a.MAnalyse (isb=false, delta=2, overlap=4)
    fwd_vect_1 = super_a.MAnalyse (isb=false, delta=1, overlap=4)
    bck_vect_1 = super_a.MAnalyse (isb=true,  delta=1, overlap=4)
    bck_vect_2 = super_a.MAnalyse (isb=true,  delta=2, overlap=4)
    bck_vect_3 = super_a.MAnalyse (isb=true,  delta=3, overlap=4)
    fwd_comp_2 = src.MCompensate (super, fwd_vect_2, thSAD=sad)
    fwd_comp_1 = src.MCompensate (super, fwd_vect_1, thSAD=sad)
    bck_comp_1 = src.MCompensate (super, bck_vect_1, thSAD=sad)
    bck_comp_2 = src.MCompensate (super, bck_vect_2, thSAD=sad)

    # Spatio-temporal denoising using the modified dfttest
    c_dft = Interleave (fwd_comp_2, fwd_comp_1, src, bck_comp_1, bck_comp_2)
    c_dft = c_dft.dfttest (sigma=sigma, lsb=true)    # Double height (stacked 16-bit)
    c_dft = c_dft.SelectEvery (5, 2)

    # Temporal-only denoising using the modified MDegrain3
    c_deg = src.MDegrain3 (super, bck_vect_1, fwd_vect_1, bck_vect_2, fwd_vect_2, bck_vect_3, fwd_vect_3, thSAD=sad, lsb=true)    # Double height (stacked 16-bit)

    # Spatio-temporal denoising smoothes away too much detail,
    # so we use pure temporal denoising on edges and detailed areas.
    edge_src  = c_deg.Crop (0, 0, w, h)    # Keep only the MSB half of the stack
    edge_mask = edge_src.mt_edge (mode="prewitt", thY1=thr, thY2=thr)
    edge_mask = edge_mask.mt_expand ()
    edge_mask = StackVertical (edge_mask, edge_mask)    # Double height
    c_hyb = mt_merge (c_dft, c_deg, edge_mask, luma=true, y=3, u=3, v=3)

    return (mask ? edge_mask.GreyScale () : c_hyb)
}
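As an aside on the "Double height" comments: with lsb=true, dfttest and the modified MDegrain3 return a stacked 16-bit clip, i.e. an 8-bit frame of double height whose top half carries the most significant byte of each pixel and whose bottom half carries the least significant byte. That is also why the edge mask is built on the cropped top half and then doubled with StackVertical. A minimal Python sketch of the packing, for illustration only (not the plugins' code):

```python
def stack16_pack(plane16):
    """Split rows of 16-bit samples into an 8-bit plane of double height:
    MSB rows stacked on top, LSB rows below."""
    msb = [[v >> 8 for v in row] for row in plane16]    # top half
    lsb = [[v & 0xFF for v in row] for row in plane16]  # bottom half
    return msb + lsb

def stack16_unpack(stacked):
    """Recombine a stacked 8-bit plane into 16-bit samples."""
    h = len(stacked) // 2
    return [[(m << 8) | l for m, l in zip(mrow, lrow)]
            for mrow, lrow in zip(stacked[:h], stacked[h:])]

plane = [[0, 4096], [32768, 65535]]
stacked = stack16_pack(plane)
print(stacked)                 # -> [[0, 16], [128, 255], [0, 0], [0, 255]]
assert stack16_unpack(stacked) == plane
```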
(This script also requires a modified version of MDegrain3; I added a link to it in my previous post.)
I've got pretty good results regarding both denoising and debanding. However, depending on which software I use to play the processed clip, bands may or may not appear, so they obviously come from the player's YV12->RGB converter. But I don't know whether it's only a matter of the codec, or whether the video card's driver is involved too.
An example:
Original, noisy clip:
Denoised, with bands:
On this picture, one can clearly see the YV12->RGB conversion problem. Every six bands, there is a bigger jump, caused by a step of 1 unit in the luma values being converted to a step of 2 units in the RGB colorspace instead of 1.
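That "step of 2" can be checked numerically. For a neutral grey, a limited-range conversion (the usual TV-level matrix, assumed here) expands Y in [16, 235] to RGB in [0, 255], i.e. multiplies by 255/219 ≈ 1.164. After rounding, 36 of the 219 unit steps in Y come out as steps of 2 in RGB, roughly one in six:

```python
# For a neutral grey, the YV12->RGB conversion reduces to the limited-range
# luma expansion R = G = B = round((Y - 16) * 255 / 219).  Under that model,
# a step of 1 in Y usually becomes a step of 1 in RGB, but the rounding
# makes roughly every sixth step come out as 2 -- the bigger jumps visible
# between the bands.

def grey_to_rgb(y):
    v = round((y - 16) * 255 / 219)
    return min(255, max(0, v))              # clip to the 8-bit range

steps = [grey_to_rgb(y + 1) - grey_to_rgb(y) for y in range(16, 235)]
print(sorted(set(steps)))                   # -> [1, 2]
print(steps.count(2), "of", len(steps))     # -> 36 of 219, about 1 in 6
```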
Denoised and dithered:
For the record, GradFun2DBMod:
To show the effect, here is a magnified part of the picture above (bottom left), "HSV-stretched" with GIMP:
Original / Denoised
GradFun2DBMod / Dithered
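To illustrate why the dithering pass removes these bands, here is a toy model in plain Python (not the actual DitherPost code): truncating a smooth high-precision ramp straight to 8 bits produces long flat plateaus (bands), while adding up to one LSB of random noise before the rounding breaks them into short runs whose local average still follows the gradient:

```python
import random

def max_run(values):
    """Length of the longest run of identical consecutive values."""
    best = cur = 1
    for a, b in zip(values, values[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
ramp16 = [i * 3 for i in range(2048)]    # slow, smooth 16-bit gradient

# Plain truncation to 8 bits: long flat plateaus -> visible bands
truncated = [v >> 8 for v in ramp16]

# Up to one 8-bit LSB of random noise before truncation: the plateaus
# break up into fine noise that averages back to the original gradient
dithered = [(v + random.randint(0, 255)) >> 8 for v in ramp16]

print(max_run(truncated))    # long runs (bands)
print(max_run(dithered))     # much shorter runs (noise instead of bands)
```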