Old 15th December 2011, 16:33   #361  |  Link
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Thanks for answering so fast, and that's good to hear.

(You should probably update your post: http://forum.doom9.org/showthread.ph...59#post1386559 so anyone else looking for the same info doesn't have to browse 18 pages...)
Old 15th December 2011, 18:43   #362  |  Link
-Vit-
Registered User
 
Join Date: Jul 2010
Posts: 448
Quote:
Originally Posted by mastrboy View Post
It's a little annoying that there are multiple forks of mvtools2 out there now: the official one, one from the dither package, one from QTGMC and one from www.svp-team.com
I'll remove my version from the QTGMC thread and use cretindesalpes' version for the modded plugins shortly. One down...
Old 15th December 2011, 18:52   #363  |  Link
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Good idea. Another way to narrow it down further would be to merge both of your changes into the official release... Fizick seems to be the only "active" developer, according to the changelog at least; hopefully he will read this post and accept your changes for 2.5.11.4. :P

Last edited by mastrboy; 15th December 2011 at 18:53. Reason: typos
Old 16th December 2011, 12:16   #364  |  Link
SSH4
Registered User
 
Join Date: Nov 2006
Posts: 90
cretindesalpes: is it possible to add an option to dfttest (or maybe to MDegrain), something like sstring(ssx/y/z), to change sigma depending on luma? I know different ways to simulate this (using luma as a mask and merging the filtered and original clips is not the best one), but all of them require two or more passes of dfttest or MDegrain.
Old 17th December 2011, 23:40   #365  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,361
Could you confirm one thing? Motion vectors are very different between YV12 and YUY2 (planar) sources when you take chroma into account.
Since chroma is subsampled in YV12, I guess it is upsampled before or after calculating its motion vectors so that they comply with the luma motion vectors.
Does the same happen to chroma in YUY2 sources, given the way chroma is stored in that format (when planar=true)?

And since I'm here, would you recommend mod16 inputs for a more correct SAD calculation?
Old 18th December 2011, 17:45   #366  |  Link
cretindesalpes
͡҉҉ ̵̡̢̛̗̘̙̜̝̞̟̠͇̊̋̌̍̎̏̿̿
 
 
Join Date: Feb 2009
Location: No support in PM
Posts: 712
Quote:
Originally Posted by Dogway View Post
Since chroma is subsampled in YV12, I guess it is upsampled before or after calculating its motion vectors so that they comply with the luma motion vectors.
All SAD calculations are scaled depending on the plane subsampling. For example, if you specify blksize=16, the luma SAD part will be calculated on 16x16 blocks, the 4:2:2 chroma on 8x16 blocks and the 4:2:0 chroma on 8x8 blocks.

Quote:
Motion vectors are very different between YV12 and YUY2 (planar) sources when you take chroma into account.
How different? Do you think there is a bug related to YUY2?

Quote:
And since I'm here, would you recommend mod16 inputs for more correct SAD calculation?
I don't think using mod16 (or more likely a mod-overlap size) makes any significant difference, because motion estimation is always funny on the frame borders anyway.

Quote:
Originally Posted by SSH4
Is it possible to add an option to dfttest (or maybe to MDegrain), something like sstring(ssx/y/z), to change sigma depending on luma?
It looks possible at first glance; I made a note in my to-do list. An additional improvement would be to use the pixel values of a second clip as sigma multipliers. For MDegrain, you can obtain an equivalent effect by changing the luma curve of the search clip. What I usually do: prepare the search clip using dfttest(lsb=true) with basic MC, Dither_lut16() to apply a gamma-like curve and make it full range, convert it back to 8 bit using DitherPost(mode=-1) and feed MSuper (for the final MAnalyse only, not the actual filtering) with it. I think you could do a similar approximation with dfttest by applying a curve to the input clip's luma (with the result in 16 bits to keep accuracy), filtering with lsb=true, lsb_in=true, then applying the inverse function. Obviously, it would affect only the luma plane; the chroma plane filtering would remain unchanged whatever the corresponding luma value.
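Roughly, the wiring looks like this. This is only a minimal sketch: the dfttest settings and thSAD are arbitrary examples, the Dither_lut16 yexpr below is just a placeholder curve (nothing tuned), and the MC part of the prefilter is omitted for brevity.
Code:
# Sketch only - assumes an 8-bit YV12 source in "last", with the dither
# package, mvtools2 and dfttest loaded; all parameter values are placeholders.
src = last

# Prefiltered search clip, stacked 16-bit output
pre16 = src.dfttest (sigma=8, lsb=true)

# Gamma-like curve + full range on luma, plain full-range expansion on chroma
search8 = pre16.Dither_lut16 (
\	yexpr="x 4096 - 56064 / 0 1 clip 0.5 ^ 65536 *",
\	expr ="x 32768 - 32768 * 28672 / 32768 +",
\	y=3, u=3, v=3).DitherPost (mode=-1)

# The remapped clip feeds MSuper for MAnalyse only
sup_search = search8.MSuper (pel=2)
sup_render = src.MSuper (pel=2, levels=1)
bv = sup_search.MAnalyse (isb=true,  delta=1)
fv = sup_search.MAnalyse (isb=false, delta=1)

# The actual filtering still works on the untouched source
src.MDegrain1 (sup_render, bv, fv, thSAD=300)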

Quote:
Originally Posted by mastrboy
Fizick seems to be the only "active" developer, according to the changelog at least; hopefully he will read this post and accept your changes for 2.5.11.4. :P
He wasn't interested when I asked him.
__________________
dither 1.28.1 for AviSynth | avstp 1.0.4 for AviSynth development | fmtconv r30 for Vapoursynth & Avs+ | trimx264opt segmented encoding

Last edited by cretindesalpes; 18th December 2011 at 18:11.
Old 18th December 2011, 23:22   #367  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,361
I'm not really sure; that's why I'm asking if you could have a look. With a simple temporal radius of 1 there is already a difference like this (in the images below). If I set chroma to false, both outputs are 100% identical. Maybe this is just a matter of the chroma being stored in different formats, but the difference caught my attention enough to mention it here.
Code:
planar = true
chroma = true
plane  = 0

# convert to the fake-planar layout expected when planar=true
planar ? Interleaved2Planar() : last

# use chroma in the search clip when chroma=true, or when plane != 0
super_search = MSuper(planar=planar, chroma=chroma ? true : (plane==0 ? false : true))
bv2 = super_search.MAnalyse(isb=true,  delta=1, chroma=chroma)
fv2 = super_search.MAnalyse(isb=false, delta=1, chroma=chroma)
MDegrain1(super_search, bv2, fv2, thSAD=300, planar=planar, lsb=false, plane=plane)

planar ? Planar2Interleaved() : last


edit:
Quote:
Originally Posted by cretindesalpes View Post
...Dither_lut16() to apply a gamma-like curve and make it full range...
Very good idea! Do you also degamma? I'm not sure whether that would be preferable.

It would also be quite useful to add an interlaced parameter to MAnalyse for when multi=true, so that the interleaved delta order becomes ...-6, -4, -2, 2, 4, 6...

On another matter, I realised some time ago that you don't need thSAD to create motion vectors; it is only required for MDegrain (and, well, MRecalculate...). Maybe the chroma smearing could be reduced by playing with the plane and thSAD parameters, running two instances of MDegrain that share the same vectors (something like the sketch below)... Since you know the internals of mvtools2, could you confirm this?
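Just to illustrate what I mean (a rough sketch; the thSAD values are arbitrary and I don't know yet whether it actually helps with the smearing):
Code:
# One analysis pass, two MDegrain1 calls sharing the same vectors,
# with different thSAD and plane settings, then recombined.
sup = MSuper ()
bv  = sup.MAnalyse (isb=true,  delta=1)
fv  = sup.MAnalyse (isb=false, delta=1)

lum  = MDegrain1 (sup, bv, fv, thSAD=400, plane=0)   # luma only
chrm = MDegrain1 (sup, bv, fv, thSAD=150, plane=3)   # both chroma planes
MergeChroma (lum, chrm)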

Last edited by Dogway; 30th December 2011 at 09:36.
Old 30th December 2011, 09:50   #368  |  Link
Yellow_
Registered User
 
Join Date: Sep 2009
Posts: 378
Is it possible to blend with float values, and if so, with values greater than the 0.0 - 1.0 range, using Dither tools?

If I have two frames of video (YV12) of the same scene but at two exposures, say one is 1 EV greater than the other (i.e. twice as bright), is it possible to blend them into a 16-bit linear-light frame with brightness values representing 0.0 - 2.0, and so on for wider EV spacing?

Last edited by Yellow_; 30th December 2011 at 09:55.
Old 30th December 2011, 18:32   #369  |  Link
Stephen R. Savage
Registered User
 
 
Join Date: Nov 2009
Posts: 327
Quote:
Originally Posted by Yellow_ View Post
Is it possible to blend with float values, and if so, with values greater than the 0.0 - 1.0 range, using Dither tools?

If I have two frames of video (YV12) of the same scene but at two exposures, say one is 1 EV greater than the other (i.e. twice as bright), is it possible to blend them into a 16-bit linear-light frame with brightness values representing 0.0 - 2.0, and so on for wider EV spacing?
Pixels are just numbers; you can assume them to have whatever intervals you want. For example, there's nothing stopping you from averaging two values on the interval of [0.0 - 1.0] and calling the new value [0.0 - 2.0]. That said, you obviously can't get a display to show values greater than 100% brightness.

Pseudo-code:
Code:
a = Source1.ConvertTo16Bits().GammaToLinear()
b = Source2.ConvertTo16Bits().GammaToLinear()
Average(a,b)
MoreCode()
These values are nominally on the interval [0.0 - 1.0], but since they are linear values, there's no difference from a [0.0 - 2.0] interval with 15-bit precision.

Last edited by Stephen R. Savage; 30th December 2011 at 18:37.
Old 30th December 2011, 20:18   #370  |  Link
Yellow_
Registered User
 
Join Date: Sep 2009
Posts: 378
Quote:
Originally Posted by Stephen R. Savage View Post
Pixels are just numbers; you can assume them to have whatever intervals you want. For example, there's nothing stopping you from averaging two values on the interval of [0.0 - 1.0] and calling the new value [0.0 - 2.0]. That said, you obviously can't get a display to show values greater than 100% brightness.

Pseudo-code:
Code:
a = Source1.ConvertTo16Bits().GammaToLinear()
b = Source2.ConvertTo16Bits().GammaToLinear()
Average(a,b)
MoreCode()
Ok, I've found the AverageM.dll and discussion here:

http://forum.doom9.org/showthread.ph...930#post726930

Quote:
These values are nominally on the interval [0.0 - 1.0], but since they are linear values, there's no difference from a [0.0 - 2.0] interval with 15-bit precision.
OK. :-) I'm working with Dither Tools' stacked 16-bit functions and blending like this:

Quote:
Blending 8-bit pictures in linear light:

# Blending amount for the first clip
bl = 0.75
bls1 = String ( bl)
bls2 = String (1 - bl)

# 8-bit clips converted to linear 16-bit full range (gamma undone)
ug = " 16 - 0 max 1.41624 / 2.2 ^ "

# Redo the gamma, result in 16 bits YUV
rg = " 0.454545 ^ 362.5585 * 4096 +"

# Blend
Dither_lutxy8 (src1, src2,
\ expr ="x " + bls1 + " * y " + bls2 + " * + 256 *",
\ yexpr="x" + ug + bls1 + " * y" + ug + bls2 + " * +" + rg,
\ y=3, u=3, v=3)
Then I'm using Avs2yuv (I really should be using Avs2pipemod) to export raw 48-bit RGB to ImageMagick like this:

Quote:
Dither_convert_yuv_to_rgb (lsb_in=true, output="rgb48y")
r = SelectEvery (3, 0)
g = SelectEvery (3, 1)
b = SelectEvery (3, 2)
Dither_convey_rgb48_on_yv12 (r, g, b)
And I'm writing 16-bit OpenEXRs from IM for each exposure. But as EXR supports HDR, I would like to merge the two exposures from each pair of frames into a single OpenEXR image for tonemapping externally, rather than export 8-bit TIFFs, JPGs or PNGs and then manually set the EV spacing for them in an HDR app like Photomatix or pfstools to export HDR formats for tonemapping. Too many steps for video. :-)

But I'm unsure how to 'position' the (formerly 'normalised') video frames shot at various EVs (8-bit 0 - 255 -> Dither Tools to 16-bit) when piping the raw RGB data into IM, so that they keep the EV spacing they were shot at. i.e. there are in-camera options to shoot at 1 EV spacing (twice as bright), up to 5 EV.

Sorry if it sounds convoluted or plain daft. :-) I'm not sure Average is the right tool in this scenario?

Last edited by Yellow_; 30th December 2011 at 20:20.
Old 1st January 2012, 05:31   #371  |  Link
mandarinka
Registered User
 
 
Join Date: Jan 2007
Posts: 729
Quote:
Originally Posted by cretindesalpes View Post
For MDegrain, you can obtain an equivalent effect by changing the luma curve of the search clip. What I usually do: prepare the search clip using dfttest(lsb=true) with basic MC, Dither_lut16() to apply a gamma-like curve and make it full range, convert it back to 8 bit using DitherPost(mode=-1) and feed MSuper (for the final MAnalyse only, not the actual filtering) with it.
Could you post a code snippet for that, if it isn't a problem?
Mainly the Dither_lut16() part; I'd love to see the formula you use... I've been interested in luma-sensitive MDegraining for quite some time, but my own solution is crude at best (weighting together two clips...).
Old 2nd January 2012, 15:57   #372  |  Link
cretindesalpes
͡҉҉ ̵̡̢̛̗̘̙̜̝̞̟̠͇̊̋̌̍̎̏̿̿
 
 
Join Date: Feb 2009
Location: No support in PM
Posts: 712
Mandarinka:

Code:
# 16-bit TV-range -> full range with extra curvature
# s0 = new slope at 0
# c  = length of the curved part, > 0, no specific unit
#
# Matlab code:
# c  = 1/16;
# s0 = 2;
# x = linspace (0, 1, 10001);
# y1 = (x * 219 + 16) / 256; % Scale [0 ; 1]
# t = ((y1*256 - 16) / 219);
# k = (s0-1)*c;
# y2 = ((1+c) - (1+c)*c ./ (t + c)) * k + t * (1 - k);
# plot (x, y1, x, y2); grid on;
Function fslg_remap_luma_search_clip (clip src, float "s0", float "c")
{
	s0 = Default (s0, 2.0)
	c  = Default (c,  1.0/16)

	k = (s0 - 1) * c
	t = "x 4096 - 56064 / 0 1 clip"
	e = String(k)+" "+String(1+c)+" "+String((1+c)*c)+" "+t+" "+String(c)
\		+" + / - * "+t+" 1 "+String(k)+" - * + 65536 *"

	src.Dither_lut16 (
\		yexpr=e,
\		expr="x 32768 - 32768 * 28672 / 32768 +",
\		y=3, u=3, v=3
\	)
}
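And a possible way to plug it into the search-clip recipe from post #366 (again just a sketch: src is assumed to be an 8-bit YV12 clip, and the prefilter and thSAD settings are arbitrary):
Code:
pre16  = src.dfttest (sigma=8, lsb=true)                 # stacked 16-bit prefilter
search = pre16.fslg_remap_luma_search_clip ().DitherPost (mode=-1)
sup_search = search.MSuper ()                            # for MAnalyse only
sup_render = src.MSuper (levels=1)                       # for the actual filtering
bv = sup_search.MAnalyse (isb=true,  delta=1)
fv = sup_search.MAnalyse (isb=false, delta=1)
src.MDegrain1 (sup_render, bv, fv, thSAD=300)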
__________________
dither 1.28.1 for AviSynth | avstp 1.0.4 for AviSynth development | fmtconv r30 for Vapoursynth & Avs+ | trimx264opt segmented encoding
Old 2nd January 2012, 21:20   #373  |  Link
mandarinka
Registered User
 
 
Join Date: Jan 2007
Posts: 729
Thanks!
Old 3rd January 2012, 01:33   #374  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,361
If I set s0 to 1, the result is the same as if I did ((x-16)/219)*255, so is s0 there to revert the source gamma, or just to make the luma TV-to-PC range conversion gamma-aware? Wouldn't 2.2 be more appropriate? And what is c, some kind of multiplier?
Old 3rd January 2012, 11:28   #375  |  Link
cretindesalpes
͡҉҉ ̵̡̢̛̗̘̙̜̝̞̟̠͇̊̋̌̍̎̏̿̿
 
 
Join Date: Feb 2009
Location: No support in PM
Posts: 712
No, this is not a gamma function; it actually moves the data even further away from the linear-light scale. The purpose is to amplify the scale of the dark parts relative to the other parts, so it acts like a luma-based multiplier on the SAD. With a slope s0 = 2.0, thSAD for the darkest parts is implicitly divided by 2. The c parameter sets the length of the transition between the dark and normal SAD.

Here are some graphs to show how the curve looks for different s0 and c values:



Blue = original luma, green = mapped luma

Note that for high c or s0 values, the brightest parts also have their thSAD modified (in the other direction), because the slope of the curve near 1 tends to flatten.
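A quick sanity check on the numbers, just restating the Matlab formula from the function above: with t the full-range input and k = (s0-1)*c, the mapping is y2 = ((1+c) - (1+c)*c/(t+c))*k + t*(1-k). It gives y2(0) = 0 and y2(1) = 1, its slope at t = 0 is exactly s0 (hence the parameter name), and the slope decreases smoothly to 1 - k/(1+c) at t = 1, which is where the flattening of the bright end comes from.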
__________________
dither 1.28.1 for AviSynth | avstp 1.0.4 for AviSynth development | fmtconv r30 for Vapoursynth & Avs+ | trimx264opt segmented encoding
Old 3rd January 2012, 13:13   #376  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,361
Very illustrative! So you just supply the knots for "free playing" with the curve (while mapping TV to PC range).
So I assume it can undo gamma? Possibly with s0=2.2, c=1.0?

By doing that you lose the advantage of tweaking the darks in the same filter, although I always thought that removing the gamma was enough to keep MAnalyse from underperforming in dark parts. YET human eyes are very sensitive to low luma levels, and films and the like have a notable bias towards dark scenes, so it really might be necessary to take special care of those parts.

I would have thought something like this would be ideal:
TV to PC -> ungamma -> dark enhancement

It's possibly too much to ask, but having an equal thSAD weighting across the whole luma range as a base for dark enhancement isn't such a bad idea.
Thanks for the functions, every day is Xmas day :P
Old 4th January 2012, 13:05   #377  |  Link
cretindesalpes
͡҉҉ ̵̡̢̛̗̘̙̜̝̞̟̠͇̊̋̌̍̎̏̿̿
 
 
Join Date: Feb 2009
Location: No support in PM
Posts: 712
Quote:
Originally Posted by Dogway View Post
Very illustrative! So you just supply the knots for "free playing" with the curve (while mapping TV to PC range).
So I assume it can undo gamma? Possibly with s0=2.2, c=1.0?
As I said, it cannot undo gamma: it's not a "power" function, and moreover the general curve is inverted (so it's more like additional gamma). It is intended to be applied to the motion-search clip only, not the real clip, so after motion estimation and SAD calculation, the dark parts of the real clip will be filtered less (similar to a thSAD reduction) than the bright ones.



On a completely unrelated note, I updated this script. It's helpful to dither a 16-bit clip to 10 bits and make x264-10bit encode the correct values, without requiring a patched build.
__________________
dither 1.28.1 for AviSynth | avstp 1.0.4 for AviSynth development | fmtconv r30 for Vapoursynth & Avs+ | trimx264opt segmented encoding
Old 4th January 2012, 14:15   #378  |  Link
Yellow_
Registered User
 
Join Date: Sep 2009
Posts: 378
Quote:
On a completely unrelated note, I updated this script. It's helpful to dither a 16-bit clip to 10 bits and make x264-10bit encode the correct values, without requiring a patched build.
Could you just clarify a little more? I'm a bit confused. :-)

Are you saying that 16-bit output from previous version(s) of Dither tools fed to a vanilla 10-bit x264 encoder results in slightly greenish output?

And that it was therefore necessary to use some hack if not using a patched x264 encoder build?

But that the previous hack is now not needed thanks to a new function in Dither, which is less of a hack, although the function still needs to be added to our scripts?

And that when going from Dither tools 16-bit to a patched 10-bit build, nothing is needed?
Old 4th January 2012, 19:33   #379  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Currently the vanilla 10-bit builds of x264 will convert 10-bit input to 16 bit and then dither it back down to 10 bit. Some patched builds (like the newest ones from JEEB or Taro) can skip this unnecessary, potentially harmful/unwanted conversion.
I guess that script applies the inverse of vanilla x264's 10 bit -> 16 bit -> 10 bit conversion, so that the conversion can do no harm and you get the same result with a vanilla build as you would with one of the patched builds. I don't know where you got the "greenish" idea from, though.
Old 4th January 2012, 21:50   #380  |  Link
cretindesalpes
͡҉҉ ̵̡̢̛̗̘̙̜̝̞̟̠͇̊̋̌̍̎̏̿̿
 
 
Join Date: Feb 2009
Location: No support in PM
Posts: 712
@Yellow_:

Yes, vanilla x264 now scales and dithers 16-bit input. However, I don't know when it started doing this. I think the hacked x264 builds were mostly concerned with the 8-bit issue, which has been there from the beginning. Anyway, the 16-bit path in the JEEB or 06_taro builds should also be free of unnecessary scaling.

@sneaker_ger:

Actually the 10->16->10 bit conversion is neutral. It's probably a (minor) waste of processing power, but it seems to work as expected. See the discussion here.
__________________
dither 1.28.1 for AviSynth | avstp 1.0.4 for AviSynth development | fmtconv r30 for Vapoursynth & Avs+ | trimx264opt segmented encoding