Go Back   Doom9's Forum > Capturing and Editing Video > VapourSynth

Old 15th June 2017, 02:26   #441  |  Link
VS_Fan
Registered User
 
Join Date: Jan 2016
Posts: 91
Quote:
Originally Posted by MonoS View Post
Is this a "known issue" or am I doing something wrong?
It is most probably related to the luma:chroma SAD ratio weighting.

Pinterf explained it in this mvtools for avisynth forum post. He also recently released a version (2.7.18.22 – 2017-05-12) of mvtools for avisynth with a new parameter "scaleCSAD" to fine tune this ratio. See this forum post

Both in avisynth and vapoursynth, instead of converting to 444, you could filter (mdegrain, etc) planes separately, and then combine them together again with ShufflePlanes.
VS_Fan is offline   Reply With Quote
Old 15th June 2017, 19:40   #442  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 180
Quote:
Originally Posted by VS_Fan View Post
It is most probably related to the luma:chroma SAD ratio weighting.

Pinterf explained it in this mvtools for avisynth forum post. He also recently released a version (2.7.18.22 – 2017-05-12) of mvtools for avisynth with a new parameter "scaleCSAD" to fine tune this ratio. See this forum post
I took a look at those two posts and I can't understand how the thSAD of the chroma planes should influence the luma plane.

Quote:
Originally Posted by VS_Fan View Post
Both in avisynth and vapoursynth, instead of converting to 444, you could filter (mdegrain, etc) planes separately, and then combine them together again with ShufflePlanes.
Whether I denoise by passing planes=0 or by using ShufflePlanes to denoise only the luma, I get the same poor performance in bright spots.

I should also mention that I'm talking about the luma plane; the chroma planes, AFAIK, are fine.

EDIT: I'll send a sample ASAP.
MonoS is offline   Reply With Quote
Old 16th June 2017, 04:31   #443  |  Link
VS_Fan
Registered User
 
Join Date: Jan 2016
Posts: 91
Quote:
Originally Posted by MonoS View Post
I took a look at those two posts and I can't understand how the thSAD of the chroma planes should influence the luma plane.
It’s not the thSAD parameter (well, not only). It’s related to the mvtools internal SAD calculations made during ‘analyze’ or ‘recalculate’ to find the motion vectors. The luma:chroma weighting is:
  • 4:2 for YV12 (4:2:0 subsampling)
  • 4:4 for YV16 (4:2:2 subsampling)
  • 4:8 for YV24 (4:4:4 subsampling)
That means that with 4:4:4 subsampling mvtools will base its calculations on twice as much chroma data as luma data. That's why you get “cleaner” results: the chroma planes typically have less noise than the luma plane, so with such a low value for thSAD (denoise=200) mvtools picks up the right vectors more easily for the chroma planes than it can with luma data.
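Those ratios can be sanity-checked simply by counting samples per block. A toy sketch in plain Python (not mvtools code), assuming an 8x8 luma block:

```python
def luma_chroma_samples(subsampling_w, subsampling_h, blksize=8):
    """Luma samples vs. total chroma samples (U+V) in one block."""
    luma = blksize * blksize
    chroma = 2 * (blksize >> subsampling_w) * (blksize >> subsampling_h)
    return luma, chroma

# YV12 (4:2:0): chroma halved in both directions -> 64:32, i.e. 4:2
print(luma_chroma_samples(1, 1))
# YV16 (4:2:2): chroma halved horizontally only  -> 64:64, i.e. 4:4
print(luma_chroma_samples(1, 0))
# YV24 (4:4:4): full-resolution chroma           -> 64:128, i.e. 4:8
print(luma_chroma_samples(0, 0))
```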

I saw three ways to improve your script:
  • The ‘denoise’ value (thSAD parameter) is half of the default value. Leave it at default (400)
  • You are resizing the clip prior to processing with mvtools, with the very basic ‘point’ kernel, and at the end you are cropping the borders. You should avoid that: use the hpad and vpad parameters of Super instead. And if you really want to crop the borders, you can resize after mdegrain with a better kernel and then crop.
  • DitherLumaRebuild “allows tweaking for pumping up the darks” (comment by the author). This may be leading you to oversaturate the bright areas. Try without it. You could use some other prefilter.

Like this:
Code:
import vapoursynth as vs
core = vs.core

def Denoise2(src, denoise, blksize, fast, truemotion):
    overlap = int(blksize / 2)
    pad = blksize #+ overlap
    
    #src = core.fmtc.resample(src, src.width+pad, src.height+pad, sw=src.width+pad, sh=src.height+pad, kernel="point")
    
    super = core.mv.Super(src, hpad=pad, vpad=pad)
    
    #rep = has.DitherLumaRebuild(src, s0=1)
    # Optional - Some other prefilter:
    rep = core.dfttest.DFTTest(clip=src, tbsize=1, sigma=2.0)
    superRep = core.mv.Super(rep, hpad=pad, vpad=pad)
    
    bvec2 = core.mv.Analyse(superRep, isb = True, delta = 2, blksize=blksize, overlap=overlap, truemotion=truemotion)
    bvec1 = core.mv.Analyse(superRep, isb = True, delta = 1, blksize=blksize, overlap=overlap, truemotion=truemotion)
    fvec1 = core.mv.Analyse(superRep, isb = False, delta = 1, blksize=blksize, overlap=overlap, truemotion=truemotion)
    fvec2 = core.mv.Analyse(superRep, isb = False, delta = 2, blksize=blksize, overlap=overlap, truemotion=truemotion)
    
    fin = core.mv.Degrain2(src, super, bvec1,fvec1,bvec2,fvec2, denoise)
    
    #fin = core.std.CropRel(fin, 0, pad, 0, pad)
    
    return fin

src = core.lsmas.LWLibavSource("").fmtc.bitdepth(bits=16)

den = Denoise2(src, 400, blksize=16, fast=False, truemotion=False)

den.set_output()

Last edited by VS_Fan; 16th June 2017 at 04:39.
VS_Fan is offline   Reply With Quote
Old 16th June 2017, 23:39   #444  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 180
So, if I understand correctly, those weightings are used during the analyze function to search for the proper motion vectors; doing the analysis in 4:4:4 changes some values and "improves" the denoising on the luma plane. Am I correct?

Regarding your suggestions:
I usually use thSAD values from around 150 up to 500 to obtain different levels of denoising; 200, for me, is low-to-mid denoising.
I'm not resizing the clip, I'm simply padding it: when I did extensive tests some years ago, I noticed bad denoising on the bottom and right edges even with padding, so I started padding the clip myself, and this method achieved very nice results.
AFAIK DitherLumaRebuild is commonly used to prefilter the clip before motion analysis; I found this trick in one of cretindesalpes' posts and in the VapourSynth QTGMC port.

Anyway, I think you are right to suggest stronger denoising: using 400 on the 4:2:0 clip gives similar results in those areas to the weak denoising, but I would prefer to avoid such a strong (in my opinion) thSAD.
MonoS is offline   Reply With Quote
Old 17th June 2017, 19:33   #445  |  Link
VS_Fan
Registered User
 
Join Date: Jan 2016
Posts: 91
Quote:
Originally Posted by MonoS View Post
So, if I understand correctly, those weightings are used during the analyze function to search for the proper motion vectors; doing the analysis in 4:4:4 changes some values and "improves" the denoising on the luma plane. Am I correct?
Right, but resampling to 444 doesn’t necessarily “improve” denoising. It just gives different results: For YUV colorspaces the amount of data used to represent luma and chroma for any pixel in each frame depends on the chroma subsampling. Mvtools’ analyze filter uses all data for each pixel to construct the blocks, unless you specify chroma=False.

From the mvtools doc at avisynth’s site: At analysis stage plugin divides frames by small blocks and try to find for every block in current frame the most similar (matching) block in second frame (previous or next). The relative shift of these blocks is motion vector. The main measure of block similarity is sum of absolute differences (SAD) of all pixels of these two blocks compared. SAD is a value which says how good the motion estimation was.
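The SAD measure described in that quote is simple enough to sketch in a few lines of plain Python (a toy illustration, not mvtools internals):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

# Two flattened 2x2 blocks: the smaller the SAD, the better the match.
current   = [10, 12, 200, 14]
candidate = [11, 12, 198, 20]
print(sad(current, candidate))  # 1 + 0 + 2 + 6 = 9
```

A block search simply evaluates this over every candidate shift and keeps the shift with the lowest SAD as the motion vector.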
Quote:
Originally Posted by MonoS View Post
I usually use thSAD values from around 150 up to 500 to obtain different levels of denoising; 200, for me, is low-to-mid denoising.
This is my personal preference: For my 4:2:2 video sources I process each plane separately. I consider luma and chroma very different animals, so I tweak the corresponding thSAD and even thscd1 & thscd2 to lower values for chroma.
Quote:
Originally Posted by MonoS View Post
I'm not resizing the clip, I'm simply padding it: when I did extensive tests some years ago, I noticed bad denoising on the bottom and right edges even with padding, so I started padding the clip myself, and this method achieved very nice results.
I see now. There could have been a bug in earlier versions of the plugin, but you shouldn't need to do that any more.
Quote:
Originally Posted by MonoS View Post
Anyway, I think you are right to suggest stronger denoising: using 400 on the 4:2:0 clip gives similar results in those areas to the weak denoising, but I would prefer to avoid such a strong (in my opinion) thSAD.
You could try mdegrain1, which risks much less detail destruction even when you use larger values for thSAD.
VS_Fan is offline   Reply With Quote
Old 19th June 2017, 23:16   #446  |  Link
MonoS
Registered User
 
Join Date: Aug 2012
Posts: 180
So what may be happening is this: after upscaling the chroma planes, mvtools thinks that less of the image has changed, because each 2x2 group of chroma pixels is very similar, so the same thSAD results in more blocks being considered similar and therefore stronger denoising. Am I right?
MonoS is offline   Reply With Quote
Old 20th June 2017, 17:18   #447  |  Link
feisty2
I'm Siri
 
feisty2's Avatar
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,133
I understand now why you rewrote this thing like, entirely...
the code was full of weird bullshit and insanely fucking stupid stuff..

I managed to upgrade FakeBlockData and FakePlaneOfBlocks to normal C++14 but got stuck at FakeGroupOfPlanes
there's this bloody "update()" function throughout MVTools code,

it's like
Code:
void FakeBlockData::Update(const int *array) {
	vector.x = array[0];
	vector.y = array[1];
	vector.sad = array[2];
}
in FakeBlockData and I recoded it to
Code:
	auto Update(const VectorStructure *NewVectorPointer) {
		Vector = *NewVectorPointer;
	}
and in FakePlaneOfBlocks
Code:
void FakePlaneOfBlocks::Update(const int *array) {
	array += 0;
	for (int i = 0; i < nBlkCount; i++) {
		blocks[i].Update(array);
		array += N_PER_BLOCK;
	}
}
I, again recoded it like
Code:
	auto Update(const void *VectorStream) {
		auto StreamCursor = reinterpret_cast<const VectorStructure *>(VectorStream);
		for (auto i = 0; i < nBlkCount; ++i) {
			blocks[i].Update(StreamCursor);
			++StreamCursor;
		}
	}
.....

and there's one in FakeGroupOfPlanes like
Code:
void FakeGroupOfPlanes::Update(const int *array) {
	const int *pA = array;
	validity = GetValidity(array);
	pA += 2;
	for (int i = nLvCount_ - 1; i >= 0; i--)
		pA += pA[0];
	pA++;
	pA = array;
	pA += 2;
	for (int i = nLvCount_ - 1; i >= 0; i--) {
		planes[i]->Update(pA + 1);
		pA += pA[0];
	}
}
I mean like, dude, what the fuck??? this one is 11 out of 10 kinda wicked fucked up, all that weird abnormal pointer arithmetic with "pA" makes it impossible to recode...
I guess you should know all about that wicked update() function cuz you once converted MVTools to C
could you help me with this and explain that "FakeGroupOfPlanes::Update()", please?
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated
feisty2 is offline   Reply With Quote
Old 20th June 2017, 17:52   #448  |  Link
jackoneill
unsigned int
 
jackoneill's Avatar
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 725
Quote:
Originally Posted by feisty2 View Post
and there's one in FakeGroupOfPlanes like
Code:
void FakeGroupOfPlanes::Update(const int *array) {
	const int *pA = array;
	validity = GetValidity(array);
	pA += 2;
	for (int i = nLvCount_ - 1; i >= 0; i--)
		pA += pA[0];
	pA++;
	pA = array;
	pA += 2;
	for (int i = nLvCount_ - 1; i >= 0; i--) {
		planes[i]->Update(pA + 1);
		pA += pA[0];
	}
}
I mean like, dude, what the fuck??? this one is 11 out of 10 kinda wicked fucked up, all that weird abnormal pointer arithmetic with "pA" makes it impossible to recode...
I guess you should know all about that wicked update() function cuz you once converted MVTools to C
could you help me with this and explain that "FakeGroupOfPlanes::Update()", please?
Well, see, half that function is redundant: https://github.com/dubhater/vapoursy...79f802d212L110
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 20th June 2017, 18:10   #449  |  Link
feisty2
I'm Siri
 
feisty2's Avatar
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,133
something I failed to understand
Code:
const int *pA = array + 2;
what's that "+2"? some kind of offset value? will it be affected if I change the structure of the vector stream? like if I change "sad" in the vector to double?

and this,
Code:
pA += pA[0];
I suppose it should be something like
Code:
constexpr auto StreamHeaderOffset = 2;
auto pA = reinterpret_cast<const VectorStructure *>(array + StreamHeaderOffset);
auto MoveOnToTheNextVector = [](auto &VectorPointer) {
	constexpr auto AbsoluteVectorSize = sizeof(std::decay_t<decltype(*VectorPointer)>);
	constexpr auto RelativeVectorSize = AbsoluteVectorSize / sizeof(int);
	auto ForwardDistance = VectorPointer->x / RelativeVectorSize;
	VectorPointer += ForwardDistance;
};
MoveOnToTheNextVector(pA);
?

and that's why I hate C and old C++ so much cuz it's like fucking deciphering assembly code, what's so hard about defining weird constants with constexpr variables with proper names and writing some nested closure functions to tell others what the hell you're doing exactly?
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated

Last edited by feisty2; 20th June 2017 at 18:36.
feisty2 is offline   Reply With Quote
Old 20th June 2017, 19:06   #450  |  Link
jackoneill
unsigned int
 
jackoneill's Avatar
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 725
Quote:
Originally Posted by feisty2 View Post
something I failed to understand
Code:
const int *pA = array + 2;
what's that "+2"? some kind of offset value? will it be affected if I change the structure of the vector stream? like if I change "sad" in the vector to double?

and this,
Code:
pA += pA[0];
This is what Analyse and Recalculate attach to each frame they return, and what the "array" parameter points to:
Code:
int total_size; // Size of the entire thing, i.e. the last int you may access is array[total_size - 1]
int validity; // 0 if the frame is too close to the beginning or end of the clip, otherwise 1
int first_level_size;
VECTOR first_level_vectors[first_level_size];
int second_level_size;
VECTOR second_level_vectors[second_level_size];
...
int last_level_size;
VECTOR last_level_vectors[last_level_size];
int divided_extra_level_size; // may not exist
VECTOR divided_extra_level_vectors[divided_extra_level_size]; //may not exist
So that +2 skips over the total size and validity. And then pA += pA[0] skips over the "current" level.

You may have noticed that all those size fields store numbers of ints, rather than numbers of bytes. This is probably because everything in there used to be an int (the sizes, validity, VECTOR's members, other things that used to be stored there). This is weird already, and it would have become weirder when I made the type of VECTOR::sad int64_t, so since v19 all these sizes store numbers of bytes.
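That layout can be walked with a toy parser in plain Python. My simplifications (assumptions, not the real code): the pre-v19 int-counted sizes, a 3-int VECTOR of x, y, sad, and per-level sizes that include their own size field, which is what makes the `pA += pA[0]` hop land on the next level:

```python
N_PER_VECTOR = 3  # x, y, sad -- a simplification of the real VECTOR

def parse_vector_stream(array, level_count):
    """Walk a flat int array shaped like the layout described above."""
    validity = array[1]
    levels = []
    p = 2                          # the "+2": skip total_size and validity
    for _ in range(level_count):
        size = array[p]            # level size in ints, including this field
        vectors = [tuple(array[q:q + N_PER_VECTOR])
                   for q in range(p + 1, p + size, N_PER_VECTOR)]
        levels.append(vectors)
        p += size                  # the "pA += pA[0]" hop to the next level
    return validity, levels

# total=13, validity=1, level 0 holds two vectors, level 1 holds one.
stream = [13, 1, 7, 1, 2, 3, 4, 5, 6, 4, 9, 8, 7]
print(parse_vector_stream(stream, 2))
```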
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 20th June 2017, 21:00   #451  |  Link
feisty2
I'm Siri
 
feisty2's Avatar
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,133
thanks for the detailed explanation, I can now finally reshape that weird piece of shit into something readable...
Code:
	auto Update(const std::int32_t *VectorStream) {
		constexpr auto StreamHeaderOffset = 2;
		auto StreamCursor = VectorStream + StreamHeaderOffset;
		auto GetValidity = [&]() {
			return VectorStream[1] == 1;
		};
		auto UpdateVectorsForEachLevel = [&](auto Level) {
			constexpr auto LevelHeaderOffset = 1;
			auto LevelLength = StreamCursor[0];
			auto CalibratedStreamCursor = reinterpret_cast<const VectorStructure *>(StreamCursor + LevelHeaderOffset);
			planes[Level]->Update(CalibratedStreamCursor);
			StreamCursor += LevelLength;
		};
		validity = GetValidity();
		for (auto Level = nLvCount_ - 1; Level >= 0; --Level)
			UpdateVectorsForEachLevel(Level);
	}
guess Imma stick to the int-based size for now cuz I don't want no extra trouble...

just for comparison, this was the original version
Code:
void FakeGroupOfPlanes::Update(const int *array) {
	const int *pA = array;
	validity = GetValidity(array);
	pA += 2;
	for (int i = nLvCount_ - 1; i >= 0; i--)
		pA += pA[0];
	pA++;
	pA = array;
	pA += 2;
	for (int i = nLvCount_ - 1; i >= 0; i--) {
		planes[i]->Update(pA + 1);
		pA += pA[0];
	}
}
now you see why I said modern C++ is python with pointers
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated
feisty2 is offline   Reply With Quote
Old 9th July 2018, 13:35   #452  |  Link
Boulder
Pig on the wing
 
Boulder's Avatar
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 4,621
Would it be possible to have the 'star' motion search method from x265 included in MVTools?
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is online now   Reply With Quote
Old 9th July 2018, 17:02   #453  |  Link
jackoneill
unsigned int
 
jackoneill's Avatar
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 725
Quote:
Originally Posted by Boulder View Post
Would it be possible to have the 'star' motion search method from x265 included in MVTools?
It probably is.

But instead here is v20 with a small bug fix.

Code:
   * Fix green edges in the output of FlowBlur, FlowFPS, BlockFPS when pelclip is used and pel=2 (bug introduced in v12).
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 9th July 2018, 17:25   #454  |  Link
Boulder
Pig on the wing
 
Boulder's Avatar
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 4,621
Quote:
Originally Posted by jackoneill View Post
Thank you for even considering it, and thanks for the fix! It's always nice to see plugins being maintained.
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is online now   Reply With Quote
Old 31st July 2018, 06:36   #455  |  Link
edcrfv94
Registered User
 
Join Date: Apr 2015
Posts: 77
Doesn't VapourSynth mvtools support DegrainN? (Some scripts need tr=6.)
edcrfv94 is offline   Reply With Quote
Old 31st July 2018, 07:32   #456  |  Link
Wolfberry
Helenium(Easter)
 
Wolfberry's Avatar
 
Join Date: Aug 2017
Location: Hsinchu, Taiwan
Posts: 99
Quote:
Originally Posted by feisty2 View Post

1. Binary Part: Extended Degrain to Degrain24 (24, it's my lucky number!)
2. Resurrected vmulti features from MVTools 2.6.0.5, implemented via a python module, "tr" works up to 24, guess no one will ever use a time radius > 24.... maybe?
3. Resurrected StoreVect and RestoreVect from MVTools 2.6.0.5, implemented via a python module

vmulti demos:
1. DegrainN
Code:
import vapoursynth as vs
import mvmulti
core = vs.core
# clp: your input clip, loaded elsewhere
sup = core.mvsf.Super(clp)
vec = mvmulti.Analyze(sup,tr=6,blksize=8,overlap=4)
vec = mvmulti.Recalculate(sup,vec,tr=6,blksize=4,overlap=2)
clp = mvmulti.DegrainN(clp, sup, vec, tr=6)
clp.set_output()
DegrainN is available in mvsf (mvtools single precision); the python module can be obtained here
__________________
Monochrome Anomaly
Wolfberry is offline   Reply With Quote
Old 29th October 2018, 07:28   #457  |  Link
edcrfv94
Registered User
 
Join Date: Apr 2015
Posts: 77
VapourSynth mvtools v20
Code:
c_in = src

sup_a = core.mv.Super(c_in, pel=2)
sup = sup_a

analyse_args_df = dict(blksize=16, overlap=8, search=5, searchparam=4, dct=5)
bVec1 = core.mv.Analyse(sup_a, isb=True, delta=1, **analyse_args_df)
fVec1 = core.mv.Analyse(sup_a, isb=False, delta=1, **analyse_args_df)
bVec2 = core.mv.Analyse(sup_a, isb=True, delta=2, **analyse_args_df)
fVec2 = core.mv.Analyse(sup_a, isb=False, delta=2, **analyse_args_df)
bVec3 = core.mv.Analyse(sup_a, isb=True, delta=3, **analyse_args_df)
fVec3 = core.mv.Analyse(sup_a, isb=False, delta=3, **analyse_args_df)

compensate_args_df = dict(thsad=400)
bc1 = core.mv.Compensate(c_in, sup, bVec1, **compensate_args_df)
fc1 = core.mv.Compensate(c_in, sup, fVec1, **compensate_args_df)
bc2 = core.mv.Compensate(c_in, sup, bVec2, **compensate_args_df)
fc2 = core.mv.Compensate(c_in, sup, fVec2, **compensate_args_df)
bc3 = core.mv.Compensate(c_in, sup, bVec3, **compensate_args_df)
fc3 = core.mv.Compensate(c_in, sup, fVec3, **compensate_args_df)

cmp = core.std.Interleave([bc3, bc2, bc1, c_in, fc1, fc2, fc3])
#cmp = core.std.Interleave([fc3, fc2, fc1, c_in, bc1, bc2, bc3])

AviSynth+ mvtools-2.7.33
Code:
c_in = last

sup_a =  c_in.MSuper(pel=2)
sup   = sup_a

vec_norm = sup_a.MAnalyse(multi=true, delta=3, blksize=16, overlap=8, search=5, searchparam=4, DCT=5)
cmp = c_in.MCompensate(sup, vec_norm, tr=3, thSAD=400)
or

Code:
c_in = last

sup_a =  c_in.MSuper(pel=2)
sup   = sup_a

#vec = sup_a.MAnalyse(multi=true, delta=3, blksize=16, overlap=8, search=5, searchparam=4, DCT=5)
#cmp = c_in.MCompensate(sup, vec, tr=3, thSAD=400)

bVec1 = MAnalyse(sup_a, isb=True, delta=1, blksize=16, overlap=8, search=5, searchparam=4, dct=5)
fVec1 = MAnalyse(sup_a, isb=False, delta=1, blksize=16, overlap=8, search=5, searchparam=4, dct=5)
bVec2 = MAnalyse(sup_a, isb=True, delta=2, blksize=16, overlap=8, search=5, searchparam=4, dct=5)
fVec2 = MAnalyse(sup_a, isb=False, delta=2, blksize=16, overlap=8, search=5, searchparam=4, dct=5)
bVec3 = MAnalyse(sup_a, isb=True, delta=3, blksize=16, overlap=8, search=5, searchparam=4, dct=5)
fVec3 = MAnalyse(sup_a, isb=False, delta=3, blksize=16, overlap=8, search=5, searchparam=4, dct=5)

bc1 = MCompensate(c_in, sup, bVec1, thsad=400)
fc1 = MCompensate(c_in, sup, fVec1, thsad=400)
bc2 = MCompensate(c_in, sup, bVec2, thsad=400)
fc2 = MCompensate(c_in, sup, fVec2, thsad=400)
bc3 = MCompensate(c_in, sup, bVec3, thsad=400)
fc3 = MCompensate(c_in, sup, fVec3, thsad=400)

cmp = Interleave(bc3, bc2, bc1, c_in, fc1, fc2, fc3)
#cmp = Interleave(fc3, fc2, fc1, c_in, bc1, bc2, bc3)
The results are very different from the AviSynth version; I'm not sure which one is correct.
edcrfv94 is offline   Reply With Quote
Old 13th March 2019, 16:52   #458  |  Link
jackoneill
unsigned int
 
jackoneill's Avatar
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 725
v21 fixes three bugs:
Code:
   * BlockFPS, Flow, FlowFPS, FlowInter: Fix crash with certain blksize/overlapv
     ratios, like 8/2. Thanks to pinterf for finding the cause and the solution.
   * Flow, FlowBlur, FlowFPS, FlowInter: Fix crash due to motion vectors pointing
     outside the frame. Thanks to pinterf for finding the cause and the solution.
   * Analyse: Fix use of an uninitialised variable. Only dct modes 2, 6, and 9 were
     affected. The result was probably just nondeterministic output. This
     uninitialised variable was inherited from the Avisynth plugin, version 2.5.11.3.
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 13th March 2019, 17:03   #459  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,031
Awesome!
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database || https://github.com/avisynth-repository
ChaosKing is online now   Reply With Quote
Old 2nd November 2019, 10:02   #460  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,031
Isn't it time for a v22 - AVX2 booster edition?
Or at least a test build so we can test it?
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database || https://github.com/avisynth-repository
ChaosKing is online now   Reply With Quote