Old 2nd May 2017, 22:31   #2501  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
Thanks to all who contributed to finding and solving my crash problem with fmtconv. I could not use something like mawen1250's "nnedi3_resample.py" script because of the frequent crashes.
This script is also used in HAvsFunc.
For around six months I have been pulling my hair out over the frequent, random crashes with such scripts.
I always thought it had to be my system, because no one else seemed to have this problem.

With the recompiled version, all the crashes are gone!

I've tried to recompile fmtc myself under MINGW64/MSYS2, but I got an error in Interlocked.hpp:
Code:
config.status: creating Makefile
config.status: executing depfiles commands
config.status: executing libtool commands
  CXX      ../../src/main.lo
  CXX      ../../src/fmtc/Bitdepth.lo
In file included from ./../../src/conc/Interlocked.h:132:0,
                 from ./../../src/conc/AtomicPtr.hpp:27,
                 from ./../../src/conc/AtomicPtr.h:117,
                 from ./../../src/conc/CellPool.h:30,
                 from ./../../src/conc/ObjPool.h:44,
                 from ../../src/fmtc/Bitdepth.h:30,
                 from ../../src/main.cpp:18:
./../../src/conc/Interlocked.hpp: In static member function 'static void conc::Interlocked::cas(conc::Interlocked::Data128&, volatile Data128&, const Data128&, const Data128&)':
./../../src/conc/Interlocked.hpp:349:2: error: '::InterlockedCompareExchange128' has not been declared
  ::InterlockedCompareExchange128 (
  ^~
In file included from ./../../src/conc/Interlocked.h:132:0,
                 from ./../../src/conc/AtomicPtr.hpp:27,
                 from ./../../src/conc/AtomicPtr.h:117,
                 from ./../../src/conc/CellPool.h:30,
                 from ./../../src/conc/ObjPool.h:44,
                 from ./../../src/fmtc/Bitdepth.h:30,
                 from ../../src/fmtc/Bitdepth.cpp:28:
./../../src/conc/Interlocked.hpp: In static member function 'static void conc::Interlocked::cas(conc::Interlocked::Data128&, volatile Data128&, const Data128&, const Data128&)':
./../../src/conc/Interlocked.hpp:349:2: error: '::InterlockedCompareExchange128' has not been declared
  ::InterlockedCompareExchange128 (
  ^~
./../../src/conc/Interlocked.hpp:351:4: error: 'excg_hi' was not declared in this scope
    excg_hi,
    ^~~~~~~
./../../src/conc/Interlocked.hpp:351:4: error: 'excg_hi' was not declared in this scope
    excg_hi,
    ^~~~~~~
./../../src/conc/Interlocked.hpp:352:4: error: 'excg_lo' was not declared in this scope
    excg_lo,
    ^~~~~~~
./../../src/conc/Interlocked.hpp:352:4: error: 'excg_lo' was not declared in this scope
    excg_lo,
    ^~~~~~~
I used:
Code:
 cd build/unix
./autogen.sh
./configure --prefix=/local64 --{build,host,target}=x86_64-w64-mingw32
make -j4
Does anyone have an idea how to fix this?

Last edited by Pat357; 2nd May 2017 at 22:45.
Old 3rd May 2017, 11:21   #2502  |  Link
jackoneill
unsigned int
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Probably delete https://github.com/EleonoreMizo/fmtc....hpp#L349-L354
and replace with https://github.com/EleonoreMizo/fmtc...ocked.hpp#L360

Also open an issue.
Old 3rd May 2017, 16:12   #2503  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
Thanks a lot!
Yup, that was the first show-stopper...
Now I get:
Code:
 CXX      ../../src/fmtcl/TransOpErimm.lo
  CXX      ../../src/fmtcl/TransOpFilmStream.lo
  CXX      ../../src/fmtcl/TransOpLinPow.lo
  CXX      ../../src/fmtcl/TransOpLogC.lo
  CXX      ../../src/fmtcl/TransOpLogTrunc.lo
  CXX      ../../src/fmtcl/TransOpPow.lo
  CXX      ../../src/fmtcl/TransOpSLog.lo
  CXX      ../../src/fmtcl/TransOpSLog3.lo
  CXX      ../../src/fmtcl/VoidAndCluster.lo
  CXX      ../../src/fstb/CpuId.lo
  CXX      ../../src/fstb/fnc.lo
{standard input}: Assembler messages:
{standard input}:46: Error: unsupported instruction `mov'
{standard input}:96: Error: unsupported instruction `mov'
{standard input}:113: Error: unsupported instruction `mov'
{standard input}:125: Error: unsupported instruction `mov'
{standard input}:161: Error: unsupported instruction `mov'
make: *** [Makefile:1084: ../../src/fstb/CpuId.lo] Error 1
make: *** Waiting for unfinished jobs..
Any further ideas? (It probably has something to do with the '' around the mov instruction.)
Old 3rd May 2017, 16:47   #2504  |  Link
jackoneill
unsigned int
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Maybe this works: https://dpaste.de/sVR3
Old 3rd May 2017, 20:30   #2505  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
Yep, it now compiles without errors.
But the generated DLL is unfortunately not working:

- As long as I use "vspipe -i", everything seems to be OK: no errors, and the clip properties are correct (I'm using fmtc for scaling and bit depth conversion).
- If I try to feed the stream to ffplay, mplayer or mpv, it looks like no frames are produced. ffplay just sits there, but no errors appear and no playback window opens.
- vspipe -p behaves the same as the players: no errors, and the progress counter doesn't even start. No crashes either.

Did you ever compile it yourself?

Last edited by Pat357; 3rd May 2017 at 22:05.
Old 3rd May 2017, 20:48   #2506  |  Link
jackoneill
unsigned int
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Quote:
Originally Posted by Pat357
Yep, it now compiles without errors.
But the generated DLL is unfortunately not working:

- As long as I use "vspipe -i", everything seems to be OK: no errors, and the clip properties are correct (I'm using fmtc for scaling).
- If I try to feed the stream to ffplay, mplayer or mpv, it looks like no frames are produced. ffplay just sits there, but no errors appear and no playback window opens.
- vspipe -p behaves the same as the players: no errors, and the progress counter doesn't even start. No crashes either.

Did you ever compile it yourself?
I did, but only on Linux.

It sounds like there is a deadlock. Do you have avstp.dll someplace where it might be found, like in System32 or the same folder as libfmtconv.dll?
Old 3rd May 2017, 21:05   #2507  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
Quote:
Originally Posted by jackoneill
I did, but only on Linux.

It sounds like there is a deadlock. Do you have avstp.dll someplace where it might be found, like in System32 or the same folder as libfmtconv.dll?
No avstp.dll anywhere in my path or in the VS plugin directories.

Would it also work in VS?
I used it before in AVS, but I found increasing the number of threads in Set's AVS 2.6 MT more stable than this plug-in.

Last edited by Pat357; 3rd May 2017 at 22:06. Reason: typo
Old 4th May 2017, 15:49   #2508  |  Link
jackoneill
unsigned int
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Quote:
Originally Posted by Pat357
No avstp.dll anywhere in my path or in the VS plugin directories.

Would it also work in VS?
I used it before in AVS, but I found increasing the number of threads in Set's AVS 2.6 MT more stable than this plug-in.
Maybe it would work. I think the code for it is there, at least. And you really should open a new issue or two at https://github.com/EleonoreMizo/fmtconv/issues.
Old 4th May 2017, 17:20   #2509  |  Link
Pat357
Registered User
 
Join Date: Jun 2006
Posts: 452
Quote:
Originally Posted by jackoneill
Maybe it would work. I think the code for it is there, at least. And you really should open a new issue or two at https://github.com/EleonoreMizo/fmtconv/issues.
I'll open three issues (the two above plus another one I found today), but since development has slowed down or even stopped, there's no hurry; I'll do it tomorrow or the day after.
Code:
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255, 128, 128], format=vs.YUV420P8)
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=960, h=720, interlaced=False, interlacedd=False, cpuopt=0)
clip = core.resize.Point(clip=clip, format=vs.YUV420P8)
clip.set_output()
This will return a green clip if cpuopt=0 and a white clip if cpuopt=1 or anything but 0.

If I add this right before the fmtc line:
Code:
clip = core.resize.Point(clip=clip, format=vs.YUV420P16)
I get a white clip in both cases. It seems that with cpuopt=0 the 8-bit data is not converted to 16 bit first; maybe that's even the way it's supposed to work and you need 16-bit input (using fmtc.bitdepth) before using fmtc.resample!
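
For reference, a minimal sketch of that order of operations (a sketch only, built from the script above; it assumes fmtc.bitdepth's bits parameter and leaves all dither settings at their defaults):
Code:
import vapoursynth as vs
core = vs.core  # vs.get_core() on older VapourSynth releases

clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255, 128, 128], format=vs.YUV420P8)
# go to 16 bit first, resample there, then dither back down to 8 bit
clip = core.fmtc.bitdepth(clip=clip, bits=16)
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=960, h=720)
clip = core.fmtc.bitdepth(clip=clip, bits=8)
clip.set_output()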

BTW, how do I create a blank or colored clip with more than 8 bits per sample?
I've tried:
Code:
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255, 128, 128], format=vs.YUV420P16)
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[1, 0.5, 0.5], format=vs.YUV420P16)
The documentation points to using floats, like I did in the second case, i.e. luma is roughly 0-1 and U and V are roughly -0.5 to +0.5.

I hope it is not implemented in a way where you need to double the values for each extra bit above 8... that would be confusing and impractical.
Old 4th May 2017, 17:21   #2510  |  Link
Myrsloik
Professional Code Monkey
 
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
Quote:
Originally Posted by Pat357
I'll open three issues (the two above plus another one I found today), but since development has slowed down or even stopped, there's no hurry; I'll do it tomorrow or the day after.
Code:
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255, 128, 128], format=vs.YUV420P8)
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=960, h=720, interlaced=False, interlacedd=False, cpuopt=0)
clip = core.resize.Point(clip=clip, format=vs.YUV420P8)
clip.set_output()
This will return a green clip if cpuopt=0 and a white clip if cpuopt=1 or anything but 0.

If I add this right before the fmtc line:
Code:
clip = core.resize.Point(clip=clip, format=vs.YUV420P16)
I get a white clip in both cases. It seems that with cpuopt=0 the 8-bit data is not converted to 16 bit first; maybe that's even the way it's supposed to work and you need 16-bit input (using fmtc.bitdepth) before using fmtc.resample!

BTW, how do I create a blank or colored clip with more than 8 bits per sample?
I've tried:
Code:
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255, 128, 128], format=vs.YUV420P16)
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[1, 0.5, 0.5], format=vs.YUV420P16)
The documentation points to using floats, like I did in the second case, i.e. luma is roughly 0-1 and U and V are roughly -0.5 to +0.5.

I hope it is not implemented in a way where you need to double the values for each extra bit above 8... that would be confusing and impractical.
It uses the supplied value directly. Floats are rounded to int for int formats. It's implemented that way so you can easily have exact values for testing.
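
In other words, a sketch of the 16-bit case based on the answer above (assuming core is set up as in the earlier scripts; the exact numbers are just the 8-bit values shifted up by 8 bits):
Code:
# values are used as-is, so a 16-bit format takes 16-bit numbers directly:
# 255/128/128 in 8 bit is roughly 65280/32768/32768 in 16 bit (65535 for full white)
clip16 = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[65280, 32768, 32768], format=vs.YUV420P16)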

Last edited by Myrsloik; 4th May 2017 at 19:05.
Old 5th May 2017, 12:57   #2511  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
Something else where I'm thinking, "let's just ask, who knows...":

Is there a way to _separate_ the high-frequency content from the mid- and low-frequency content?

Commercial noise-reduction programs/plugins sometimes (often?) have a preview mode where they display the luminance channel three times at once: high, mid and low frequencies. It helps me a lot when dialing in noise reduction or sharpening plugins, because I can see better what is happening in certain areas. Is it possible to do this splitting so I can play around with it in VapourSynth Editor and look at just the high-freq or low-freq stuff? Splitting the luminance and chroma channels is easy enough.

Also, on a related note, can someone give me a small example of how to blend two clips with a certain opacity? Say I want the original image, but with 30% of a cleaned image overlaid on top of it. Can that be done with the generic VapourSynth stuff?
Old 5th May 2017, 13:37   #2512  |  Link
Myrsloik
Professional Code Monkey
 
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
Quote:
Originally Posted by dipje
Something else where I'm thinking, "let's just ask, who knows...":

Is there a way to _separate_ the high-frequency content from the mid- and low-frequency content?

Commercial noise-reduction programs/plugins sometimes (often?) have a preview mode where they display the luminance channel three times at once: high, mid and low frequencies. It helps me a lot when dialing in noise reduction or sharpening plugins, because I can see better what is happening in certain areas. Is it possible to do this splitting so I can play around with it in VapourSynth Editor and look at just the high-freq or low-freq stuff? Splitting the luminance and chroma channels is easy enough.

Also, on a related note, can someone give me a small example of how to blend two clips with a certain opacity? Say I want the original image, but with 30% of a cleaned image overlaid on top of it. Can that be done with the generic VapourSynth stuff?
misc.AverageFrames([clipa, clipb], [10, 3], scale=1/10) (or scale=1/13 depending on how you want it)

For the frequency stuff you can simply use normal convolutions if you don't need speed or anything spectacular. [-1 -1 -1 -1 8 -1 -1 -1 -1] <- high-pass filter; make it all ones and you have a low-pass filter.

Or describe more exactly what you want.
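
As a rough sketch of that suggestion applied to the luma plane (assuming an 8-bit source clip named clip and the usual core; the bias of 128 recentres the high-pass output around mid-grey so negative values stay visible instead of being clipped to zero):
Code:
y = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)
# all-ones kernel = low-pass (box blur); std.Convolution divides by the matrix sum by default
lowpass = core.std.Convolution(y, matrix=[1, 1, 1, 1, 1, 1, 1, 1, 1])
# the high-pass kernel from the post above, shifted by 128 so the result is viewable
highpass = core.std.Convolution(y, matrix=[-1, -1, -1, -1, 8, -1, -1, -1, -1], bias=128)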

Last edited by Myrsloik; 5th May 2017 at 13:50.
Old 5th May 2017, 14:32   #2513  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
I guess a high-pass and/or low-pass is what I want, yeah... maybe some Levels tweaking to make the stuff visible, but that is something done to taste.

The only way for me to describe it better is with a screenshot: https://snag.gy/ym2UzE.jpg. But high-pass / low-pass can help a lot already.
Old 5th May 2017, 15:04   #2514  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
Found the docs for AverageFrames on GitHub. Not on your site yet :P.

Hmm... I get 'scale must be a positive number' as an error message whenever I use a scale of less than 0.5. scale = 1/10 and scale = 1/13 are both less than 0.5.

Can I just keep scale = 1.0 and then vary the weights between 0.0 and 1.0, as long as my two weights sum to 1.0?

weight a = 0.0, weight b = 1.0 means I get only clip b.
weight a = 1.0, weight b = 0.0 means I get only clip a.
weight a = 0.5, weight b = 0.5 means a perfect 50/50 blend of the two.

So then I could start shifting the weights to 0.4/0.6 or 0.7/0.3 depending on how much I want of each clip... am I right in thinking like this?

Edit: I took a look at the source of AverageFrames.
It seems the weights are integers, and so is the scale, while the documentation lists them both as floats.
Also, the docs say that after multiplying by the weights and summing, the result is multiplied by the scale. Looking at the source, it seems it is divided by the scale (which makes the integers much easier to understand and work with :P).

So weight a = 70, weight b = 30, scale = 100 seems to give me 'a little bit of b', which is what I'm looking for.

Last edited by dipje; 5th May 2017 at 15:13.
Old 5th May 2017, 15:07   #2515  |  Link
Myrsloik
Professional Code Monkey
 
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
Quote:
Originally Posted by dipje
Found the docs for AverageFrames on GitHub. Not on your site yet :P.

Hmm... I get 'scale must be a positive number' as an error message whenever I use a scale of less than 0.5. scale = 1/10 and scale = 1/13 are both less than 0.5.

Can I just keep scale = 1.0 and then vary the weights between 0.0 and 1.0, as long as my two weights sum to 1.0?

weight a = 0.0, weight b = 1.0 means I get only clip b.
weight a = 1.0, weight b = 0.0 means I get only clip a.
weight a = 0.5, weight b = 0.5 means a perfect 50/50 blend of the two.

So then I could start shifting the weights to 0.4/0.6 or 0.7/0.3 depending on how much I want of each clip... am I right in thinking like this?
No, you can't. Integer weights for integer clips. Floats get rounded.


Oops, scale should be 10 or 13. It's the integer to divide by. I should double-check this stuff and fix the documentation...
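
So, as a sketch of the 70/30 overlay asked about earlier, following the corrected semantics (integer weights, scale as the integer divisor; orig and cleaned are placeholder clip names):
Code:
# 70% of the original blended with 30% of the cleaned clip
blend = core.misc.AverageFrames([orig, cleaned], [7, 3], scale=10)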

Last edited by Myrsloik; 5th May 2017 at 15:14.
Old 5th May 2017, 15:37   #2516  |  Link
Joachim Buambeki
Registered User
 
Join Date: May 2010
Location: Germany, Munich
Posts: 49
I can't comment on how to do frequency separation in VS, but I can vouch for it being extremely useful for (very) high-resolution video denoising, and I'm wondering why it hasn't been implemented in any noise removal scripts (or has it?).
Old 5th May 2017, 15:59   #2517  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
I just use it to fine-tune the settings. I split my YUV into three separate channels, then one by one I use this convolution trick to really show the fine-detail noise: things like compression artifacts, blockiness, lines, etc. While playing with the convolution stuff (and to enhance its effect a bit or make it easier to view) I switched to the 5x5 convolution and then play with the divisor value (values >= 0 and <= 1.0) to get something that highlights the artifacts.
Then I place KNLMeansCL before it and tweak the strength parameter until the artifacts are gone. I do that separately for all three channels (although most of the time my U and V channels end up being pretty close or the same). I merge the three clips back into a single YUV clip and view it. If I think it looks too strongly denoised, I use AverageFrames() to blend it with the original until I get something that doesn't lose (too much) detail.
Finally I deband everything slightly with f3kdb to 10 or 12 bits (depending on the project), which adds some dithering noise back in, but I've lost a lot of the banding and compression artifacts of weaker codecs. Render the result to a DPX sequence or something else the colour programs can read, and the clip can go into grading.
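
A rough sketch of that per-plane pass (src stands for the source clip, and the KNLMeansCL h values are made-up starting points to be tuned per plane, as described above):
Code:
# split the YUV clip into its three planes
y, u, v = [core.std.ShufflePlanes(src, planes=p, colorfamily=vs.GRAY) for p in range(3)]
# denoise each plane separately with its own strength
y = core.knlm.KNLMeansCL(y, d=1, a=2, s=4, h=1.2)
u = core.knlm.KNLMeansCL(u, d=1, a=2, s=4, h=0.8)
v = core.knlm.KNLMeansCL(v, d=1, a=2, s=4, h=0.8)
# merge the planes back into one YUV clip
denoised = core.std.ShufflePlanes([y, u, v], planes=[0, 0, 0], colorfamily=vs.YUV)
# blend 70% denoised with 30% original, as in the AverageFrames example earlier
result = core.misc.AverageFrames([denoised, src], [7, 3], scale=10)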
Old 5th May 2017, 19:24   #2518  |  Link
age
Registered User
 
Join Date: Oct 2015
Posts: 54
Quote:
Originally Posted by dipje
I guess a high-pass and/or low-pass is what I want, yeah... maybe some Levels tweaking to make the stuff visible, but that is something done to taste.

The only way for me to describe it better is with a screenshot: https://snag.gy/ym2UzE.jpg. But high-pass / low-pass can help a lot already.
It seems to be the same frequency separation described in this article for GIMP:
https://pixls.us/articles/skin-retou...let-decompose/

We need "grain extract" and "grain merge":
https://docs.gimp.org/2.9/en/gimp-co...yer-modes.html
Code:
grain_extract = core.std.Expr(clips=[lower_layer, upper_layer], expr=["x y - 128 +"])

grain_merge = core.std.Expr(clips=[lower_layer, upper_layer], expr=["x y + 128 -"])
Code:
# c is the source clip
low_frequency = core.std.Convolution(c, matrix=[1, 1, 1, 1, 1, 1, 1, 1, 1])
high_frequency = core.std.Expr(clips=[c, low_frequency], expr=["x y - 128 +"])
f = core.std.Expr(clips=[low_frequency, high_frequency], expr=["x y + 128 -"])
Old 5th May 2017, 19:59   #2519  |  Link
Joachim Buambeki
Registered User
 
Join Date: May 2010
Location: Germany, Munich
Posts: 49
What would be an approximation of the mid frequencies? In my experience, the lower the frequency band, the more aggressive you can be with the (temporal) denoising (with the odd exception).
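
One common approximation, sketched in the same style as age's snippet above (not from this thread; c is again the source clip and the kernel sizes are arbitrary choices): take the difference of two low-passes of different radius and recentre it around the neutral value, giving a band-pass / mid band.
Code:
small_blur = core.std.Convolution(c, matrix=[1] * 9)   # 3x3 box blur
large_blur = core.std.Convolution(c, matrix=[1] * 25)  # 5x5 box blur
# mid band = small blur minus large blur, offset by 128 for an 8-bit clip
mid_frequency = core.std.Expr(clips=[small_blur, large_blur], expr=["x y - 128 +"])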

Last edited by Joachim Buambeki; 5th May 2017 at 21:59.
Old 6th May 2017, 00:38   #2520  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 268
@age, thanks for the input!

I tweaked it a bit, though. I'm working in 16 bit, so I changed the '128' in your example to 32768, and I replaced the simple 3x3 convolution with a 5x5 Gaussian blur kernel.

After splitting the image into 'hfreq' and 'lfreq' this way, I split 'hfreq' into separate Y, U and V planes. I apply KNLMeansCL to one of the planes, and afterwards I boost the contrast quite a bit (on the hfreq planes at least) using std.Limiter / std.Levels. This way I can really see the high-frequency filth in the image. I increase KNLMeansCL's strength bit by bit until it looks good. I repeat the process for the U and V planes, which can normally do with less KNLMeansCL strength. I combine the Y, U and V clips back into 'hfreq', then do exactly the same with the lfreq clip (split into Y, U and V, tweak KNLMeansCL with a contrast boost, turn the contrast boost off and combine them back into one 'lfreq' clip). I actually settled on less strength and a smaller search window for the low-frequency content, since I'm mostly dealing with compression artifacts and banding issues rather than the noise of the image itself.

Then I use your last 'f' line to combine my lfreq and hfreq back into a single clip, run ContraSharpening against the original, apply f3kdb with light settings to dither down to 10 bit, and presto.
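
A sketch of the 16-bit split described above (c16 stands for the 16-bit source clip; the exact 5x5 kernel weights are an assumption of what a Gaussian-ish blur might look like):
Code:
gauss5 = [1,  4,  6,  4, 1,
          4, 16, 24, 16, 4,
          6, 24, 36, 24, 6,
          4, 16, 24, 16, 4,
          1,  4,  6,  4, 1]
lfreq = core.std.Convolution(c16, matrix=gauss5)
# 32768 is the 16-bit neutral value, replacing the 128 used for 8 bit
hfreq = core.std.Expr(clips=[c16, lfreq], expr=["x y - 32768 +"])
# ...denoise hfreq and lfreq separately, then recombine with grain merge:
recombined = core.std.Expr(clips=[lfreq, hfreq], expr=["x y + 32768 -"])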