Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
2nd May 2017, 22:31 | #2501 | Link |
Registered User
Join Date: Jun 2006
Posts: 452
|
Thanks to all who contributed to finding and solving my crash problem with fmtconv. I could not use something like mawen1250's "nnedi3_resample.py" script because of the frequent crashes.
This script is also used in HAvsFunc. For around 6 months I have been pulling my hair out because of the frequent/random crashes with such scripts. I always thought it must be my system, because no-one else seemed to have this problem. With the recompiled version, all crashes are gone!! I've tried to recompile fmtc myself on MinGW-w64/MSYS2, but I got an error in Interlocked.hpp : Code:
config.status: creating Makefile
config.status: executing depfiles commands
config.status: executing libtool commands
  CXX      ../../src/main.lo
  CXX      ../../src/fmtc/Bitdepth.lo
In file included from ./../../src/conc/Interlocked.h:132:0,
                 from ./../../src/conc/AtomicPtr.hpp:27,
                 from ./../../src/conc/AtomicPtr.h:117,
                 from ./../../src/conc/CellPool.h:30,
                 from ./../../src/conc/ObjPool.h:44,
                 from ../../src/fmtc/Bitdepth.h:30,
                 from ../../src/main.cpp:18:
./../../src/conc/Interlocked.hpp: In static member function 'static void conc::Interlocked::cas(conc::Interlocked::Data128&, volatile Data128&, const Data128&, const Data128&)':
./../../src/conc/Interlocked.hpp:349:2: error: '::InterlockedCompareExchange128' has not been declared
  ::InterlockedCompareExchange128 (
  ^~
In file included from ./../../src/conc/Interlocked.h:132:0,
                 from ./../../src/conc/AtomicPtr.hpp:27,
                 from ./../../src/conc/AtomicPtr.h:117,
                 from ./../../src/conc/CellPool.h:30,
                 from ./../../src/conc/ObjPool.h:44,
                 from ./../../src/fmtc/Bitdepth.h:30,
                 from ../../src/fmtc/Bitdepth.cpp:28:
./../../src/conc/Interlocked.hpp: In static member function 'static void conc::Interlocked::cas(conc::Interlocked::Data128&, volatile Data128&, const Data128&, const Data128&)':
./../../src/conc/Interlocked.hpp:349:2: error: '::InterlockedCompareExchange128' has not been declared
  ::InterlockedCompareExchange128 (
  ^~
./../../src/conc/Interlocked.hpp:351:4: error: 'excg_hi' was not declared in this scope
    excg_hi,
    ^~~~~~~
./../../src/conc/Interlocked.hpp:351:4: error: 'excg_hi' was not declared in this scope
    excg_hi,
    ^~~~~~~
./../../src/conc/Interlocked.hpp:352:4: error: 'excg_lo' was not declared in this scope
    excg_lo,
    ^~~~~~~
./../../src/conc/Interlocked.hpp:352:4: error: 'excg_lo' was not declared in this scope
    excg_lo,
    ^~~~~~~
Code:
cd build/unix
./autogen.sh
./configure --prefix=/local64 --{build,host,target}=x86_64-w64-mingw32
make -j4
Last edited by Pat357; 2nd May 2017 at 22:45. |
3rd May 2017, 11:21 | #2502 | Link |
unsigned int
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
|
Probably delete https://github.com/EleonoreMizo/fmtc....hpp#L349-L354
and replace with https://github.com/EleonoreMizo/fmtc...ocked.hpp#L360 Also open an issue.
__________________
Buy me a "coffee" and/or hire me to write code! |
3rd May 2017, 16:12 | #2503 | Link |
Registered User
Join Date: Jun 2006
Posts: 452
|
Thanks a lot !
Yup, that was the first stopper... Now I got : Code:
  CXX      ../../src/fmtcl/TransOpErimm.lo
  CXX      ../../src/fmtcl/TransOpFilmStream.lo
  CXX      ../../src/fmtcl/TransOpLinPow.lo
  CXX      ../../src/fmtcl/TransOpLogC.lo
  CXX      ../../src/fmtcl/TransOpLogTrunc.lo
  CXX      ../../src/fmtcl/TransOpPow.lo
  CXX      ../../src/fmtcl/TransOpSLog.lo
  CXX      ../../src/fmtcl/TransOpSLog3.lo
  CXX      ../../src/fmtcl/VoidAndCluster.lo
  CXX      ../../src/fstb/CpuId.lo
  CXX      ../../src/fstb/fnc.lo
{standard input}: Assembler messages:
{standard input}:46: Error: unsupported instruction `mov'
{standard input}:96: Error: unsupported instruction `mov'
{standard input}:113: Error: unsupported instruction `mov'
{standard input}:125: Error: unsupported instruction `mov'
{standard input}:161: Error: unsupported instruction `mov'
make: *** [Makefile:1084: ../../src/fstb/CpuId.lo] Error 1
make: *** Waiting for unfinished jobs.. |
3rd May 2017, 16:47 | #2504 | Link |
unsigned int
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
|
Maybe this works: https://dpaste.de/sVR3
__________________
Buy me a "coffee" and/or hire me to write code! |
3rd May 2017, 20:30 | #2505 | Link |
Registered User
Join Date: Jun 2006
Posts: 452
|
Yep, compiling now completes without errors.
But : the generated .DLL is unfortunately not working :
- as long as I use "VSPIPE -i", everything seems to be OK. No errors, and the clip properties are correct (I'm using fmtc for scaling & bitdepth conversion).
- if I try to feed the stream to ffplay, mplayer or MPV, it looks like no frames are produced. FFplay just sits there; no errors, and no playback window is opened.
- "VSPIPE -p" behaves the same as the players : no errors, and the progress counter doesn't even start. No crashes either.
Did you ever compile it yourself ? Last edited by Pat357; 3rd May 2017 at 22:05. |
3rd May 2017, 20:48 | #2506 | Link | |
unsigned int
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
|
Quote:
It sounds like there is a deadlock. Do you have avstp.dll someplace where it might be found, like in System32 or the same folder as libfmtconv.dll?
__________________
Buy me a "coffee" and/or hire me to write code! |
|
3rd May 2017, 21:05 | #2507 | Link | |
Registered User
Join Date: Jun 2006
Posts: 452
|
Quote:
Would it also work in VS ? I used it before in AVS, but I found increasing the number of threads in Set's AVS 2.6 MT more stable than this plug-in. Last edited by Pat357; 3rd May 2017 at 22:06. Reason: typo |
|
4th May 2017, 15:49 | #2508 | Link | |
unsigned int
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
|
Quote:
__________________
Buy me a "coffee" and/or hire me to write code! |
|
4th May 2017, 17:20 | #2509 | Link | |
Registered User
Join Date: Jun 2006
Posts: 452
|
Quote:
Code:
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255,128,128], format=vs.YUV420P8)
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=960, h=720, interlaced=False, interlacedd=False, cpuopt=0)
clip = core.resize.Point(clip=clip, format=vs.YUV420P8)
clip.set_output()
If I add this right before the fmtc line : Code:
clip = core.resize.Point(clip=clip, format=vs.YUV420P16)
BTW, how do I create a blank or colored clip with >8-bit data ? I've tried Code:
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[255,128,128], format=vs.YUV420P16)
clip = core.std.BlankClip(width=1920, height=1080, fpsnum=24000, fpsden=1001, color=[1,0.5,0.5], format=vs.YUV420P16)
I hope it is not implemented in a way that you need to double the values for each extra bit above 8 .... that would be confusing and impractical. |
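As it turns out (and as a later post in this thread ends up using), integer pixel values in VapourSynth's integer formats do scale with bit depth: neutral chroma 128 at 8 bit corresponds to 32768 at 16 bit. A minimal pure-Python sketch of that shift-based scaling; the helper name `scale_value` is mine for illustration, not a VapourSynth API:

```python
def scale_value(v, from_bits=8, to_bits=16):
    """Scale an integer pixel value between bit depths by bit-shifting,
    the convention used for e.g. neutral chroma in integer YUV formats."""
    if to_bits >= from_bits:
        return v << (to_bits - from_bits)
    return v >> (from_bits - to_bits)

# 8-bit neutral chroma 128 becomes 32768 in a 16-bit clip, and back again
print(scale_value(128, 8, 16))    # 32768
print(scale_value(32768, 16, 8))  # 128
```

So a 16-bit grey BlankClip would indeed want something like `color=[32768, 32768, 32768]` rather than 8-bit-style values.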
|
4th May 2017, 17:21 | #2510 | Link | |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
|
Quote:
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet Last edited by Myrsloik; 4th May 2017 at 19:05. |
|
5th May 2017, 12:57 | #2511 | Link |
Registered User
Join Date: Oct 2014
Posts: 268
|
Something else where I'm thinking "let's just ask, who knows....":
Is there a way to _separate_ high-frequency content from the mid and low-frequency content? Commercial noise-reduction programs/plugins sometimes have a preview mode where they display the luminance channel three times at once: high freq, mid freq, low freq. It helps (me at least) a lot in dialing in noise-reduction or sharpening plugins, by making it easier to see what is happening in certain areas. Is it possible to do this splitting so I can tweak around with it in VapourSynth Editor and look at just the high-freq or low-freq stuff? Splitting the luminance and chroma channels is easy enough.

Also, on a related note, can someone give me a small example of how to blend two clips with a certain opacity? Like: I want the original image, but with 30% of a cleaned image overlaid on top. Can that be done with the generic VapourSynth stuff? |
5th May 2017, 13:37 | #2512 | Link | |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
|
Quote:
For the frequency stuff you can simply use normal convolutions if you don't need speed or something spectacular. [-1 -1 -1 -1 8 -1 -1 -1 -1] <- highpass filter, make it all ones and you have a lowpass filter. Or describe more exactly what you want.
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet Last edited by Myrsloik; 5th May 2017 at 13:50. |
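The two kernels in the post above are exact complements when both are divided by 9, so the highpass and lowpass bands sum back to the original pixel. A pure-Python sketch (plain lists, no clip API) illustrating this on the interior pixels of a tiny image:

```python
def conv3x3(img, kernel, divisor):
    """Apply a 3x3 kernel to the interior pixels of a 2D list-of-lists image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky * 3 + kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc / divisor
    return out

img = [[10, 10, 10, 10],
       [10, 10, 90, 90],
       [10, 10, 90, 90],
       [10, 10, 10, 10]]

lowpass  = conv3x3(img, [1, 1, 1, 1, 1, 1, 1, 1, 1], 9)              # box blur
highpass = conv3x3(img, [-1, -1, -1, -1, 8, -1, -1, -1, -1], 9)      # residual

# the two bands sum back to the original interior pixels
for y in (1, 2):
    for x in (1, 2):
        assert abs(lowpass[y][x] + highpass[y][x] - img[y][x]) < 1e-9
```

In clip terms that is why the split is lossless: the highpass band is just "pixel minus box blur", so adding the blur back recovers the source.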
|
5th May 2017, 14:32 | #2513 | Link |
Registered User
Join Date: Oct 2014
Posts: 268
|
I guess a highpass and/or lowpass is what I want, yeah... maybe some Levels tweaking to make the stuff visible, but that is something 'done to taste'.
The only way for me to describe it better is with a screenshot: https://snag.gy/ym2UzE.jpg. But highpass / lowpass can already help a lot.
5th May 2017, 15:04 | #2514 | Link |
Registered User
Join Date: Oct 2014
Posts: 268
|
Found the docs of AverageFrames on GitHub. Not on your site yet :P.
Hmm... 'scale must be a positive number' is the error message I get whenever I have a scale of less than 0.5. scale = 1/10 or scale = 1/13 both give less than 0.5.

Can I just keep scale = 1.0 and then change the weights around between 0.0 and 1.0, as long as my two weights total 1.0? weight a = 0.0, weight b = 1.0 means get only clip b; weight a = 1.0, weight b = 0.0 means get only clip a; weight a = 0.5, weight b = 0.5 means an even blend between the two. Then I can start shifting the weights to 0.4/0.6 or 0.7/0.3 depending on how much I want of which clip... am I right in thinking like this?

edit: Took a look at the source of AverageFrames. It seems 'weights' and 'scale' are both integers, while the documentation lists them both as floats. Also, the docs say that after multiplying by the weights and summing, the result is multiplied by the scale; looking at the source it seems it's divided by the scale (which makes the integers much easier to understand and work with :P). So weight a = 70, weight b = 30, scale = 100 seems to give me 'a little bit of b', which is what I'm looking for. Last edited by dipje; 5th May 2017 at 15:13. |
5th May 2017, 15:07 | #2515 | Link | |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,548
|
Quote:
Ooops, scale should be 10 or 13. It's the integer to divide by. I should double check this stuff and fix the documentation...
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet Last edited by Myrsloik; 5th May 2017 at 15:14. |
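So the per-pixel model dipje arrived at is "multiply by integer weights, sum, divide by the integer scale". A pure-Python sketch of that arithmetic (the rounding mode here is an assumption, not taken from the AverageFrames source):

```python
def average_frames(pixels, weights, scale):
    """Weighted average of one pixel across clips: integer weights,
    summed, then divided by the integer scale (rounding is an assumption)."""
    return round(sum(w * p for w, p in zip(weights, pixels)) / scale)

a, b = 200, 100
print(average_frames([a, b], [70, 30], 100))  # 170: mostly clip a, a little of b
print(average_frames([a, b], [50, 50], 100))  # 150: an even blend
```

Which is why weights 70/30 with scale 100 behaves like a 30% opacity overlay of clip b on clip a.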
|
5th May 2017, 15:37 | #2516 | Link |
Registered User
Join Date: May 2010
Location: Germany, Munich
Posts: 49
|
I can't comment on how to do frequency separation in VS, but I can vouch for it being extremely useful for (very) high-resolution video denoising, and I'm wondering why this hasn't been implemented in any noise removal scripts (or has it?).
|
5th May 2017, 15:59 | #2517 | Link |
Registered User
Join Date: Oct 2014
Posts: 268
|
I just use it to fine-tune the settings. I split my YUV into 3 separate channels, then one by one I use this convolution trick to really show the fine-detail noise: things like compression artifacts, blockiness, lines, etc. In playing with the convolution stuff (and to enhance its effect a bit or make it easier to view) I switched to the 5x5 convolution and play with the divisor value (between 0 and 1.0) to get something that highlights the artifacts.
Then I place KNLMeansCL before it and tweak the strength parameter until the artifacts are gone. I do that separately for all 3 channels (although most of the time my U and V channels end up being pretty close / the same). I merge the 3 clips back into a single YUV clip and view it. If I think it looks too strongly denoised, I use AverageFrames() to blend it with the original until I get something that doesn't lose (too much) detail. Then I deband everything slightly with f3kdb to 10 or 12 bits (depending on the project), which adds some dithering noise back in, but I've lost a lot of the banding and compression artifacts of 'weaker codecs'. Render the result to a DPX sequence or something else used by colour programs, and the clip can go into grading. |
5th May 2017, 19:24 | #2518 | Link | |
Registered User
Join Date: Oct 2015
Posts: 54
|
Quote:
https://pixls.us/articles/skin-retou...let-decompose/
We need "grain extract" and "grain merge": https://docs.gimp.org/2.9/en/gimp-co...yer-modes.html Code:
grain_extract = core.std.Expr(clips=[lower_layer, upper_layer], expr=["x y - 128 +"])
grain_merge = core.std.Expr(clips=[lower_layer, upper_layer], expr=["x y + 128 -"])
Code:
# c = source clip
low_frequency = core.std.Convolution(c, matrix=[1, 1, 1, 1, 1, 1, 1, 1, 1])
high_frequency = core.std.Expr(clips=[c, low_frequency], expr=["x y - 128 +"])
f = core.std.Expr(clips=[low_frequency, high_frequency], expr=["x y + 128 -"])
|
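The Expr lines in Age's post implement the GIMP formulas `lower - upper + 128` and `lower + upper - 128`. A single-pixel pure-Python sketch showing that grain merge exactly undoes grain extract (clamping aside), which is why `f` reconstructs the source clip:

```python
def grain_extract(lower, upper, neutral=128):
    # GIMP-style grain extract: lower - upper + neutral, clamped to 8 bit
    return max(0, min(255, lower - upper + neutral))

def grain_merge(lower, upper, neutral=128):
    # GIMP-style grain merge: lower + upper - neutral, clamped to 8 bit
    return max(0, min(255, lower + upper - neutral))

orig, low = 137, 120                   # a pixel and its blurred (lowpass) value
high = grain_extract(orig, low)        # 145: the "grain" around the 128 midpoint
assert grain_merge(low, high) == orig  # merging the two bands recovers the pixel
```

The only lossy spot is the clamp: grain that pushes past 0 or 255 in the extracted band can't be recovered, which is one reason to do this at a higher bit depth.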
5th May 2017, 19:59 | #2519 | Link |
Registered User
Join Date: May 2010
Location: Germany, Munich
Posts: 49
|
What would be an approximation of the mid frequencies? In my experience, you can be increasingly aggressive with the (temporal) denoising the lower the frequency band is (with the odd exception).
Last edited by Joachim Buambeki; 5th May 2017 at 21:59. |
6th May 2017, 00:38 | #2520 | Link |
Registered User
Join Date: Oct 2014
Posts: 268
|
@Age, Thanks for the input!
I tweaked it a bit though. I'm working in 16 bit, so I changed the '128' in your example to 32768, and I replaced the simple 3x3 convolution with a 5x5 gaussian blur kernel.

After splitting the image into 'hfreq' and 'lfreq' this way, I split 'hfreq' into separate Y, U and V planes. I apply KNLMeansCL on one of the planes, then boost the contrast quite a bit (on the hfreq ones at least) using std.Limiter / std.Levels. This way I can really see the high-freq filth in the image. I increase KNLMeansCL's strength bit by bit until it looks good. I repeat the process for the U and V planes, which can normally do with less KNLMeansCL strength.

I combine the Y, U and V clips back into 'hfreq', then do exactly the same with the 'lfreq' clip (split into Y, U, V; tweak KNLMeansCL with a contrast boost; turn off the contrast boost and combine them back into one 'lfreq' clip). I actually settled on less strength and a smaller search window for the low-frequency content (since I'm mostly working on compression artifacts and banding issues, not the noise of the image itself).

Then I use your last 'f' line to combine my lfreq and hfreq back into a single clip, use ContraSharpening against the original, f3kdb with slight settings to dither down to 10 bit, and presto. |
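dipje mentions swapping the 3x3 box for a 5x5 gaussian blur kernel. One common choice (an assumption on my part, not necessarily the exact kernel used) is the binomial approximation: the outer product of the row [1, 4, 6, 4, 1] with itself, which sums to 256 and so divides cleanly:

```python
# Outer product of the binomial row [1, 4, 6, 4, 1] with itself gives a
# separable 5x5 gaussian-like kernel (divisor 256), suitable as the
# 25-element matrix argument of something like std.Convolution.
row = [1, 4, 6, 4, 1]
kernel = [a * b for a in row for b in row]  # row-major 5x5

print(kernel[0:5])    # [1, 4, 6, 4, 1]
print(kernel[10:15])  # [6, 24, 36, 24, 6]  <- middle row, peak 36 at the centre
print(sum(kernel))    # 256
```

Because the kernel is separable, the same result can also be had from two 1D passes with [1, 4, 6, 4, 1], which is cheaper on large frames.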