Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
|
|
13th August 2016, 17:33 | #1 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Plum, blind deconvolution enhanced by pixel/block matching
https://github.com/IFeelBloated/Plum...master/Plum.py
Plum gives an alternative kind of sharpening, the kernel is an actual deconvolution filter instead of USM, which reverses blurring with circle shaped PSFs Plum produces no ringing (but might enhance the existing ringing), almost no aliasing and does not enhance the noise much (will probably enhance static noise), and it's also not xylographing. some previous discussion here: http://forum.doom9.org/showthread.php?t=173756 the non-linear amplifying expression was copied from finesharp(credits to didee) Plum could also do some "plastic surgeries" for DVD videos and make them look as close to the master tape as possible it's impossible for Plum to restore the DVD video back to its exact mastertape state, that is, Plum tries to recover the general "mastertape texture" (image should "look" extremely delicate and fragile and fine, not "dumb" and coarse) instead of trying to recover every lost detail. the underlying philosophy is similar to generative adversarial network: why even bother trying to reconstruct the ground truth (which is impossible anyways) when you could just fool everyone's eyes with the fake stuff as long as there's no reference to the ground truth? some discussion: https://forum.doom9.org/showthread.p...34#post1810234 Master tape DVD Release DVD + Plum Code:
ref = Plum.Basic(clip)
clip = Plum.Final([clip, ref], [Plum.Super(clip), Plum.Super(ref)])
Last edited by feisty2; 24th June 2017 at 09:22. |
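As a rough illustration of the kind of non-linear amplification borrowed from FineSharp (a hypothetical sketch only, not the actual expression in Plum.py or finesharp.py): small pixel differences get boosted proportionally more than large ones, so fine texture is sharpened without strong edges blowing out.

```python
import math

def nonlinear_amp(diff, strength=1.0, knee=4.0):
    # Hypothetical stand-in for a FineSharp-style amplification curve:
    # a compressive power law below the knee (slope > 1 near zero) and
    # a plain linear gain above it, continuous at the knee.
    sign = 1.0 if diff >= 0 else -1.0
    d = abs(diff)
    if d < knee:
        return sign * strength * knee * math.sqrt(d / knee)
    return sign * strength * d
```

With knee=4, a difference of 1 is amplified to 2 while a difference of 4 stays at 4, i.e. the smallest details get the largest relative boost.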
17th August 2016, 17:01 | #6 | Link |
Pajas Mentales...
Join Date: Dec 2004
Location: Spanishtán
Posts: 497
|
It's not a problem with GCC
Code:
vcfreq.cpp:45:21: fatal error: windows.h: No such file or directory
Code:
└───╼ locate windows.h
>snip
/usr/include/wine/windows/windows.h
>snip
Last edited by sl1pkn07; 17th August 2016 at 17:04. |
17th August 2016, 17:07 | #7 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
maybe you could just replace it with <cstdlib> or.. |
|
17th August 2016, 18:55 | #8 | Link |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,579
|
It uses LoadLibrary to load FFTW. That's the main reason it won't compile on Linux. But I fixed that.
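For reference, a portable plugin resolves the shared library at runtime with dlopen-style loading instead of the Windows-only LoadLibrary. A minimal Python sketch of that pattern (using libm as a stand-in, since FFTW's library file names vary by platform and are an assumption here):

```python
import ctypes
import ctypes.util

def load_shared(candidates):
    # Try each candidate name until one loads; on Windows this would be
    # a DLL name, on Linux/macOS an .so/.dylib resolved by the loader.
    for name in candidates:
        if not name:
            continue
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    raise OSError("could not load any of: %r" % (candidates,))

# libm stands in for FFTW; for FFTW the names might be e.g.
# "libfftw3f-3.dll" (Windows) or "libfftw3f.so.3" (Linux) -- assumptions.
libm = load_shared([ctypes.util.find_library("m"), "libm.so.6", "libm.dylib"])
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
```

The same candidate-list trick is what lets one code path cover all three platforms.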
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
18th August 2016, 11:42 | #9 | Link |
Registered User
Join Date: May 2016
Posts: 7
|
Python exception: DFTTest: invalid entry in sigma string
Hi feisty2,
I tried your script, but it prompt me this error through vspipe Code:
vspipe -y lo.gatto.vpy .
Script evaluation failed:
Python exception: DFTTest: invalid entry in sigma string
Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 1491, in vapoursynth.vpy_evaluateScript (src/cython/vapoursynth.c:26905)
  File "lo.gatto.vpy", line 18, in <module>
    v = Plum.Final([v, deconv, conv], [sup, supdeconv, supconv])
  File "/usr/lib/python3.5/site-packages/Plum.py", line 315, in Final
    clip = internal.final(src, super, radius, pel, sad, constants, attenuate_window, attenuate, cutoff)
  File "/usr/lib/python3.5/site-packages/Plum.py", line 163, in final
    dif = DFTTest(dif, sbsize=attenuate_window, sstring=attenuate, **dfttest_args)
  File "src/cython/vapoursynth.pyx", line 1383, in vapoursynth.Function.__call__ (src/cython/vapoursynth.c:25212)
vapoursynth.Error: DFTTest: invalid entry in sigma string

sample clip: https://www.dropbox.com/s/w5plhz6ob7...mple.mpeg?dl=0
Mediainfo output
Code:
General
Complete name : sample.mpeg
Format : MPEG-PS
File size : 3.04 MiB
Duration : 4 s 840 ms
Overall bit rate mode : Variable
Overall bit rate : 5 267 kb/s

Video
ID : 224 (0xE0)
Format : MPEG Video
Format version : Version 2
Format profile : Main@Main
Format settings, BVOP : Yes
Format settings, Matrix : Default
Format settings, GOP : M=3, N=13
Format settings, picture structure : Frame
Duration : 4 s 840 ms
Bit rate mode : Variable
Bit rate : 5 162 kb/s
Maximum bit rate : 8 500 kb/s
Width : 720 pixels
Height : 576 pixels
Display aspect ratio : 2.40:1
Frame rate : 25.000 FPS
Standard : PAL
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Interlaced
Scan order : Top Field First
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.498
Time code of first frame : 11:01:25:07
Time code source : Group of pictures header
GOP, Open/Closed : Open
Stream size : 2.98 MiB (98%)
Color primaries : BT.601 PAL
Transfer characteristics : BT.470 System B, BT.470 System G
Matrix coefficients : BT.601

My VS script
Code:
import vapoursynth as vs
import Plum

core = vs.get_core()
v = core.d2v.Source('output.mpeg.d2v')
v = v[92173]
v = core.fmtc.bitdepth(v, flt = 1)
v = core.yadifmod.Yadifmod(v, edeint = core.nnedi3.nnedi3(v, 1), order = 1)
v = core.fmtc.resample(v, w = 1024, h = 576, kernel = 'sinc', taps = 128)
deconv = Plum.Basic(v)
conv = Plum.Basic(v, mode="convolution")
sup = Plum.Super([v, None])
supdeconv = Plum.Super([v, deconv])
supconv = Plum.Super([conv, None])
v = Plum.Final([v, deconv, conv], [sup, supdeconv, supconv])
v = core.fmtc.bitdepth(v, bits = 8, dmode = 8, patsize = 32)
v.set_output() |
18th August 2016, 13:27 | #10 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
and your video is progressive, DO NOT deinterlace it |
|
18th August 2016, 13:53 | #11 | Link | ||
Registered User
Join Date: May 2016
Posts: 7
|
Quote:
Code:
https://github.com/HomeOfVapourSynthEvolution/VapourSynth-DFTTest
this is the most recently updated branch, isn't it? Quote:
I thought so too, but I always let the experts have the final word. Thanks. I will edit this message if it runs with no errors. Thanks again, feisty2. EDIT1: Nothing, same error. Last edited by aculnaig; 18th August 2016 at 14:06. |
||
18th August 2016, 14:15 | #12 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
the sample generally lacks textures and details and looks plain flat, and deconvolution does not work on material like that. Plum is designed for videos filled with rich and soft textures/details, which it then makes sharp and delicate; there's just no texture at all in that sample, so... |
|
18th August 2016, 14:26 | #13 | Link |
Registered User
Join Date: May 2016
Posts: 7
|
Ok, thanks.
I wanted to do some tests with that movie, but I always end up doing things that don't fit! Sigh... The film is this, btw... Code:
https://en.wikipedia.org/wiki/Il_commissario_Lo_Gatto Last edited by aculnaig; 18th August 2016 at 14:28. |
21st August 2016, 10:39 | #14 | Link |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
I totally hate the USM "boom" look, so this algorithm looks interesting to me. However, I think FineSharp already does a pretty nice job of avoiding the USM look, so the first thing I did was look at the comparison images. But for some reason, the FineSharp image has a *much* stronger sharpening strength applied to it than the Plum image in the first post of this thread, which makes it hard to judge which really looks better. Would you mind increasing the Plum sharpening strength for the screenshots in the first post to achieve the same subjective sharpening strength as FineSharp? That would make it *much* easier to see whether Plum really improves on FineSharp in quality.
Also, I prefer testing with high-quality sources instead of ultra-blurry SD sources. Maybe you could add comparison screenshots for this image? FWIW, when comparing sharpening algos I usually prefer using a rather high sharpening strength, higher than I would use in real life, because that makes it more obvious what the algorithms are really doing. Thank you!! |
21st August 2016, 10:55 | #15 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
Plum looks much milder because it ONLY sharpens the most delicate parts of the image (it makes high frequencies even higher); it does not amplify low/mid frequencies at all, while FineSharp still amplifies mid frequencies.. maybe I could lower the strength of FineSharp to make them easier to compare.. |
21st August 2016, 11:04 | #16 | Link |
Registered Developer
Join Date: Sep 2006
Posts: 9,140
|
Well, the same sharpening "number" in different algos doesn't necessarily produce the same subjective sharpening strength. I'd like the images to produce the same *perceived* sharpness level, and then compare which I like best. This is how I usually compare sharpening algos, anyway. Usually when I do that, any USM like algo looks terribly bloated/artificial, while algos like FineSharp look much more natural.
I understand I could easily do this comparison myself, but I'm always having a hard time getting AviSynth/VapourSynth scripts with lots of dependencies to work... Maybe you could just sharpen my preferred 1080p image with all dials turned up to "overload" with Plum, then I can compare your final result with the best I can achieve with my own preferred algos? That would be awesome! |
21st August 2016, 11:12 | #17 | Link | |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
Quote:
Plum Code:
clp = finesharp.sharpen(clp,sstr=0.86) |
|
21st August 2016, 11:36 | #18 | Link |
I'm Siri
Join Date: Oct 2012
Location: void
Posts: 2,633
|
and I cannot sharpen your image because Plum works in both the spatial and temporal dimensions; I'm gonna need a video sample, and it should be no more than 720p
Plum is an advanced and very complex algorithm, and I don't think my shitty i7-4790K could handle Plum at 1080p. Below is what Plum looks like with an insanely high strength Code:
deconv = Plum.Basic(clp)
conv = Plum.Basic(clp, mode="convolution")
sup = Plum.Super([clp, None])
supdeconv = Plum.Super([clp, deconv])
supconv = Plum.Super([conv, None])
clp = Plum.Final([clp, deconv, conv], [sup, supdeconv, supconv], strength=3.2, cutoff=16) |
21st August 2016, 12:20 | #20 | Link |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,579
|
I had a look at your script and saw some odd things...
You invoke Expr several times here in series with max. Expr can take up to 25 inputs, and combining several calls into one will be much faster: make the input clips something like [clip, SelectEvery(src, radius * 2 + 1, 1), SelectEvery(src, radius * 2 + 1, 2), ...] and use the expr "x y max z max a max b max ...". Maybe 3% more annoying to generate the string for, but definitely faster. Code:
def extremum_multi(src, radius, mode):
    core = vs.get_core()
    SelectEvery = core.std.SelectEvery
    Expr = core.std.Expr
    clip = SelectEvery(src, radius * 2 + 1, 0)
    for i in range(1, radius * 2 + 1):
        clip = Expr([clip, SelectEvery(src, radius * 2 + 1, i)], "x y " + mode)
    return clip

From the shrink function:
Code:
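The string generation described above can be sketched in pure Python like this (assuming Expr's clip-variable naming convention of x, y, z followed by a, b, c, ...; extremum_expr is a hypothetical helper name):

```python
def extremum_expr(num_clips, mode):
    # Build one RPN expression such as "x y max z max a max" so that a
    # single Expr call replaces a chain of pairwise Expr calls.
    names = ["x", "y", "z"] + [chr(c) for c in range(ord("a"), ord("w") + 1)]
    if not 2 <= num_clips <= len(names):
        raise ValueError("need between 2 and %d input clips" % len(names))
    parts = [names[0]]
    for name in names[1:num_clips]:
        parts.append("%s %s" % (name, mode))
    return " ".join(parts)
```

extremum_expr(4, "max") yields "x y max z max a max", which would then be passed together with the clip list [clip, SelectEvery(src, radius * 2 + 1, 1), ...] to a single Expr invocation.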
DDD = Expr([DD, convDD], ["x y - x 0.5 - * 0 < 0.5 x y - abs x 0.5 - abs < x y - 0.5 + x ? ?"])
dif = MakeDiff(dif, DDD)
convD = Convolution(dif, **conv_args)
dif = Expr([dif, convD], ["y 0.5 - abs x 0.5 - abs > y 0.5 ?"])
clip = MergeDiff(src, dif)
Code:
clamped = helpers.clamp(averaged, bright_limit, dark_limit, 0.0, 0.0)
amplified = Expr([clamped, src[0]], expression)
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |