Doom9's Forum > Capturing and Editing Video > VapourSynth
Old 13th August 2016, 17:33   #1  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Plum, blind deconvolution enhanced by pixel/block matching

https://github.com/IFeelBloated/Plum...master/Plum.py
Plum offers an alternative kind of sharpening: the kernel is an actual deconvolution filter instead of USM, which reverses blurring with circle-shaped PSFs.
Plum produces no ringing (though it might enhance existing ringing), almost no aliasing, and does not amplify noise much (it will probably enhance static noise); it also doesn't give the image that xylographed look.
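For readers unfamiliar with the idea, here is a rough, self-contained sketch of what "reversing blurring with a circle-shaped PSF" means. This is plain Wiener deconvolution in NumPy, not Plum's actual kernel; the disk radius and regularization value are made up for the demo:

```python
import numpy as np

def disk_psf(radius, size):
    """Circle-shaped point spread function (PSF), normalized to sum to 1."""
    y, x = np.ogrid[-(size // 2):size - size // 2, -(size // 2):size - size // 2]
    psf = ((x * x + y * y) <= radius * radius).astype(float)
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, reg=1e-3):
    """Frequency-domain deconvolution: F = conj(H) * G / (|H|^2 + reg)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))  # PSF centered at the origin
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + reg)))

# Demo: blur a random "image" with a disk PSF, then partially reverse it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = disk_psf(2, 64)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
# The restored image is much closer to the original than the blurred one,
# except at frequencies where the disk's transfer function is near zero.
print(np.abs(blurred - img).mean(), np.abs(restored - img).mean())
```

The regularization term is why deconvolution cannot recover every lost detail: where the PSF's frequency response is near zero, the information is simply gone, which matches the "master-tape texture, not ground truth" framing below.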
some previous discussion here:
http://forum.doom9.org/showthread.php?t=173756

The non-linear amplifying expression was copied from FineSharp (credits to Didée).

Plum can also perform some "plastic surgery" on DVD videos and make them look as close to the master tape as possible.
It's impossible for Plum to restore a DVD video back to its exact master-tape state; instead of trying to recover every lost detail, Plum tries to recover the general "master-tape texture" (the image should "look" extremely delicate, fragile, and fine, not "dumb" and coarse).

The underlying philosophy is similar to a generative adversarial network: why bother trying to reconstruct the ground truth (which is impossible anyway) when you can simply fool everyone's eyes with fake detail, as long as there's no reference to the ground truth?


some discussion: https://forum.doom9.org/showthread.p...34#post1810234

Master tape


DVD Release


DVD + Plum
Code:
ref = Plum.Basic(clip)
clip = Plum.Final([clip, ref], [Plum.Super(clip), Plum.Super(ref)])
__________________
If I got new ideas, will post here: https://github.com/IFeelBloated

Last edited by feisty2; 24th June 2017 at 09:22.
feisty2 is offline   Reply With Quote
Old 17th August 2016, 11:51   #2  |  Link
hydra3333
Registered User
 
Join Date: Oct 2009
Location: crow-land
Posts: 522
Thanks!
Any dependencies?
hydra3333 is offline   Reply With Quote
Old 17th August 2016, 12:31   #3  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by hydra3333 View Post
Thanks !
Any dependencies ?
VCFreq and... the usual must-have stuff?
feisty2 is offline   Reply With Quote
Old 17th August 2016, 15:45   #4  |  Link
sl1pkn07
Pajas Mentales...
 
Join Date: Dec 2004
Location: Spanishtán
Posts: 452
knlmeanscl
fmtconv
nnedi3
vcfreq (Windows only)
dfttest
bm3d
__________________
[AUR] Vapoursynth Stuff
sl1pkn07 is offline   Reply With Quote
Old 17th August 2016, 16:57   #5  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Updated Plum, and the same goes for the pics at #1.
@sl1pkn07
vcfreq is open source; maybe it will just compile on GCC without any weird shit... who knows
feisty2 is offline   Reply With Quote
Old 17th August 2016, 17:01   #6  |  Link
sl1pkn07
Pajas Mentales...
 
Join Date: Dec 2004
Location: Spanishtán
Posts: 452
It's not a problem with GCC:
Code:
vcfreq.cpp:45:21: fatal error: windows.h: No such file or directory
Code:
└───╼  locate windows.h
>snip
/usr/include/wine/windows/windows.h
>snip

Last edited by sl1pkn07; 17th August 2016 at 17:04.
sl1pkn07 is offline   Reply With Quote
Old 17th August 2016, 17:07   #7  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by sl1pkn07 View Post
is not problem of GCC
Code:
vcfreq.cpp:45:21: fatal error: windows.h: No such file or directory
Code:
└───╼  locate windows.h
>snip
/usr/include/wine/windows/windows.h
>snip
It's very unlikely that vcfreq actually calls any Windows API or anything like that;
maybe you could just replace it with <cstdlib> or something.
feisty2 is offline   Reply With Quote
Old 17th August 2016, 18:55   #8  |  Link
Myrsloik
Professional Code Monkey
 
Join Date: Jun 2003
Location: Ikea Chair
Posts: 1,897
It uses LoadLibrary to load FFTW. That's the main reason it won't compile on Linux. But I fixed that.
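For anyone wondering what the portability issue looks like, here is a hypothetical Python/ctypes sketch of the same idea (the actual vcfreq fix is in C++; this only illustrates that LoadLibrary is the Win32-only entry point while dlopen is the POSIX one, and ctypes wraps both behind one call):

```python
import ctypes
import ctypes.util
import sys

if sys.platform == "win32":
    # On Windows, ctypes.CDLL uses LoadLibrary under the hood; msvcrt exports cos
    libm = ctypes.CDLL("msvcrt")
else:
    # Elsewhere it uses dlopen; None falls back to the main program's symbols
    libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # 1.0
```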
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet
Myrsloik is offline   Reply With Quote
Old 18th August 2016, 11:42   #9  |  Link
aculnaig
Registered User
 
Join Date: May 2016
Posts: 7
Python exception: DFTTest: invalid entry in sigma string

Hi feisty2,

I tried your script, but it prompts me with this error through vspipe:


Code:
vspipe -y lo.gatto.vpy .


Script evaluation failed:
Python exception: DFTTest: invalid entry in sigma string
Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 1491, in vapoursynth.vpy_evaluateScript (src/cython/vapoursynth.c:26905)
  File "lo.gatto.vpy", line 18, in <module>
    v         = Plum.Final([v, deconv, conv], [sup, supdeconv, supconv])
  File "/usr/lib/python3.5/site-packages/Plum.py", line 315, in Final
    clip                  = internal.final(src, super, radius, pel, sad, constants, attenuate_window, attenuate, cutoff)
  File "/usr/lib/python3.5/site-packages/Plum.py", line 163, in final
    dif             = DFTTest(dif, sbsize=attenuate_window, sstring=attenuate, **dfttest_args)
  File "src/cython/vapoursynth.pyx", line 1383, in vapoursynth.Function.__call__ (src/cython/vapoursynth.c:25212)
vapoursynth.Error: DFTTest: invalid entry in sigma string

sample clip: https://www.dropbox.com/s/w5plhz6ob7...mple.mpeg?dl=0

Mediainfo output

Code:
General
Complete name                            : sample.mpeg
Format                                   : MPEG-PS
File size                                : 3.04 MiB
Duration                                 : 4 s 840 ms
Overall bit rate mode                    : Variable
Overall bit rate                         : 5 267 kb/s

Video
ID                                       : 224 (0xE0)
Format                                   : MPEG Video
Format version                           : Version 2
Format profile                           : Main@Main
Format settings, BVOP                    : Yes
Format settings, Matrix                  : Default
Format settings, GOP                     : M=3, N=13
Format settings, picture structure       : Frame
Duration                                 : 4 s 840 ms
Bit rate mode                            : Variable
Bit rate                                 : 5 162 kb/s
Maximum bit rate                         : 8 500 kb/s
Width                                    : 720 pixels
Height                                   : 576 pixels
Display aspect ratio                     : 2.40:1
Frame rate                               : 25.000 FPS
Standard                                 : PAL
Color space                              : YUV
Chroma subsampling                       : 4:2:0
Bit depth                                : 8 bits
Scan type                                : Interlaced
Scan order                               : Top Field First
Compression mode                         : Lossy
Bits/(Pixel*Frame)                       : 0.498
Time code of first frame                 : 11:01:25:07
Time code source                         : Group of pictures header
GOP, Open/Closed                         : Open
Stream size                              : 2.98 MiB (98%)
Color primaries                          : BT.601 PAL
Transfer characteristics                 : BT.470 System B, BT.470 System G
Matrix coefficients                      : BT.601

My VS script

Code:
import vapoursynth as vs
import Plum
core = vs.get_core()


v = core.d2v.Source('output.mpeg.d2v')
v = v[92173]

v = core.fmtc.bitdepth(v, flt = 1)
v = core.yadifmod.Yadifmod(v, edeint = core.nnedi3.nnedi3(v, 1), order = 1 )
v = core.fmtc.resample(v, w = 1024, h = 576, kernel = 'sinc', taps = 128)

deconv    = Plum.Basic(v)
conv      = Plum.Basic(v,mode="convolution")
sup       = Plum.Super([v, None])
supdeconv = Plum.Super([v, deconv])
supconv   = Plum.Super([conv, None])
v         = Plum.Final([v, deconv, conv], [sup, supdeconv, supconv])

v = core.fmtc.bitdepth(v, bits = 8, dmode = 8, patsize = 32)

v.set_output()
aculnaig is offline   Reply With Quote
Old 18th August 2016, 13:27   #10  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by aculnaig View Post
Hi feisty2,

I tried your script, but it prompt me this error through vspipe

Code:
Python exception: DFTTest: invalid entry in sigma string
[full traceback, MediaInfo output and script snipped; see post #9]
I cannot reproduce the error. Which version of DFTTest were you using? Not the latest one, I assume?

And your video is progressive, DO NOT deinterlace it.
feisty2 is offline   Reply With Quote
Old 18th August 2016, 13:53   #11  |  Link
aculnaig
Registered User
 
Join Date: May 2016
Posts: 7
Quote:
Originally Posted by feisty2 View Post

cannot reproduce the error, which version of DFTTest were you using, I assume not the latest one?
I cloned this repository

Code:
https://github.com/HomeOfVapourSynthEvolution/VapourSynth-DFTTest

This is the most recently updated branch, isn't it?




Quote:
Originally Posted by feisty2
and your video is progressive, DO NOT deinterlace it

I thought so too, but I always let the experts have the final word.

Thanks.

I will edit this message if it runs with no errors.

Thanks again, feisty2.

EDIT1: Nothing, same error.

Last edited by aculnaig; 18th August 2016 at 14:06.
aculnaig is offline   Reply With Quote
Old 18th August 2016, 14:15   #12  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by aculnaig View Post
I cloned this repository

Code:
https://github.com/HomeOfVapourSynthEvolution/VapourSynth-DFTTest

is this the most recent updated branch, isn't it? [rest of quote snipped]
I did some tests on your sample anyway, and I don't think Plum is the kind of sharpener for this type of video.
The sample generally lacks texture and detail and looks plain flat, and deconvolution does not work on material like that. Plum is designed for videos filled with rich, soft textures and details, which it then makes sharp and delicate; there's just no texture at all in that sample, so...
feisty2 is offline   Reply With Quote
Old 18th August 2016, 14:26   #13  |  Link
aculnaig
Registered User
 
Join Date: May 2016
Posts: 7
Ok, thanks.

I wanted to do some tests with that movie, but I always end up doing things that don't fit! Sigh...


The film is this, btw...

Code:
https://en.wikipedia.org/wiki/Il_commissario_Lo_Gatto

Last edited by aculnaig; 18th August 2016 at 14:28.
aculnaig is offline   Reply With Quote
Old 21st August 2016, 10:39   #14  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
I totally hate the USM "boom" look. So this algorithm looks interesting to me. However, I think FineSharp already does a pretty nice job on avoiding the USM look. So the first thing I did was look at the comparison images. But for some reason, the FineSharp image has a *much* stronger sharpening strength applied to it than the Plum image in the first post of this thread. Which makes it hard to judge which looks really better. Would you mind increasing the Plum sharpening strength for the screenshots in the first post to achieve the same subjective sharpening strength as FineSharp? That would make it *much* easier to see if plum really improves on FineSharp in quality.

Also, I prefer testing with high-quality sources instead of ultra-blurry SD sources. Maybe you could add comparison screenshots for this image?



FWIW, when comparing sharpening algos I usually prefer using a rather high sharpening strength, higher than I would use in real life, because that makes it more obvious what the algorithms are really doing.

Thank you!!
madshi is offline   Reply With Quote
Old 21st August 2016, 10:55   #15  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by madshi View Post
Would you mind increasing the Plum sharpening strength for the screenshots in the first post to achieve the same subjective sharpening strength as FineSharp? [rest of quote snipped]
Same sharpening strength for all 3 algorithms: 1.64.

Plum looks much milder because it ONLY sharpens the most delicate parts of the image (it makes high frequencies even higher); it does not amplify low/medium frequencies at all, while FineSharp still amplifies medium frequencies.
Maybe I could lower the strength of FineSharp to make it easier to compare.
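The frequency argument can be sketched with a toy 1-D example (a box blur stands in for the low-pass split here, and the window and gain values are made up; this has nothing to do with Plum's internals):

```python
import numpy as np

def boost_high_only(signal, width, gain):
    """Amplify only the residual above a box-blurred baseline (toy model)."""
    kernel = np.ones(width) / width
    low = np.convolve(signal, kernel, mode="same")  # low/medium frequencies
    high = signal - low                             # fine detail only
    return low + (1.0 + gain) * high

# A step edge carrying a fine ripple: the ripple (high frequency) is boosted,
# while the step's overall level (low-frequency content) stays put.
t = np.arange(128)
signal = (t > 64).astype(float) + 0.05 * np.sin(t)
sharp = boost_high_only(signal, 9, 1.0)
```

A USM-style sharpener would instead amplify everything above DC, which is where the "bloated" look comes from.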
feisty2 is offline   Reply With Quote
Old 21st August 2016, 11:04   #16  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Well, the same sharpening "number" in different algos doesn't necessarily produce the same subjective sharpening strength. I'd like the images to produce the same *perceived* sharpness level, and then compare which I like best. This is how I usually compare sharpening algos, anyway. Usually when I do that, any USM like algo looks terribly bloated/artificial, while algos like FineSharp look much more natural.

I understand I could easily do this comparison myself, but I'm always having a hard time getting AviSynth/VapourSynth scripts with lots of dependencies to work...

Maybe you could just sharpen my preferred 1080p image with all dials turned up to "overload" with Plum, then I can compare your final result with the best I can achieve with my own preferred algos? That would be awesome!
madshi is offline   Reply With Quote
Old 21st August 2016, 11:12   #17  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
Quote:
Originally Posted by madshi View Post
Well, the same sharpening "number" in different algos doesn't necessarily produce the same subjective sharpening strength. [rest of quote snipped]
Plum shares the exact same non-linear amplifying expression with FineSharp, so the same number gives mathematically the same amount of sharpening. Anyway, I reduced the strength to 0.86 for FineSharp to show the difference:
Plum


Code:
clp = finesharp.sharpen(clp,sstr=0.86)
feisty2 is offline   Reply With Quote
Old 21st August 2016, 11:36   #18  |  Link
feisty2
I'm Siri
 
Join Date: Oct 2012
Location: Los Angeles, California
Posts: 2,119
And I cannot sharpen your image, because Plum works in both the spatial and temporal dimensions; I'm going to need a video sample, and it should be no more than 720p.
Plum is an advanced and very complex algorithm; I don't think my shitty i7-4790K could handle Plum at 1080p.

And below is how Plum looks with an insanely high strength:
Code:
deconv = Plum.Basic(clp)
conv = Plum.Basic(clp,mode="convolution")
sup = Plum.Super([clp, None])
supdeconv = Plum.Super([clp, deconv])
supconv = Plum.Super([conv, None])
clp = Plum.Final([clp, deconv, conv], [sup, supdeconv, supconv], strength=3.2, cutoff=16)
feisty2 is offline   Reply With Quote
Old 21st August 2016, 11:39   #19  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,137
Quote:
Originally Posted by feisty2 View Post
anyways, I reduced the strength to 0.86 for finesharp to show the difference
Thanks, Plum does look better (less bloated) than FineSharp in this comparison.
madshi is offline   Reply With Quote
Old 21st August 2016, 12:20   #20  |  Link
Myrsloik
Professional Code Monkey
 
Join Date: Jun 2003
Location: Ikea Chair
Posts: 1,897
I had a look at your script and saw some odd things...

You invoke Expr several times here in series with max. You can have up to 25 inputs to Expr, and combining several calls into one will be much faster.

Making the input clips something like [clip, SelectEvery(src, radius * 2 + 1, 1), SelectEvery(src, radius * 2 + 1, 2)...] and then using the expr "x y max z max a max b max..." will be faster. Maybe 3% more annoying to generate the string for, but definitely faster.

Code:
def extremum_multi(src, radius, mode):
    core        = vs.get_core()
    SelectEvery = core.std.SelectEvery
    Expr        = core.std.Expr
    clip        = SelectEvery(src, radius * 2 + 1, 0)
    for i in range(1, radius * 2 + 1):
        clip    = Expr([clip, SelectEvery(src, radius * 2 + 1, i)], "x y " + mode)
    return clip
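The expression string for the single-call version could be generated like this (plain Python; the commented-out VapourSynth call at the end is a hypothetical, untested sketch of how it would slot into extremum_multi above):

```python
def extremum_expr(num_inputs, mode="max"):
    """Build one RPN string folding all inputs, e.g. "x y max z max a max"."""
    # std.Expr names its input clips x, y, z, then a, b, c, ...
    names = list("xyz") + [chr(c) for c in range(ord("a"), ord("w") + 1)]
    expr = names[0]
    for name in names[1:num_inputs]:
        expr += " " + name + " " + mode
    return expr

print(extremum_expr(5))  # x y max z max a max b max

# Hypothetical single-call replacement for the loop above (needs vapoursynth):
# clips = [SelectEvery(src, radius * 2 + 1, i) for i in range(radius * 2 + 1)]
# clip  = Expr(clips, extremum_expr(radius * 2 + 1, mode))
```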
Here you can technically fold the MakeDiff into the first Expr and the MergeDiff into the second. I think.

From the shrink function:
Code:
          DDD             = Expr([DD, convDD], ["x y - x 0.5 - * 0 < 0.5 x y - abs x 0.5 - abs < x y - 0.5 + x ? ?"])
          dif             = MakeDiff(dif, DDD)
          convD           = Convolution(dif, **conv_args)
          dif             = Expr([dif, convD], ["y 0.5 - abs x 0.5 - abs > y 0.5 ?"])
          clip            = MergeDiff(src, dif)
It may be possible to combine these two as well:
Code:
          clamped         = helpers.clamp(averaged, bright_limit, dark_limit, 0.0, 0.0)
          amplified       = Expr([clamped, src[0]], expression)
Myrsloik is offline   Reply With Quote