Doom9's Forum > Capturing and Editing Video > Avisynth Usage

Old 26th June 2022, 15:39   #21  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Quote:
Why don't you apply any IVTC, pulldown removal, or deinterlacing of any sort? The DVD source is awful.
Those were just example calls.
For the Trigun example I used:
Code:
# Imports
import vapoursynth as vs
import os
import sys
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'i:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/GrainFilter/RemoveGrain/RemoveGrainVS.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# Import scripts
import mvsfunc
import havsfunc
# source: 'C:\Users\Selur\Desktop\VTS_01_7.mkv'
# current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
# Loading C:\Users\Selur\Desktop\VTS_01_7.mkv using LWLibavSource
clip = core.lsmas.LWLibavSource(source="C:/Users/Selur/Desktop/VTS_01_7.mkv", format="YUV420P8", cache=0, prefer_hw=0)
# Setting color matrix to 470bg.
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting transfer and primaries to match
clip = core.std.SetFrameProps(clip, _Transfer=5)
clip = core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 29.970
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
clip2clip = clip
# Deinterlacing using TIVTC
clip = core.tivtc.TFM(clip=clip, clip2=clip2clip)
clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
# make sure content is perceived as frame-based
clip = core.std.SetFieldBased(clip, 0)
# cropping the video to 700x480
clip = core.std.CropRel(clip=clip, left=16, right=4, top=0, bottom=0)
clip = havsfunc.DeHalo_alpha(clip)
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="full")
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/2x_LD-Anime_Skr_v1.0.pth"
vsgan.load(model)
vsgan.apply() # 1400x960
clip = vsgan.clip
# resizing 1400x960 to 720x556
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=720, h=556, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 23.976fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# Output
clip.set_output()
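The TFM/TDecimate pair above restores the film cadence: TDecimate drops one duplicate frame per cycle of five, turning 30000/1001 fps into 24000/1001 fps. A quick sanity check of that arithmetic (plain Python, independent of VapourSynth):

```python
from fractions import Fraction

# NTSC telecined rate: 30000/1001 (~29.97 fps)
telecined = Fraction(30000, 1001)

# TDecimate keeps 4 of every 5 frames (dropping the duplicate)
film = telecined * Fraction(4, 5)

print(film)  # 24000/1001
```

This is also why the script sets fpsnum=24000, fpsden=1001 exactly rather than using a rounded 23.976.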
Quote:
Can you please tell me the differences between the upscaling types and what can support CUDA?
Nowadays VSGAN should tile the content if need be; depending on the model, setting an overlap is recommended.
At least for me, VRAM usage doesn't seem to change with the number of VapourSynth threads.
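For intuition, tiling with overlap amounts to computing patch origins so that neighbouring patches share a border that can be cropped away when the upscaled patches are stitched back together. A minimal sketch of that coordinate math (plain Python; `tile_coords` is a hypothetical helper, not VSGAN's actual API):

```python
def tile_coords(width, height, tile, overlap):
    """Return (x, y) origins of tiles covering a frame.
    Consecutive tiles share `overlap` pixels so seam artifacts
    can be cropped away when stitching the upscaled patches."""
    step = tile - overlap
    xs = list(range(0, max(width - overlap, 1), step))
    ys = list(range(0, max(height - overlap, 1), step))
    # clamp the last tile so it ends exactly at the frame edge
    xs = [min(x, width - tile) for x in xs]
    ys = [min(y, height - tile) for y in ys]
    return [(x, y) for y in ys for x in xs]

# e.g. a 720x480 DVD frame, 256-pixel tiles, 16-pixel overlap
coords = tile_coords(720, 480, 256, 16)
```

Fewer, larger tiles mean fewer seams but more VRAM per inference call, which is why overlap matters more with models that ring near patch borders.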
__________________
Hybrid here in the forum, homepage
Old 26th June 2022, 15:57   #22  |  Link
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
vs-mlrt also has a waifu2x implementation: https://github.com/AmusementClub/vs-mlrt/wiki/waifu2x
__________________
(i have a tendency to drunk post)
Old 27th June 2022, 17:44   #23  |  Link
Shinkiro
Registered User
 
Join Date: Dec 2012
Posts: 65
Something like this:
https://workupload.com/file/gDGErXYRRRG
__________________
Ryzen 2700x | ASUS ROG Strix GTX 1080 Ti | 16 Gb DDR4
Windows 10 x64 20H2
KD-55XE9005 | Edifier R2800
Old 27th June 2022, 19:03   #24  |  Link
tormento
Acid fr0g
 
tormento's Avatar
 
Join Date: May 2002
Location: Italy
Posts: 2,542
Quote:
Originally Posted by Shinkiro View Post
Something like that
Please post your script, not results only.

I can still see some shimmering on horizontal lines, such as walls.

Very nice result, anyway.
__________________
@turment on Telegram
Old 27th June 2022, 22:15   #25  |  Link
Shinkiro
Registered User
 
Join Date: Dec 2012
Posts: 65
Quote:
Originally Posted by tormento View Post
Please post your script, not results only.
I can still see some shimmering on horizontal lines, such as walls.
Very nice result, anyway.
Code:
tr=8
setmemorymax(8192)
DGSource("VTS_01_7.dgi")
Bifrost(interlaced=true).TComb(mode=0,fthreshL=4,othreshL=5,scthresh=12)
ASTDRmc(strength=5, tempsoftth=30, tempsoftrad=3, tempsoftsc=3, blstr=0.5, tht=255, FluxStv=75, dcn=15, edgem=false)
TFM(mode=4,pp=1,MI=25,display=false, slow=2,cthresh=8,mthresh=6,chroma=false,ubsco=false,hint=true,opt=4,metric=0)
TDecimate(mode=1)
TurnLeft().vsLGhost(mode=1, shift=1, intensity=-26).TurnRight()

EdgeCleaner(strength=10, rep=false, rmode=17, smode=0, hot=false)

nnedi3_rpow2(rfactor=2, nsize=0, nns=4, qual=2, etype=0, pscrn=4, cshift="spline36resize", threads=tr)
ConvertTo16bit()

FineDehalo(rx=3.0, ry=3.0, thmi=80, thma=128, thlimi=50, thlima=100, darkstr=0.0, brightstr=1.4, showmask=0, contra=0.0, excl=true)
FineDehalo(rx=2.8, ry=2.8, thmi=80, thma=128, thlimi=50, thlima=100, darkstr=0.0, brightstr=1.2, showmask=0, contra=0.0, excl=true)
LSFmod(ss_x=1.0,ss_y=1.0,strength=6,Smode=5)

mthr = 16
bi = BitsPerComponent(last)
mthrHBD = ex_bs(mthr, 8, bi, true)
mlight1=last.flatmask(2, scale=7.0, lo=4, MSR=60, invert=false)
mdark1=last.flatmask(4, scale=7.0, lo=4, MSR=50, invert=false)
mask=last.ConditionalFilter(mlight1, mdark1, "AverageLuma()",">","60").ex_lut(Format("x {mthrHBD} <= x 0.5 * x 2 * ?"), UV=1).RemoveGrain((980>960) ? 20 : 11, -1)

deg1 = last.MCTemporalDenoise(settings="very low", edgeclean=true, ecrad=4, stabilize=true, maxr=3, strength=30, GPU=false)
deg2 = last.SMDegrain(tr=2,thSAD=121, thSADC=50, thSCD1=156,thSCD2=96, contrasharp=false, refinemotion=true, chroma=true, plane=4)

deg=ConditionalFilter(deg1, deg2, "AverageLuma()",">","60")
ex_merge(deg ,last ,mask, luma=true, Y=3, UV=3)

ConvertToStacked().TAAmbk(aatype=1, preaa=-1, postaa=false, sharp=0.0, mtype=0, cycle=0, dark=0.0,lsb_in=true , lsb_out=true).ConvertFromStacked()

Blackmanresize(948, 720, taps=4,14,0,-4,0)
ContinuityFixer(left=4, top=0, right=3, bottom=0, radius=0)

db=last.neo_f3kdb(sample_mode=2, Y=68, Cb=68, Cr=68, grainy=56, grainC=56, range=15, dynamic_grain=true)
ex_merge(db, last, mask, luma=true, Y=3, UV=3)

z_ConvertFormat(colorspace_op="470bg:601:470bg:f=>709:709:709:f",dither_type="none")
ConvertBits(bits=10)
Prefetch(tr)
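The ex_lut mask expression in the script above, "x {mthrHBD} <= x 0.5 * x 2 * ?", is RPN for a simple two-slope curve: pixels at or below the threshold are halved, the rest are doubled. An 8-bit illustration in plain Python (the script itself scales mthr to the clip's bit depth via ex_bs first):

```python
def mask_curve(x, mthr=16, peak=255):
    # RPN "x mthr <= x 0.5 * x 2 * ?":
    # attenuate flat/low values, boost everything above the threshold
    y = x * 0.5 if x <= mthr else x * 2
    return min(int(y), peak)

print(mask_curve(10))   # 5: at/below threshold, halved
print(mask_curve(100))  # 200: above threshold, doubled
```

The effect is to suppress near-flat areas in the mask while pushing real detail toward white, so the later ex_merge applies the denoised clip mostly on flat regions.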
__________________
Ryzen 2700x | ASUS ROG Strix GTX 1080 Ti | 16 Gb DDR4
Windows 10 x64 20H2
KD-55XE9005 | Edifier R2800

Last edited by Shinkiro; 1st July 2022 at 23:38.
Old 28th June 2022, 01:55   #26  |  Link
Blankmedia
Registered User
 
Join Date: Oct 2011
Location: Dans le nord
Posts: 65
https://workupload.com/file/w9adMvevMFk


PHP Code:
chemin = "D:\A encoder\Trigun\VTS_01_7.d2v"

DGDecode_mpeg2source(chemin, info=3).ConvertToYV12()
#~ RoboCrop(LeftAdd=0, TopAdd=0, RightAdd=0, BotAdd=0)

#~ RequestLinear(clim=100)

TComb()

fmtc_bitdepth(bits=16)
/*
A=QTGMC(Preset="very slow", sourcematch=3, Sharpness=0.0, lossless=2).selecteven()
B=QTGMC(Preset="very slow", sourcematch=3, Sharpness=0.0, lossless=2).selectodd()
C=TFM(field=1, Mode=5, PP=2, cthresh=2, mthresh=2, clip2=A, micmatching=0, chroma=true, display=false, d2v=chemin, slow=2)
D=TFM(field=0, Mode=5, PP=2, cthresh=2, mthresh=2, clip2=B, micmatching=0, chroma=true, display=false, d2v=chemin, slow=2)
Interleave(C,D)
*/
SimpsonsDesentrelace("D:\A encoder\Trigun\VTS_01_7.d2v")
oo = last

a = NNEDI3(Field=-2, qual=2, etype=1, nns=4, pscrn=3)
Merge(a.SelectEven(), a.SelectOdd())

D1 = mt_makediff(oo, a)
D2 = mt_makediff(a, a.removegrain(11,-1))
mt_adddiff(D2.repair(D1,13).mt_lutxy(D2,"x 32768 - y 32768 - * 0 < 32768 x 32768 - abs y 32768 - abs < x y ? ?"), U=2, V=2)


QTGMC(Preset="slow", InputType=1, sourcematch=3, MatchEnhance=0.45)
ex_repair(a, oo, mode="Temp1")

source = last
#~ x1 = source.ex_BM3D(sigma=10, radius=1, UV=1, tv_range=False)
x1 = source.FluxSmoothT(3)
x2 = source.removegrain(11,-1).ExtractY().ConvertToYUV422()
x22 = source.mt_makediff(mt_makediff(x2, x2.removegrain(20,-1))).MinMapBlur()
x222 = x22.removegrain(4,-1)
x222 = x222.ex_sbr().merge(x222, 0.25)
enhD = ex_lutxy(x22.removegrain(27).fmtc_bitdepth(bits=8, dmode=1), x222.fmtc_bitdepth(bits=8, dmode=1), "128 x y - abs 2 / 1 1.6 / ^ 2.51 * x y - x y - abs 0.1 + / * +", UV=2).frfun7(lambda=1.1, T=6.0, Tuv=2.0, P=2).fmtc_bitdepth(bits=16)
enh = source.mt_adddiff(enhD, U=2, V=2)

# the values of BLK and ME2 were lost in the forum formatting
BLK  =
ME1  = 5
ME2  =
_DCT = 5

sup1 = x1.removegrain(11).MSuper(hpad=16, vpad=16, pel=2, sharp=0)
sup2 = enh.MSuper(hpad=16, vpad=16, pel=2, levels=0, sharp=1)
bv3  = MAnalyse(sup1, delta=3, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=true,  dct=_DCT)
bv2  = MAnalyse(sup1, delta=2, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=true,  dct=_DCT)
bv1  = MAnalyse(sup1, delta=1, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=true,  dct=_DCT)
fv1  = MAnalyse(sup1, delta=1, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=false, dct=_DCT)
fv2  = MAnalyse(sup1, delta=2, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=false, dct=_DCT)
fv3  = MAnalyse(sup1, delta=3, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=false, dct=_DCT)

bv4  = MAnalyse(sup1, delta=4, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=true,  dct=_DCT)
fv4  = MAnalyse(sup1, delta=4, truemotion=false, global=true, blksize=BLK, overlap=BLK/2, search=ME1, searchparam=ME2, isb=false, dct=_DCT)
#~ source.mdegrain3(sup2, bv1,fv1, bv2,fv2, bv3,fv3, thSAD=300, thSCD1=256, thSCD2=96, limit=135, plane=0)
#~ source.mdegrain2(sup2, bv1,fv1, bv2,fv2, thSAD=250, thSCD1=256, thSCD2=96, limit=135, plane=0)
source.mdegrain4(sup2, bv1,fv1, bv2,fv2, bv3,fv3, bv4,fv4, thSAD=300, thSCD1=256, thSCD2=96, limit=135, plane=0)

lsfplus(preset="slow", strength=10, smode=5, edgemode=1, preblur="FFT3Dfilter(sigma=4,plane=0)")


# the value of minp was lost in the forum formatting
masklisse = HQDeringmod(mrad=2, minp=, nrmode=3, sharp=0, show=false).ex_median("IQM").blur(0.5).neo_f3kdb(15, preset="very high", grainy=0, grainc=0, blur_first=true, sample_mode=4).Flatmask(lo=4, MSR=45, scale=5)
a = masklisse.ex_expand(4,"disk").ex_median("STWM")
b = masklisse.ex_inflate().ex_expand(2,"disk")
masque = ex_makediff(a, b, dif=false).maa2()
mt_merge(last, ex_median("smart2").neo_minideen(2), masque, luma=true).maa2()

masked_dha(rx=1.77, ry=1.77, brightstr=1.0, darkstr=0.0, lowsens=50, highsens=50, msk_pull=48, msk_push=192, ss=3, show_msk=false)

FineDehalo(rx=2.2, thmi=50, thma=100, thlimi=50, thlima=100, darkstr=1, brightstr=1, showmask=0, contra=0.1, excl=true) # added
neo_f3kdb(preset="high", sample_mode=4, range=10, dynamic_grain=true, grainy=48, grainc=48)
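The mt_makediff/mt_adddiff pairs in the script above carry detail in offset difference clips: makediff stores (a - b + neutral) clamped to range, and adddiff reverses it, which is what lets the enhancement detail be built separately and added back. A toy 8-bit version (plain Python; the 16-bit script uses a neutral of 32768, as its mt_lutxy strings show):

```python
def make_diff(a, b, neutral=128, peak=255):
    # store the signed difference around the neutral grey
    return max(0, min(peak, a - b + neutral))

def add_diff(a, d, neutral=128, peak=255):
    # reapply a stored difference to another pixel/clip
    return max(0, min(peak, a + d - neutral))

# the round trip restores the original when nothing clips
d = make_diff(100, 90)   # 138
print(add_diff(90, d))   # 100
```

Clamping is the one lossy step: differences larger than the headroom around neutral saturate, which is why such chains usually run at 16 bit.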
edit:

PHP Code:

function SimpsonsDesentrelace(clip IN, string "chemin")
    {
    A = QTGMC(IN, Preset="very slow", sourcematch=3, Sharpness=0.0, lossless=2).selecteven()
    TFM(IN, Order=-1, Mode=5, PP=2, Clip2=A, Slow=2, MChroma=False, Ubsco=False, CThresh=12, mthresh=2, Chroma=True, micmatching=0, d2v=chemin)
    TDecimate(Mode=1)
    }

Last edited by Blankmedia; 28th June 2022 at 03:01.
Old 29th June 2022, 21:11   #27  |  Link
Blankmedia
Registered User
 
Join Date: Oct 2011
Location: Dans le nord
Posts: 65
I was kind of hoping someone would chip in and correct me if I'm doing something wrong, or point out anything that could be done better.

@Tormento do you like it?
Old 29th June 2022, 23:00   #28  |  Link
tormento
Acid fr0g
 
tormento's Avatar
 
Join Date: May 2002
Location: Italy
Posts: 2,542
Quote:
Originally Posted by Blankmedia View Post
@Tormento do you like it?
I will post my findings in the next few days.

May I invite every script author in this thread to provide their output video, so we can have a fair comparison?
__________________
@turment on Telegram
Old 16th July 2022, 00:49   #29  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,795
masked_dha + upscale with ESRGAN (~2 fps). Starting to look like a Blu-ray.

https://www.dropbox.com/s/lwhwhjihdc...gun_2.mkv?dl=1
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database

Last edited by ChaosKing; 16th July 2022 at 00:52.
Old 16th July 2022, 01:25   #30  |  Link
kedautinh12
Registered User
 
Join Date: Jan 2018
Posts: 2,153
Quote:
Originally Posted by ChaosKing View Post
masked dha + upscale with esrgan (~2fps). Starting to look like a bluray

https://www.dropbox.com/s/lwhwhjihdc...gun_2.mkv?dl=1
What does masked dha mean?
Old 16th July 2022, 01:45   #31  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
Quote:
Originally Posted by kedautinh12 View Post
What does masked dha mean?
Search is your friend

http://avisynth.nl/index.php/External_filters#Dehaloing
Old 17th July 2022, 10:51   #32  |  Link
tormento
Acid fr0g
 
tormento's Avatar
 
Join Date: May 2002
Location: Italy
Posts: 2,542
Quote:
Originally Posted by ChaosKing View Post
masked dha + upscale with esrgan (~2fps). Starting to look like a bluray
Please post your script too
__________________
@turment on Telegram
Old 18th July 2022, 09:19   #33  |  Link
PoeBear
Registered User
 
Join Date: Jan 2017
Posts: 48
I wonder how AiUpscale would fare against Waifu2x. It's got some comparisons against Waifu2x on its GitHub page, and the HQ model seems to compete pretty well. The HQ Sharp model might even offer a more pleasing presentation, depending on what it picks to sharpen.
Old 18th July 2022, 18:04   #34  |  Link
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,795
@tormento
Code:
clip = lvsfunc.dehalo.masked_dha(clip,  rx=2, ry=2)

from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "some-anime-model.pth" # forgot which one, test some anime models from here https://upscale.wiki/wiki/Model_Database
vsgan.load(model)
vsgan.apply()
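For anyone wondering what the "masked" part does: lvsfunc's masked_dha merges the DeHalo_alpha result back only where an edge mask is set, so flat areas stay untouched. The per-pixel merge is a plain linear blend; a toy 8-bit sketch (plain Python, not lvsfunc's actual code):

```python
def masked_merge(orig, dehaloed, mask, peak=255):
    # weight the dehaloed pixel by the mask: 0 keeps the
    # original, peak takes the filtered value entirely
    return orig + (dehaloed - orig) * mask // peak

print(masked_merge(200, 120, 0))    # 200: mask off, untouched
print(masked_merge(200, 120, 255))  # 120: mask full, dehaloed
print(masked_merge(200, 120, 128))  # 159: roughly halfway blend
```

The rx/ry parameters control how far around edges the mask (and thus the dehalo) reaches.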


@PoeBear
At the very least it would be much faster than Waifu2x.


I tried many ESRGAN models from here https://upscale.wiki/wiki/Model_Database and, depending on the source (and what the model was trained on), the results were far better than any Waifu2x upscale I've ever seen.

Also, Waifu2x was made / trained for more modern art, not 80s/90s animation. In my tests it looks kinda "good" when used with a stronger denoise parameter, but at the same time it destroys all the detail.

FSRCNNX should be better and less destructive.


Now I have to learn how to train my own ESRGAN model.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database
Old 18th July 2022, 18:16   #35  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,352
Yeah, how I wish I could use ESRGAN (BasicVSR++, RIFE, DPIR, etc.) for video; last time I tried, a month or so ago, I had issues installing some tensor API for Python 3.8 (Win7 here).

I have tested some models on images and dug up these two; give them a try.

Code:
    2X_DigitalFilmV5_Lite.pth (sharpener for soft lines, no need to downscale before AI)
    2x_LD-Anime_Skr_v1.0.pth (for ringing, rainbowing, aliasing)
__________________
i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread
Old 18th July 2022, 20:46   #36  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
Quote:
Originally Posted by ChaosKing View Post
masked dha + upscale with esrgan (~2fps). Starting to look like a bluray

https://www.dropbox.com/s/lwhwhjihdc...gun_2.mkv?dl=1
Finally had a look at this; the result is pretty good. I think it would help to get rid of the small jitter that is common in these old cartoons. CelStabilize does a very good job of that, but unfortunately the source code for CelBackground was never published. I tried contacting the author about a year ago, but the email no longer works.
Old 18th July 2022, 21:14   #37  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,352
Quote:
Originally Posted by Reel.Deel View Post
Finally had a look at this; the result is pretty good. I think it would help to get rid of the small jitter that is common in these old cartoons. CelStabilize does a very good job of that, but unfortunately the source code for CelBackground was never published. I tried contacting the author about a year ago, but the email no longer works.
CelStabilise was actually very good and my main "dejitter" for anime, but it came with many added artifacts in the form of color shifts/clipping, and it didn't work on pans/tilts or across scenes. (By the way, another good lost filter is JPEGSource().)

I have been thinking for some time on creating a dejitter function to replace the old Stabilization Tools Pack, it's basically supersampling and differentiating, the trick being making it robust enough.
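The core of that idea can be sketched as a shift search: test small offsets against the previous frame and keep the one with the lowest mean absolute difference. A toy 1-D illustration in plain Python (a real dejitterer would supersample first for sub-pixel precision, exactly as described above):

```python
def estimate_shift(ref, cur, max_shift=2):
    """Return the integer offset of `cur` relative to `ref`
    that minimizes the mean absolute difference."""
    best, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(ref[i], cur[i + s])
                 for i in range(len(ref))
                 if 0 <= i + s < len(cur)]
        sad = sum(abs(r - c) for r, c in pairs) / len(pairs)
        if sad < best_sad:
            best, best_sad = s, sad
    return best

# a scanline whose content has jittered left by one pixel
line = [0, 0, 10, 50, 200, 50, 10, 0, 0]
jittered = line[1:] + [0]
print(estimate_shift(line, jittered))  # -1
```

Making this robust is the hard part: scene changes, pans, and noise all need to be rejected before the correction is applied.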
__________________
i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread
Old 18th July 2022, 21:56   #38  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
Quote:
Originally Posted by Dogway View Post
CelStabilise was actually very good and my main "dejitter" for anime, but it came with many added artifacts in the form of color shifts/clipping, and it didn't work on pans/tilts or across scenes. (By the way, another good lost filter is JPEGSource().)

I have been thinking for some time on creating a dejitter function to replace the old Stabilization Tools Pack, it's basically supersampling and differentiating, the trick being making it robust enough.
Yeah, CelStabilise has a few issues, and also a resolution limit (I tried processing an HD clip and it crashed). But it worked really well on SD static scenes that only had jitter. Sadly there is no replacement for it that I've seen. Regarding JPEGSource, I contacted SEt some months ago about it. I saw that PictureView3 was now available in x64 (which shares code with JpegSource), so I asked him if it would be possible to provide an x64 version. He replied: "It's definitely possible, but I haven't touched AviSynth development in a long while. I'll keep in mind that there is such interest and try to find time for that." I have not heard back from him, nor have I bothered him again.