Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
26th June 2022, 15:39 | #21 | Link | ||
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
|
Quote:
For the Trigun example I used:
Code:
# Imports
import vapoursynth as vs
import os
import sys
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'i:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/GrainFilter/RemoveGrain/RemoveGrainVS.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# Import scripts
import mvsfunc
import havsfunc
# source: 'C:\Users\Selur\Desktop\VTS_01_7.mkv'
# current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
# Loading C:\Users\Selur\Desktop\VTS_01_7.mkv using LWLibavSource
clip = core.lsmas.LWLibavSource(source="C:/Users/Selur/Desktop/VTS_01_7.mkv", format="YUV420P8", cache=0, prefer_hw=0)
# Setting color matrix to 470bg.
clip = core.std.SetFrameProps(clip, _Matrix=5)
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 29.970
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
clip2clip = clip
# Deinterlacing using TIVTC
clip = core.tivtc.TFM(clip=clip, clip2=clip2clip)
clip = core.tivtc.TDecimate(clip=clip)  # new fps: 23.976
# make sure content is perceived as frame based
clip = core.std.SetFieldBased(clip, 0)
# cropping the video to 700x480
clip = core.std.CropRel(clip=clip, left=16, right=4, top=0, bottom=0)
clip = havsfunc.DeHalo_alpha(clip)
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="full")
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip, device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/2x_LD-Anime_Skr_v1.0.pth"
vsgan.load(model)
vsgan.apply()  # 1400x960
clip = vsgan.clip
# resizing 1400x960 to 720x556
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=720, h=556, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 23.976fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# Output
clip.set_output()
Quote:
At least for me the VRAM usage doesn't seem to change in regard to the vs threads. |
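As a quick sanity check, the geometry and frame-rate numbers in the script above (crop 720x480 to 700x480, 2x ESRGAN to 1400x960, and the 29.97 -> 23.976 IVTC) can be verified with plain Python; the `crop` helper is my own, not part of any API:

```python
from fractions import Fraction

def crop(w, h, left=0, right=0, top=0, bottom=0):
    """Dimensions after a CropRel-style crop."""
    return w - left - right, h - top - bottom

# 720x480 DVD frame, cropped with left=16, right=4
w, h = crop(720, 480, left=16, right=4)
assert (w, h) == (700, 480)

# a 2x ESRGAN model doubles both dimensions
assert (w * 2, h * 2) == (1400, 960)

# TFM + TDecimate: telecined 30000/1001 fps -> film rate, dropping 1 frame in 5
fps_in = Fraction(30000, 1001)
fps_out = fps_in * Fraction(4, 5)
assert fps_out == Fraction(24000, 1001)
```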
||
26th June 2022, 15:57 | #22 | Link |
Registered User
Join Date: Sep 2008
Posts: 365
|
vs-mlrt also has a waifu2x implementation: https://github.com/AmusementClub/vs-mlrt/wiki/waifu2x
__________________
(i have a tendency to drunk post) |
27th June 2022, 17:44 | #23 | Link |
Registered User
Join Date: Dec 2012
Posts: 65
|
Something like that
https://workupload.com/file/gDGErXYRRRG
__________________
Ryzen 2700x | ASUS ROG Strix GTX 1080 Ti | 16 Gb DDR4
Windows 10 x64 20H2 KD-55XE9005 | Edifier R2800 |
27th June 2022, 22:15 | #25 | Link | |
Registered User
Join Date: Dec 2012
Posts: 65
|
Quote:
Code:
tr=8
SetMemoryMax(8192)
DGSource("VTS_01_7.dgi")
Bifrost(interlaced=true).TComb(mode=0, fthreshL=4, othreshL=5, scthresh=12)
ASTDRmc(strength=5, tempsoftth=30, tempsoftrad=3, tempsoftsc=3, blstr=0.5, tht=255, FluxStv=75, dcn=15, edgem=false)
TFM(mode=4, pp=1, MI=25, display=false, slow=2, cthresh=8, mthresh=6, chroma=false, ubsco=false, hint=true, opt=4, metric=0)
TDecimate(mode=1)
TurnLeft().vsLGhost(mode=1, shift=1, intensity=-26).TurnRight()
EdgeCleaner(strength=10, rep=false, rmode=17, smode=0, hot=false)
nnedi3_rpow2(rfactor=2, nsize=0, nns=4, qual=2, etype=0, pscrn=4, cshift="spline36resize", threads=tr)
ConvertTo16bit()
FineDehalo(rx=3.0, ry=3.0, thmi=80, thma=128, thlimi=50, thlima=100, darkstr=0.0, brightstr=1.4, showmask=0, contra=0.0, excl=true)
FineDehalo(rx=2.8, ry=2.8, thmi=80, thma=128, thlimi=50, thlima=100, darkstr=0.0, brightstr=1.2, showmask=0, contra=0.0, excl=true)
LSFmod(ss_x=1.0, ss_y=1.0, strength=6, Smode=5)
mthr = 16
bi = BitsPerComponent(last)
mthrHBD = ex_bs(mthr, 8, bi, true)
mlight1 = last.flatmask(2, scale=7.0, lo=4, MSR=60, invert=false)
mdark1 = last.flatmask(4, scale=7.0, lo=4, MSR=50, invert=false)
mask = last.ConditionalFilter(mlight1, mdark1, "AverageLuma()", ">", "60").ex_lut(Format("x {mthrHBD} <= x 0.5 * x 2 * ?"), UV=1).RemoveGrain((980>960) ? 20 : 11, -1)
deg1 = last.MCTemporalDenoise(settings="very low", edgeclean=true, ecrad=4, stabilize=true, maxr=3, strength=30, GPU=false)
deg2 = last.SMDegrain(tr=2, thSAD=121, thSADC=50, thSCD1=156, thSCD2=96, contrasharp=false, refinemotion=true, chroma=true, plane=4)
deg = ConditionalFilter(deg1, deg2, "AverageLuma()", ">", "60")
ex_merge(deg, last, mask, luma=true, Y=3, UV=3)
ConvertToStacked().TAAmbk(aatype=1, preaa=-1, postaa=false, sharp=0.0, mtype=0, cycle=0, dark=0.0, lsb_in=true, lsb_out=true).ConvertFromStacked()
BlackmanResize(948, 720, taps=4, 14, 0, -4, 0)
ContinuityFixer(left=4, top=0, right=3, bottom=0, radius=0)
db = last.neo_f3kdb(sample_mode=2, Y=68, Cb=68, Cr=68, grainy=56, grainC=56, range=15, dynamic_grain=true)
ex_merge(db, last, mask, luma=true, Y=3, UV=3)
z_ConvertFormat(colorspace_op="470bg:601:470bg:f=>709:709:709:f", dither_type="none")
ConvertBits(bits=10)
Prefetch(tr)
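The `z_ConvertFormat` line at the end rewrites the colorimetry from Rec.601 (470bg) to Rec.709: it decodes YUV to RGB with the 601 matrix and re-encodes with the 709 matrix. The underlying math can be illustrated in plain Python (full-range float; the helper names are my own, not any plugin's API):

```python
# Luma coefficients (kr, kb) for each standard
REC601 = (0.299, 0.114)
REC709 = (0.2126, 0.0722)

def yuv_to_rgb(y, u, v, kr, kb):
    """Full-range float YUV -> RGB for the given luma coefficients."""
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * v
    b = y + 2.0 * (1.0 - kb) * u
    g = (y - kr * r - kb * b) / kg
    return r, g, b

def rgb_to_yuv(r, g, b, kr, kb):
    """Full-range float RGB -> YUV for the given luma coefficients."""
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    u = (b - y) / (2.0 * (1.0 - kb))
    v = (r - y) / (2.0 * (1.0 - kr))
    return y, u, v

def convert_601_to_709(y, u, v):
    """Decode with Rec.601, re-encode with Rec.709."""
    return rgb_to_yuv(*yuv_to_rgb(y, u, v, *REC601), *REC709)

# neutral gray passes through unchanged; saturated colors shift slightly
y2, u2, v2 = convert_601_to_709(0.5, 0.0, 0.0)
assert abs(y2 - 0.5) < 1e-9 and abs(u2) < 1e-9 and abs(v2) < 1e-9
```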
__________________
Ryzen 2700x | ASUS ROG Strix GTX 1080 Ti | 16 Gb DDR4
Windows 10 x64 20H2 KD-55XE9005 | Edifier R2800 Last edited by Shinkiro; 1st July 2022 at 23:38. |
|
28th June 2022, 01:55 | #26 | Link |
Registered User
Join Date: Oct 2011
Location: Dans le nord
Posts: 65
|
https://workupload.com/file/w9adMvevMFk
PHP Code:
PHP Code:
Last edited by Blankmedia; 28th June 2022 at 03:01. |
16th July 2022, 00:49 | #29 | Link |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
|
masked_dha + upscale with ESRGAN (~2 fps). Starting to look like a Blu-ray
https://www.dropbox.com/s/lwhwhjihdc...gun_2.mkv?dl=1
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth VapourSynth Portable FATPACK || VapourSynth Database Last edited by ChaosKing; 16th July 2022 at 00:52. |
16th July 2022, 01:25 | #30 | Link | |
Registered User
Join Date: Jan 2018
Posts: 2,156
|
Quote:
|
|
16th July 2022, 01:45 | #31 | Link |
Registered User
Join Date: Mar 2012
Location: Texas
Posts: 1,665
|
|
18th July 2022, 09:19 | #33 | Link |
Registered User
Join Date: Jan 2017
Posts: 48
|
I wonder how AiUpscale would fare against Waifu2x. It has some comparisons against Waifu2x on its GitHub page, and the HQ model seems to compete pretty well. The HQ Sharp model might even offer a more pleasing presentation, depending on what it picks to sharpen.
|
18th July 2022, 18:04 | #34 | Link |
Registered User
Join Date: Dec 2005
Location: Germany
Posts: 1,795
|
@tormento
Code:
clip = lvsfunc.dehalo.masked_dha(clip, rx=2, ry=2)
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip, device="cuda")
model = "some-anime-model.pth"  # forgot which one, test some anime models from https://upscale.wiki/wiki/Model_Database
vsgan.load(model)
vsgan.apply()

@PoeBear
It would at least be much faster than Waifu2x. I tried many ESRGAN models from https://upscale.wiki/wiki/Model_Database and, depending on the source (and what the model was trained on), the results were much, much better than any waifu2x upscale I've ever seen. On top of that, Waifu2x was made / trained for more modern art, not 80s / 90s animation. In my tests waifu2x looks kinda "good" when used with a stronger denoise parameter, but at the same time it destroys all details. FSRCNNX should be better and less destructive. Now I have to learn how to train my own ESRGAN model.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth VapourSynth Portable FATPACK || VapourSynth Database |
18th July 2022, 18:16 | #35 | Link |
Registered User
Join Date: Nov 2009
Posts: 2,361
|
Yeah, how I wish I could use ESRGAN (basicvsrpp, rife, dpir, etc.) for video; last time I tried, a month or so ago, I had issues installing some tensor API for Python 3.8 (Win7 here).
I have tested some models with images and I dug these two, give them a try. Code:
2X_DigitalFilmV5_Lite.pth (sharpener for soft lines, no need to downscale before AI)
2x_LD-Anime_Skr_v1.0.pth (for ringing, rainbowing, aliasing)
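As an aside, the `2x`/`2X` prefix in these model names encodes the upscale factor, which you need to know when planning the final resize. A trivial, hypothetical helper to parse it (my own code, not part of any upscaling package):

```python
import re

def model_scale(filename):
    """Guess the upscale factor from an ESRGAN model filename
    like '2x_LD-Anime_Skr_v1.0.pth'; returns None if no prefix."""
    m = re.match(r"(\d+)[xX]", filename)
    return int(m.group(1)) if m else None

assert model_scale("2X_DigitalFilmV5_Lite.pth") == 2
assert model_scale("2x_LD-Anime_Skr_v1.0.pth") == 2
assert model_scale("LD-Anime.pth") is None
```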
__________________
i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread |
18th July 2022, 20:46 | #36 | Link | |
Registered User
Join Date: Mar 2012
Location: Texas
Posts: 1,665
|
Quote:
|
|
18th July 2022, 21:14 | #37 | Link | |
Registered User
Join Date: Nov 2009
Posts: 2,361
|
Quote:
I have been thinking for some time about creating a dejitter function to replace the old Stabilization Tools Pack; it's basically supersampling and differencing, the trick being making it robust enough.
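A minimal plain-Python sketch of the per-scanline dejitter idea (integer shifts only, no supersampling; a frame is a list of pixel rows, and all names are mine, purely illustrative): each line is aligned to the previous one by SAD, and the shifts are accumulated so every line ends up aligned with line 0.

```python
def roll(row, s):
    """Circularly shift a list right by s (negative = left)."""
    s %= len(row)
    return row[-s:] + row[:-s]

def estimate_shift(row, ref, radius):
    """Integer shift in [-radius, radius] minimizing SAD against ref
    (borders trimmed to ignore wrap-around pixels)."""
    n = len(row)
    best_s, best_sad = 0, None
    for s in range(-radius, radius + 1):
        cand = roll(row, s)
        sad = sum(abs(a - b)
                  for a, b in zip(cand[radius:n - radius], ref[radius:n - radius]))
        if best_sad is None or sad < best_sad:
            best_sad, best_s = sad, s
    return best_s

def dejitter(frame, radius=3):
    """Align each scanline to the previous one, accumulating the shifts
    so every line is aligned with line 0."""
    out = [frame[0]]
    acc = 0
    for y in range(1, len(frame)):
        acc += estimate_shift(frame[y], frame[y - 1], radius)
        out.append(roll(frame[y], acc))
    return out
```

A real implementation would work on supersampled rows for sub-pixel precision and need a scene-aware reference to be robust, which is exactly the hard part mentioned above.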
__________________
i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread |
|
18th July 2022, 21:56 | #38 | Link | |
Registered User
Join Date: Mar 2012
Location: Texas
Posts: 1,665
|
Quote:
|
|