#4781
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
I've been trying to use fvsfunc's InsertSign to overlay a Lagarith .avi with an alpha channel on top of my video, but it does not work.
How can I specify it correctly so that it picks up the alpha channel properly? I used the same files with AviSynth and it worked, so I think I am missing something about telling VapourSynth about the alpha channel: the logo appears, but it is not overlaid properly.

Last edited by ~ VEGETA ~; 30th January 2023 at 23:47.
#4782
Registered User
Join Date: May 2011
Posts: 284
havsfunc could be used:

Code:
import havsfunc
clip = havsfunc.Overlay(clip1, clip2, mask=alpha)

https://github.com/HomeOfVapourSynth...sfunc.py#L5960
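If the .avi really carries an alpha channel, getting a usable mask clip could look like this. A minimal sketch: it assumes ffms2 attaches the alpha as the _Alpha frame property when alpha=True, an 8-bit YUV main clip, and matching frame sizes (the file names and the 709 matrix are placeholders):

Code:
import vapoursynth as vs
core = vs.core

video = core.lsmas.LWLibavSource("episode.mkv")   # main clip (placeholder name)
logo = core.ffms2.Source("logo.avi", alpha=True)  # alpha attached as the _Alpha prop
alpha = core.std.PropToClip(logo, prop="_Alpha")  # pull the alpha out as a GRAY clip

# match the main clip's format and size before merging
logo = core.resize.Bilinear(logo, format=video.format.id, matrix_s="709")
alpha = core.resize.Bilinear(alpha, width=video.width, height=video.height)

# 255 in the mask takes the logo, 0 keeps the video untouched
out = core.std.MaskedMerge(video, logo, alpha, first_plane=True)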
#4783
Registered User
Join Date: Dec 2020
Posts: 80
lvsfunc's overlay_sign would be the usual way to do this.
__________________
CPU: AMD 3700X | GPU: RTX 3070Ti | RAM: 32GB 3200MHz
Discord: @Julek#9391 || GitHub
#4784
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Also, havsfunc's Overlay does not have options to specify frame ranges. And this:

Code:
video1 = lvf.misc.overlay_sign(clip=video1, overlay=fixed_logo, frame_ranges=[8508, 8655])

didn't work; it printed this: https://pastebin.com/GSZRYLqi

At first it didn't accept the two different color spaces with the alpha clip in RGBA, so I did:

Code:
video1 = lsmash...
fixed_logo = core.ffms2.Source(source=path..., alpha=True)
fixed_logo = core.resize.Bilinear(fixed_logo, format=vs.YUV420P8, matrix_s="709")
video1 = lvf.misc.overlay_sign(clip=video1, overlay=fixed_logo, frame_ranges=[8508, 8655])

On the InsertSign side, I still cannot use it, even after manually applying the mentioned commit in my scripts folder.
#4787
Registered User
Join Date: May 2011
Posts: 284
I'd check the alpha first, in a previewer with the color picker: it needs to be 255 where the logo should fully replace the clip, and 0 where the clip should show through untouched. The borders would be some grayscale values for smooth transitions. And it looks like mask has to be passed as a keyword argument: havsfunc.Overlay(clip, logo, mask=alpha).

Oh, those levels... If the alpha values do not reach 255, then for a grayscale clip (the plane does not have to be specified) something like core.std.Levels(max_in=230, max_out=255) gets them to 255. That 230 is the number you test in the previewer; it could be a bit higher or lower.

Last edited by _Al_; 31st January 2023 at 17:26.
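If the alpha tops out around 230, the stretch is just this (a sketch for an 8-bit grayscale alpha clip; the 230 is whatever you measured in the previewer):

Code:
import vapoursynth as vs
import havsfunc
core = vs.core

# map [0..230] to [0..255] so the mask becomes fully opaque where the logo is
alpha = core.std.Levels(alpha, min_in=0, max_in=230, min_out=0, max_out=255)
clip = havsfunc.Overlay(clip, logo, mask=alpha)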
#4788
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Thanks, but I managed to make InsertSign work by just passing the file location string as the input rather than an ffms2 clip.
#4789
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
I'd like to ask about better ways to sharpen anime than the regular sharpeners (like LSFmod). This is regardless of whether or not I descale.

I found AI/NN upscalers with nice results in terms of denoising, deblocking, and sharpening, but are they used on actual fansub encodes (modern anime)? I am only interested in making the encodes slightly sharper (no halos), not too sharp.

So: run one of these upscalers, then SSIM-downsample back to 1080p, then do a MaskedMerge with an edge mask to take only the sharpened edges from the upscaled clip. Is this a good approach?

I have a Ryzen 7900X CPU, a 3060 Ti GPU, and 32 GB of DDR5-5600, so I think this PC can handle a high workload. What do you think?
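Concretely, the merge step I have in mind would be something like this (just a sketch: SSIM_downsample is from muvsfunc, 'upscaled' stands for the output of whatever NN upscaler, and the Prewitt mask is only an example of an edge mask):

Code:
import vapoursynth as vs
import muvsfunc
core = vs.core

# back to 1080p with SSIM downsampling, then match the source format
down = muvsfunc.SSIM_downsample(upscaled, src.width, src.height)
down = core.resize.Bilinear(down, format=src.format.id)

# edge mask from the source luma, grown a little so lines are fully covered
mask = core.std.Prewitt(src, planes=[0]).std.Maximum(planes=[0])

# take only the line art from the sharper clip, keep everything else as-is
out = core.std.MaskedMerge(src, down, mask, first_plane=True)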
#4790
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,031
You might also want to look into VSGAN models from https://upscale.wiki/wiki/Model_Database; most of them are trained on anime.
#4791
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
However, what do you think about the approach I explained above?

Also, besides vsgan, what other similar tools are actually used in encoding real anime releases, not just for testing? I feel like such tools can damage backgrounds or apply unwanted processing like denoising and deblocking; that's why I pointed out the masks.
#4792
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,031
Yes, your approach might work too.
Anime release groups usually use tons of masking and rarely use a filter as is.

You might also want to check out the 'Enhance Everything!' Discord channel (https://discord.com/invite/cpAUpDK) and the 'Irrational Encoding Wizardry' channel (https://discord.gg/qxTxVJGtst).

Code:
# resizing using VSGAN
from vsgan import ESRGAN

vsgan = ESRGAN(clip=clip, device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/1x_BroadcastToStudioLite_485k.pth"
vsgan.load(model)
vsgan.apply()
clip = vsgan.clip
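One detail worth noting: VSGAN expects an RGB clip, so a YUV source needs a round trip; a minimal sketch (the BT.709 matrix and the 8-bit output format are assumptions to adjust to your source):

Code:
import vapoursynth as vs
core = vs.core

# VSGAN models run on RGB: convert in, run ESRGAN as above, convert back
clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")
# ... vsgan = ESRGAN(clip=clip, device="cuda") etc. ...
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")

Cu Selur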
#4793
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
What other tools exist for this task besides vsgan/ESRGAN? And how can we compare their results and see which is best for anime?

I know these tools do many enhancements, but I am only interested in getting the lines sharper: the line art itself, not backgrounds or textures. So I don't want to make those silly "4K upscaled anime" releases, or even make everything very sharp... I think you get what I mean.

I know fansub releases do a lot of masking, me included. However, I haven't seen a VS script for a release that uses something like vsgan, which made me wonder why it is not used.
#4794
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,031
Afaik most release groups use normal filters or script collections like lvsfunc & co., and no magic tools.

Why it's not used is probably easy:
a. It requires up-to-date hardware, especially if you want to encode tons of content. Encoding on multiple machines to speed up processing is hard if each of them requires a 3000-series+ NVIDIA GPU to not totally suck, and they are not really fast even then.
b. Assuming you don't need to do restoration, you can achieve most of what AI filtering does with other filters if you spend enough effort.
c. You don't have much control, aside from different types of masking, over what the AI stuff does (assuming you didn't train the models yourself).
d. Folks filtering anime often want to keep artifacts which they perceive as details of the source. (There is also the discussion of which release of anime xy has the colors 'right'; often it has to be the one that came out earlier...)

=> To get to the bottom of things, you probably will need to go to the Discord channels of the groups and ask them; some might actually reply honestly and not simply say 'ai bad => you bad' *gig*

Cu Selur
#4795
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
I get this error: https://pastebin.com/Kj8QBQDi

I tried many tools but could not get my release to be as sharp as another good one. I talked to them, but they didn't give many details on how they achieved it, hence this method.

I have a Ryzen 7900X and a 3060 Ti, so I guess my PC qualifies. I get that AI upscaling is bad for anime releases, but I don't plan to use it that way; I only want the good sharp lines. Everything else stays exactly as it was.

Last edited by ~ VEGETA ~; 8th February 2023 at 19:48.
#4796
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,031
Btw, can someone port EZDenoise to VapourSynth?
Original:

Code:
function EZdenoise(clip Input, int "thSAD", int "thSADC", int "TR", int "BLKSize", int "Overlap", int "Pel", bool "Chroma", bool "out16")
{
  thSAD   = default(thSAD, 150)
  thSADC  = default(thSADC, thSAD)
  TR      = default(TR, 3)
  BLKSize = default(BLKSize, 8)
  Overlap = default(Overlap, 4)
  Pel     = default(Pel, 1)
  Chroma  = default(Chroma, false)
  out16   = default(out16, false)

  Super = Input.MSuper(Pel=Pel, Chroma=Chroma)
  Multi_Vector = Super.MAnalyse(Multi=true, Delta=TR, BLKSize=BLKSize, Overlap=Overlap, Chroma=Chroma)
  Input.MDegrainN(Super, Multi_Vector, TR, thSAD=thSAD, thSAD2=int(float(thSAD*0.9)), thSADC=thSADC, thSADC2=int(float(thSADC*0.9)), out16=out16)
}
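Not a full port, but a rough fixed-TR approximation with the current vapoursynth-mvtools would look something like this (just a sketch: Degrain3 stands in for MDegrainN with TR fixed to 3, and thSAD2/out16 have no direct equivalent):

Code:
import vapoursynth as vs
core = vs.core

def ez_denoise(clip, thsad=150, thsadc=None, blksize=8, overlap=4, pel=1, chroma=False):
    thsadc = thsad if thsadc is None else thsadc
    sup = core.mv.Super(clip, pel=pel, chroma=chroma)
    # backward/forward vectors for delta 1..3, in the order Degrain3 expects
    vectors = []
    for delta in (1, 2, 3):
        for isb in (True, False):
            vectors.append(core.mv.Analyse(sup, isb=isb, delta=delta,
                                           blksize=blksize, overlap=overlap,
                                           chroma=chroma))
    return core.mv.Degrain3(clip, sup, *vectors, thsad=thsad, thsadc=thsadc)

denoised = ez_denoise(clip, thsad=150)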
#4797
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,031
@~ VEGETA ~: No clue, first time I see that error. :/
I can send you a link to my current 'torch-AddOn' for Hybrid, which is basically a folder with a portable VapourSynth that also includes vsgan (and tons of other stuff).

Cu Selur

Last edited by Selur; 8th February 2023 at 19:59.
#4798
The cult of personality
Join Date: May 2013
Location: Planet Vegeta
Posts: 155
Hello,

I am trying to use https://github.com/YomikoR/GetFnative with base dimensions of 1920x1080 and a fractional height of 847.047 (or so, I don't remember exactly). However, I'd like to use the SSIM downsampler instead of regular Spline36 if possible. I tried, but it always produces a shifted image. I tried different manual shift values but couldn't get it right, unlike the original code example shown on the linked page. Any tips?
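What I tried looks roughly like this; it assumes muvsfunc's SSIM_downsample forwards extra keyword arguments to its internal resizers and that the fractional window is centred in the 1080 frame (both assumptions to verify; the numbers are placeholders):

Code:
import vapoursynth as vs
import muvsfunc
core = vs.core

base_w, base_h = 1920, 1080
src_h = 847.047                   # fractional native height from GetFnative
src_w = src_h * base_w / base_h   # keep the aspect ratio
w, h = 1506, 848                  # ceil of the fractional size
src_top = (h - src_h) / 2         # centre the fractional window vertically
src_left = (w - src_w) / 2

# the same src_top/src_height a Spline36 downscale would use
down = muvsfunc.SSIM_downsample(clip, w, h, src_top=src_top, src_left=src_left,
                                src_width=src_w, src_height=src_h)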
#4799
Registered User
Join Date: Jul 2018
Posts: 876
The magic is inside mvtools, not in this simple mvtools usage function. First it would be good to port to VapourSynth all the new features of today's mvtools (the post-2.7.45 builds up to the end of 2022, plus the new features planned for 2023, like finally fixing the quality issue of https://github.com/pinterf/mvtools/issues/59 for non-4:4:4 sources).

For a start, VS may still not support even the simple MDegrainN from the old 2.7.45 era. As of the latest commit in 2023, https://github.com/dubhater/vapoursy...f67254580b7ab9, it is only starting to support a fixed tr up to 6. It is light-years behind the latest (end of 2022) MDegrainN, which already has 'spatial' multi-pass blending modes and will get 'temporal' multi-pass blending as a next step.

It also lacks the most useful features, like the several interpolated-overlap modes (applied at the MDegrain stage, while running MAnalyse at max speed non-overlapped, or using a hardware accelerator to take MVs from an MPEG encoder chip), including the new quality/performance-balanced 'diagonal' overlap mode, which has only 2x the number of blocks yet comes close to the quality of the blksize/2 4x-overlap mode of the old mvtools overlap design. Also missing is runtime sub-sample shifting, which saves host RAM traffic and avoids trashing the expensive on-chip CPU caches with pre-calculated sub-sample-refined planes. The granularity required for correct pel=4 UV processing in 4:2:0 is pel/8, so if designed the old way via MSuper it will burden the memory subsystem even more.

Also, running MV analysis with a 'large' tr of 10 or more opens up better possibilities for intermediate linear or non-linear grading of the MVs along the time axis, after MAnalyse and before MDegrain (the MVLPF feature currently implemented in MDegrainN). Using a tr of only 1..6 gives too few time samples of the MVs to make a good FIR convolution linear LPF.

Last edited by DTL; 6th March 2023 at 22:54.
#4800
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,031
Okay, so in conclusion: with https://github.com/dubhater/vapoursynth-mvtools, EZDenoise can't be ported atm.
Sad news, but I will have to live with it. Thanks.