18th July 2022, 18:04   #34
ChaosKing
Registered User
 
Join Date: Dec 2005
Location: Germany
Posts: 1,795
@tormento
Code:
import lvsfunc
from vsgan import ESRGAN

# Dehalo first, then upscale with an ESRGAN model
clip = lvsfunc.dehalo.masked_dha(clip, rx=2, ry=2)

vsgan = ESRGAN(clip=clip, device="cuda")
model = "some-anime-model.pth"  # forgot which one, test some anime models from here https://upscale.wiki/wiki/Model_Database
vsgan.load(model)
vsgan.apply()
clip = vsgan.clip  # grab the upscaled clip back out


@PoeBear
It should at least be much faster than Waifu2x.


I tried many ESRGAN models from https://upscale.wiki/wiki/Model_Database, and depending on the source (and what the model was trained on), the results were much, much better than any Waifu2x upscale I've ever seen.

Also, Waifu2x was made/trained for more modern art, not 80s/90s animation. In my tests Waifu2x looks kinda "good" when used with a stronger denoise parameter, but at the same time it destroys all the fine detail.

FSRCNNX should be better and less destructive.
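For anyone who wants to try it: here is roughly how FSRCNNX can be applied inside a VapourSynth script via the vs-placebo plugin. This is just a sketch, not from my actual script; the source filter, the format conversion, and the exact .glsl filename are assumptions, so point shader= at whichever FSRCNNX build you actually downloaded.

Code:
```python
# Sketch: luma upscaling with an FSRCNNX GLSL shader through vs-placebo.
# Assumes the vs-placebo plugin is installed and an FSRCNNX .glsl file
# is on disk; the filename below is only an example.
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource("source.mkv")    # any source filter works
clip = clip.resize.Bicubic(format=vs.YUV444P16)  # vs-placebo wants high bit depth
clip = core.placebo.Shader(
    clip,
    shader="FSRCNNX_x2_16-0-4-1.glsl",  # example filename from the FSRCNNX releases
    width=clip.width * 2,
    height=clip.height * 2,
)
```

Since FSRCNNX runs as a GPU shader it is cheap enough for real-time playback in mpv, which is where it is normally used; running it through vs-placebo just lets you bake the same upscale into an encode.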


Now I have to learn how to train my own ESRGAN model.
__________________
AVSRepoGUI // VSRepoGUI - Package Manager for AviSynth // VapourSynth
VapourSynth Portable FATPACK || VapourSynth Database