Old 25th January 2022, 10:46   #83  |  Link
PRAGMA
Registered User
 
Join Date: Jul 2019
Posts: 73
Quote:
Originally Posted by Selur View Post
Ah, okay.
Things seem to work fine with ESRGAN models (like BSRGAN), but using:
Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi",cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex",format=vs.YUV420P8,alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
from vsgan import EGVSR
vsgan = EGVSR(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_iter420000_EGVSR.pth"
vsgan.load(model, nb=10, degradation="BD", out_nc=3, nf=64)
vsgan.apply() # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1056, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
I run out of VRAM
Code:
CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 8.00 GiB total capacity; 6.62 GiB already allocated; 0 bytes free; 6.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Sadly 'overlap' seems to only be implemented for ESRGAN, since when using "vsgan.apply(overlap=16)" in the above example, I get:
Code:
Python exception: apply() got an unexpected keyword argument 'overlap'

Cu Selur
Hi, there are some problems I've discovered with EGVSR at the moment. They will take more research and testing than initially expected.

Currently, half precision (fp16) for EGVSR does not work correctly, even with the fixes on master.

It's also quite VRAM-intensive, since it essentially stores and runs inference on multiple frames at a time (6 in total by default).

One mistake people make is letting VapourSynth use its default multi-threading with EGVSR, when it should be disabled with `core.num_threads = 1`. With a single thread, the model only runs on the current frame plus the next (interval) frames at a time, instead of e.g. 72 frames with a num_threads of 12.
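To make the arithmetic above concrete, here's a rough illustration (a hypothetical helper, not part of VSGAN): VapourSynth requests `num_threads` frames concurrently, and each EGVSR call holds a window of frames, 6 by default.

```python
# Rough worst-case arithmetic behind the num_threads advice.
# Illustrative only -- not VSGAN code; the 6-frame default comes from
# the EGVSR interval mentioned above.

def peak_frames_in_vram(num_threads: int, frames_per_call: int = 6) -> int:
    """Worst-case number of frames held on the GPU at once when
    VapourSynth dispatches num_threads frame requests in parallel."""
    return num_threads * frames_per_call

print(peak_frames_in_vram(12))  # default threading on a 12-thread CPU -> 72
print(peak_frames_in_vram(1))   # with core.num_threads = 1 -> 6
```

So dropping to a single thread cuts the worst case from 72 frames in VRAM to 6.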

The fixes I'm speaking of are in the GitHub repo, but not in a tagged release yet. You could install straight from GitHub master if you want to give it a quick test.

I'm still working on getting half precision running properly for EGVSR, and on methods to reduce VRAM use, but sadly it's not going all that well. It may simply take a lot of VRAM, considering the number of frames the network processes at once.
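As a side note, the OOM message quoted above hints at `max_split_size_mb`. If the failure is fragmentation-related (reserved >> allocated) rather than genuine exhaustion, setting `PYTORCH_CUDA_ALLOC_CONF` before PyTorch touches CUDA can sometimes help; the value below is an example, not a recommendation from the VSGAN docs.

```python
import os

# Must be set before the first CUDA allocation, i.e. before torch's CUDA
# side initializes. max_split_size_mb caps the block size the caching
# allocator will split, which can reduce fragmentation-related OOMs.
# It does not add any VRAM.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # max_split_size_mb:128
```

You'd put this at the very top of the VapourSynth script, above the `import vsgan` line.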

And as for overlap: yes, it's not implemented for EGVSR at the moment, but perhaps it's something we could try one day to lower VRAM requirements.
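For anyone curious what an overlap mode would even do, here's a hedged sketch of the tiling idea: split each frame into overlapping tiles, upscale them separately, and discard the overlap region so seams fall where tiles are thrown away. This is illustrative only, not the VSGAN API.

```python
# Sketch of overlapping-tile spans along one axis. A real tiled-inference
# mode would run the model per tile and blend/crop the overlap; here we
# just compute the (start, end) coordinates each tile would cover.

def tile_coords(size: int, tile: int, overlap: int):
    """Yield (start, end) spans of length <= tile covering [0, size),
    each overlapping the previous span by `overlap` pixels."""
    step = tile - overlap
    start = 0
    while True:
        end = min(start + tile, size)
        yield (start, end)
        if end == size:
            break
        start += step

# e.g. a 640-wide frame cut into 256-wide tiles with 16 px overlap:
print(list(tile_coords(640, 256, 16)))  # [(0, 256), (240, 496), (480, 640)]
```

Each tile only needs its own slice in VRAM, which is why ESRGAN's overlap option helps on smaller cards.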