Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Doom9's Forum > Capturing and Editing Video > VapourSynth
24th January 2022, 22:11   #81
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,345
Selur, are you running 1.6.3? Maybe it's a memory release issue, but it seems to work OK for me using ESRGAN (or derivatives). Multiple seeks are OK on a resolution that definitely uses tiles.
25th January 2022, 05:41   #82
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Yes, I'm using VSGAN 1.6.3 (with NVIDIA Game Ready driver version 511.23). After a system restart I can do a few more seeks inside the source, but after a bit VRAM stays at max until the preview crashes. :/
__________________
Hybrid here in the forum, homepage
25th January 2022, 10:46   #83
PRAGMA
Registered User
 
Join Date: Jul 2019
Posts: 73
Quote:
Originally Posted by Selur
Ah, okay.
Things seem to work fine with ESRGAN models (like BSRGAN), but using:
Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi",cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex",format=vs.YUV420P8,alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
from vsgan import EGVSR
vsgan = EGVSR(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_iter420000_EGVSR.pth"
vsgan.load(model, nb=10, degradation="BD", out_nc=3, nf=64)
vsgan.apply() # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1056, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
I run out of VRAM
Code:
CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 8.00 GiB total capacity; 6.62 GiB already allocated; 0 bytes free; 6.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Sadly, 'overlap' seems to be implemented only for ESRGAN, since when using "vsgan.apply(overlap=16)" in the above example, I get:
Code:
Python exception: apply() got an unexpected keyword argument 'overlap'

Cu Selur
Hi, there are some problems I've discovered with EGVSR at the moment. They will take more research and testing than initially expected.

Currently, half accuracy for EGVSR does not work correctly, even with fixes on master atm.

It's also somewhat VRAM-intensive, since it essentially stores and runs multiple frames (6 in total by default) at a time.

One mistake people make is letting VS use its default multi-threading with EGVSR, when it should be disabled with `core.num_threads = 1`. Once you do this, it will only run the model on the current frame plus the next n (interval) frames at a time, instead of e.g. 72 frames with a num_threads of 12.
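The arithmetic behind that advice can be sketched in plain Python (a rough illustration only; the interval of 6 is the default frame count mentioned above, and real VRAM use also depends on resolution and model):

```python
# Rough sketch: every VapourSynth worker thread requests a frame concurrently,
# and each EGVSR request pulls `interval` frames into VRAM at once.
def frames_in_flight(num_threads: int, interval: int = 6) -> int:
    return num_threads * interval

print(frames_in_flight(12))  # default-ish threading -> 72 frames at once
print(frames_in_flight(1))   # core.num_threads = 1  -> only 6 frames
```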

The fixes I'm speaking of are in the GitHub repo right now, but not in a released version yet. You can install straight from the GitHub master if you want to give it a quick test.

I'm still working on getting half-accuracy properly working for EGVSR, and on methods to reduce VRAM usage, but sadly it's just not going all that well. It might simply take a lot of VRAM, considering the number of frames the network processes at once.

And as for overlap: yes, it's not implemented for EGVSR at the moment, but perhaps that's something we could try one day to lower the VRAM requirements.
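As a side note on the OOM message quoted above: it mentions `max_split_size_mb`, which can be passed to PyTorch's CUDA allocator via an environment variable. This is an untested sketch; the value 128 is an arbitrary example, and whether it helps with this particular leak is unknown:

```python
import os

# Must be set before the first `import torch`, otherwise the CUDA allocator
# is already configured and the setting is ignored.
# "128" is an arbitrary example split size, not a tested recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```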
25th January 2022, 15:36   #84
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Sadly, using 'core.num_threads = 1' doesn't help here.
Using:
Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Limit thread count to 1
core.num_threads = 1
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi",cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex",format=vs.YUV420P8,alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
from vsgan import EGVSR
vsgan = EGVSR(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_iter420000_EGVSR.pth"
# using model parameters from 4x_iter420000_EGVSR.defaults
vsgan.load(model, nb=10, degradation="BD", out_nc=3, nf=64)
vsgan.apply() # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1056, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
VRAM usage directly jumps to the max, and the preview reports:
Code:
CUDA out of memory. Tried to allocate 56.00 MiB (GPU 0; 8.00 GiB total capacity; 6.62 GiB already allocated; 0 bytes free; 6.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
and it stays stuck at max VRAM usage until I close the viewer.
The situation is similar when using "core.num_threads = 1" with:
Code:
vsgan = ESRGAN(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_BSRGAN.pth"
vsgan.load(model)
vsgan.apply(overlap=16) # 2560x1408
The first frame works, but VRAM usage is at max and stays there.
The old 1.5.0 version worked fine with the same source and model.

Trying to go back to 1.5.0 with
Code:
I:\Hybrid\64bit\Vapoursynth>python -m pip install --user --force VSGAN==1.5.0
I get:
Code:
Collecting VSGAN==1.5.0
  Using cached vsgan-1.5.0-py3-none-any.whl (11 kB)
Collecting numpy<2.0.0,>=1.19.5
  Using cached numpy-1.22.1-cp39-cp39-win_amd64.whl (14.7 MB)
Installing collected packages: numpy, VSGAN
ERROR: Exception:
Traceback (most recent call last):
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\cli\base_command.py", line 164, in exc_logging_wrapper
    status = run_func(*args)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\cli\req_command.py", line 205, in wrapper
    return func(self, options, args)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\commands\install.py", line 404, in run
    installed = install_given_reqs(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\req\__init__.py", line 73, in install_given_reqs
    requirement.install(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\req\req_install.py", line 765, in install
    scheme = get_scheme(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\locations\__init__.py", line 208, in get_scheme
    old = _distutils.get_scheme(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\locations\_distutils.py", line 130, in get_scheme
    scheme = distutils_scheme(dist_name, user, home, root, isolated, prefix)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip\_internal\locations\_distutils.py", line 69, in distutils_scheme
    i.finalize_options()
  File "distutils\command\install.py", line 274, in finalize_options
  File "distutils\command\install.py", line 437, in finalize_other
distutils.errors.DistutilsPlatformError: User base directory is not specified
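For what it's worth, this `DistutilsPlatformError` usually means pip's `--user` scheme has no user base configured in a portable/embedded Python. A hedged sketch of two possible workarounds (the userbase path below is an arbitrary example, untested with Hybrid's portable setup):

```shell
:: Windows cmd sketch - the userbase directory below is an arbitrary example.
:: Either point the user scheme at a writable directory...
set PYTHONUSERBASE=I:\Hybrid\64bit\Vapoursynth\userbase
python -m pip install --user --force-reinstall VSGAN==1.5.0

:: ...or skip the user scheme entirely and install into site-packages:
python -m pip install --force-reinstall VSGAN==1.5.0
```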
I'll probably have to uninstall and reinstall VSGAN to switch back to 1.5.0, but I'll stick with the current 1.6.3 and see whether someone else has the same problem.
Okay, scratch that: I made a backup of my Vapoursynth folder before updating to 1.6.3. Using version 1.5.0 and the following script, memory usage:
Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi",cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex",format=vs.YUV420P8,alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
from vsgan import VSGAN
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
vsgan = VSGAN(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_BSRGAN.pth"
vsgan.load_model(model)
vsgan.run(overlap=16) # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=1920, h=1056, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
stays at 3.1 GB while the preview is open, and I can again switch between frames. (I kept the 1.6.3 folder so I can switch back easily and test stuff if needed.)

Cu Selur
25th January 2022, 20:47   #85
PRAGMA
Registered User
 
Join Date: Jul 2019
Posts: 73
Quote:
Originally Posted by Selur
Okay, also got another issue with the current version.
Using:
Code:
# Imports
import os
import sys
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'I:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import mvsfunc
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="G:/TestClips&Co/files/test.avi",cachefile="E:/Temp/avi_6c441f37d9750b62d59f16ecdbd59393_853323747.ffindex",format=vs.YUV420P8,alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# adjusting color space from YUV420P8 to RGB24 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "I:/Hybrid/64bit/vsgan_models/4x_BSRGAN.pth"
vsgan.load(model)
vsgan.apply() # 2560x1408
clip = vsgan.clip
# adjusting resizing
clip = core.fmtc.resample(clip=clip, w=640, h=352, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGB48 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
When I open this, it takes a while to calculate the first frame, during which VRAM usage goes up to the max (8 GB) and doesn't go down again, which keeps me from looking at another frame. :/
The same happens when using "vsgan.apply(overlap=16)".
-> At least here, the new version only works for one frame.
I also tried various other models; all seem to have the same effect on my system.
Sometimes I can step through a few frames (two work, the third always fails), but then VRAM gets stuck and that's it.
Hi, so, the VRAM requirements are generally high, but you shouldn't have this many issues in a <480p, 4x-model scenario. When you say you get stuck or can't render a frame, what exactly is the blocker? The VRAM seems to stay in use until you try to render a new frame, at which point it clears very fast and then gets re-used incredibly quickly.

Have you tried using vspipe instead of going through vsedit, just to see what performance you actually get with as little going on as possible? (`vspipe script.vpy . -p`)

Perhaps try closing all vsedit instances, check that GPU usage has gone down (it will still have a bit loaded by PyTorch for some reason), and then reopen and try? Sometimes randomly seeking and clicking play a few times just uses too much VRAM before it clears, hence the crashes. But if you seek around without clicking play more than a few times, it shouldn't give you trouble. Clicking F5/Preview after a fair while also causes trouble, though these are vsedit issues to do with queue caches or something.
25th January 2022, 22:04   #86
PRAGMA
Registered User
 
Join Date: Jul 2019
Posts: 73
Update: v1.6.4 fixes a memory leak that happens for reasons I honestly have no idea about, but alas, if anyone had any VRAM issues, please try v1.6.4.
Some other changes are there as well, mainly to how the half=True param works. Spoiler: it doesn't exist anymore, but don't worry, see the changelog.
26th January 2022, 16:54   #87
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
v1.6.4 works again for me. Thanks!
14th February 2022, 23:51   #88
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Quote:
Originally Posted by Selur
v1.6.4 works again for me. Thanks!
I get instant crashes on 1.6.4; 1.6.0-1.6.3 work for some frames before crashing...
The only "stable" version for me seems to be 1.5.0.

Testing with model: RealESRGAN_x2plus.pth

Are there any debug logs I can activate to figure out the issue?

Hardware:
NVIDIA RTX 3090 (tried both 4xx and 5xx series drivers)
AMD 3900X
32 GB RAM
__________________
(i have a tendency to drunk post)
15th February 2022, 08:14   #89
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
What does your script look like?
What resolution is your input?
You might simply be running out of VRAM.
15th February 2022, 11:40   #90
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Quote:
Originally Posted by Selur
What does your script look like?
What resolution is your input?
You might simply be running out of VRAM.
The source resolution I'm testing with is 720x540; script:
Code:
chroma = video.resize.Spline36(video.width*2,video.height*2)
video = video.fmtc.resample (css="444")
video = video.fmtc.matrix (mat="709", col_fam=vs.RGB)
vsgan = VSGAN(video, device="cuda")
vsgan.load_model(r"RealESRGAN_x2plus.pth")
vsgan.run()
video = vsgan.clip
video = video.fmtc.matrix (mat="709", col_fam=vs.YUV, bits=16)
video = video.fmtc.resample (css="420")
video = video.fmtc.bitdepth (bits=8)
video= core.std.Merge(clipa=video, clipb=chroma, weight=[0, 1])
video.set_output()
It runs fine with 1.5.0; during an encode, HWiNFO64 reports ~3.5-4 GB of VRAM allocated.
15th February 2022, 12:02   #91
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Yeah, that will not work with the current VSGAN, as the syntax changed.
Instead of:
Code:
vsgan = VSGAN(video, device="cuda")
vsgan.load_model(r"RealESRGAN_x2plus.pth")
vsgan.run()
video = vsgan.clip
you would call:
Code:
vsgan = ESRGAN(clip=video ,device="cuda")
vsgan.load(r"RealESRGAN_x2plus.pth") # load() instead of load_model()
vsgan.apply() # <- not run()
video = vsgan.clip
see: https://vsgan.phoeniix.dev/en/stable...g-started.html and the changelog of v1.6.0

Cu Selur
19th February 2022, 14:28   #92
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Quote:
Originally Posted by Selur
Yeah, that will not work with the current VSGAN, as the syntax changed.
see: https://vsgan.phoeniix.dev/en/stable...g-started.html and the changelog of v1.6.0

Cu Selur
Thanks for the help, I'll give it another try with the new syntax.
21st April 2022, 18:08   #93
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Is it normal for Linux to be almost 2x faster than Windows when using the exact same VapourSynth script with VSGAN?

Windows: 3.98fps
Linux: 7.67fps

Or did I do something wrong with the environment setup for pytorch/vapoursynth/vsgan?
22nd April 2022, 11:10   #94
ReinerSchweinlin
Registered User
 
Join Date: Oct 2001
Posts: 454
A speed difference of this magnitude is a sign of different configurations... Maybe, if you are running an NVIDIA card, setting it to compute mode in Windows will help?
22nd April 2022, 15:59   #95
knumag
Registered User
 
Join Date: Oct 2018
Posts: 7
Hi.
I'm using a 1060 6 GB and only getting around 0.2 fps with "2x_VHS-upscale-and-denoise_Film_477000_G.pth" from https://upscale.wiki/wiki/Model_Database
I'm also getting this warning in cmd:
Code:
vsgan\utilities.py:36: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_new.cpp:998.)
  torch.frombuffer(
Is that speed correct for a 1060 6 GB?
Looking at Task Manager, the GPU jumps to 20% every 2-3 seconds but is idle in between.
I also see that my dedicated memory is almost full. Is that the reason for it being that slow?
Do I need another card with more VRAM?
I have a Threadripper 1950X, but there's no way to use that instead of the GPU, is there?

With "RealESRGAN_x2plus.pth" I'm getting 0.5 fps.
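Regarding the UserWarning quoted above: it fires when `torch.frombuffer` is handed a read-only buffer. A stdlib-only sketch of what "writable" means here (torch itself is left out, and whether the warning has any effect on speed is a separate question):

```python
frame = bytes(16)                        # immutable buffer, like a read-only frame plane
print(memoryview(frame).readonly)        # True  - the case PyTorch warns about

frame_copy = bytearray(frame)            # copying into a bytearray gives a writable buffer
print(memoryview(frame_copy).readonly)   # False - safe to modify
```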

22nd April 2022, 17:38   #96
mastrboy
Registered User
 
Join Date: Sep 2008
Posts: 365
Quote:
Originally Posted by ReinerSchweinlin
A speed difference of this magnitude is a sign of different configurations... Maybe, if you are running an NVIDIA card, setting it to compute mode in Windows will help?
It's an NVIDIA RTX 3090; I can't find that option in the control panel. Is it only available on Quadro cards?

I can see a compute mode via the CLI though; it's currently configured to Default:
Code:
nvidia-smi -q | Select-String -Pattern "compute"
    Compute Mode : Default
22nd April 2022, 17:49   #97
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
What does the script look like? What is the resolution of the source?
22nd April 2022, 17:55   #98
knumag
Registered User
 
Join Date: Oct 2018
Posts: 7
PAL SD from VHS, after QTGMC.
But something must be wrong somewhere; I'm just getting black video as output...
Code:
import vapoursynth as vs
core = vs.core
core.num_threads = 8
core.max_cache_size = 6000
video = core.lsmas.LWLibavSource(source=r"1.mp4")
from vsgan import ESRGAN
video = core.fmtc.resample(clip=video, css="444")
video = core.fmtc.matrix(clip=video, mat="709", col_fam=vs.RGB)
vsgan = ESRGAN(clip=video, device="cuda")
vsgan.load(r"RealESRGAN_x2plus.pth")
vsgan.apply()
video = vsgan.clip
video = core.fmtc.matrix(clip=video, mat="709", col_fam=vs.YUV, bits=16)
video = core.fmtc.resample(clip=video, css="420")
video = core.fmtc.bitdepth(clip=video, bits=8)
video = core.resize.Spline36(video, 1440, 1080)
video.set_output()
I changed it to this and it's working:
Code:
import vapoursynth as vs
core = vs.core
core.num_threads = 8
core.max_cache_size = 6000
video = core.lsmas.LWLibavSource(source=r"1.mp4")
from vsgan import ESRGAN
video = core.resize.Bicubic(clip=video, format=vs.RGB24, matrix_in_s="709", range_s="limited")
vsgan = ESRGAN(clip=video, device="cuda")
vsgan.load(r"RealESRGAN_x2plus.pth")
vsgan.apply()
video = vsgan.clip
video = core.fmtc.matrix(clip=video, mat="709", col_fam=vs.YUV, bits=16)
video = core.fmtc.resample(clip=video, css="420")
video = core.fmtc.bitdepth(clip=video, bits=8)
video = core.resize.Spline36(video, 1440, 1080)
video.set_output()

22nd April 2022, 19:36   #99
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 7,259
Script seems fine to me.
(As a side note: using https://github.com/HolyWu/vs-realesr...r/vsrealesrgan is nearly two times faster than VSGAN with RealESRGAN here.)
Do you use the same driver version on Linux and Windows?
25th April 2022, 13:16   #100
knumag
Registered User
 
Join Date: Oct 2018
Posts: 7
Quote:
Originally Posted by Selur
Script seems fine to me.
(As a side note: using https://github.com/HolyWu/vs-realesr...r/vsrealesrgan is nearly two times faster than VSGAN with RealESRGAN here.)
Do you use the same driver version on Linux and Windows?
I'm having problems installing vsrealesrgan.
I followed your guide here: https://forum.selur.net/thread-1858.html
But when installing vsdpir and vsrealesrgan, I'm getting errors.

Code:
 Using cached VapourSynth-58.zip (558 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [15 lines of output]
      Traceback (most recent call last):
        File "C:\Users\knumag\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 64, in <module>
          dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
        File "C:\Users\knumag\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 38, in query
          reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
      FileNotFoundError: [WinError 2] The system cannot find the file specified

      During handling of the above exception, another exception occurred:

      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\knumag\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 67, in <module>
          raise OSError("Couldn't detect vapoursynth installation path")
      OSError: Couldn't detect vapoursynth installation path
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Any suggestions?
Tags
esrgan, gan, upscale, vapoursynth