Welcome to Doom9's Forum, THE place to be for everyone interested in DVD conversion.

Doom9's Forum > Capturing and Editing Video > VapourSynth

Old 3rd February 2023, 07:55   #181  |  Link
PatchWorKs
Registered User
 
 
Join Date: Aug 2002
Location: Italy
Posts: 303
Quote:
Originally Posted by Selur
Not that surprising, since there are not many threads that are image centric.
Usually it's about video. And image based filters can easily lead to temporal inconsistencies.
A very interesting reply from Aydin's author on my GitHub "issue":
Quote:
Originally Posted by Ahmet Can Solak
I want to clarify that Aydin works on spatio-temporal data(TZYX) when all four dimensions are available and can do TYX as well. Aydin doesn't read video file formats directly though, if you can save it as a file format that is supported then you can denoise movies too. With biological live imaging datasets, we use Aydin for denoising movies on daily basis.
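Since Aydin reads image files rather than video, a simple bridge is to dump the clip to TIFF with ffmpeg and feed those files to Aydin's command-line interface. This is a minimal sketch: the file names are examples, and running the tools via `subprocess` assumes both `ffmpeg` and the `aydin` CLI are on your PATH.

```python
import subprocess

def extract_cmd(video, pattern="frame_%05d.tif"):
    """ffmpeg call that dumps every frame of `video` to single-image TIFFs,
    a format Aydin can read (file names here are just examples)."""
    return ["ffmpeg", "-i", video, pattern]

def denoise_cmd(image):
    """Aydin CLI call to denoise one extracted image."""
    return ["aydin", "denoise", image]

# Uncomment to actually run the pipeline:
# subprocess.run(extract_cmd("input.mkv"), check=True)
# subprocess.run(denoise_cmd("frame_00001.tif"), check=True)
```

Note that per-image denoising gives up the temporal (T) axis the author mentions; to exploit TYX you would stack the frames into one multi-page TIFF (e.g. with the tifffile package) and denoise that single file instead.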
__________________
HYbrid Multimedia Production Suite project @ GitHub
Old 14th February 2023, 08:43   #182  |  Link
PatchWorKs
Registered User
 
 
Join Date: Aug 2002
Location: Italy
Posts: 303
According to Aydin's author, @Selur and anyone else interested in participating are invited to "make bug reports/feature requests here on GitHub":
https://github.com/royerlab/aydin/is...ent-1428538108
Old 11th March 2023, 11:21   #183  |  Link
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,255
BTW, has anyone tested whether there are speed differences depending on which RGB color space VSGAN is fed?
I just noticed that for me (only tested a few models) RGBS seems to be ~10% faster than any other RGB color space I feed it.
-> Is it just me, or can others confirm this?

Cu Selur
__________________
Hybrid here in the forum, homepage
Old 11th March 2023, 16:11   #184  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,340
Quote:
Originally Posted by Selur
btw. has anyone tested if there are speed differences depending on the RGB color space VSGAN is fed?
Just noticed that for me (only tested a few models), RGBS seems to be ~10% faster than when I feed any other RGB color space.
-> Is it just me or can others confirm this?

Cu Selur
Similar for me; RGBS was about 7-10% faster than RGB24 (the source was RGB24).
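For anyone who wants to reproduce the comparison, a minimal timing harness could look like this (a sketch: `clip_rgbs`/`clip_rgb24` are assumed to be VSGAN output clips built as in the scripts elsewhere in the thread, and `get_frame` is what forces a frame to actually render):

```python
import time

def measure_fps(render_frame, n_frames=100):
    """Call render_frame(i) for n_frames frames and return frames per second."""
    start = time.perf_counter()
    for i in range(n_frames):
        render_frame(i)
    return n_frames / (time.perf_counter() - start)

# Example (VapourSynth): compare the same pipeline fed RGBS vs. RGB24.
# fps_rgbs  = measure_fps(lambda i: clip_rgbs.get_frame(i))
# fps_rgb24 = measure_fps(lambda i: clip_rgb24.get_frame(i))
# print(f"RGBS: {fps_rgbs:.2f} fps, RGB24: {fps_rgb24:.2f} fps")
```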
Old 11th March 2023, 17:41   #185  |  Link
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,255
Thanks for testing
Old 11th March 2023, 22:30   #186  |  Link
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,255
Argh, I've run into a strange problem when using vsdpir and vsgan inside the same script:

Using both with RGBH:
Code:
import vapoursynth as vs
core = vs.core
# 'clip' is assumed to be loaded earlier in the script (source filter omitted here)
from vsdpir import dpir as DPIR
# adjusting color space from YUV420P8 to RGBH for vsDPIRDenoise
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
# denoising using DPIRDenoise
clip = DPIR(clip=clip, strength=5.000, task="denoise", device_index=0, trt=True, trt_cache_path="J:/tmp")
# changing range from limited to full range
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "C:/Users/Selur/Desktop/testing/1x_Dotzilla_Compact_80k_net_g.pth"
vsgan.load(model)
vsgan.apply() # 640x352
clip = vsgan.clip
works.

Using both with RGBS:
Code:
import vapoursynth as vs
core = vs.core
# 'clip' is assumed to be loaded earlier in the script (source filter omitted here)
from vsdpir import dpir as DPIR
# adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
# denoising using DPIRDenoise
clip = DPIR(clip=clip, strength=5.000, task="denoise", device_index=0, trt=True, trt_cache_path="J:/tmp")
# changing range from limited to full range
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "C:/Users/Selur/Desktop/testing/1x_Dotzilla_Compact_80k_net_g.pth"
vsgan.load(model)
vsgan.apply() # 640x352
clip = vsgan.clip
works.

Using VSDPIR with RGBS and VSGAN with RGBH:
Code:
import vapoursynth as vs
core = vs.core
# 'clip' is assumed to be loaded earlier in the script (source filter omitted here)
from vsdpir import dpir as DPIR
# adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
# denoising using DPIRDenoise
clip = DPIR(clip=clip, strength=5.000, task="denoise", device_index=0, trt=True, trt_cache_path="J:/tmp")
# changing range from limited to full range
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# adjusting color space from RGBS to RGBH for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, range_s="full", dither_type="error_diffusion")
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "C:/Users/Selur/Desktop/testing/1x_Dotzilla_Compact_80k_net_g.pth"
vsgan.load(model)
vsgan.apply() # 640x352
clip = vsgan.clip
works.

Using VSDPIR with RGBH and VSGAN with RGBS:
Code:
import vapoursynth as vs
core = vs.core
# 'clip' is assumed to be loaded earlier in the script (source filter omitted here)
from vsdpir import dpir as DPIR
# adjusting color space from YUV420P8 to RGBH for vsDPIRDenoise
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
# denoising using DPIRDenoise
clip = DPIR(clip=clip, strength=5.000, task="denoise", device_index=0, trt=True, trt_cache_path="J:/tmp")
# changing range from limited to full range
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# adjusting color space from RGBH to RGBS for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="full")
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "C:/Users/Selur/Desktop/testing/1x_Dotzilla_Compact_80k_net_g.pth"
vsgan.load(model)
vsgan.apply() # 640x352
clip = vsgan.clip
fails with:
Code:
Input type (float) and bias type (struct c10::Half) should be the same

Using VSDPIR with RGBS and VSGAN with RGB48:
Code:
import vapoursynth as vs
core = vs.core
# 'clip' is assumed to be loaded earlier in the script (source filter omitted here)
from vsdpir import dpir as DPIR
# adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
# denoising using DPIRDenoise
clip = DPIR(clip=clip, strength=5.000, task="denoise", device_index=0, trt=True, trt_cache_path="J:/tmp")
# changing range from limited to full range
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# adjusting color space from RGBS to RGB48 for vsVSGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGB48, range_s="full", dither_type="error_diffusion")
# resizing using VSGAN
from vsgan import ESRGAN
vsgan = ESRGAN(clip=clip,device="cuda")
model = "C:/Users/Selur/Desktop/testing/1x_Dotzilla_Compact_80k_net_g.pth"
vsgan.load(model)
vsgan.apply() # 640x352
clip = vsgan.clip
produces graphical glitches: https://ibb.co/jTCbghX

Cu Selur

P.S.: I also reported this at https://github.com/rlaphoenix/VSGAN/issues/31, but thought it might be interesting to others who run into the same issue.
Old 12th March 2023, 07:53   #187  |  Link
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,255
Okay, the issue seems to be that VSDPIR overwrites the default tensor type (so it's not a bug in VSGAN, but unexpected behavior of VSDPIR).
Using:
Code:
import torch
torch.set_default_tensor_type(torch.FloatTensor)
after VSDPIR should fix it.

The behavior is fixed with https://github.com/HolyWu/vs-dpir/releases/tag/v3.0.1
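The failure mode is easy to reproduce in isolation. The sketch below uses torch.set_default_dtype (the modern equivalent of set_default_tensor_type) to mimic a library that leaves float16 as the default dtype and doesn't restore it; how vs-dpir did this internally is an assumption here, used only for illustration:

```python
import torch

# Mimic a library that changes the default dtype and doesn't restore it:
torch.set_default_dtype(torch.float16)
layer = torch.nn.Linear(4, 4)               # weights/bias are now created as float16

x = torch.randn(2, 4, dtype=torch.float32)  # e.g. an RGBS frame arriving as float32
try:
    layer(x)                                # dtype mismatch between input and weights
except RuntimeError as e:
    print("dtype mismatch:", e)

torch.set_default_dtype(torch.float32)      # restore the default, as the fix above does
```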

Cu Selur

Last edited by Selur; 12th March 2023 at 08:49.
Old 19th October 2023, 04:14   #188  |  Link
lansing
Registered User
 
Join Date: Sep 2006
Posts: 1,657
New 2x upscaling model for anime: AniScale2
https://github.com/Sirosky/Upscale-H.../tag/AniScale2

It only works on digitally created anime. It comes with a base upscale model and a refiner model for sharpening. The model is depth-of-field aware, so intended blur effects are kept intact in the result.

I tested it a little on BLEACH and it looks pretty convincing.

Last edited by lansing; 19th October 2023 at 04:24.
Old 19th October 2023, 17:11   #189  |  Link
Selur
Registered User
 
 
Join Date: Oct 2001
Location: Germany
Posts: 7,255
Interesting, thanks for the info.
Tags
esrgan, gan, upscale, vapoursynth
