Old 27th March 2023, 00:28   #61  |  Link
Dogway
Registered User
 
Join Date: Nov 2009
Posts: 2,352
Most models are posted here.
I'm interested in realesrgan-x4minus as well, but the link on the model_data page is broken; I found a link in a Reddit post and am searching for it again now.

These are the Swin2SR models, the ones recommended for compressed JPEG photos. I haven't tested them, but from what I could see Jpeg_dynamic seems the best; the others are probably also worth a look:
Code:
Swin2SR_Jpeg_dynamic.pth
Swin2SR_ClassicalSR_X2_64.pth
Swin2SR_ClassicalSR_X4_64.pth
Swin2SR_CompressedSR_X4_48.pth
Swin2SR_Lightweight_X2_64.pth
Here they recommend SwinIR-L with CodeFormer (faces) or GFPGAN v1.4 (also faces); might be worth checking out.


About LDSR I found a post from here that says:
Quote:
chaiNNer does not support LDSR, but you can use it for example on replicate:
https://replicate.com/nightmareai/latent-sr
EDIT: You can now find 4xLDSR here as onnx. The C version is good for heavily blocked images.

By the way, I wasn't aware either, but the wiki also has a page for "official" models, which includes many of the ones listed above: https://upscale.wiki/wiki/Official_Research_Models
__________________
[i7-4790K@Stock::GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread

Last edited by Dogway; 20th May 2023 at 13:11.
Old 29th March 2023, 21:29   #62  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,041
There is a promising update from the neural-network designers (the authors of RIFE) on 'frame prediction': https://github.com/megvii-research/CVPR2023-DMVFN . It is worth checking out, as it is expected to be a better motion-compensation engine than the current RIFE used in temporal denoising.

Can it be used in AVS via an existing plugin, or does it require a plugin redesign?
Old 29th March 2023, 21:51   #63  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,346
Quote:
Originally Posted by DTL View Post
There is a promising update from the neural-network designers (the authors of RIFE) on 'frame prediction': https://github.com/megvii-research/CVPR2023-DMVFN . It is worth checking out, as it is expected to be a better motion-compensation engine than the current RIFE used in temporal denoising.
It uses 2 past frames to predict next frame

I posted some examples in this thread.

https://forum.doom9.org/showthread.php?t=184387


Quote:
Can it be used in AVS via an existing plugin, or does it require a plugin redesign?
Not currently.

If someone makes an ncnn/Vulkan-compatible version, then an AVS version could possibly materialize. None of the direct PyTorch variants of any project can run directly in AVS.
Old 30th March 2023, 06:19   #64  |  Link
DTL
Registered User
 
Join Date: Jul 2018
Posts: 1,041
"It uses 2 past frames to predict next frame"

It could easily be tested in tr=2 temporal denoising:
feed n-2 and n-1 as the t-1 and t frames for forward interpolation from the 2 past frames,
feed n+2 and n+1 as the pair of 2 next frames for backward interpolation into the past (the engine should be time-axis symmetrical and not know the real direction of the time axis - a sort of TENET movie idea),
then take the 2 interpolated frames (one from the 2 previous frames and one from the 2 next), interleave them with the current n-frame, and pass the result to a blending engine like vsTTempSmooth (sample-based) or mvtools (block-based).
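
In script terms, the frame wiring might look roughly like this. This is only a sketch of the idea above: "DMVFN_Predict(prev, cur)" is an imaginary filter that would predict the frame following "cur" from the pair (prev, cur) - no AviSynth plugin for DMVFN exists at the moment (see the next post) - and the vsTTempSmooth settings are just an example; clip ends are not handled.
Code:
src = last                                       # source clip loaded earlier

prev1 = src.DuplicateFrame(0)                    # frame n-1 at position n
prev2 = src.DuplicateFrame(0).DuplicateFrame(0)  # frame n-2 at position n
next1 = src.DeleteFrame(0)                       # frame n+1 at position n
next2 = src.DeleteFrame(0, 1)                    # frame n+2 at position n

fwd = DMVFN_Predict(prev2, prev1)                # forward guess of frame n from n-2, n-1
bwd = DMVFN_Predict(next2, next1)                # backward guess of frame n from n+2, n+1

Interleave(fwd, src, bwd)                        # guess, original, guess for every frame n
vsTTempSmooth(maxr=1)                            # blend each original with its two guesses
SelectEvery(3, 1)                                # keep only the smoothed original positions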

The developers also promise to finally move to multi-frame transform analysis, for better prediction and compensation of complex motion/transforms (non-constant-speed motion and so on). But when that will be released as a working demo is still unknown.

As I understand from the paper https://arxiv.org/pdf/2303.09875.pdf, there is still very active scientific research on image processing in some Asian region (China?), but the results are still too far off for real testing and/or usage in AVS. From that paper it also looks like the research group completely misses the most important task for interpolation engines: temporal denoising and improving MPEG compressibility. So the engines currently in development cannot directly replace any-tr denoise engines like mvtools/MDegrainN, especially at 'large and very large' tr of about 10 or even 100+.

Last edited by DTL; 30th March 2023 at 06:48.
Old 4th April 2023, 00:54   #65  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
Here are some other models compatible with avs-mlrt: https://github.com/the-database/mpv-...2x_animejanai/

Quote:
2x_AnimeJaNai is a set of realtime 2x Real-ESRGAN Compact, UltraCompact, and SuperUltraCompact models intended for high or medium quality 1080p anime to 4k with an emphasis on correcting the inherent blurriness of anime while preserving details and colors. These models are not suitable for artifact-heavy or highly compressed content as they will just sharpen artifacts. The models can also work with SD anime by running the models twice, first from SD to HD, and then HD to UHD.
They are already onnx models so I won't add them to the collection.
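
As an aside, a minimal sketch of the "run it twice" idea from the quote, going SD -> HD -> UHD with a 2x model (the .onnx file name below is only a placeholder, not an actual file from the AnimeJaNai release):
Code:
ConvertBits(32)
ConvertToPlanarRGB()
# placeholder model name; substitute the real 2x AnimeJaNai .onnx file
mlrt_ncnn(network_path="2x_AnimeJaNai_Compact.onnx", builtin=false)   # pass 1: SD -> HD
mlrt_ncnn(network_path="2x_AnimeJaNai_Compact.onnx", builtin=false)   # pass 2: HD -> UHD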

---

@dogway, I'll get to your requested models in a bit ... I've been away from my home PC.
Old 4th April 2023, 08:08   #66  |  Link
anton_foy
Registered User
 
Join Date: Dec 2005
Location: Sweden
Posts: 702
Found https://huggingface.co/utnah/esrgan/tree/main

Maybe of interest?
Old 19th May 2023, 13:44   #67  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Hi there!
I'm late to the party, but hey, never say never ehehehehe
So, I tried with:

Code:
ColorBars(848, 480, pixel_type="YV12")

ConvertBits(32)
ConvertToPlanarRGB()
mlrt_ncnn(list_gpu=true)
and indeed it shows my NVIDIA GTX 980Ti

but when I tried with:

Code:
ColorBars(848, 480, pixel_type="YV12")

ConvertBits(32)
ConvertToPlanarRGB()
mlrt_ncnn(network_path="\\avs000\Ingest\MEDIA\temp\onnx-models\VHS-Sharpen-1x_46000_G.onnx", builtin=false, list_gpu=false)


it failed with an error.

Of course I have all the C++ Redistributables installed, and despite the error I can see the GPU VRAM being used.

I was running Avisynth 3.7.3 x64 Beta 9 by Ferenc Pinter.
I tried to switch to the IntelLLVM builds as suggested in the read-me on GitHub, but it didn't make any difference.


On the other hand, when I tried a different model, it worked.
For instance:

Code:
ColorBars(848, 480, pixel_type="YV12")

ConvertBits(32)
ConvertToPlanarRGB()
mlrt_ncnn(network_path="\\avs000\Ingest\MEDIA\temp\onnx-models\1x_BroadcastToStudioLite_485k.onnx", builtin=false, list_gpu=false)
worked.

Before I fire up my Quadro P4000 and P5000, is it because the GTX 980Ti is too old for some models or is there some other reason behind it?

In particular, it looks like the models whose .onnx files are 65MB don't work and the ones that are smaller do.
For instance, 1x_ThePi7on-Solidd_Deborutify_UltraLite_260k_G.onnx also worked (and it's indeed just 4.6 MB).
Same goes for realesr-general-wdn-x4v3.onnx, which worked just fine and again it's just 4.7MB.

Last edited by FranceBB; 19th May 2023 at 13:57.
Old 19th May 2023, 15:05   #68  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
My cheap GPU has trouble with the larger models. I can get the VHS-Sharpen-1x_46000_G model to work using the tilesize and overlap options and also fp16.

Code:
model = "VHS-Sharpen-1x_46000_G.onnx"   # path to the .onnx model file
mlrt_ncnn(network_path=model, builtin=false, fp16=true, tilesize_w=width/4, tilesize_h=height/4, overlap_w=8, overlap_h=8)

-----

While I'm here, I found some other models in the following pages.
Old 19th May 2023, 16:41   #69  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
Gotcha!
Yep, that way it worked, thanks!
Old 25th May 2023, 14:16   #70  |  Link
FranceBB
Broadcast Encoder
 
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 2,883
I tested realesr-general-wdn-x4v3_opset16.onnx

Here are the results depicted in some nice previews, each with its SSIM score.

Below you can find the script:


Quote:
#Indexing
LWLibavVideoSource("Test.mxf")
#ImageSource("\\mibctvan000.avid.mi.bc.sky.it\Ingest\MEDIA\temp\Lenna_(test_image).png")
Bob().Spline64Resize(848, 480)
original=ConvertBits(8).Converttoyv12().Text("Original", y=66)

#Downscale
SinPowResizeMT(width/4, height/4)

#Various Upscales
point=PointResize(width*4, height*4).ConvertBits(8).Converttoyv12()
#bilinear=BilinearResize(width*4, height*4).Converttoyv12()
nnedi3=nnedi3_rpow2(cshift="Spline64ResizeMT", rfactor=2, fwidth=width*4, fheight=height*4, nsize=4, nns=4, qual=1, etype=0, pscrn=2, threads=0, csresize=true, mpeg2=true, threads_rs=0, logicalCores_rs=true, MaxPhysCore_rs=true, SetAffinity_rs=false).ConvertBits(8).Converttoyv12()
esrgan=last.ConverttoPlanarRGB().ConvertBits(32).mlrt_ncnn(network_path="\\myshare\Ingest\MEDIA\temp\realesr-general-wdn-x4v3_opset16.onnx", builtin=false, list_gpu=false, fp16=true).ConvertBits(8).Converttoyv12()

#SSIM
pnt=SSIM(original, point, "\\myshare\Ingest\MEDIA\temp\point3SSIM.csv", "\\myshare\Ingest\MEDIA\temp\point3SSIM.txt", lumimask=1, scaled=0).Text("PointResize", y=66)
nne=SSIM(original, nnedi3, "\\myshare\Ingest\MEDIA\temp\nnedi3SSIM.csv", "\\myshare\Ingest\MEDIA\temp\nnedi3SSIM.txt", lumimask=1, scaled=0).Text("NNEDI3", y=66)
esr=SSIM(original, esrgan, "\\myshare\Ingest\MEDIA\temp\esrganSSIM.csv", "\\myshare\Ingest\MEDIA\temp\esrganSSIM.txt", lumimask=1, scaled=0).Text("ESRGAN", y=66)

#Preview
a=StackHorizontal(original, pnt)
b=StackHorizontal(nne, esr)

StackVertical(a,b)

And here are the images stacked as:

Original - PointResize
NNEDI3 - ESRGAN

I'm gonna pick one just to show why I'm sticking with NNEDI3:





Images collection:

Img1 - Img2 - Img3 - Img4 - Img5 - Img6 - Img7 - Img8 - Img9 - Img10 - Img11 - Img12 - Img13 - Img14 - Img15 - Img16

I guess I'm gonna stick with NNEDI3 for a while longer...
Old 25th May 2023, 22:19   #71  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
The result of that model looks very artificial. Have you tried any other models? I'd also be worried about temporal consistency. For real-world video, the proprietary models from Topaz are good; unfortunately, those onnx models are housed in a password-protected zip file.
Old 26th May 2023, 01:55   #72  |  Link
Emulgator
Big Bit Savings Now !
 
 
Join Date: Feb 2007
Location: close to the wall
Posts: 1,531
Even while delivering a good choice from 186 GB of trained models, Topaz (with Proteus v3, until 3.0.6 at least)
still had the "ugly face syndrome" when guessing at maybe-face content less than about 20 pixels in size.
For Proteus v4 this is promised to improve.
Some more training on face guessing from such small pixel patches should do it.
But: Topaz VEAI 3 had its Avisynth support killed, and users report many hassles.
My paid updates ended with 3.0.6, and for the time being I wasn't willing to spend again, so I stick with the last Topaz 2.6.4.

23.10.2023 23:21 Just today I came across my post by chance and oops:
Sorry for being completely OT. How did that get here?
Must have been a bit tired on 26.03.2023 03:01; I'll try to move it to where it belongs.
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain)
"Data reduction ? Yep, Sir. We're that issue working on. Synce invntoin uf lingöage..."

Last edited by Emulgator; 23rd October 2023 at 22:22.
Old 22nd October 2023, 17:57   #73  |  Link
anton_foy
Registered User
 
Join Date: Dec 2005
Location: Sweden
Posts: 702
Code:
convertbits(32)
converttoplanarrgb()

mlrt_ncnn(network_path="C:\Program Files (x86)\AviSynth+\plugins64\ml_\models\1x-Film-Degrainer-1-000.onnx", fp16=true, builtin=false, tilesize_w=width/4, tilesize_h=height/4, overlap_w=8, overlap_h=8, list_gpu=true)
This loads the model fine (and many other models as well), but it does nothing; it shows the original untouched clip. How do I set the strength?
Or is it something else I'm missing?
Old 23rd October 2023, 00:39   #74  |  Link
kedautinh12
Registered User
 
Join Date: Jan 2018
Posts: 2,153
You can delete tilesize_w=width/4, tilesize_h=height/4, overlap_w=8, overlap_h=8 if your GPU is stronger than Reel.Deel's GPU.
Old 23rd October 2023, 01:50   #75  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
@anton_foy

In your script you have list_gpu=true, try setting it to false. As for a strength setting, there is none.
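
For reference, a sketch of the call from post #73 with only that flag changed:
Code:
convertbits(32)
converttoplanarrgb()

# list_gpu=false (or simply omitted) so the filter processes frames instead of listing GPUs
mlrt_ncnn(network_path="C:\Program Files (x86)\AviSynth+\plugins64\ml_\models\1x-Film-Degrainer-1-000.onnx", fp16=true, builtin=false, tilesize_w=width/4, tilesize_h=height/4, overlap_w=8, overlap_h=8, list_gpu=false)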
Old 23rd October 2023, 08:51   #76  |  Link
anton_foy
Registered User
 
Join Date: Dec 2005
Location: Sweden
Posts: 702
Quote:
Originally Posted by Reel.Deel View Post
@anton_foy

In your script you have list_gpu=true, try setting it to false. As for a strength setting, there is none.
Thanks, yes, it works now! But it's really slow, even with the RTX 4070.
Old 23rd October 2023, 10:05   #77  |  Link
kedautinh12
Registered User
 
Join Date: Jan 2018
Posts: 2,153
Did you try deleting my recommended parameters too?
Old 23rd October 2023, 12:51   #78  |  Link
anton_foy
Registered User
 
Join Date: Dec 2005
Location: Sweden
Posts: 702
Quote:
Originally Posted by kedautinh12 View Post
Did you try deleting my recommended parameters too?
Thanks, I will try this too. So by adjusting tilesize and overlap it can work on slower cards, just with slower processing, compared to using the default values on a faster card?
Old 23rd October 2023, 15:44   #79  |  Link
kedautinh12
Registered User
 
Join Date: Jan 2018
Posts: 2,153
Yeah, I tried deleting them and the plugin runs faster because it uses more GPU memory. Try changing fp16 to false too; if you hit an error, you can change it back to true.

Last edited by kedautinh12; 24th October 2023 at 00:02.
Old 23rd October 2023, 21:05   #80  |  Link
Reel.Deel
Registered User
 
Join Date: Mar 2012
Location: Texas
Posts: 1,664
Quote:
Originally Posted by anton_foy View Post
Thanks, I will try this too. So by adjusting tilesize and overlap it can work on slower cards, just with slower processing, compared to using the default values on a faster card?
Using tilesize just means that the image is divided into sections, and because of the overlap you end up processing more pixels. I don't think setting fp16 to true will have any negative effect on speed, even on higher-end GPUs.
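
As a rough illustration (assuming each tile is simply padded by the overlap on every side, which may not match the plugin's exact tiling): a 1920x1080 frame with tilesize_w=width/4, tilesize_h=height/4 and an overlap of 8 is cut into 16 tiles of 480x270, and each tile is processed as roughly 496x286 = 141,856 pixels instead of 480x270 = 129,600, i.e. on the order of 9% more pixels overall, plus some per-tile overhead.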