Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
8th December 2023, 19:02 | #1
Registered User
Join Date: Nov 2009
Posts: 2,361
vsMidas PyTorch version
I'm getting the error below when calling the vsmidas filter. It isn't bundled with Hybrid, but all of its requirements were already installed:
Code:
  File "C:\Program Files\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsmidas\__init__.py", line 108, in midas
    module.load_state_dict(parameters)
  File "C:\Program Files\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DPTDepthModel:
	Missing key(s) in state_dict: "pretrained.model.layers.3.downsample.reduction.weight", "pretrained.model.layers.3.downsample.norm.weight", "pretrained.model.layers.3.downsample.norm.bias", "pretrained.model.head.fc.weight", "pretrained.model.head.fc.bias".
	Unexpected key(s) in state_dict: "pretrained.model.layers.0.downsample.reduction.weight", "pretrained.model.layers.0.downsample.norm.weight", "pretrained.model.layers.0.downsample.norm.bias", "pretrained.model.layers.0.blocks.1.attn_mask", "pretrained.model.layers.1.blocks.1.attn_mask", "pretrained.model.head.weight", "pretrained.model.head.bias".
	size mismatch for pretrained.model.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([768, 1536]) from checkpoint, the shape in current model is torch.Size([384, 768]).
	size mismatch for pretrained.model.layers.1.downsample.norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([384]).
	size mismatch for pretrained.model.layers.1.downsample.norm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([384]).
	size mismatch for pretrained.model.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([1536, 3072]) from checkpoint, the shape in current model is torch.Size([768, 1536]).
	size mismatch for pretrained.model.layers.2.downsample.norm.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([768]).
	size mismatch for pretrained.model.layers.2.downsample.norm.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([768]).
Code:
torch                    2.0.1+cu118
torch-tensorrt-fx-only   1.5.0.dev0
torchvision              0.15.2+cu118
PS: Also, do you know of a good AI segmentation filter, maybe with a granularity option? I know OpenCV can do some segmentation, but I'm looking for a novel AI approach.
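For what it's worth, this class of error usually means the checkpoint was saved from a different model variant (e.g. a larger backbone) than the one the loading code constructs. The sketch below uses toy stand-in modules, not the real DPTDepthModel, to reproduce the same failure mode and show how comparing key sets up front exposes the mismatch:

```python
import torch.nn as nn

# Toy stand-ins, not the real DPTDepthModel: the current model and the model
# the checkpoint was saved from disagree on both layer names and shapes.
current = nn.ModuleDict({"head_fc": nn.Linear(768, 10)})
saved = nn.ModuleDict({"head": nn.Linear(1536, 10)})
ckpt = saved.state_dict()

msg = ""
try:
    current.load_state_dict(ckpt)  # raises, like in the traceback above
except RuntimeError as e:
    msg = str(e)  # lists "Missing key(s)" and "Unexpected key(s)"

# Comparing the key sets before loading makes the variant mismatch obvious:
missing = sorted(set(current.state_dict()) - set(ckpt))
unexpected = sorted(set(ckpt) - set(current.state_dict()))
```

In the real case the fix is loading the checkpoint that matches the constructed variant (or vice versa), not patching individual keys.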
__________________
[i7-4790K@Stock :: GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread
Last edited by Dogway; 8th December 2023 at 19:04.
10th December 2023, 09:21 | #2
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
I can confirm that this doesn't work, so I would recommend opening an issue over at https://github.com/HolyWu/vs-midas/issues, since downgrading torch to 1.13.1 would definitely break and uninstall lots of other stuff.
Cu Selur
Last edited by Selur; 10th December 2023 at 09:23.
10th December 2023, 14:52 | #3
Registered User
Join Date: Nov 2009
Posts: 2,361
I will try to install the original version under a virtual environment. Strangely, it asks for Python 3.10.8, whereas I use 3.10.6 for Stable Diffusion, so let's see.
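A sketch of that isolation idea with a plain venv (environment name and pins are illustrative; the activate path shown is the POSIX one, on Windows it would be midas-env\Scripts\activate):

```shell
# Sketch: give the filter its own virtual environment so its pinned torch
# cannot touch the Stable Diffusion install (env name is illustrative).
python3 -m venv midas-env
. midas-env/bin/activate   # Windows: midas-env\Scripts\activate
python -V                  # this env's own interpreter
pip --version              # this env's own pip; pins such as
                           # "torch==1.13.1" would be installed here
deactivate
```

Anything installed while the env is active stays inside midas-env and can be thrown away by deleting the directory.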
__________________
[i7-4790K@Stock :: GTX 1070] AviSynth+ filters and mods on GitHub + Discussion thread
14th December 2023, 20:22 | #5
Registered User
Join Date: May 2011
Posts: 321
Off-topic, Selur, in your scripts:
Code:
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
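If I'm reading that line right, it may not do what it looks like: core.text.FrameProps returns a clip with the props drawn on it, and any VideoNode object is truthy, so `not core.text.FrameProps(...)` is always False and `_Transfer=5` gets set unconditionally. A stand-in class shows the truthiness trap without needing VapourSynth installed (sketch only):

```python
# FakeVideoNode stands in for a vs.VideoNode: like the real node, it defines
# no __bool__/__len__, so Python treats every instance as truthy.
class FakeVideoNode:
    pass

node = FakeVideoNode()  # what core.text.FrameProps(clip, '_Transfer') returns

# Mirrors "clip if not <node> else SetFrameProps(...)": the 'not' branch can
# never be taken, so the SetFrameProps branch always runs.
branch = "clip kept as-is" if not node else "SetFrameProps always runs"
```

A presence check would instead have to inspect the frame properties themselves (e.g. via a frame's props mapping), not the return value of a drawing filter.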