Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
#4821 | Link |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,530
|
R64 has been released.
Code:
r64:
fixed compilation on OS X, where the default standard library doesn't have a full implementation of std::from_chars
added -- as an alternative to . to indicate no output in vspipe, since shells have a tendency to expand .
added JSON output of video frame properties to vspipe
fixed the clearMap function; previously it would forget to properly clear the error state in maps, which could cause crashes in frameeval and other filters
32-bit binaries are no longer provided for Windows
updated zimg to fix issues on Zen 4 CPUs
added support for Cython 3.x
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
#4823 | Link |
Registered User
Join Date: Sep 2006
Posts: 1,654
|
I'm having a problem with VapourSynth not releasing all memory in vsedit2 after I close the file. I had a script opening a DVD video; when I previewed it, it used about 530 MB of RAM. When I closed the file, about 200 MB of RAM remained in use. I tried with VirtualDub2 and the same thing happened.
|
#4824 | Link | |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,530
|
Quote:
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
|
#4826 | Link | |
Registered User
Join Date: Oct 2023
Posts: 4
|
Using ffmpeg pipe for script preview
Hello _Al_,
I found your post very interesting: Quote:
The script used for the test is the following: Code:
import vapoursynth as vs
from vapoursynth import core
import subprocess
import ctypes

ffmpeg = r'E:\VideoTest\TestSubs\ffmpeg.exe'
source_path = r'E:\VideoTest\TestSubs\TestVideo.mp4'

# Loading plugins
core.std.LoadPlugin(path="E:/VideoTest/TestSubs/BestSource.dll")  # from https://forum.doom9.org/showthread.php?t=184255

# current color space: YUV420P10, bit depth: 10
# resolution: 720x300, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
clip = core.bs.VideoSource(source="E:/VideoTest/TestSubs/TestVideo.mp4")  # this clip is not needed, just to get width and height

# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip, '_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (BT.709), when it is not set
clip = clip if not core.text.FrameProps(clip, '_Primaries') else core.std.SetFrameProps(clip, _Primaries=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)  # progressive
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# adjusting output color from YUV420P8 to YUV420P10
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, range_s="limited")
#clip = core.std.BlankClip(clip)

w = clip.width
h = clip.height
Ysize = w * h
UVsize = w * h // 4
frame_len = w * h * 3 // 2  # YUV420

command = [ffmpeg, '-i', source_path, '-vcodec', 'rawvideo', '-pix_fmt', 'yuv420p', '-f', 'rawvideo', '-']
pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=frame_len)

def load_frame(n, f):
    try:
        vs_frame = f.copy()
        for i, size in enumerate([Ysize, UVsize, UVsize]):
            ctypes.memmove(vs_frame.get_write_ptr(i), pipe.stdout.read(size), size)
        pipe.stdout.flush()
    except Exception as e:
        raise ValueError(repr(e))
    return vs_frame

try:
    clip = core.std.ModifyFrame(clip, clip, load_frame)
except ValueError as e:
    pipe.terminate()
    print(e)

clip.set_output()
I commented out the creation of the BlankClip in order to better understand what is happening. Moreover, I added the conversion from YUV420P8 to YUV420P10 (without it I see nothing). In this image you can see the result ![]() As you can see, not the whole frame is copied to the output, only a small part (see square 1), which is duplicated (see square 2). Rectangle 3 represents the part of the original script that is not overwritten by ctypes.memmove(). More interestingly, rectangle 3 is not in sync with squares 1 and 2. I was unable to get your script working; could you help me? Thanks, Dan Last edited by Dan64; 7th October 2023 at 14:42. |
|
#4827 | Link |
Registered User
Join Date: Oct 2023
Posts: 4
|
Using ffmpeg pipe for script preview (2)
It seems there is a problem in properly reading the raw video.
I tried the following commands: Code:
ffmpeg.exe -i "TestVideo.mp4" -vcodec rawvideo -pix_fmt yuv420p -f rawvideo - | vlc.exe --demux=rawvideo --rawvid-fps=25 --rawvid-width=700 --rawvid-height=300 --rawvid-chroma=I420 -

ffmpeg.exe -i "TestVideo.mp4" -vcodec rawvideo -pix_fmt yuv420p -f rawvideo - | ffplay.exe -f rawvideo -pixel_format yuv420p -video_size 720x300 -i -
It seems strange to me that a problem like this has never been discovered before. So what's wrong? |
#4828 | Link |
Registered User
Join Date: May 2011
Posts: 314
|
I quickly tested it with another 1920x1080, 4:2:0 mp4 I have here and it worked, so ffmpeg's rawvideo loaded it OK, but your mp4 file did not work.
I suspect the height mod might be the problem: 300 is only mod 4, and maybe mod 8 is needed for ffmpeg. Would resizing it to 304 help, for example? The source video, that is, not in VapourSynth. |
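The raw-frame arithmetic being debugged here can be checked without VapourSynth or ffmpeg. A plain-Python sketch of the YUV420 plane sizes used throughout this thread, and of how a wrong geometry (such as the width 700 in the vlc command above, versus the actual 720) misaligns every later frame in the pipe:

```python
def yuv420_plane_sizes(w: int, h: int):
    """Byte sizes of the Y, U and V planes of one 8-bit raw YUV420 frame.

    YUV420 requires even dimensions; the chroma planes are subsampled 2x2.
    """
    if w % 2 or h % 2:
        raise ValueError("YUV420 needs mod-2 width and height")
    y = w * h
    uv = (w // 2) * (h // 2)
    return y, uv, uv

# The 720x300 test clip from this thread:
y, u, v = yuv420_plane_sizes(720, 300)
frame_len = y + u + v                 # 324000 bytes per frame (w * h * 3 // 2)

# Reading with the wrong geometry consumes the wrong number of bytes per
# frame, so every subsequent frame starts at a misaligned offset in the
# stream and the picture drifts further out of sync.
wrong = sum(yuv420_plane_sizes(700, 300))
offset_error_per_frame = frame_len - wrong   # 9000 bytes of drift per frame
```

This is only an illustration of the byte accounting, not of ffmpeg's internal alignment rules.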
#4829 | Link | |
Registered User
Join Date: Oct 2023
Posts: 4
|
Quote:
But the problem on the VapourSynth side still remains... |
|
#4830 | Link |
Registered User
Join Date: Oct 2023
Posts: 4
|
Using ffmpeg pipe for script preview (3)
I was finally able to get the following script working
Code:
import vapoursynth as vs
from vapoursynth import core
import subprocess
import ctypes

ffmpeg = r'E:\VideoTest\TestSubs\ffmpeg.exe'
source_path = r'E:\VideoTest\TestSubs\TestVideo.mp4'

# Loading plugins
core.std.LoadPlugin(path="E:/VideoTest/TestSubs/BestSource.dll")  # from https://forum.doom9.org/showthread.php?t=184255

# current color space: YUV420P8, bit depth: 8
# resolution: 1280x536, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# this clip is not needed, just to get width and height
clip = core.bs.VideoSource(source=source_path)

# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip, '_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (BT.709), when it is not set
clip = clip if not core.text.FrameProps(clip, '_Primaries') else core.std.SetFrameProps(clip, _Primaries=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)  # progressive
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.BlankClip(clip)

w = clip.width
h = clip.height
Ysize = w * h
Usize = w * h // 4
Vsize = w * h // 4
frame_len = Ysize + Usize + Vsize  # YUV420

command = [ffmpeg, '-i', source_path, '-vcodec', 'rawvideo', '-pix_fmt', 'yuv420p', '-f', 'rawvideo', '-']
pipe = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=frame_len)

def load_frame_from_pipe(n, f):
    vs_frame = f.copy()
    try:
        for plane, size in enumerate([Ysize, Usize, Vsize]):
            ctypes.memmove(vs_frame.get_write_ptr(plane), pipe.stdout.read(size), size)
        pipe.stdout.flush()
    except Exception as e:
        raise ValueError(repr(e))
    return vs_frame

try:
    clip = core.std.ModifyFrame(clip, clip, load_frame_from_pipe)
except ValueError as e:
    pipe.terminate()
    print(e)

clip.set_output()
![]() P.S. The video used for the test is available here: https://filebin.net/trb7yof9h0g335e0 |
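The plane-copy loop at the heart of the script can be exercised standalone, without VapourSynth or ffmpeg. This sketch feeds one fake raw frame through the same ctypes.memmove pattern; the in-memory buffers stand in for ffmpeg's stdout and for the frame's writable plane pointers (get_write_ptr in the script), and are only illustrative:

```python
import ctypes
import io

w, h = 8, 4
Ysize = w * h
Usize = (w // 2) * (h // 2)
Vsize = (w // 2) * (h // 2)

# Stand-in for ffmpeg's stdout: one raw YUV420 frame, planes concatenated
pipe_stdout = io.BytesIO(b"Y" * Ysize + b"U" * Usize + b"V" * Vsize)

# Stand-ins for the frame's writable plane memory
planes = [ctypes.create_string_buffer(size) for size in (Ysize, Usize, Vsize)]

# Same copy loop as load_frame_from_pipe: read each plane and memmove it in
for plane, size in zip(planes, (Ysize, Usize, Vsize)):
    data = pipe_stdout.read(size)
    # A real pipe can deliver short reads; copying a short buffer with the
    # full size would read past its end, so check before memmove
    assert len(data) == size, "short read: pipe ended mid-frame"
    ctypes.memmove(plane, data, size)

print(planes[0].raw[:4])  # b'YYYY'
```

The short-read check is one defensive addition over the forum script; on a buffered pipe it mostly matters at end of stream.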
#4831 | Link |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,530
|
R65-RC1
Code:
frame properties in Python are now returned as str instead of bytes when hinted as printable utf-8
fixed how unprintable data is returned from plugin functions in Python; previously it would leak a ctypes pointer with no length instead of returning a bytes object
fixed a bug in the AVX2 MaskedMerge float premultiplied code path that would swap the two input clips
reverted the from_chars code a bit more so that no locale affects float parsing
fixed the SAR adjustment, for real this time
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
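The first R65 changelog entry can be illustrated in plain Python. This is only a sketch of the described behavior; the function name is invented here and is not the actual VapourSynth API:

```python
def prop_value_to_python(raw: bytes, hinted_utf8: bool):
    """Mimic the R65 property conversion described in the changelog:
    data hinted as printable utf-8 comes back as str, and anything else
    becomes a real bytes object with a length (earlier releases could
    leak a bare ctypes pointer instead)."""
    if hinted_utf8:
        return raw.decode("utf-8")
    return bytes(raw)

print(prop_value_to_python(b"BT.709", True))         # 'BT.709' (str)
print(prop_value_to_python(b"\x00\xff\x10", False))  # b'\x00\xff\x10' (bytes)
```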
#4832 | Link | |
Registered User
Join Date: May 2011
Posts: 314
|
Quote:
, it should be the same. If the source video is 10-bit though, then the byte sizes would be different; 10-bit uses 2 bytes per value:
Ysize = w * h * 2
Usize = w * h // 2
Vsize = w * h // 2
YUVsize = Ysize + Usize + Vsize |
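The 8-bit versus 10-bit size difference is easy to verify: in a raw pipe, samples above 8 bits occupy 2 bytes each, so every plane doubles. A plain-Python sketch:

```python
def yuv420_frame_bytes(w: int, h: int, bits: int) -> int:
    """Bytes per raw YUV420 frame; samples above 8 bits take 2 bytes each."""
    bytes_per_sample = 1 if bits <= 8 else 2
    y = w * h * bytes_per_sample
    uv = (w * h // 4) * bytes_per_sample
    return y + 2 * uv

w, h = 720, 300
eight = yuv420_frame_bytes(w, h, 8)   # 324000
ten = yuv420_frame_bytes(w, h, 10)    # 648000: exactly double
# Copying 8-bit-sized planes into a 10-bit clip would therefore fill only
# half of each plane, which could explain the partial/duplicated picture
# reported earlier in the thread.
```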
|
#4834 | Link |
Registered User
Join Date: Jul 2019
Location: Russia
Posts: 87
|
I upgraded my Arch Linux and now I have a similar error. How do I fix it?
Code:
Failed to evaluate the script:
Python exception: Expr: failed to convert '129.0' to float, not the whole token could be converted

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 3121, in vapoursynth._vpy_evaluate
  File "src/cython/vapoursynth.pyx", line 3122, in vapoursynth._vpy_evaluate
  File "/tmp/1/script.vpy", line 30, in
    #v = haf.QTGMC(v, Preset='Very Slow', Sharpness=0.4, FPSDivisor=1, TFF=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/havsfunc/havsfunc.py", line 2561, in QTGMC
    repair0 = QTGMC_KeepOnlyBobShimmerFixes(binomial0, bobbed, Rep0, RepChroma and ChromaMotion)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/havsfunc/havsfunc.py", line 3181, in QTGMC_KeepOnlyBobShimmerFixes
    )
  File "src/cython/vapoursynth.pyx", line 2857, in vapoursynth.Function.__call__
vapoursynth.Error: Expr: failed to convert '129.0' to float, not the whole token could be converted |
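A hedged guess at what this error message means, given the R65-RC1 entry about locales affecting float parsing: under a locale whose decimal separator is ',', a C-style strtod stops at the '.', so only part of the token '129.0' is consumed and Expr rejects it as "not the whole token could be converted". This sketch simulates that behavior in plain Python; it is not VapourSynth's actual parser:

```python
def strtod_like(s: str, decimal_sep: str):
    """Parse a leading float the way C strtod does under a locale with the
    given decimal separator. Returns (value, number_of_chars_consumed)."""
    i = 0
    if i < len(s) and s[i] in "+-":
        i += 1
    while i < len(s) and s[i].isdigit():
        i += 1
    if i < len(s) and s[i] == decimal_sep:
        j = i + 1
        while j < len(s) and s[j].isdigit():
            j += 1
        if j > i + 1:        # only accept the separator if digits follow
            i = j
    return float(s[:i].replace(decimal_sep, ".")), i

# "C" locale: the whole token converts
print(strtod_like("129.0", "."))   # (129.0, 5)

# A comma-decimal locale: parsing stops after "129", leaving ".0" unconsumed,
# which a strict whole-token check then reports as an error
value, used = strtod_like("129.0", ",")
assert used < len("129.0")
```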
#4835 | Link | |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,530
|
Quote:
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
|
#4837 | Link |
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,530
|
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
#4838 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,157
|
Is there a port of ffmpeg's colortemperature filter for VapourSynth, or an alternative filter or another way to do this?
|
#4839 | Link | |
Registered User
Join Date: Sep 2007
Posts: 5,285
|
Quote:
A crappy workaround to use any ffmpeg processing in a vpy script is to pipe ffmpeg into vsrawsource.
|
#4840 | Link |
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,157
|
I found this, which uses OpenCV, numpy and PIL:
Code:
import cv2
import muvsfunc_numpy as mufnp
import numpy as np
from PIL import Image

kelvin_table = {
    1000: (255, 56, 0),
    1500: (255, 109, 0),
    2000: (255, 137, 18),
    2500: (255, 161, 72),
    3000: (255, 180, 107),
    3500: (255, 196, 137),
    4000: (255, 209, 163),
    4500: (255, 219, 186),
    5000: (255, 228, 206),
    5500: (255, 236, 224),
    6000: (255, 243, 239),
    6500: (255, 249, 253),
    7000: (245, 243, 255),
    7500: (235, 238, 255),
    8000: (227, 233, 255),
    8500: (220, 229, 255),
    9000: (214, 225, 255),
    9500: (208, 222, 255),
    10000: (204, 219, 255)}

def numpy2pil(np_array: np.ndarray) -> Image:
    """Convert an HxWx3 numpy array into an RGB Image"""
    assert_msg = 'Input shall be a HxWx3 ndarray'
    assert isinstance(np_array, np.ndarray), assert_msg
    assert len(np_array.shape) == 3, assert_msg
    assert np_array.shape[2] == 3, assert_msg
    img = Image.fromarray(np_array, 'RGB')
    return img

def pil2numpy(img: Image = None) -> np.ndarray:
    """Convert an HxW pixels RGB Image into an HxWx3 numpy ndarray"""
    np_array = np.asarray(img)
    return np_array

def convert_temp(image, temp):
    r, g, b = kelvin_table[temp]
    matrix = (r / 255.0, 0.0, 0.0, 0.0,
              0.0, g / 255.0, 0.0, 0.0,
              0.0, 0.0, b / 255.0, 0.0)
    img = numpy2pil(image)
    return pil2numpy(img.convert('RGB', matrix))

range = "full"
if core.text.FrameProps(clip, '_ColorRange'):
    range = "limited"
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, range_s=range)
clip = mufnp.numpy_process(clip, convert_temp, temp=6500, input_per_plane=False, output_per_plane=False)
Instead of using the kelvin_table one could probably use something similar to: https://academo.org/demos/colour-tem...-relationship/ which uses: Code:
/**
 * http://www.tannerhelland.com/4435/convert-temperature-rgb-algorithm-code/
 */
function KToRGB(Temperature){
    Temperature = Temperature / 100;

    if (Temperature <= 66){
        Red = 255;
    } else {
        Red = Temperature - 60;
        Red = 329.698727466 * Math.pow(Red, -0.1332047592);
        if (Red < 0){ Red = 0; }
        if (Red > 255){ Red = 255; }
    }

    if (Temperature <= 66){
        Green = Temperature;
        Green = 99.4708025861 * Math.log(Green) - 161.1195681661;
        if (Green < 0){ Green = 0; }
        if (Green > 255){ Green = 255; }
    } else {
        Green = Temperature - 60;
        Green = 288.1221695283 * Math.pow(Green, -0.0755148492);
        if (Green < 0){ Green = 0; }
        if (Green > 255){ Green = 255; }
    }

    if (Temperature >= 66){
        Blue = 255;
    } else {
        if (Temperature <= 19){
            Blue = 0;
        } else {
            Blue = Temperature - 10;
            Blue = 138.5177312231 * Math.log(Blue) - 305.0447927307;
            if (Blue < 0){ Blue = 0; }
            if (Blue > 255){ Blue = 255; }
        }
    }

    rgb = new Array(Math.round(Red), Math.round(Green), Math.round(Blue));
    return rgb;
} |
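Since the rest of the thread's code is Python, here is a direct port of the JavaScript above (same constants, same clamping), which could stand in for the kelvin_table lookup; this is a port of the Tanner Helland approximation, not a colorimetrically exact conversion:

```python
import math

def k_to_rgb(kelvin: float):
    """Port of the Tanner Helland temperature-to-RGB approximation."""
    t = kelvin / 100.0

    if t <= 66:
        red = 255.0
    else:
        red = 329.698727466 * math.pow(t - 60, -0.1332047592)

    if t <= 66:
        green = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        green = 288.1221695283 * math.pow(t - 60, -0.0755148492)

    if t >= 66:
        blue = 255.0
    elif t <= 19:
        blue = 0.0
    else:
        blue = 138.5177312231 * math.log(t - 10) - 305.0447927307

    clamp = lambda v: max(0.0, min(255.0, v))
    return tuple(round(clamp(c)) for c in (red, green, blue))

print(k_to_rgb(6600))  # (255, 255, 255): roughly neutral white
print(k_to_rgb(2000))  # warm orange, close to the 2000 K kelvin_table entry
```

The per-channel scaling matrix from convert_temp above could then be built from k_to_rgb(temp) instead of the hard-coded table.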
Tags |
speed, vaporware, vapoursynth |