Old 17th January 2016, 21:11   #1  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
VapourSynth pipe to ffmpeg

Hi, I have the script below for opening an HD QuickTime ProRes file at 24fps with multichannel audio in VapourSynth, resizing it and using AssumeFPS to conform it to PAL. This is piped to ffmpeg for 10-bit output back to ProRes. I then need to process the audio; my plan was to open the source ProRes in ffmpeg as a second input and process it there (speed-change the audio, map channels, change the format, etc.). My question is:
Is this the best, cleanest way to deal with the audio when my end result will be processed by ffmpeg?
I am using VapourSynth portable and Python 3.5.1 embedded.

VapourSynth script, thanks to poisondeathray and sneaker_ger:

Code:
import vapoursynth as vs

core = vs.get_core()
# Open the source ProRes .mov (LibavSMASHSource: no index file needed for mp4/mov)
ret = core.lsmas.LibavSMASHSource(source=r'movie.mov')
# Resize to PAL SD at 4:4:4 so the matrix conversion works on full-resolution chroma
ret = core.fmtc.resample(clip=ret, w=720, h=576, css="444", kernel="spline36")
# Convert the colour matrix from Rec.709 (HD) to Rec.601 (SD)
ret = core.fmtc.matrix(clip=ret, mats="709", matd="601")
# Back to 4:2:2 and 10-bit for ProRes output
ret = core.fmtc.resample(clip=ret, css="422")
ret = core.fmtc.bitdepth(clip=ret, bits=10)
# Conform to 25fps (PAL speed-up); the audio is handled separately
ret = core.std.AssumeFPS(ret, fpsnum=25, fpsden=1)
ret.set_output()
ffmpeg command:

Code:
"C:\Program Files\python-3.5.1\VSPipe.exe" --y4m "script.vpy" - | "G:\ffmpeg\ffmpeg.exe" -f yuv4mpegpipe -i - -i "movie.mov" -map 0:v -map 1:1 -c:v prores -c:a pcm_s24le -y output.mov

Last edited by speedyrazor; 17th January 2016 at 21:35.
speedyrazor is offline   Reply With Quote
Old 17th January 2016, 21:27   #2  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
I don't see anything wrong with it.

Just two notes about L-SMASH Works:
  • LWLibavSource() creates an index; you can use LibavSMASHSource() instead for mp4/mov (just like what we discussed for AviSynth).
  • You can leave out the "format" parameter in VapourSynth. It will automatically use the source format and can handle it natively.
sneaker_ger is offline   Reply With Quote
Old 17th January 2016, 21:33   #3  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
Quote:
Originally Posted by sneaker_ger View Post
I don't see anything wrong with it.

Just two notes about L-SMASH Works:
  • LWLibavSource() creates an index; you can use LibavSMASHSource() instead for mp4/mov (just like what we discussed for AviSynth).
  • You can leave out the "format" parameter in VapourSynth. It will automatically use the source format and can handle it natively.
Good tips, thanks. I will switch to LibavSMASHSource and remove the format parameter (I have adjusted the above to show that). I see others keep referring to SoX?
speedyrazor is offline   Reply With Quote
Old 17th January 2016, 21:37   #4  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Oh, I forgot about the audio. Of course you need to speed up the audio as well if you speed up the video. SoX with e.g. tempo filter is one solution. The other one would be to use AviSynth for audio - you already had that working IIRC?

SoX would work like that:
ffmpeg -i movie.mov -vn -f wav - | sox -t wav - -b 24 output.wav stretch 1.0427083333

Then use output.wav for muxing with video in ffmpeg.

Last edited by sneaker_ger; 17th January 2016 at 21:42.
sneaker_ger is offline   Reply With Quote
Old 17th January 2016, 21:43   #5  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
Quote:
Originally Posted by sneaker_ger View Post
Oh, I forgot about the audio. Of course you need to speed up the audio as well if you speed up the video. SoX with e.g. stretch filter is one solution. The other one would be to use AviSynth for audio - you already had that working IIRC?
I did indeed have all that working correctly in AviSynth; my only worry now is encoding speed, having to read in the file twice, once for the video and once for the audio?

UPDATE: it doesn't seem to be a problem; I just ran a test (as above) on the server and it's averaging about 90fps. Nice!
I have written a Python application which runs up to 4 jobs at the same time, which I had running with AviSynth. So now I just need to swap to VapourSynth and adjust the application a bit to incorporate the above script. Thanks for the help again.
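Roughly along these lines (not my actual application, just a simplified sketch: encode_one(), the short vspipe/ffmpeg names and the job list are placeholders):

Code:
from concurrent.futures import ThreadPoolExecutor
import subprocess

def encode_one(vpy_path, out_path):
    # One job: VSPipe's y4m output piped into ffmpeg, as in the command above.
    vspipe = subprocess.Popen(["vspipe", "--y4m", vpy_path, "-"],
                              stdout=subprocess.PIPE)
    subprocess.run(["ffmpeg", "-y", "-f", "yuv4mpegpipe", "-i", "-",
                    "-c:v", "prores", out_path],
                   stdin=vspipe.stdout, check=True)
    vspipe.stdout.close()
    vspipe.wait()

jobs = [("script1.vpy", "out1.mov"), ("script2.vpy", "out2.mov")]

# Run up to 4 jobs at the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    for future in [pool.submit(encode_one, v, o) for v, o in jobs]:
        future.result()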

Last edited by speedyrazor; 17th January 2016 at 22:59.
speedyrazor is offline   Reply With Quote
Old 18th January 2016, 10:26   #6  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
Soooo....
If I use VapourSynth for video and AviSynth for audio, then pipe both into ffmpeg, how would the piping work? I am new to piping into ffmpeg.
speedyrazor is offline   Reply With Quote
Old 18th January 2016, 10:30   #7  |  Link
jackoneill
unsigned int
 
jackoneill's Avatar
 
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
Quote:
Originally Posted by speedyrazor View Post
Soooo....
If I use VapourSynth for video and AviSynth for audio, then pipe both into ffmpeg, how would the piping work? I am new to piping into ffmpeg.
In Linux it would be with at least one named pipe (`mkfifo`). But Windows doesn't have them, does it?

Programs (ffmpeg) have only one standard input, so you can't pipe two things into them. With named pipes, one (or both) of the things would look like a normal file.
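Just to illustrate the named-pipe idea on Linux (a rough, untested sketch: the paths, the plain vspipe/ffmpeg names and the pcm_s24le choice are placeholders, not taken from this thread):

Code:
import os
import subprocess

fifo = "/tmp/audio.wav"
if not os.path.exists(fifo):
    os.mkfifo(fifo)            # the FIFO looks like a normal file to ffmpeg

# Writer: stream the source audio into the FIFO as WAV.
audio = subprocess.Popen(["ffmpeg", "-y", "-i", "movie.mov",
                          "-vn", "-f", "wav", fifo])

# Reader: video on stdin from VSPipe, audio read from the FIFO.
vspipe = subprocess.Popen(["vspipe", "--y4m", "script.vpy", "-"],
                          stdout=subprocess.PIPE)
mux = subprocess.Popen(["ffmpeg", "-y", "-f", "yuv4mpegpipe", "-i", "-",
                        "-i", fifo, "-map", "0:v", "-map", "1:a",
                        "-c:v", "prores", "-c:a", "pcm_s24le", "output.mov"],
                       stdin=vspipe.stdout)

mux.wait()
audio.wait()
os.remove(fifo)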
__________________
Buy me a "coffee" and/or hire me to write code!
jackoneill is offline   Reply With Quote
Old 18th January 2016, 10:44   #8  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
That makes sense, thanks.
How would I deal with speed-changing the audio with no video in the AviSynth script?

Normally I would do this:

Code:
v = lsmashVideoSource("movie.mov")
# Audio tracks 2-8 of the source .mov, merged into one multi-channel clip
a1 = lsmashAudioSource("movie.mov", track=2)
a2 = lsmashAudioSource("movie.mov", track=3)
a3 = lsmashAudioSource("movie.mov", track=4)
a4 = lsmashAudioSource("movie.mov", track=5)
a5 = lsmashAudioSource("movie.mov", track=6)
a6 = lsmashAudioSource("movie.mov", track=7)
a7 = lsmashAudioSource("movie.mov", track=8)
a = MergeChannels(a1,a2,a3,a4,a5,a6,a7)
AudioDub(v, a)
# sync_audio=true stretches the audio along with the frame rate change
AssumeFPS(25, 1, true)
ResampleAudio(48000)
This speed-changes the audio.

How would I do this not referencing any video:

Code:
a1 = lsmashAudioSource("movie.mov", track=2)
a2 = lsmashAudioSource("movie.mov", track=3)
a3 = lsmashAudioSource("movie.mov", track=4)
a4 = lsmashAudioSource("movie.mov", track=5)
a5 = lsmashAudioSource("movie.mov", track=6)
a6 = lsmashAudioSource("movie.mov", track=7)
a7 = lsmashAudioSource("movie.mov", track=8)
a = MergeChannels(a1,a2,a3,a4,a5,a6,a7)
return a
Also, would Trim and BlankClip (I need to add some silence) work the same way with audio-only clips?

Last edited by speedyrazor; 18th January 2016 at 13:22.
speedyrazor is offline   Reply With Quote
Old 18th January 2016, 16:18   #9  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
For pitch correction this seems to work:
Code:
a1 = lsmashAudioSource("movie.mov", track=2)
a2 = lsmashAudioSource("movie.mov", track=3)
a3 = lsmashAudioSource("movie.mov", track=4)
a4 = lsmashAudioSource("movie.mov", track=5)
a5 = lsmashAudioSource("movie.mov", track=6)
a6 = lsmashAudioSource("movie.mov", track=7)
a7 = lsmashAudioSource("movie.mov", track=8)
a = MergeChannels(a1,a2,a3,a4,a5,a6,a7)
a = a.TimeStretchPlugin(tempo = 25.0/(24000.0/1001.0)*100.0)
a = a.AssumeFPS(25, 1)
return a
But this does not seem to work for speeding up; it fails with the error below:

Code:
a1 = lsmashAudioSource("movie.mov", track=2)
a2 = lsmashAudioSource("movie.mov", track=3)
a3 = lsmashAudioSource("movie.mov", track=4)
a4 = lsmashAudioSource("movie.mov", track=5)
a5 = lsmashAudioSource("movie.mov", track=6)
a6 = lsmashAudioSource("movie.mov", track=7)
a7 = lsmashAudioSource("movie.mov", track=8)
a = MergeChannels(a1,a2,a3,a4,a5,a6,a7)
a = a.AssumeFPS(25, 1, true)
a = a.ResampleAudio(48000)
return a
Code:
[avisynth @ 0510d000] Avisynth: division by zero at 0x0000251B in C:\Windows\SYSTEM32\avisynth.DLL
(\\10.0.1.103\folder\23.avs, line 9)
\\10.0.1.103\folder\23.avs: Unknown error occurred
speedyrazor is offline   Reply With Quote
Old 18th January 2016, 16:59   #10  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
If there is no video AviSynth does not have a starting point. 25 fps is a video frame rate. Basically, "AssumeFPS(25, 1, true)" is telling AviSynth to "change speed of audio (and video) by factor 25/X". But since there is no video AviSynth does not know what "X" is. (It should probably output a better error message, though...)

But:
Code:
TimeStretchPlugin(tempo = 25.0/(24000.0/1001.0)*100.0)
This does change speed! It just does not change the pitch. This is probably what you want to do.

See the documentation for info about the tempo, pitch and rate parameters. If you want to change speed without preserving pitch, you would use rate. Or you could load a video (even a fake one with BlankClip()) and use AssumeFPS(), or do a combination of AssumeSampleRate() and ResampleAudio().
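For this thread's numbers (a 24000/1001 fps source conformed to 25 fps, and 48 kHz audio, which is my assumption), the values work out roughly like this:

Code:
src_fps = 24000 / 1001            # ~23.976
dst_fps = 25.0
speed = dst_fps / src_fps         # ~1.0427083333, the factor from the SoX example earlier

# TimeStretchPlugin's tempo parameter is a percentage:
tempo = speed * 100.0             # ~104.27

# AssumeSampleRate()/ResampleAudio() route: relabel the rate so playback is
# faster (shorter), then resample back to 48 kHz keeping the new duration.
assumed_rate = round(48000 * speed)   # 50050 Hz
print(tempo, assumed_rate)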
sneaker_ger is offline   Reply With Quote
Old 18th January 2016, 20:44   #11  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
Thanks, I will give that a try. Just out of interest (and so I could stay 64-bit throughout), what about AviSynth+? Could I use that for opening audio, changing audio, etc.?
speedyrazor is offline   Reply With Quote
Old 18th January 2016, 20:53   #12  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Theoretically, yes. But I don't know about a) compatibility with ffmpeg, or b) the availability of a 64-bit TimeStretch(Plugin) suited for multi-channel audio.
sneaker_ger is offline   Reply With Quote
Old 20th January 2016, 15:20   #13  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
Hi, thanks for the help so far, it's got me a long way, but now I am stuck again.
I am having trouble getting an ffmpeg pipe to work using VapourSynth and Python Popen commands (I know this may not be the place to ask this, but I am out of ideas). Running the pipe command on its own in a Windows terminal works correctly. Here is the Python code:

Code:
import subprocess
from subprocess import Popen, PIPE

command1 = 'F:/ffmpeg/VapourSynth/VSPipe.exe --y4m C:/Users/myself/Desktop/script.vpy -'
command2 = 'F:/ffmpeg/ffmpeg.exe -f yuv4mpegpipe -i - -c:v prores -an //192.168.0.100/media/temp/OutMov.mov -report'

# VSPipe's stdout feeds ffmpeg's stdin
process1 = Popen(command1, stdout=PIPE, shell=False)
process2 = Popen(command2, stdin=process1.stdout, stderr=PIPE)

while True:
    line = process2.stderr.readline().decode('utf-8')
    if not line:        # ffmpeg has exited
        break
    print(line)

Running this with an ffmpeg report and printing stderr produces this:
Code:
pipe:: Operation not permitted.
Here is the ffmpeg report:

Code:
F:/ffmpeg/ffmpeg.exe -f yuv4mpegpipe -i - -c:v prores -an -y //192.168.0.100/media/temp/OutMov.mov -report
ffmpeg version N-77203-gb8e5b1d Copyright (c) 2000-2015 the FFmpeg developers
  built with gcc 5.2.0 (GCC)
  configuration: --arch=x86 --target-os=mingw32 --cross-prefix=/Users/myself/Desktop/Download/ffmpeg-windows-build-helpers-master/sandbox/mingw-w64-i686/bin/i686-w64-mingw32- --pkg-config=pkg-config --disable-w32threads --enable-gpl --enable-libsoxr --enable-fontconfig --enable-libass --enable-libutvideo --enable-libbluray --enable-iconv --enable-libtwolame --extra-cflags=-DLIBTWOLAME_STATIC --enable-libzvbi --enable-libcaca --enable-libmodplug --extra-libs=-lstdc++ --extra-libs=-lpng --enable-libvidstab --enable-libx265 --enable-decklink --extra-libs=-loleaut32 --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-version3 --enable-zlib --enable-librtmp --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libopenjpeg --enable-gnutls --enable-libgsm --enable-libfreetype --enable-libopus --enable-frei0r --enable-filter=frei0r --enable-libvo-aacenc --enable-bzlib --enable-libxavs --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-libschroedinger --enable-li
  libavutil      55. 10.100 / 55. 10.100
  libavcodec     57. 17.100 / 57. 17.100
  libavformat    57. 19.100 / 57. 19.100
  libavdevice    57.  0.100 / 57.  0.100
  libavfilter     6. 20.100 /  6. 20.100
  libswscale      4.  0.100 /  4.  0.100
  libswresample   2.  0.101 /  2.  0.101
  libpostproc    54.  0.100 / 54.  0.100
Splitting the commandline.
Reading option '-f' ... matched as option 'f' (force format) with argument 'yuv4mpegpipe'.
Reading option '-i' ... matched as input file with argument '-'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'prores'.
Reading option '-an' ... matched as option 'an' (disable audio) with argument '1'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '//192.168.0.100/media/temp/OutMov.mov' ... matched as output file.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file -.
Applying option f (force format) with argument yuv4mpegpipe.
Successfully parsed a group of options.
Opening an input file: -.
[AVIOContext @ 006d9800] Statistics: 0 bytes read, 0 seeks
pipe:: Operation not permitted
I am stumped. What am I doing wrong?
speedyrazor is offline   Reply With Quote
Old 20th January 2016, 16:16   #14  |  Link
speedyrazor
Registered User
 
Join Date: Mar 2003
Posts: 194
Please ignore my above post; it turned out to be a file path permissions issue.
speedyrazor is offline   Reply With Quote
Old 29th January 2016, 14:54   #15  |  Link
dipje
Registered User
 
Join Date: Oct 2014
Posts: 271
Why the matrix 709 to 601?
As long as you feed YUV into ffmpeg it doesn't touch it and you won't get any color shift. All the stories about ffmpeg and color shift come up when you let it do an RGB <-> YUV conversion (it will default to 601 then, even if someone expects it to use 709). As long as you feed it YUV and encode in YUV, there is no color change. And HD-material ProRes is expected to be in 709.

And is there really a need (I think I don't see it, but what do I know) to do the video and audio in one pass? Why not encode the video, encode the audio (with AviSynth or whatever you want) into a separate .wav file and then mux them together?
If your input file is huge there might be some extra disk access, but you won't notice it as long as you're reading from a normal file, I guess. If you're reading straight from a memory card, or you want the process to run on a server with 32+ instances running at the same time, then disk access might be a factor, but otherwise I don't see a reason to do it in one go.
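Roughly something like this, for example (just a sketch in Python; the file names and the short vspipe/ffmpeg names are placeholders):

Code:
import subprocess

# 1) Video pass: VSPipe -> ffmpeg -> ProRes, no audio.
vspipe = subprocess.Popen(["vspipe", "--y4m", "script.vpy", "-"],
                          stdout=subprocess.PIPE)
subprocess.run(["ffmpeg", "-y", "-f", "yuv4mpegpipe", "-i", "-",
                "-c:v", "prores", "-an", "video.mov"],
               stdin=vspipe.stdout, check=True)
vspipe.stdout.close()
vspipe.wait()

# 2) Audio pass: produce output.wav separately (AviSynth, SoX, whatever).

# 3) Mux the two, copying both streams as-is.
subprocess.run(["ffmpeg", "-y", "-i", "video.mov", "-i", "output.wav",
                "-map", "0:v", "-map", "1:a", "-c", "copy", "final.mov"],
               check=True)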
dipje is offline   Reply With Quote