Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.
26th May 2015, 14:01 | #1
Registered User
Join Date: Jan 2008
Posts: 185
|
TFM() and DGSource() for VapourSynth?
After having used AviSynth for a decade for my video edits, I just discovered VapourSynth and it looks fantastic. Much more modern and compatible, and without all the pain and suffering that the various MT/64-bit AviSynth versions involved. Kudos to everybody involved!
However, while a ton of my filters appear to have been ported, there are two filters I really need that don't exist for VapourSynth yet. (1) For inverse telecine (IVTC) I use tfm.tdecimate(hybrid=1). Back when I tested the various IVTC filters for AviSynth it seemed to be the best, and it has performed well for all these years. It does the right thing even if the cadence is off, and it gives a creditable approximation when there are--as occasionally happens--some genuine interlaced sequences intermixed with the telecined ones. Is VIVTC an adequate replacement? (2) My source filter is always DGSource(). I saw that there is a d2vsource() filter, which is great, but for modern codecs like H.264 you need DGIndexNV/DGSource(). Is there any work on this, or are there alternatives? FFMS2 could work, I guess, but I don't trust its accuracy the way I trust DGSource().
26th May 2015, 14:37 | #2
Registered User
Join Date: Aug 2011
Posts: 103
|
VFM works much the same as TFM. But compared to TDecimate, VDecimate lacks some important features (such as a VFR mode).
As for source filters:
1. For .ts/.m2ts input, lsmas.LWLibavSource is a good choice; it's safer than ffms2 when non-linear access occurs.
2. For .mp4 input, lsmas.LibavSMASHSource is definitely the best choice.
3. For .mkv input, both lsmas and ffms2 are OK.
If you really want to use those AviSynth plugins, it's possible to load them with 32-bit VapourSynth. Alternatively, you can try using them in an AviSynth script and loading it with vsavsreader. Last edited by mawen1250; 26th May 2015 at 14:46.
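To make the comparison concrete, here is a minimal sketch of what the VIVTC equivalent of AviSynth's TFM().TDecimate() (constant-frame-rate mode only) might look like, assuming the vivtc and lsmas plugins are loaded; the path and parameter values are illustrative, not tuned:

```python
# Sketch: CFR inverse telecine in VapourSynth, roughly mirroring
# AviSynth's TFM().TDecimate(). Assumes `core` has the vivtc and
# lsmas plugins available; the parameters shown are illustrative.

def ivtc_cfr(core, path):
    # For .ts/.m2ts input, LWLibavSource tends to be safer than ffms2
    # under non-linear access.
    clip = core.lsmas.LWLibavSource(source=path)
    # VFM does the field matching that TFM used to do (order=1: top field first).
    matched = core.vivtc.VFM(clip, order=1)
    # VDecimate drops the one duplicate frame per cycle of 5 that
    # 3:2 pulldown introduces.
    return core.vivtc.VDecimate(matched)
```

In a script you would then do something like `import vapoursynth as vs`, `core = vs.get_core()`, and `ivtc_cfr(core, "input.m2ts").set_output()`. Note there is no direct equivalent of hybrid=1 here, since VDecimate has no VFR mode.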
26th May 2015, 15:19 | #4
unsigned int
Join Date: Oct 2012
Location: 🇪🇺
Posts: 760
|
Quote:
There are lots of filters you can look at for inspiration: https://github.com/vapoursynth/vapou...er/src/filters https://github.com/vapoursynth/vapou...aster/src/core In particular, BlankClip, because it's a simple source filter. If you have any questions, feel free to ask.
__________________
Buy me a "coffee" and/or hire me to write code! |
26th May 2015, 15:32 | #5
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,555
|
Don't forget the invert example, which has comments explaining many things.
Otherwise I think blankclip is the best place to start looking, since it's a source filter.
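Before diving into the C API, the same "frames from nothing" idea can be prototyped at the Python level: BlankClip fabricates empty frames and ModifyFrame fills them in. This is only a sketch; `fill` is a hypothetical per-frame callback, and the dimensions are placeholders.

```python
# Sketch of a Python-level "source filter", assuming a VapourSynth `core`.
# `fill` is a hypothetical user callback that writes pixel data into a frame.

def python_source(core, fmt, fill, width=640, height=480, length=120):
    # BlankClip is the "frames from nothing" half of a source filter.
    blank = core.std.BlankClip(width=width, height=height,
                               format=fmt, length=length)

    def selector(n, f):
        fout = f.copy()   # frames are read-only; copy before writing
        fill(n, fout)     # callback fills in the pixel data for frame n
        return fout

    # ModifyFrame invokes `selector` for every requested frame.
    return core.std.ModifyFrame(clip=blank, clips=blank, selector=selector)
```

A real C plugin does the same two jobs (declare the clip's properties, then produce each frame on request), just through the plugin API instead of ModifyFrame.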
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
26th May 2015, 22:12 | #11
Professional Code Monkey
Join Date: Jun 2003
Location: Kinnarps Chair
Posts: 2,555
|
Quote:
But since you pointed it out, I'll make vdecimate export more information so you can create something similar with a script, if that's what you want.
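Assuming VDecimate ends up exposing its per-frame duplicate metrics (the information mentioned above) as frame properties from a metrics-only pass, the script-side selection step could be as simple as this hypothetical sketch, where `diffs` stands in for a collected per-frame "difference from previous frame" metric:

```python
def pick_keeps(diffs, cycle=5):
    # For each cycle of `cycle` frames, drop the frame with the smallest
    # difference from its predecessor (the most likely duplicate) and keep
    # the rest. A sketch only: it always drops one frame per cycle, even
    # when no frame is really a duplicate.
    keep = []
    for start in range(0, len(diffs), cycle):
        chunk = list(range(start, min(start + cycle, len(diffs))))
        drop = min(chunk, key=lambda i: diffs[i])
        keep.extend(i for i in chunk if i != drop)
    return keep
```

The resulting index list could then drive frame selection in a script (e.g. by splicing `clip[i]` slices); that part is left out here since the exact exported properties aren't settled in this thread.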
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet |
26th May 2015, 23:52 | #12
Registered User
Join Date: Jan 2008
Posts: 185
|
@Myrsloik You are right in principle--VFR really is the way to go for material that genuinely is a mixture of interlaced and telecined content.
Fortunately, these days almost all the material I work on is already either constant 24-frames-per-second progressive or 60-fields-per-second interlaced. Almost all that remains is purely (or almost purely) 24-frames-per-second material telecined to 60 fields per second, with little or no interlaced material mixed in. All of those convert well to constant 24 or 30 progressive frames per second. The tiny fraction (about 0% for me in recent years) that does not fit these categories might convert better to VFR, but then I'd start to worry about compatibility with all my devices and players.
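The arithmetic of that common case can be illustrated in plain Python. This is a toy model (frames are just labels and the field matching is idealized), but it shows why 3:2 pulldown turns 24 frames into 60 fields and why dropping one frame per cycle of five recovers the original:

```python
def pulldown_32(frames):
    # Telecine: 24p -> 60 fields/s by holding each frame for 2 or 3
    # field durations in alternation (2 + 3 per frame pair; 24 * 2.5 = 60).
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

def field_match(fields):
    # Idealized stand-in for TFM/VFM: collapse 60 fields/s to 30 frames/s.
    return fields[0::2]

def decimate(frames, cycle=5):
    # Stand-in for TDecimate/VDecimate: drop one duplicate per cycle of 5.
    out = []
    for start in range(0, len(frames), cycle):
        chunk = frames[start:start + cycle]
        for i in range(1, len(chunk)):
            if chunk[i] == chunk[i - 1]:
                del chunk[i]
                break
        out.extend(chunk)
    return out
```

Four source frames A B C D become ten fields; after matching and decimation the original four come back out, which is exactly the constant-24p case described above.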
27th May 2015, 05:52 | #14
Pajas Mentales...
Join Date: Dec 2004
Location: Spanishtán
Posts: 496
|
To my knowledge, CUVID exists for Linux:
Code:
/usr/lib/libnvcuvid.so
/usr/lib/libnvcuvid.so.1
/usr/lib/libnvcuvid.so.352.09
/opt/cuda/include/cuviddec.h
/opt/cuda/include/nvcuvid.h
/opt/cuda/samples/3_Imaging/cudaDecodeGL/doc/nvcuvid.pdf

Code:
└───╼ sudo GLPATH=/usr/lib make
[sudo] password for sl1pkn07:
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o FrameQueue.o -c FrameQueue.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o ImageGL.o -c ImageGL.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o VideoDecoder.o -c VideoDecoder.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o VideoParser.o -c VideoParser.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o VideoSource.o -c VideoSource.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o cudaModuleMgr.o -c cudaModuleMgr.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o cudaProcessFrame.o -c cudaProcessFrame.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o videoDecodeGL.o -c videoDecodeGL.cpp
"/opt/cuda"/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_20,code=compute_20 -o cudaDecodeGL FrameQueue.o ImageGL.o VideoDecoder.o VideoParser.o VideoSource.o cudaModuleMgr.o cudaProcessFrame.o videoDecodeGL.o -L../../common/lib/linux/x86_64 -lGL -lGLU -lX11 -lXi -lXmu -lglut -lGLEW -lcuda -lcudart -lnvcuvid
mkdir -p ../../bin/x86_64/linux/release
cp cudaDecodeGL ../../bin/x86_64/linux/release
"/opt/cuda"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=compute_20 -o NV12ToARGB_drvapi64.ptx -ptx NV12ToARGB_drvapi.cu
mkdir -p data
cp -f NV12ToARGB_drvapi64.ptx ./data
mkdir -p ../../bin/x86_64/linux/release
cp -f NV12ToARGB_drvapi64.ptx ../../bin/x86_64/linux/release

If you run the sample:

Code:
└───╼ ./cudaDecodeGL
[CUDA/OpenGL Video Decode]
Command Line Arguments:
argv[0] = ./cudaDecodeGL
[cudaDecodeGL]: input file: <./data/plush1_720p_10s.m2v>
VideoCodec : MPEG-2
Frame rate : 30000/1001fps ~ 29.97fps
Sequence format : Progressive
Coded frame size: [1280, 720]
Display area : [0, 0, 1280, 720]
Chroma format : 4:2:0
Bitrate : 14116kBit/s
Aspect ratio : 16:9
argv[0] = ./cudaDecodeGL
> Device 0: < GeForce GTX 770 >, Compute SM 3.0 detected
>> initGL() creating window [1280 x 720]
> Using CUDA/GL Device [0]: GeForce GTX 770
> Using GPU Device: GeForce GTX 770 has SM 3.0 compute capability
Total amount of global memory: 2039.7773 MB
>> modInitCTX<NV12ToARGB_drvapi64.ptx > initialized OK
>> modGetCudaFunction< CUDA file: NV12ToARGB_drvapi64.ptx > CUDA Kernel Function (0x01fca330) = < NV12ToARGB_drvapi >
>> modGetCudaFunction< CUDA file: NV12ToARGB_drvapi64.ptx > CUDA Kernel Function (0x01fd11c0) = < Passthru_drvapi >
Free memory: 1469.9062 MB
> VideoDecoder::cudaVideoCreateFlags = <1>Use CUDA decoder
setTextureFilterMode(GL_NEAREST,GL_NEAREST)
ImageGL::CUcontext = 01d5b7c0
ImageGL::CUdevice = 00000000
reshape() glViewport(0, 0, 1280, 720)
[cudaDecodeGL] - [Frame: 0016, 00.0 fps, frame time: 89543860224.00 (ms) ]
[cudaDecodeGL] - [Frame: 0032, 804.6 fps, frame time: 1.24 (ms) ]
[cudaDecodeGL] - [Frame: 0048, 654.3 fps, frame time: 1.53 (ms) ]
[cudaDecodeGL] - [Frame: 0064, 683.3 fps, frame time: 1.46 (ms) ]
[cudaDecodeGL] - [Frame: 0080, 657.7 fps, frame time: 1.52 (ms) ]
[cudaDecodeGL] - [Frame: 0096, 637.4 fps, frame time: 1.57 (ms) ]
[cudaDecodeGL] - [Frame: 0112, 637.5 fps, frame time: 1.57 (ms) ]
[cudaDecodeGL] - [Frame: 0128, 599.1 fps, frame time: 1.67 (ms) ]
[cudaDecodeGL] - [Frame: 0144, 627.6 fps, frame time: 1.59 (ms) ]
[cudaDecodeGL] - [Frame: 0160, 686.3 fps, frame time: 1.46 (ms) ]
[cudaDecodeGL] - [Frame: 0176, 598.0 fps, frame time: 1.67 (ms) ]
[cudaDecodeGL] - [Frame: 0192, 548.1 fps, frame time: 1.82 (ms) ]
[cudaDecodeGL] - [Frame: 0208, 516.1 fps, frame time: 1.94 (ms) ]
[cudaDecodeGL] - [Frame: 0224, 603.4 fps, frame time: 1.66 (ms) ]
[cudaDecodeGL] - [Frame: 0240, 725.9 fps, frame time: 1.38 (ms) ]
[cudaDecodeGL] - [Frame: 0256, 747.2 fps, frame time: 1.34 (ms) ]
[cudaDecodeGL] - [Frame: 0272, 753.0 fps, frame time: 1.33 (ms) ]
[cudaDecodeGL] - [Frame: 0288, 733.6 fps, frame time: 1.36 (ms) ]
[cudaDecodeGL] - [Frame: 0304, 715.5 fps, frame time: 1.40 (ms) ]
[cudaDecodeGL] - [Frame: 0320, 656.2 fps, frame time: 1.52 (ms) ]
[cudaDecodeGL] statistics
Video Length (hh:mm:ss.msec) = 00:00:00.502
Frames Presented (inc repeats) = 329
Average Present Rate (fps) = 654.13
Frames Decoded (hardware) = 329
Average Rate of Decoding (fps) = 654.13

http://docs.nvidia.com/cuda/samples/...oc/nvcuvid.pdf

For Mac users, I'm not completely sure whether CUVID works.

Greetings. Last edited by sl1pkn07; 27th May 2015 at 15:35.
27th May 2015, 14:36 | #15
Guest
Posts: n/a
|
For OS X you'd probably want to use Video Toolbox. Its benefit is that it isn't tied to any specific hardware vendor, but obviously that makes videoh's porting effort much more difficult.
I've already been planning to create such a plugin using it, but I just haven't had the time. Last edited by captainadamo; 27th May 2015 at 14:43.
1st June 2015, 01:39 | #16
Registered User
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
|
I would be quite interested in a Windows-only port, if it were compatible with 64-bit VapourSynth.
__________________
madVR options explained |
5th August 2015, 22:50 | #20
Registered User
Join Date: Dec 2002
Posts: 5,565
|
Write an AviSynth script and use VapourSynth's native AviSource or vsavsreader.
Or do you mean a wrapper that creates a VapourSynth-compatible plugin?
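For the first option, a minimal sketch of the VapourSynth side, assuming the bundled AviSource plugin (namespace avisource) is available; the script path is purely illustrative:

```python
def load_avs(core, avs_path):
    # AviSource treats the .avs script itself as the input "file", so the
    # entire AviSynth filter chain runs inside it. Note the bitness of the
    # installed AviSynth must match the VapourSynth process.
    return core.avisource.AVISource(avs_path)
```

The vsavsreader route works the same way conceptually, just through a separate reader plugin instead of VapourSynth's bundled one.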