#1
Registered User
Join Date: Mar 2021
Location: San Diego, CA, USA
Posts: 4
SIF Codec added to FFMPEG
We have added SIF codec support to FFmpeg. You can download it here: https://sifcodec.com/downloads
The new SIF codec version 2.07 and its libraries / utilities are available too. The current version is faster and supports a better psycho-visual model. Two new presets have been added:
Fast: 25 frames/s encoding for FHD
Faster: 30 frames/s encoding for FHD
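For reference, an encode with the new build might be invoked like this. This is only a sketch: the encoder name "sif" and the mapping of the announced presets onto FFmpeg's -preset option are assumptions based on the announcement above, not verified against the download.
Code:
# Hypothetical invocation of the patched FFmpeg build; the encoder
# name "sif" and its -preset support are unverified assumptions.
ffmpeg -i input_1080p.mov -c:v sif -preset fast -c:a copy output.mkv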
#3
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,652
Is the name a reference to the old SIF resolutions used for videoconferencing and MPEG-1?
https://en.wikipedia.org/wiki/Common...mediate_Format
Weird to remember how much of my career was spent really worrying about sample aspect ratios. Having everything recent in square pixels is such a simplifier!
#4
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,696
SIF is a patented wavelet image compression algorithm implementing a Simplified Interpolation Filter. It has been explained in this thread about the SIF1 codec.
Video compression based on wavelets has been tried with many different approaches, yet no "holy grail" has been found.
Last edited by LigH; 17th September 2022 at 22:50.
#6
SuperVirus
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,350
The VfW files of the SIF codec make 32-bit VirtualDub crash on opening.
64-bit VirtualDub(2) doesn't crash on opening, but I uninstalled the codec anyway before trying to run a test encode.
#7
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,696
The media-autobuild suite has a feature supporting custom scripts that modify the default preparation of the projects to be built. You can write shell scripts and put them in specially named folders; they are executed to pre-update and pre-compile additional libraries and to add command-line parameters to compiler calls. If you know how...
I do not really know how. I could try and fail several times on my own, but I would appreciate some experienced support.
PS: Regarding the crash in 32-bit code, El Heggunte posted a report that sounds like the code works only in 64-bit CPU mode.
Last edited by LigH; 18th September 2022 at 17:20.
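A hook of the kind described above might look roughly like this. This is a sketch only: the script location and naming convention are hypothetical (check the suite's wiki for the real layout), the repository URL is a placeholder, and $LOCALDESTDIR is assumed to be the suite's local install prefix.
Code:
#!/bin/bash
# Hypothetical media-autobuild suite pre-compile hook; the path,
# naming convention, and repository URL are placeholders.
# Fetch and build an extra library before FFmpeg is compiled.
git clone --depth 1 https://example.com/libsif.git
cd libsif || exit 1
make && make install PREFIX="$LOCALDESTDIR"
cd ..
# Add extra parameters to subsequent compiler calls.
export CFLAGS="$CFLAGS -I$LOCALDESTDIR/include"
export LDFLAGS="$LDFLAGS -L$LOCALDESTDIR/lib"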
#8
Registered User
Join Date: Nov 2004
Location: Poland
Posts: 2,834
Currently a huge amount of data is stored in the cloud, and a lot of it is UHD. There are plenty of cases where we need to see a proxy preview or take just an HD version of it, and currently this is done at relatively big cost (also bandwidth-wise) due to egress fees. We read the full UHD data just to see, e.g., a 1024x576 preview. There are ways to solve it, but they still cost time and money. As far as I've been told, a wavelet-based codec should be easy to adjust so that, for e.g. HD, only the "needed" portion of data is read from the source, which would reduce costs and bandwidth massively without any intermediate processes.

It looks like codecs are written without any plan for how they will be used in the real world. I have a feeling that everything is done based on a 30-year-old way of thinking. No real innovation. The math does get more complex (and about everyone focuses just on this part), but there are also other (actually easier) ways of solving some of the compression issues.

I keep repeating it, but Cineform still represents the most advanced and feature-packed compression tech, even if it's 20 years old and efficiency-wise it's just an average codec. Yet it's so clever (simplicity, active metadata, the whole engine inside the codec, etc.) that I'm puzzled no one has even matched it. Tico or Daniel2 are great, but at the same time so primitive, as they miss all the bits around compression itself.

Metadata is so underestimated and limited (yet we have crazy complex math inside every codec). Why can't I have a simple way of adding XML- or JSON-based metadata into a file (and adjusting it when needed)? Instead I have to keep it as side files or store it in a database. Really that difficult? MXF seems to give this ability(?), but it looks so overdone as a container.

There is definitely room for one intermediate codec which is actually written for the post industry, not just as a math exercise.
Last edited by kolak; 18th September 2022 at 15:37.
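JPEG 2000 is an existing demonstration of what kolak describes: each DWT level is a separate resolution layer in the codestream, so a decoder can stop after the coarse subbands. A sketch with OpenJPEG's reference decoder (file names are placeholders; whether the skipped bytes also stay off the network depends on the codestream's progression order and on how the file is fetched):
Code:
# Decode a 3840x2160 JPEG 2000 frame at quarter size (960x540) by
# discarding the two highest DWT resolution levels; only the coarse
# part of the codestream has to be decoded.
opj_decompress -i uhd_frame.jp2 -o preview.png -r 2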
#9
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,696
LuRaWave image compression supported that, e.g. low-quality decoding as a free preview (also with full-quality zones inside the image) and complete full-quality decoding with DRM.
#10
Registered User
Join Date: Nov 2004
Location: Poland
Posts: 2,834
I haven't even mentioned encryption, which for big studios is also a problem. I've worked on projects where you have guards on the doors and a no-phone policy. Those things could also be simplified with a clever approach, yet they use ancient solutions.
#11
SuperVirus
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,350
Neiromaster may be good at coding for Linux and unixoids in general, but he doesn't know how to use his code correctly in the VfW and DirectShow interfaces.
We already have Cineform and Motion JPEG 2000; the world doesn't need an eternally experimental wavelet codec like SIF1.
#13
SuperVirus
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,350
@ kolak: ^ I don't.
But the author of the codec released a VfW encoder and a DirectShow filter that didn't work. That's what he gets for not testing what he compiles. So if DirectShow is "irrelevant" today, please go say this to Neiromaster, not to me.
And as my new Videohelp sigfile says... most programmers are machines that transform alcohol into bugs.
Last edited by filler56789; 20th September 2022 at 03:49. Reason: more clarity.
#15
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,652
For efficiency, wavelets can do somewhat better than classic JPEG for still images, but they've never held a candle to the leading video codecs of the era. Efficient motion compensation with wavelets has been a long-unsolved problem. And even for still images, I don't think any wavelet solution has been competitive against a well-tuned HEVC IDR frame. The advantage of MPEG-style codecs in interframe efficiency leaves wavelet solutions looking quite bad for the bandwidth.

For compatibility, deploying compatible encoders, servers, and clients end-to-end is a big lift, and would only be justified if it provided some compelling value. WebAssembly is making web-based decoders a lot more feasible, but those don't have DRM support. AVC & HEVC encode and decode are already built into the hardware of PCs, Macs, mobile devices, and living-room players. AVC and HEVC also have scalable profiles with wavelet-like enhancement layers. Those are supported by a fair amount of hardware and software already, but I've not seen them used outside of videoconferencing applications.

As it is, making an AVC or HEVC proxy from a ProRes source offers better quality at a given bitrate with broad compatibility, and allows for HW DRM if desired for higher resolutions.
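As a concrete instance of that conventional proxy workflow, a one-time transcode might look like this (file names are placeholders; the scale roughly matches the 1024x576 preview size mentioned earlier in the thread):
Code:
# One-time proxy transcode: ProRes master to a small, widely
# decodable H.264 proxy.
ffmpeg -i master_prores.mov -vf scale=1024:-2 \
       -c:v libx264 -preset fast -crf 20 -c:a aac proxy.mp4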
#17
Registered User
Join Date: Nov 2004
Location: Poland
Posts: 2,834
Any proxy creation costs time and money. If you have a big UHD library (which some companies already do) and a lot of proxy preview needs, it can actually be serious money. We should be able to do it way better. A smaller-resolution preview without a reduced data read solves nothing when it comes to cloud storage; e.g. 10% better efficiency (due to a "better" codec) is really irrelevant in such a case.

I don't think an intermediate codec should be designed mainly with efficiency in mind. This is why JPEG 2000 is cool, but not really: for its crazy computational needs it offers not that much more. Finally someone realised it, and now we have JPEG 2000 HT.

At the moment no one (except Apple) has even implemented ProRes decode at smaller resolution (Resolve may be doing it now), even though it's in Apple's official decoder as far as I understand it.
Last edited by kolak; 21st September 2022 at 21:51.