Old 13th September 2022, 03:30   #1  |  Link
BlackQQQ
Registered User
 
Join Date: Mar 2021
Location: San Diego, CA, USA
Posts: 4
SIF Codec added to FFMPEG

We have added SIF Codec support to FFMPEG. You can download it here: https://sifcodec.com/downloads
The new SIF Codec version 2.07 and the accompanying libraries and utilities are available too.

The current version is faster and supports an improved psycho-visual model.
Two new presets have been added:
Fast: 25 frames/s encoding for FHD
Faster: 30 frames/s encoding for FHD
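
For example, once ffmpeg is built with the patch, an encode can be started along these lines (the encoder name and preset option shown here are only illustrative; check "ffmpeg -encoders" in your build for the exact names):

Code:
# illustrative command; assumes the patched build registers an encoder named "sif"
# and exposes the Fast/Faster presets through a "preset" option
ffmpeg -i input_1080p.mp4 -c:v sif -preset fast -b:v 8M output.mkv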
Old 16th September 2022, 16:46   #2  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,746
Would anyone be able to explain how to modify the media-autobuild suite to include this patch?
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
Old 16th September 2022, 23:07   #3  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,738
Is the name a reference to the old SIF resolutions used for videoconferencing and MPEG-1?

https://en.wikipedia.org/wiki/Common...mediate_Format

It's weird to remember how much of my career was spent really worrying about sample aspect ratios. Having everything recent in square pixels is such a simplifier!
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 17th September 2022, 22:47   #4  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,746
SIF is a patented wavelet image compression algorithm implementing a Simplified Interpolation Filter. It has been explained in this thread about the SIF1 codec.

Video compression based on wavelets has been tried with many different approaches, yet a "holy grail" has not been found.
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid

Last edited by LigH; 17th September 2022 at 22:50.
Old 18th September 2022, 03:02   #5  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,738
Quote:
Originally Posted by LigH View Post
SIF is a patented wavelet image compression algorithm implementing a Simplified Interpolation Filter. It has been explained in this thread about the SIF1 codec.

Video compression based on wavelets has been tried with many different approaches, yet a "holy grail" has not been found.
Yeah, there have been so many promising alternatives to block-based DCT + motion compensation, but nothing has proved practical in the real world to date. I worry that a lot of that is because we've optimized that approach so heavily over decades that even if we found a better fundamental transform, it would still look unpromising until it received comparable refinement.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 18th September 2022, 03:23   #6  |  Link
filler56789
SuperVirus
 
 
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,351
The VfW files of the SIF codec make the 32-bit VirtualDub crash on opening.
The 64-bit VirtualDub(2) doesn't crash on opening, but I uninstalled the codec anyway before trying to run a test encode.
Old 18th September 2022, 08:30   #7  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,746
The media-autobuild suite supports custom scripts that modify the default preparation of the projects to be built. You can write shell scripts, put them into specially named folders, and they will be executed to pre-update and pre-compile additional libraries and to add command-line parameters to the compiler calls. If you know how...

I do not really know how; I could try and fail several times on my own, but I would appreciate some experienced support.
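
Purely as a sketch of what I imagine such a hook would look like — the file location, the variable and the patch URL below are guesses on my part and untested; the suite's own documentation describes the real custom-script names:

Code:
#!/bin/bash
# untested sketch: pre-build hook that patches the ffmpeg tree before it is compiled
cd "$LOCALBUILDDIR"/ffmpeg-git || exit 1                   # directory name assumed
curl -LO https://sifcodec.com/downloads/sif-ffmpeg.patch   # patch file name assumed
git apply sif-ffmpeg.patch || exit 1
# any extra configure switch the patch requires (e.g. a hypothetical --enable-libsif)
# would then be added to build/ffmpeg_options.txt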

PS: Regarding the crash in 32-bit code, El Heggunte posted a report which sounds as if the code works only in 64-bit CPU mode.

Quote:
Originally Posted by BlackQQQ
(in VideoHelp forum)
We use SSE 4.1 instructions in the current version and it will not work on systems without SSE 4.1 support.
This is only temporary, since we did not finish some functions in pure C. It will be corrected in upcoming versions.
But the error message does not really fit: the CPU where the crash was reported does support SSE 4.1 in general. I believe, however, that SIMD instructions are subject to different constraints in 32-bit vs. 64-bit CPU mode.
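
For what it's worth, the usual cure for this class of problem is run-time dispatch: probe the CPU features once and fall back to the plain-C routines when SSE 4.1 is missing, instead of crashing. A minimal sketch using the GCC/Clang builtins on x86 (this has nothing to do with the actual SIF sources, which I have not seen):

Code:
#include <stdio.h>

int main(void)
{
    /* GCC/Clang x86 builtins: probe the CPU once and pick a code path. */
    __builtin_cpu_init();
    if (__builtin_cpu_supports("sse4.1"))
        puts("SSE 4.1 present: safe to use the SIMD code path");
    else
        puts("no SSE 4.1: fall back to the plain-C routines");
    return 0;
}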
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid

Last edited by LigH; 18th September 2022 at 17:20.
Old 18th September 2022, 15:09   #8  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by LigH View Post
SIF is a patented wavelet image compression algorithm implementing a Simplified Interpolation Filter. It has been explained in this thread about the SIF1 codec.

Video compression based on wavelets has been tried with many different approaches, yet a "holy grail" has not been found.
I'm still surprised no one has done a simple thing like partial-resolution decoding combined with a partial data read.

Currently a huge amount of data is stored in the cloud, and a lot of it is UHD. There are plenty of cases where we need a proxy preview or just want an HD version of it, and currently this comes at a relatively big cost (also bandwidth-wise) due to egress fees. We read the full UHD data just to see e.g. a 1024x576 preview. There are ways to solve this, but they still cost time and money. As far as I've been told, a wavelet-based codec could easily be arranged so that for e.g. HD only the "needed" portion of the data is read from the source, which would reduce costs and bandwidth massively without any intermediate processes.
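
Just to put rough numbers on it: in an idealised dyadic wavelet pyramid, reconstructing at 1/2^k of the full resolution only needs roughly 1/4^k of the coefficient data, provided the bitstream stores the subbands in resolution order — which is exactly the kind of layout I mean. A toy calculation (ignoring entropy-coding overhead):

Code:
#include <stdio.h>

int main(void)
{
    /* idealised dyadic pyramid for a 3840x2160 master: decoding at 1/2^k
       resolution needs only the coarse subbands, ~1/4^k of the coefficients */
    for (int k = 0; k <= 3; k++) {
        double frac = 1.0 / (double)(1 << (2 * k));
        printf("decode at %4dx%4d -> read ~%5.1f%% of the coefficient data\n",
               3840 >> k, 2160 >> k, 100.0 * frac);
    }
    return 0;
}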

It looks like codecs are written without any plan for how they will be used in the real world.
I have a feeling that everything is done based on a 30-year-old way of thinking. No real innovation. The math does get more complex (and just about everyone focuses only on that part), but there are also other (actually easier) ways of solving some of the compression issues. I keep repeating it, but Cineform still represents the most advanced and feature-packed compression tech, even though it's 20 years old and efficiency-wise it's just an average codec. Yet it's so clever (simplicity, active metadata, the whole engine inside the codec, etc.) that I'm puzzled no one has even matched it. Tico and Daniel2 are great, but at the same time so primitive, as they miss all the bits around the compression itself. Metadata is so underestimated and limited (yet we have crazy complex math inside every codec). Why can't I have a simple way of adding XML- or JSON-based metadata into the file (and adjusting it when needed), instead of having to keep it as side files or store it in a database? Is that really so difficult? MXF seems to give this ability(?), but it looks so overdone as a container.
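The closest thing available today is stuffing a sidecar into the container as an attachment — e.g. Matroska via ffmpeg, roughly like below (option spelling from memory, so double-check it against the ffmpeg documentation) — but that is still a long way from metadata the codec engine itself understands:

Code:
# remux only, no re-encode: carry a JSON sidecar inside the MKV as an attachment
ffmpeg -i master.mkv -c copy -attach notes.json \
       -metadata:s:t:0 mimetype=application/json master_with_meta.mkv
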
There is definitely room for one intermediate codec that is actually written for the post industry, not just as a math exercise.

Last edited by kolak; 18th September 2022 at 15:37.
Old 18th September 2022, 16:02   #9  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,746
Quote:
Originally Posted by kolak View Post
I'm still surprised no one has done a simple thing like partial-resolution decoding combined with a partial data read.
LuRaWave image compression supported that, e.g. low-quality decoding as a free preview (also with full-quality zones inside the image) and complete full-quality decoding with DRM.
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
Old 18th September 2022, 18:04   #10  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
I haven't even mentioned encryption, which is also a problem for the big studios. I have worked on projects with guards on the doors and a no-phone policy. Those things could also be simplified with a clever approach, yet they use ancient solutions.
Old 19th September 2022, 19:08   #11  |  Link
filler56789
SuperVirus
 
 
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,351
Neiromaster may be good at coding for Linux and unixoids in general, but he doesn't know how to use his code correctly in the VfW and DirectShow interfaces.
We already have Cineform and MotionJPEG 2000;
the world doesn't need an eternally experimental wavelet codec like SIF1.
Old 19th September 2022, 23:07   #12  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
No idea why you would care about DirectShow at all today.
Old 20th September 2022, 03:45   #13  |  Link
filler56789
SuperVirus
 
 
Join Date: Jun 2012
Location: Antarctic Japan
Posts: 1,351
@ kolak: ^ I don't.

But the author of the codec released a VfW encoder and a DirectShow filter that didn't work.
That's what he gets for not testing what he compiles.
So if DirectShow is "irrelevant" today, please go say this to Neiromaster, not to me.

And as my new Videohelp sigfile says...
most programmers are machines that transform alcohol into bugs.

Last edited by filler56789; 20th September 2022 at 03:49. Reason: more clarity.
Old 20th September 2022, 12:00   #14  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,746
Others "convert caffeine into code as is tradition" (usual reply of a nice guy I know).
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
Old 20th September 2022, 16:47   #15  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,738
Quote:
Originally Posted by kolak View Post
I'm still surprised no one has done a simple thing like partial-resolution decoding combined with a partial data read.

Currently a huge amount of data is stored in the cloud, and a lot of it is UHD. There are plenty of cases where we need a proxy preview or just want an HD version of it, and currently this comes at a relatively big cost (also bandwidth-wise) due to egress fees. We read the full UHD data just to see e.g. a 1024x576 preview. There are ways to solve this, but they still cost time and money. As far as I've been told, a wavelet-based codec could easily be arranged so that for e.g. HD only the "needed" portion of the data is read from the source, which would reduce costs and bandwidth massively without any intermediate processes.
I've been involved in several projects looking at using wavelets for proxy viewing and bandwidth adaptation since the mid-'90s, going back to Indeo 5 and Iterated's fractal (sic) IVF (sic?) codecs for use in ad agency workflows. None have gone much of anywhere; the big challenges have been compatibility and efficiency. I was also minorly involved in Microsoft's hybrid DCT/wavelet JPEG XR, which promised a more limited version of subband scalability. I've probably poked around a half-dozen other wavelet or waveletesque techs over the years without getting as far as the above.

For efficiency: wavelets can do somewhat better than classic JPEG for still images, but they've never held a candle to the leading video codecs of the era. Efficient motion compensation with wavelets has been a long-unsolved problem, and even for still images I don't think any wavelet solution has been competitive against a well-tuned HEVC IDR frame. The advantage of MPEG-style codecs in interframe efficiency leaves wavelet solutions looking quite bad for the bandwidth.

For compatibility: deploying compatible encoders, servers, and clients end-to-end is a big lift, and would only be justified if it provided some compelling value. WebAssembly is making web-based decoders a lot more feasible, but those don't have DRM support. AVC and HEVC encode and decode are already built into the hardware of PCs, Macs, mobile devices, and living-room players. AVC and HEVC also have scalable profiles with wavelet-like enhancement layers; those are supported by a fair amount of hardware and software already, but I've not seen them used outside of videoconferencing applications.

As it is, making an AVC or HEVC proxy from a ProRes master offers better quality at a given bitrate with broad compatibility, and allows for HW DRM for higher resolutions if desired.
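
For example, something along these lines produces a small HEVC proxy from a ProRes master that plays essentially everywhere (the scale and x265 settings are only illustrative, not a recommendation):

Code:
# HEVC proxy from a ProRes master; scale/CRF values are illustrative
ffmpeg -i master_prores.mov -vf scale=-2:540 -c:v libx265 -crf 26 -preset medium \
       -c:a aac -b:a 128k proxy_540p.mp4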
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 20th September 2022, 16:54   #16  |  Link
richardpl
Registered User
 
Join Date: Jan 2012
Posts: 271
Where are the numbers? Bitrate/encoded size and VMAF results compared with alternative encodes?
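
Even something as simple as running the libvmaf filter over a SIF encode and an x264/x265 encode of the same source at matched bitrate would tell us a lot (double-check which input your build treats as the distorted clip and which as the reference):

Code:
# VMAF of a test encode against its source
ffmpeg -i test_encode.mkv -i source.mkv -lavfi libvmaf -f null -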
Old 21st September 2022, 21:35   #17  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by benwaggoner View Post

For compatibility: deploying compatible encoders, servers, and clients end-to-end is a big lift, and would only be justified if it provided some compelling value. WebAssembly is making web-based decoders a lot more feasible, but those don't have DRM support. AVC and HEVC encode and decode are already built into the hardware of PCs, Macs, mobile devices, and living-room players. AVC and HEVC also have scalable profiles with wavelet-like enhancement layers; those are supported by a fair amount of hardware and software already, but I've not seen them used outside of videoconferencing applications.

As it is, making an AVC or HEVC proxy from a ProRes master offers better quality at a given bitrate with broad compatibility, and allows for HW DRM for higher resolutions if desired.
There is simply no need for any proxy creation. With a good codec this should come "for free". When we create the actual master, we could create "all the other needed bits" as well in one process at the very beginning, instead of transcoding countless times later.
Any proxy creation costs time and money. If you have a big UHD library (which some companies already do) and a lot of proxy-preview needs, it can actually become serious money. We should be able to do this much better.
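
Even with today's codecs that can at least be collapsed into a single pass — ffmpeg can decode the camera source once and write the mezzanine and a small proxy in the same run (encoder choices below are only an example):

Code:
# one decode, two outputs: mezzanine master plus a small preview proxy
ffmpeg -i camera_source.mov \
       -map 0 -c:v prores_ks -profile:v 3 -c:a copy master.mov \
       -map 0:v -vf scale=-2:540 -c:v libx264 -crf 23 -an proxy_540p.mp4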

A smaller-resolution preview without a reduced data read solves nothing when it comes to cloud storage.
E.g. 10% better efficiency (due to a "better" codec) is really irrelevant in such a case. I don't think an intermediate codec should be designed mainly with efficiency in mind. This is why JPEG2000 is cool, but not really: for its crazy computational needs it offers not that much more. Finally someone realised it, and now we have JPEG2000 HT.
At the moment no one (except Apple) has even implemented ProRes decoding at smaller resolutions (Resolve may be doing it now), even though it's in Apple's official decoder as far as I understand it.

Last edited by kolak; 21st September 2022 at 21:51.
Old 22nd September 2022, 01:01   #18  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,738
Quote:
Originally Posted by kolak View Post
There is simply no need for any proxy creation. With a good codec this should come "for free". When we create the actual master, we could create "all the other needed bits" as well in one process at the very beginning, instead of transcoding countless times later.
Any proxy creation costs time and money. If you have a big UHD library (which some companies already do) and a lot of proxy-preview needs, it can actually become serious money. We should be able to do this much better.
It's a tradeoff between the simplicity of a single file and the compatibility + low bitrate quality of proxies using standard codecs.

Quote:
A smaller-resolution preview without a reduced data read solves nothing when it comes to cloud storage.
E.g. 10% better efficiency (due to a "better" codec) is really irrelevant in such a case. I don't think an intermediate codec should be designed mainly with efficiency in mind. This is why JPEG2000 is cool, but not really: for its crazy computational needs it offers not that much more. Finally someone realised it, and now we have JPEG2000 HT.
At the moment no one (except Apple) has even implemented ProRes decoding at smaller resolutions (Resolve may be doing it now), even though it's in Apple's official decoder as far as I understand it.
Yeah, J2K is slow on encode/decode. Since it is an image sequence, all the frames can be done in parallel on different threads, which can help, but that's still a lot of watts.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book