Doom9's Forum > Video Encoding > New and alternative video codecs
Old 24th June 2025, 18:56   #21  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by LMP88959 View Post
I really enjoy learning about psychovisual phenomena, which is why I designed DSV2 with psychovisual considerations baked in, rather than as something originally created to optimize for PSNR with psy optimizations added as an afterthought.
That is the right way to do it!

Quote:
Did you work on developing VC-1? I was curious about why the designers chose a cubic half-pel filter (-1,9,9,-1), not sure if you know why that was chosen?
I was using that filter originally but it made subpixel motion extremely blurry (and I wanted to keep the filters at 4-taps max) so I did some R&D on my end and settled on 'smart' temporal switching between two sharper cubic filters which from my testing significantly outperformed the (-1,9,9,-1) filter.
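Since the (-1,9,9,-1) filter keeps coming up: a minimal sketch of how a 4-tap cubic half-pel filter is applied (illustrative only, not VC-1 or DSV source; the edge replication and rounding here are assumptions):

```python
def half_pel(row, i):
    """Interpolate the half-sample between row[i] and row[i+1] with the
    4-tap cubic filter (-1, 9, 9, -1)/16; edge samples are replicated."""
    n = len(row)
    a = row[max(i - 1, 0)]      # one integer sample to the left
    b = row[i]
    c = row[min(i + 1, n - 1)]
    d = row[min(i + 2, n - 1)]  # one integer sample to the right
    v = (-a + 9 * b + 9 * c - d + 8) >> 4  # +8 rounds to nearest
    return max(0, min(255, v))             # clamp to 8-bit range
```

On a linear ramp it lands exactly on the midpoint (24 between 16 and 32); the negative outer taps are what make it slightly sharper than plain bilinear averaging, and also what makes longer filters with bigger negative taps ring.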
I joined Microsoft at the end of 2005, so the VC-1 bitstream was locked down before I was able to make any contributions. I was quite involved in the evolution of VC-1 decoders and real-world VC-1 implementations, though.

Bear in mind WMV9 was released in 2003, and thus was optimized to run well enough on single-core x86-32 MMX CPUs as a baseline, so they had a fraction of the MIPS/pixel to work with compared to H.264 Main Profile. The major difference between WMV9 Main and WMV9 Advanced/VC-1 was allowing for adaptive QP on I-frames, which was overlooked in the original implementation. But performance was identical with the same parameters (although VC-1 implementations tended to have the overlap transform and loop filter on by default, while WMV9 originally defaulted to them off for decoder performance reasons).

The in-loop deblocking filter was one of the big retrospective regrets of the VC-1 developers. They felt they had over-optimized for decoder performance, and that the codec would have been a lot more competitive against H.264 if they'd just used a few more taps so it could do a better job.
Bear in mind that VC-1 also had the overlap transform, which also played a role in reducing blocking, and the two were presumed to work together.

The other big regret was implementing differential QP as an RLE bitmask of variable-length codes of the per-macroblock differential. The actual bitrate overhead of that signaling turned out to be high enough to eliminate the value of using adaptive QP in many low-bitrate cases. H.264 did it a lot more efficiently, so there it was a safe always-on feature. In more advanced VC-1 encoders, adaptive deadzone techniques with no signaling overhead wound up addressing a lot of what adaptive QP did in other codecs.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 24th June 2025, 23:19   #22  |  Link
LMP88959
Registered User
 
Join Date: Apr 2024
Posts: 26
Quote:
Originally Posted by benwaggoner View Post
The in-loop deblocking filter was one of the big retrospective regrets of the VC-1 developers. They felt they had over-optimized for decoder performance, and that the codec would have been a lot more competitive against H.264 if they'd just used a few more taps so it could do a better job.
Bear in mind that VC-1 also had the overlap transform, which also played a role in reducing blocking, and the two were presumed to work together.
Very interesting, was the overlap transform an experimental curiosity of the time or a way to avoid patent issues?
Old 25th June 2025, 20:09   #23  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by LMP88959 View Post
Very interesting, was the overlap transform an experimental curiosity of the time or a way to avoid patent issues?
I joined the team after it was already in there, so I don't know how the decision was made to include it. It did help reduce blocking at higher QPs, with less computational overhead for software decoders than in-loop deblocking. IIRC overlap and loop filter defaulted to off when encoding in Windows Vista and XP service pack something, but were set to on for Windows 7, as GPU decoders were common and min-bar CPUs were faster.

VC-1 was really a well-thought-out design for a codec that had to run well on x86-32 with MMX, and it was competitive with H.264 Baseline at similar bitrates and software decoder overhead. The available encoders were also faster and more robust than the early commercial H.264 encoders like MainConcept. The VC-1 Professional Edition encoder could handle a much broader range of content than early H.264 encoders, like screen recordings with transparent GUIs (Vista and Aqua) and cel animation.

But it didn't have enough in the tank to compete with H.264 Main and High profiles, particularly when coupled with the miracle of x264 and how well it leveraged open source and a dedicated community with a wide variety of use cases to test.

And when Microsoft decided that the Windows Media mission was achieved (no more uncapped per-unit decoder licenses like MPEG-2), there wasn't any reason to keep swimming upstream so hard, and the company pivoted to standards-based codecs like AAC and H.264. Microsoft had a really nice early H.264 encoder that was competitive with x264 in quality (but not in speed, as it only had slice-level parallelism while x264 had frame-level). But it was buried in a .dll, never really promoted, and no tools that used it were ever released other than Expression Encoder.

And there wasn't any business justification to fund a team continuously optimizing it at the level the x264 community was providing.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 26th June 2025, 15:48   #24  |  Link
excellentswordfight
Lost my old account :(
 
Join Date: Jul 2017
Posts: 396
Quote:
Originally Posted by LMP88959 View Post
Wavelets got a lot of bad reputation for image/video coding (in my opinion) specifically because they were being researched mainly during the time when PSNR was almost exclusively the target metric. People saw good numbers but bad visual quality, and for some strange reason decided to say wavelets were the problem and not the metric itself.
Isn't JPEG2000 a wavelet-based codec? One of the most used DI/mezz codecs in the world.

Cineform was as well, if I remember correctly. It didn't see that wide adoption, but it was very good, one of the best DI options, and rather common on Windows until we started to see ProRes support there.
Old 26th June 2025, 17:50   #25  |  Link
LMP88959
Registered User
 
Join Date: Apr 2024
Posts: 26
Quote:
Originally Posted by benwaggoner View Post
VC-1 was really a well-thought-out design for a codec that had to run well on x86-32 with MMX, and it was competitive with H.264 Baseline at similar bitrates and software decoder overhead... But it didn't have enough in the tank to compete with H.264 Main and High profiles, particularly when coupled with the miracle of x264 and how well it leveraged open source and a dedicated community with a wide variety of use cases to test.

And there wasn't any business justification to fund a team continuously optimizing it at the level the x264 community was providing.
Very interesting history, I rarely hear anything about VC-1 so it's nice to learn more about it, thank you

Quote:
Originally Posted by excellentswordfight View Post
Isn't JPEG2000 a wavelet-based codec? One of the most used DI/mezz codecs in the world.

Cineform was as well, if I remember correctly. It didn't see that wide adoption, but it was very good, one of the best DI options, and rather common on Windows until we started to see ProRes support there.
Yes, they have use cases, mainly for mezzanine / intra-only high-resolution content, where they outperform block-based DCT codecs, whose fixed block size inherently limits scalability.

I should have specified "low-bit-rate image/video coding" in my original comment, because that's generally where people felt wavelets looked worse.
Old 26th June 2025, 20:35   #26  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by excellentswordfight View Post
Isn't JPEG2000 a wavelet-based codec? One of the most used DI/mezz codecs in the world.
And in digital cinema. All our theaters are playing back 12-bit J2K in log XYZ.

Quote:
Cineform was as well, if I remember correctly. It didn't see that wide adoption, but it was very good, one of the best DI options, and rather common on Windows until we started to see ProRes support there.
Yeah, I was a Cineform stan back in the day. It made HD editing on a laptop viable almost 20 years ago. IIRC, it was a simplified wavelet with IBIB encoding. The team that made it was brilliant, and went on to do some other cool stuff in the field that I'm spacing on for the moment. Fun folks to hang out with as well.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 26th June 2025, 20:55   #27  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by LMP88959 View Post
Very interesting history, I rarely hear anything about VC-1 so it's nice to learn more about it, thank you
I spent a few years as a VC-1 and Windows Media Evangelist. Feel free to ask me all the questions. My book linked in my sig is almost 16 years out of date, which is a feature if you want some serious VC-1 deep dives (and the first half, on video and encoding fundamentals and preprocessing, is evergreen stuff).

The coolest use for VC-1 was in the Xbox 360 video streaming service. It did 1080p variable-GOP VBR adaptive bitrate streaming in 2009! Due to the loop filter and overlap transform issues in VC-1, even 10 Mbps could get blocky at 1080p with some stressful content. And once you had a blocky P-frame, the rest of the GOP was going to look bad. So they built analysis of the motion vectors and of the frequency distribution horizontally and vertically, then applied anamorphic spatial compression per fragment, scaling down to the largest resolution that could maintain the target maximum QP. This worked well psychovisually, as it tended to compress along the axis of motion, where motion blur meant less resolution was needed to maintain detail!

This got around the classic compression conundrum of having to pick a static frame size that didn't look too terrible for high-complexity content while not being overkill for static content, which would have been fine with higher resolution (and benefits the most from it).

It was also helpful for software decode (and that's all there was in the 360/PS3 generation): the more complex the motion compensation for a frame, the fewer pixels it needed to be applied to. That also left more compute to apply dynamic out-of-loop deblocking and deringing post-processing based on CPU load.
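The per-fragment decision described above can be caricatured in a few lines. Everything here is hypothetical: the real service analyzed motion vectors and frequency distributions, which this sketch collapses into a single predicted bit cost.

```python
# Candidate horizontal scale factors, mildest first (illustrative values).
SCALES = (1.0, 0.875, 0.75, 0.667, 0.5)

def pick_scale(predicted_bits, budget_bits):
    """Pick the mildest anamorphic downscale whose predicted bit cost
    fits the budget, so the target maximum QP can be maintained.
    Assumes bit cost shrinks roughly in proportion to width, which a
    real encoder would only approximate."""
    for s in SCALES:
        if predicted_bits * s <= budget_bits:
            return s
    return SCALES[-1]  # worst case: halve the width and accept the QP
```

The key property is monotonicity: easy content keeps full resolution, and only fragments whose predicted cost blows the budget get squeezed.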

It was a great technique that didn't really get picked up after that. Modern codecs, with their stronger in-loop deblocking, SAO, and similar in-loop artifact suppression features, can "recover" from a QP spike better, as a high-QP frame can still make an okay reference frame. And support for bigger block sizes means the bitrate increase required for a given resolution increase is relatively smaller, so it's safer to err on the side of higher resolutions. Still, Yuri from Brightcove demoed 10% savings even with HEVC using a similar technique at SMPTE MTS in 2023. Film grain was the primary limiting factor in practical use.


Quote:
Yes they have use cases, mainly for mezzanine / intra-only high resolution content, because they outperform block based DCT codecs due to the inherent lack of scalability of a fixed block sized transform.

I should have specified 'low bit rate image/video coding' in my original comment because that's generally where people felt wavelets looked worse.
Yeah, wavelets are very competitive for intra-only encoding. Where block-based transforms have really outcompeted them is motion compensation, which is very straightforward to do with blocks but not with wavelets. I've seen all sorts of attempts to do good motion compensation with wavelets, but they always felt bolted on, not something that could be integrated with the core transform like in typical codecs.

Fingers crossed someone will have the genius idea to make it work someday. I've thought I had it a half dozen times myself! But "temporal wavelets" just never gel into anything plausibly practical.

Thanks for great excuses to go down codec memory lane!

I encoded my first digital video file back in 1989, so this has truly been my life's work.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 27th June 2025, 00:16   #28  |  Link
Emulgator
Big Bit Savings Now !
 
Emulgator's Avatar
 
Join Date: Feb 2007
Location: close to the wall
Posts: 2,036
Nice reading, Ben !
Going through the Blu-ray editions I bought, I am still happy with the natural detail retention vs. bitrate across all the VC-1 encodes I've seen.
It is just good codec R&D.
I came to value .wmv much later, after H.264 indeed, as more and more bitrate-starved, blotchy H.264 videos came across my desk,
and in the end one had to conclude that a 1 Mbps .wmv would look less maimed.
I just wish Expression Encoder had been released earlier; I would have paid for that.
The release strategy around VC-1 was a bit unclear to me, but well, I was late to this party anyway.
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain)
"Data reduction ? Yep, Sir. We're that issue working on. Synce invntoin uf lingöage..."
Old 27th June 2025, 17:57   #29  |  Link
LMP88959
Registered User
 
Join Date: Apr 2024
Posts: 26
Quote:
Originally Posted by benwaggoner View Post
I spent a few years as a VC-1 and Windows Media Evangelist. Feel free to ask me all the questions. My book linked in my sig is almost 16 years out of date, which is a feature if you want some serious VC-1 deep dives (and the first half, on video and encoding fundamentals and preprocessing, is evergreen stuff).
Fascinating! I need to read more about VC-1. It seems to have a cool set of features along with some niche use cases, which is always fun to see.

Quote:
Originally Posted by benwaggoner View Post
Yeah, wavelets are very competitive for intra-only encoding. Where block-based transforms have really outcompeted them is motion compensation, which is very straightforward to do with blocks but not with wavelets. I've seen all sorts of attempts to do good motion compensation with wavelets, but they always felt bolted on, not something that could be integrated with the core transform like in typical codecs.

Fingers crossed someone will have the genius idea to make it work someday. I thought I've had it a half dozen times myself ! But "temporal wavelets" just never gel into anything plausibly practical.

Thanks for great excuses to go down codec memory lane!

I encoded my first digital video file back in 1989, so this has truly been my life's work.
Yeah, motion compensation is a tough problem, and I don't think wavelets, or even current block transforms, are the best solution to it. I think the biggest issue with P-frame coding is the proliferation of artifacts at lower bit rates, where all of these transforms end up adding noise or ringing to the compensated image, creating more residual for the next frame to compensate. This is why I use a 2x2 Haar transform for P-frames in DSV: it's fast and it doesn't ring, but it doesn't compress too well either.
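For the curious, a 2x2 Haar stage is small enough to show in full. This is the generic unnormalized form, not DSV's actual code (DSV may scale or order the coefficients differently):

```python
def haar2x2_fwd(a, b, c, d):
    """Forward 2x2 Haar on one block: a sum plus horizontal, vertical,
    and diagonal differences (unnormalized). Each basis function spans
    only 2x2 pixels, so quantization error cannot ring beyond that."""
    return (a + b + c + d,   # LL (sum / low-pass)
            a - b + c - d,   # HL (horizontal detail)
            a + b - c - d,   # LH (vertical detail)
            a - b - c + d)   # HH (diagonal detail)

def haar2x2_inv(ll, hl, lh, hh):
    """Exact integer inverse: each reconstruction sum is a multiple of 4."""
    return ((ll + hl + lh + hh) // 4,
            (ll - hl + lh - hh) // 4,
            (ll + hl - lh - hh) // 4,
            (ll - hl - lh + hh) // 4)
```

The round trip is lossless in integer arithmetic, which is exactly the "doesn't ring, doesn't compress much" trade-off: the basis is too local to decorrelate large smooth areas, but it also can't smear quantization error across a block.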

3D DCT and 3D wavelets have been tried and always fall flat, mainly because of how inefficient they are at coding similarities between adjacent frames. Block matching is both fast and effective, since similarities between frames can't always be described by a curve or function.

What did you use to encode your first digital video file by the way?
Old 29th June 2025, 23:51   #30  |  Link
FranceBB
Broadcast Encoder
 
FranceBB's Avatar
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 3,378
Quote:
Originally Posted by benwaggoner View Post
And in digital cinema.
Yep. And it's already been two and a half years since I wrote "The Ultimate Avisynth DCP (Digital Cinema Package) creation guide" here on Doom9 describing exactly that. One of the funny things about JPEG2000 and its all-intra nature is that it still scales up nicely after so many years. Being all-intra, each frame of the source is a .tiff picture that gets fed to the OpenJPEG encoder to create a .j2k for each frame. Those then get appended together to create the actual JPEG2000 stream, and that stream is muxed into the MXF container. This lets you scale almost arbitrarily, since you can encode as many frames in parallel as the CPU has cores.

I recently encoded every episode of The Last of Us for the marathon of the first series that was going to be shown in cinemas leading up to the first episode of the second series. I was in a bit of a hurry, so I just set the number of threads to 56 (I have a 56c/112t Xeon in one of my servers) and it scaled up pretty nicely.
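The frame-per-task pipeline described above can be sketched like this (hypothetical helper names; the opj_compress invocation shows only the basic input/output form, and a real DCP encode would also pin cinema-profile and rate-control options):

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def j2k_cmd(tiff_path):
    """Build an OpenJPEG command line for one DCP frame:
    one .tiff in, one .j2k out."""
    out_path = tiff_path.rsplit(".", 1)[0] + ".j2k"
    return ["opj_compress", "-i", tiff_path, "-o", out_path]

def encode_all(frames, workers=56):
    """Intra-only coding makes this embarrassingly parallel:
    one frame per task, as many tasks in flight as there are workers."""
    cmds = [j2k_cmd(f) for f in frames]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(subprocess.run, cmds))
    return cmds
```

Because no frame references any other, throughput scales with core count until the disk becomes the bottleneck, which is exactly the behavior described above on the 56-core Xeon.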

Quote:
Originally Posted by LMP88959 View Post
3D DCT and 3D wavelets have been tried and always fall flat mainly due to how inefficient they are at coding similarities between adjacent frames.
There were also experiments with the KLT (Karhunen-Loève Transform) back when the first H.265/HEVC proposals were still being evaluated, before the final DST and DCT were settled on. Ultimately, although the KLT was supposed to be optimal in many scenarios, its overall computational cost and the lack of any known "fast" algorithm meant it wasn't worth pursuing.

Last edited by FranceBB; 30th June 2025 at 00:04.
Old 30th June 2025, 11:53   #31  |  Link
a.ok.in
Registered User
 
Join Date: Oct 2024
Posts: 10
Quote:
Originally Posted by FranceBB View Post
There were also experiments with the KLT (Karhunen-Loève Transform) back when the first H.265/HEVC proposals were still being evaluated, before the final DST and DCT were settled on. Ultimately, although the KLT was supposed to be optimal in many scenarios, its overall computational cost and the lack of any known "fast" algorithm meant it wasn't worth pursuing.
AFAIK the KLT was proposed even before H.264, and it is currently being tested in the experimental AV2 codec.

Last edited by a.ok.in; 30th June 2025 at 11:57.
Old 1st July 2025, 03:09   #32  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by Emulgator View Post
Nice reading, Ben !
Going through the Blu-ray editions I bought I am still happy with the natural detail retention vs. bitrate over all the VC-1 encodes I've seen.
It is just good codec R&D.
That is part of it.

Another secret weapon of VC-1 for Blu-ray was xscaler, a command-line utility written by Spears and Munsil back when they worked at Microsoft. It was a really advanced, very tweakable dithering tool for getting down from 10+ bit mezzanine sources to the 8-bit 4:2:0 of original-flavor Blu-ray. It was bundled with the VC-1 professional encoding tools, and the Blu-ray compressionists were trained in using it.

The best codec in the world can't fix upstream banding or dithering issues, so that really helped. Of course, the tool was codec-agnostic, and it got used by some of the same compressionists after they switched to H.264 disc authoring.

Quote:
I came to value .wmv much later, after H.264 indeed, as more and more bitrate-starved, blotchy H.264 videos came across my desk,
and in the end one had to conclude that a 1 Mbps .wmv would look less maimed.
I just wish Expression Encoder had been released earlier; I would have paid for that.
The release strategy around VC-1 was a bit unclear to me, but well, I was late to this party anyway.
The release strategy around VC-1 was incredibly chaotic, buffeted in the winds of Ballmer-era Microsoft corporate dysfunction! Strategies around VC-1, codecs, digital media, Windows Media, and media in Windows changed roughly every six months from 2006-2011. Just getting the stuff released that we did required a lot of skunkworks, favor trading, and begging forgiveness instead of asking permission. Getting the last service pack of Expression Encoder out the door took heroic efforts by people who cared about the product deeply. Silverlight almost had hardware accelerated DRM protected multi codec decode in 2011, but it never shipped because it was estimated to require a few weeks of testing by people who had been reassigned to Windows Phone.

Literally everything I worked on in my six years at Microsoft had been cancelled before I left, often in some incredibly stupid and customer-harmful ways. My book and aspects of MPEG-DASH are about all I did there that matters anymore.

At Amazon I'm still iterating on stuff I started my first day 13 years ago. "Customer Obsession" is real.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 1st July 2025, 22:27   #33  |  Link
Emulgator
Big Bit Savings Now !
 
Emulgator's Avatar
 
Join Date: Feb 2007
Location: close to the wall
Posts: 2,036
Many thanks for your open words, Ben.
__________________
"To bypass shortcuts and find suffering...is called QUALity" (Die toten Augen von Friedrichshain)
"Data reduction ? Yep, Sir. We're that issue working on. Synce invntoin uf lingöage..."

Last edited by Emulgator; 1st July 2025 at 22:30.
Old 2nd July 2025, 05:20   #34  |  Link
FranceBB
Broadcast Encoder
 
FranceBB's Avatar
 
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 3,378
Quote:
Originally Posted by benwaggoner View Post
It was a really advanced, very tweakable dithering tool for getting down from 10+ bit mezzanine sources to the 8-bit 4:2:0 of original-flavor Blu-ray. It was bundled with the VC-1 professional encoding tools, and the Blu-ray compressionists were trained in using it.
Having the dithering method and the encoder work together can do miracles, quite literally. Bit-depth conversion and dithering is a problem in itself: make the dither too widespread, complex, and changing, and the encoder won't find the references and will require a lot of bitrate to encode the stream. Make it too static and recognizable (like normal ordered dither) and your brain will also detect the pattern.

In the x264 days, the Sierra 2-4A method was introduced directly within the encoder, as it supported being fed 16-bit stacked (aka "double height", with MSB/LSB stacked on top of each other) and 16-bit interleaved (aka "double width", with MSB/LSB interleaved together), with support for normal 16-bit planar added later on. Back then a lot of optimisation went into making this kind of dithering recognisable and efficiently compressible compared to other patterns, and the results were like night and day.

To see how bad the situation was before that (leaving aside Xvid, which was never used for professional authoring and therefore rarely had higher-than-8-bit sources as input anyway), one could take either the lavc open-source MPEG-2 encoder or x262 and try to use dithering. Back in those days dithering had to be done in the frameserver, outside the encoder, and there were three main dithering methods available besides the usual ordered dithering: Stucki, Atkinson, and the evergreen Floyd-Steinberg. Unfortunately, regardless of which one you picked, it would take an insane amount of bitrate for either of those MPEG-2 encoders to avoid completely destroying the gradients in the background. For instance, for complicated titles like House of the Dragon (which I had to encode recently), after going through Floyd-Steinberg error diffusion it took the encoder 85 Mbit/s (XDCAM-85) at 1920x1080 25fps 4:2:2 YV16 M=3 N=12 8-bit BT.709 to avoid destroying the background. Limiting it to the classic 50 Mbit/s (XDCAM-50) would completely destroy any dark background, which is pretty annoying when the majority of the show has problematic low lights, with several individual candles and other changing sources making up most of the lighting in the scene.

x265 built on the same concept x264 used and introduced two dithering methods: one that can be enabled via --dither and a basic one used otherwise, but in both cases the encoder should be able to detect the pattern and encode it efficiently. On the other hand, with 10-bit becoming the standard, 8-bit is being used less and less, so it's going to be interesting to see whether x266 uses the same concept. After all, dithering can still be used to go from 16-bit to 10-bit, so I still think it has useful applications, and it's important for an encoder to keep those patterns detectable and encodable in a compression-friendly way. We'll see.
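For reference, the classic Floyd-Steinberg error diffusion mentioned above, reduced to a single-plane 16-bit-to-8-bit sketch (serpentine scanning and any gamma awareness omitted):

```python
def floyd_steinberg_16_to_8(img):
    """Floyd-Steinberg error diffusion from 16-bit to 8-bit values.
    img is a 2-D list of ints in [0, 65535]; returns values in [0, 255].
    The classic kernel pushes 7/16, 3/16, 5/16, 1/16 of each pixel's
    quantization error to the right, down-left, down, and down-right."""
    h, w = len(img), len(img[0])
    src = [list(map(float, row)) for row in img]  # accumulates errors
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = src[y][x]
            new = max(0, min(255, round(old / 257)))  # 65535 / 255 == 257
            out[y][x] = new
            err = old - new * 257
            if x + 1 < w:
                src[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    src[y + 1][x - 1] += err * 3 / 16
                src[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    src[y + 1][x + 1] += err * 1 / 16
    return out
```

A flat field that maps exactly to an 8-bit code comes out flat (zero error to diffuse); a value between two codes comes out as a mix of the two neighboring codes whose average preserves the original level, which is precisely the hard-to-compress "changing" pattern described above.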

Last edited by FranceBB; 2nd July 2025 at 05:28.
Old 2nd July 2025, 05:39   #35  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 6,017
Quote:
Originally Posted by Emulgator View Post
Many thanks for your open words, Ben.
QFT. Many thanks for all your contributions
__________________
These are all my personal statements, not those of my employer :)
Old 2nd July 2025, 17:43   #36  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by FranceBB View Post
having the dithering method and the encoder work together can do miracles, quite literally.
Definitely! And heck, I don't know why we convert to 8-bit input pixels when we're going to convert back to a floating-point-like internal representation anyway. Just use the higher precision to get more accurate quantization.

That said, xscaler wasn't codec-aware at all. It was just very good and tunable for its era: parametrized Floyd-Steinberg and such. We have access to tools at least as good today.

Quote:
Bit-depth conversion and dithering is a problem in itself: make the dither too widespread, complex, and changing, and the encoder won't find the references and will require a lot of bitrate to encode the stream. Make it too static and recognizable (like normal ordered dither) and your brain will also detect the pattern. [...] After all, dithering can still be used to go from 16-bit to 10-bit, so I still think it has useful applications, and it's important for an encoder to keep those patterns detectable and encodable in a compression-friendly way. We'll see.
Yeah, decent dithering is still valuable at 10-bit, particularly HDR 10-bit, where we're not even using the full 64-940 code range anyway.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 2nd July 2025, 17:44   #37  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 5,134
Quote:
Originally Posted by Blue_MiSfit View Post
QFT. Many thanks for all your contributions
Happy to provide. I imagine the codec historians of the future will treasure the archive.org logs of Doom9!

And everyone is always welcome to ask more questions of this sort. Want to hear about MacroMind Director Accelerator RLE encoding circa 1989?
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 8th July 2025, 03:34   #38  |  Link
LMP88959
Registered User
 
Join Date: Apr 2024
Posts: 26
Tiny little update to address an issue I noticed at lower bit rates when encoding with a longer GOP length.
Basically, the encoder keeps track of which blocks in the GOP have been marked as intra at some point, and if too many have been marked intra, the encoder inserts an intra frame.
Blocks that are unmoving are double-counted, since those are more noticeable to the viewer.

https://github.com/LMP88959/Digital-...ideo-2/pull/19
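Mechanically, the heuristic described above might look something like this (the names and the threshold are guesses for illustration, not DSV's actual values; the linked PR has the real implementation):

```python
def should_insert_intra(intra_marked, static_blocks, total_blocks, threshold=0.5):
    """Cut the GOP short with an intra frame once too much of it has
    already been refreshed block-by-block. intra_marked is the set of
    block indices coded intra at some point this GOP; static_blocks is
    the set of (roughly) unmoving block indices. Static intra blocks
    are double-counted, since refresh artifacts on still areas are
    more noticeable. threshold=0.5 is a placeholder, not DSV's value."""
    score = len(intra_marked) + len(intra_marked & static_blocks)
    return score >= threshold * total_blocks
```

The double count simply weights static blocks twice in the running score, so a GOP full of intra-refreshed still areas triggers a clean I-frame sooner than one where the refreshes landed on moving regions.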
Old 13th September 2025, 02:39   #39  |  Link
LMP88959
Registered User
 
Join Date: Apr 2024
Posts: 26
Huge encoder update! (encoder version 14)

- statistics tracking and reporting
- 'sfr' argument now works with y4m inputs
- intra frame psy improvements

The biggest changes were to motion estimation:
- improved motion vector RDO
- added dynamic psy-based block difference metric
- improved inter/intra/skip mode selection
- added a ton of estimation candidate vectors + an extra subpel test

I have updated the example encodes and comparisons in the GitHub README; please check them out!
https://github.com/LMP88959/Digital-Subband-Video-2/
Old 13th September 2025, 12:48   #40  |  Link
CruNcher
Registered User
 
CruNcher's Avatar
 
Join Date: Apr 2002
Location: Germany
Posts: 5,001
another waveleto

hmm, after Rududu, Dirac, Snow and JPEG-2000, a lot of new movement

The Chinese are also very actively invested in working on and improving it, with their deep learning power, for geospatial purposes (which is also a codeword for land-reconnaissance UAVs in military/space terms)

Funny thing about VC-1: it was very successful for some who preferred its skin retention. Now we can see that AV1 has improved everything VC-1 did less efficiently.

BTW, see who found his way to Microsoft, Ben:

https://patents.google.com/patent/US...=Sergey+Sablin

TSU->Elecard/Mainconcept->Aspex Semiconductor->Microsoft(Skype)->Meta(Facebook)->AV1

https://gitlab.com/users/ssablin/activity

https://gitlab.com/AOMediaCodec/SVT-..._requests/2507
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 13th September 2025 at 19:08.
Tags
codec, open source, subband, wavelet
