Old 31st May 2019, 02:10   #1701  |  Link
IgorC
Registered User
 
Join Date: Apr 2004
Posts: 1,308
Quote:
Originally Posted by benwaggoner View Post
But the general case of "AV1 can deliver the same subjective quality at meaningfully lower bitrates than HEVC" has yet to be demonstrated.
Unfortunately, that can be the case.

Beamr's HEVC at 2 Mbps looks visually better than AV1 at 3 Mbps. Both VP9 and AV1 are heavily optimized for specific metrics.

Beamr 2 Mbps
AV1 3 Mbps
Old 31st May 2019, 09:55   #1702  |  Link
Asilurr
Registered User
 
Join Date: Jan 2019
Posts: 9
Quote:
Originally Posted by dapperdan View Post
Libvpx seems to equal x265, subjectively, objectively and in encoding time in the recent MSU study (and both are near the head of the pack).
At its worst, a contemporary version of libvpx will encode 8-bit 4:2:0 content as slowly as a contemporary version of x265. Often libvpx is noticeably faster than x265, given similar approaches to encoding complexity.

Here's a quick-and-dirty test, using an admittedly peculiar content source: Lena_std.tif from lenna.org, RGB24 converted to 8-bit 4:2:0, both downscaled and upscaled by a factor of two (i.e. to 256x256 and 1024x1024), each turned into a 250-frame video. All encoders are the latest versions available today on Wolfberry's public GDrive. All encoding times are reported by Win10's PowerShell via Measure-Command {start-process process -argumentlist "args" -Wait} and expressed in milliseconds.
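For illustration, a single timing run of the kind described above might look like this (the executable path and file names are placeholder assumptions, not the ones actually used in the test):

Code:
# Example only: time one x264 run (256x256 source, 1 thread) and report milliseconds
Measure-Command {Start-Process "C:\enc\x264.exe" -ArgumentList "--crf 15 --preset veryslow --tune stillimage --threads 1 -o lena_256.264 lena_256.y4m" -Wait} | Select-Object -ExpandProperty TotalMilliseconds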

Code:
x264 common settings: --crf 15 --preset veryslow --tune stillimage
405 --threads 1: 03130.76 || 03142.22 || 03119.85 (03130.94)
405 --threads 2: 02101.52 || 02105.54 || 02105.00 (02104.02)
420 --threads 1: 26487.85 || 27508.10 || 26479.85 (26825.27)
420 --threads 2: 16336.86 || 15308.29 || 15327.17 (15657.44)

x265 common settings: --crf 15 --preset veryslow --frame-threads 1 --lookahead-slices 1
505 --no-wpp: 05160.82 || 05161.18 || 05154.55 (05158.85)
505 --wpp: 04131.71 || 04142.67 || 04131.78 (04135.39)
520 --no-wpp: 68123.55 || 68134.36 || 67103.71 (67787.21)
520 --wpp: 36649.91 || 37674.62 || 34628.27 (36317.60)

x265 common settings: --crf 15 --preset placebo --cu-lossless --rd-refine --tskip --qg-size 64 --ref 6 --bframes 16 --me sea --subme 7 --frame-threads 1 --lookahead-slices 1
505 --no-wpp: 063053.81 || 061024.00 || 062028.34 (062035.38)
505 --wpp: 039684.32 || 038664.65 || 038684.41 (039011.13)
520 --no-wpp: 796345.10 || 796345.10 || 796345.10 (796345.10)
520 --wpp: 235701.00 || 235701.00 || 235701.00 (235701.00)

libvpx common settings: --lag-in-frames=25 --passes=2 --end-usage=q --cq-level=20 --good --cpu-used=0 --kf-max-dist=250 --auto-alt-ref=6 --tile-rows=0 --enable-tpl=1 --frame-parallel=0 --ivf
905 --tile-columns=0 --row-mt=0 --threads=1: 05167.92 || 05164.62 || 05167.18 (05166.57)
905 --tile-columns=5 --row-mt=1 --threads=2: 04133.40 || 04143.00 || 04147.51 (04141.44)
920 --tile-columns=0 --row-mt=0 --threads=1: 50871.23 || 49853.74 || 49845.72 (50190.23)
920 --tile-columns=5 --row-mt=1 --threads=2: 31568.60 || 31571.60 || 28518.96 (30553.05)

XAB .. output type
X .. 4 for x264, 5 for x265, 9 for libvpx
AB .. 05 for 256x256, 20 for 1024x1024

Only one x265 beyondplacebo encode was run at each resolution, due to appalling performance.
The average values are reported in brackets for each output type and set of encoding settings.
As some sort of hardware footprint is inevitable (HDD versus SSD, memory specs and amount, platform, processor specs), the above targeted the lowest common denominator: parallelism disabled, then parallelism crippled, to check the scaling. Conclusions based on this small-scale test:
1. x264 is at least twice as fast as either x265 or libvpx at comparable encoding complexity.
2. x265 parallelizes better than libvpx as encoding complexity increases.
3. x265 veryslow is at best comparable to the slowest libvpx settings, and lags further behind as the resolution increases.
4. x265 beyondplacebo is significantly slower than the slowest libvpx settings.

As mentioned above, this is just one possible comparison. Enthusiasts should definitely run their own, and only then report whichever encoder turns out to be the speed demon. It's very hard to believe that chunk-based libvpx (the obvious caveats aside: logistical hassle and quality issues) is ever slower than x265, whatever the hardware, when each is run at its highest encoding complexity. But instead of testing for themselves (with whatever source they please, at whatever resolution and bit depth, on whatever encoding machine), people prefer to repeat ad absurdum that libvpx is always slower than x265. And it was, indeed, years ago.
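For anyone who wants to try the chunk-based route, a minimal command-line sketch could look like the one below. The file names, chunk length and final mux step are illustrative assumptions, and the caveats above apply (chunks should ideally be cut at scene changes, and rate control is per chunk rather than global):

Code:
# Cut the source into fixed-length chunks (10 s assumed here); y4m keeps each chunk self-contained
ffmpeg -i source.y4m -ss 0  -t 10 chunk00.y4m
ffmpeg -i source.y4m -ss 10 -t 10 chunk01.y4m

# Encode the chunks in parallel, one vpxenc instance per chunk (settings as in the test above)
vpxenc --codec=vp9 --passes=2 --end-usage=q --cq-level=20 --good --cpu-used=0 --threads=1 -o chunk00.webm chunk00.y4m
vpxenc --codec=vp9 --passes=2 --end-usage=q --cq-level=20 --good --cpu-used=0 --threads=1 -o chunk01.webm chunk01.y4m

# Append the pieces back into a single file
mkvmerge -o joined.webm chunk00.webm +chunk01.webm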

Last edited by Asilurr; 31st May 2019 at 10:02.
Old 31st May 2019, 13:30   #1703  |  Link
mandarinka
Registered User
 
 
Join Date: Jan 2007
Posts: 732
Quote:
Originally Posted by stax76 View Post
Last time I tried rav1e 2019-04-30 and it was SLOW AS HELL, like < 1 fps.
If it was on Windows, it's possible you downloaded a build with assembly disabled. Last time I ran a test that was the case (an official Windows build from the project), and I only got that information about two weeks later, after spending days watching the atrociously slow FPS counter. I guess the devs expect everybody interested to be on Linux or something. Naturally it would help if the encoder signalled that assembly is being used, like good old x264/x265 do (a good idea for Linux distros too, because there used to be clueless packagers who disabled ASM accidentally), or warned when it isn't, but I think nobody has got around to coding that yet.
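For comparison, this is the kind of one-line signal meant here: x264 prints its detected CPU capabilities at the start of every encode (the exact list naturally depends on the machine and the build), so a build without assembly would be obvious right away:

Code:
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2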

Quote:
Originally Posted by nevcairiel View Post
Encoders are no longer being made for this crowd. The primary design goal is massive-scale cloud encoding for YouTube, Netflix, Amazon, and everyone else that fits the encode-once, download hundreds of thousands of times scenario.

In such a scenario, even the slowest encoder is acceptable if it saves enough bytes.

In that scenario, VP9 also didn't fail. It gets used for a lot of content on the web.
Sadly it's not just about speed, but about quality too. They seem to care only about certain metrics on low-bitrate content; actual high-quality encoding with transparent quality gets the finger.

Quote:
Originally Posted by nevcairiel View Post
It's not a design target for the codec itself, because the codec really doesn't care; I'm talking about encoders. The huge open-source push that made x264 as great as it is for "personal" encodes is unlikely to repeat itself. Companies driving encoder development do not target doom9ers. You can already see this with x265, where community involvement is pretty low, and it will only get worse as the computational complexity of codecs goes up and the "personal use" use cases become less attractive.

This will not change with any future codec. Not with AV2, not with VVC, nor anything that follows. The computational complexity increase in all those future codecs just makes them impractical for "hobbyist" use.
There might also be another factor at play: Google and friends siphon those enthusiasts away to work on decoders/encoders for whatever formats they come up with. I guess the developers are happy doing that, but I can't help thinking that the resources/creative power were somewhat wasted polishing a me-too project like VP9, which never got good anyway. Perhaps it will be a waste with AV1 too; we shall see (hopefully not, but the predecessors' track record isn't good).

Last edited by mandarinka; 31st May 2019 at 13:44.
Old 31st May 2019, 13:50   #1704  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,457
Quote:
Originally Posted by benwaggoner View Post
But high quality encoding doesn't work with chunks of a few seconds. Encoding longer sequences allows for IDRs and shot changes and more aggressive VBV use. YouTube can have quite a bit of keyframe strobing with difficult content for these reasons. YouTube quality wouldn't be acceptable for lots of premium content.
If only YouTube had, like .. multiple .. videos coming in daily. Then they could encode them simultaneously, each on a single CPU. (And serve AVC or fast-setting VP9/AV1 until they are done.)
You could do a fast first pass for scene-change detection and VBV estimation and send the chunks along with that info.
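As a rough command-line illustration of that idea with stock vpxenc (not YouTube's actual pipeline, just the general shape): the analysis pass can run with cheap settings and its stats file then drives a slower final pass. A real chunked pipeline would additionally have to split the stats per chunk, which stock vpxenc does not automate:

Code:
# Fast analysis pass: writes first-pass stats (scene cuts, complexity) to stats.fpf
vpxenc --codec=vp9 --passes=2 --pass=1 --fpf=stats.fpf --good --cpu-used=5 -o dummy.webm source.y4m

# Slow final pass, reusing the same stats file
vpxenc --codec=vp9 --passes=2 --pass=2 --fpf=stats.fpf --good --cpu-used=0 --end-usage=q --cq-level=20 -o out.webm source.y4m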
Old 31st May 2019, 16:37   #1705  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,770
Quote:
Originally Posted by mandarinka View Post
actual high-quality encoding with transparent quality gets the finger.
This is not something any streaming service does, though, so why should they invest in developing things for a goal they don't even need?
It's just how it goes. And for UHD Blu-ray discs, they can just throw massive bitrates at it to solve any such issues.

As said above, it all comes back to the same thing: if you want a codec for a use case that no one else focuses on, then do the work instead of complaining.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 31st May 2019, 19:26   #1706  |  Link
mandarinka
Registered User
 
 
Join Date: Jan 2007
Posts: 732
Quote:
Originally Posted by nevcairiel View Post
This is not something any streaming service does, though, so why should they invest in developing things for a goal they don't even need?
It's just how it goes. And for UHD Blu-ray discs, they can just throw massive bitrates at it to solve any such issues.

As said above, it all comes back to the same thing: if you want a codec for a use case that no one else focuses on, then do the work instead of complaining.
That's an okay response to complaints, but an irrelevant response to criticism of the technicals. BTW, I'm not sure all the free-software developers contributing are paid by those streaming companies, yet they contribute to software that, as you say, is directed only at the companies' interests. Perhaps there is some exploitation of volunteer goodwill?

Also, you can't really expect users to jump into programming, so that shouldn't really be thought of as a solution. Under open-source principles you are, in fact, entitled to do so, but it's not exactly easy. And users want to do the using, not to switch roles and become open-source programmers.
It's probably easier to just not use such software (and use x264, x265, whatever is better). Though with this "open formats" movement/fandom/advocacy there is a curious anomaly: people subscribe so strongly to the concept that they want to use it even if it "is not for them" at all... well, there are lots of weird layers to these debates.
I think this super-strong mindshare that bends people's views is another reason why pointing out the technical problems should keep being done, and not be brushed aside by "it's not for you" arguments. The enthusiasts all over the internets seem to think "it" is for them, by the look of it.
Maybe sometimes they also think all those winning compression tests and Netflix blogs are for them too (while perhaps they also aren't?).

Last edited by mandarinka; 31st May 2019 at 19:33.
Old 31st May 2019, 21:39   #1707  |  Link
soresu
Registered User
 
Join Date: May 2005
Location: Swansea, Wales, UK
Posts: 96
Quote:
Originally Posted by mandarinka View Post

There might also be another factor at play: Google and friends siphon those enthusiasts away to work on decoders/encoders for whatever formats they come up with. I guess the developers are happy doing that, but I can't help thinking that the resources/creative power were somewhat wasted polishing a me-too project like VP9, which never got good anyway. Perhaps it will be a waste with AV1 too; we shall see (hopefully not, but the predecessors' track record isn't good).
If codec engineers are being siphoned away from anywhere it is other codec projects.

The guy going by xiph_mont was working on a new audio codec called Ghost; it was abandoned when Daala started up in earnest, and he later moved to AV1 development along with most of those who worked on Daala.
Old 1st June 2019, 07:39   #1708  |  Link
dapperdan
Registered User
 
Join Date: Aug 2009
Posts: 180
Xiphmont was employed by Red Hat and Mozilla, both companies with a business model (and a mission) incompatible with codec licensing, and therefore with strong motivation for "me-too" codecs that they can integrate properly.

(Which reminds me: VP9 has been the default codec choice for WebRTC in Firefox and Chrome for a couple of years. I can't quickly find any stats on what kind of usage this gets. Originally the browsers agreed on a compromise of supporting both VP8 and H.264 baseline, and Firefox shipped that via a licensing hack where Cisco provides the binary blob; it doesn't cost them anything in licence fees because they were already at the annual cap.)
Old 2nd June 2019, 13:56   #1709  |  Link
Tommy Carrot
Registered User
 
 
Join Date: Mar 2002
Posts: 852
What's the difference between deltaq and AQ in aomenc? Deltaq changes the quantizer of the frames, and AQ changes the quantizers of the blocks within the frames, or am I completely wrong?

Also, what is tpl-model, what does it do?
Old 3rd June 2019, 19:22   #1710  |  Link
TD-Linux
Registered User
 
Join Date: Aug 2015
Posts: 32
Quote:
Originally Posted by Tommy Carrot View Post
What's the difference between deltaq and AQ in aomenc? Deltaq changes the quantizer of the frames, and AQ changes the quantizers of the blocks within the frames, or am I completely wrong?
Deltaq is at superblock granularity, whereas "AQ" uses segment support, which goes down to 4x4 granularity. The two bitstream features are somewhat redundant, but their coding is optimized for different uses: deltaq was designed for sub-frame rate targeting, while segments are more for mbtree/psy purposes.
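For orientation, both mechanisms are exposed as separate aomenc switches; the flag names below are from reasonably recent libaom builds and may differ between versions, so check aomenc --help:

Code:
# --aq-mode:     segment-based adaptive quantization (0 = off; see aomenc --help for the modes)
# --deltaq-mode: superblock-level delta-q signalling (0 = off)
aomenc --end-usage=q --cq-level=30 --cpu-used=4 --aq-mode=1 --deltaq-mode=1 --ivf -o out.ivf input.y4m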
Old 4th June 2019, 12:05   #1711  |  Link
Tommy Carrot
Registered User
 
 
Join Date: Mar 2002
Posts: 852
Quote:
Originally Posted by TD-Linux View Post
Deltaq is at superblock granularity, whereas "AQ" uses segment support, which goes down to 4x4 granularity. The two bitstream features are somewhat redundant, but their coding is optimized for different uses: deltaq was designed for sub-frame rate targeting, while segments are more for mbtree/psy purposes.
Thanks for the explanation.
Old 6th June 2019, 08:39   #1712  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 897
VLC 3.0.7 is out; not sure how to check which version of dav1d it uses, though.
Old 6th June 2019, 11:03   #1713  |  Link
birdie
.
 
 
Join Date: Dec 2006
Posts: 135
Quote:
Originally Posted by hajj_3 View Post
VLC 3.0.7 is out; not sure how to check which version of dav1d it uses, though.
It's been using dav1d since 3.0.5.
Old 6th June 2019, 11:04   #1714  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 897
Quote:
Originally Posted by birdie View Post
It's been using dav1d since 3.0.5.
Yes, but I wanted to know which version of dav1d VLC 3.0.7 is using.
Old 6th June 2019, 11:26   #1715  |  Link
birdie
.
 
 
Join Date: Dec 2006
Posts: 135
Quote:
Originally Posted by hajj_3 View Post
Yes, but I wanted to know which version of dav1d VLC 3.0.7 is using.
0.3.1
Old 9th June 2019, 11:53   #1716  |  Link
vidschlub
Registered User
 
Join Date: May 2016
Posts: 18
Quote:
Originally Posted by birdie View Post
At this time shouldn't we consider AV1 a failure and move on to newer codecs, e.g. AV2?
  • It doesn't have decoders fast enough to decode 1080p on most (>80%) mobile devices.
  • It still doesn't have encoders anywhere near fast enough to be usable by mere mortals.
  • Its hardware adoption is not there: the spec was finalized almost half a year ago, and AV1 is nowhere to be seen in Zen 2.0 (Ryzen 3000), Radeon RDNA 5700 or Intel Ice Lake. No word on its decoding acceleration even in the recently announced Arm Cortex-A77/Mali-G77.


I find your comment extremely surprising considering your join date of 2006. I would expect such a comment from a newbie.


Unless the entire developer and encoding community is lying to me, what is happening with AV1 is absolutely par for the course for new codecs.
H.265 is still a dog to encode without a reasonable monster of a PC. How would an even newer, more compression-efficient codec, one that's open (and therefore can't tread on terrible patents), begin to compete only 6 months in?

The only thing AV1 has going for it is its openness, and the hope that /so many players/ throwing themselves at the problem will slowly address the performance issues.

Nonetheless, it's been 6 months. You're not going to see hardware acceleration on any devices for at least 6 more months.

I suspect it'll be ubiquitous in a best-case scenario of 3 years. (More experienced members, feel free to correct me.)
Old 9th June 2019, 12:06   #1717  |  Link
vidschlub
Registered User
 
Join Date: May 2016
Posts: 18
Quote:
Originally Posted by nevcairiel View Post
Encoders are no longer being made for this crowd. The primary design goal is massive-scale cloud encoding for YouTube, Netflix, Amazon, and everyone else that fits the encode-once, download hundreds of thousands of times scenario.

In such a scenario, even the slowest encoder is acceptable if it saves enough bytes.

In that scenario, VP9 also didn't fail. It gets used for a lot of content on the web.
It never even occurred to me that one day my personal local library might never be in AV1 format.

Yet here you are outlining exactly why that is very unlikely, and it kind of blows me away; you're totally correct.

AV1 /at scale/, when a video is being watched upwards of 500 times a week, makes so much more sense. Those encode times will eventually pay for themselves.
Old 9th June 2019, 12:29   #1718  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,457
Probability of seeing AV1 decoding in a Turing refresh?
Old 9th June 2019, 13:06   #1719  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,770
Quote:
Originally Posted by sneaker_ger View Post
Probability of seeing AV1 decoding in a Turing refresh?
None.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 9th June 2019, 16:49   #1720  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,589
nVidia and AMD (and possibly Intel, for its Gen11 iGPUs) could add a hybrid AV1 decoding approach in their drivers, using the GPU itself (shaders) rather than a dedicated ASIC, for now.
__________________
Win 10 x64 (18362.356) - Core i3-9100F - nVidia 1660 (436.15)
HEVC decoding benchmarks
H.264 DXVA Benchmarks for all