Old 4th October 2017, 11:20   #1  |  Link
pandy
Registered User
 
Join Date: Mar 2006
Posts: 1,049
Versatile Video Coding (VVC) / H.266: HEVC successor

https://jvet.hhi.fraunhofer.de/

Up to 64x more computationally complex than H.265 for encoding, and perhaps even 16x for decoding...

Any thoughts?
Old 4th October 2017, 11:29   #2  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,229
It needs to be more efficient than that, time-wise.
Old 4th October 2017, 13:12   #3  |  Link
Jamaika
Registered User
 
Join Date: Jul 2015
Posts: 697
One thing I know for sure: the HEVC codec put me off Fraunhofer. Decoding was very slow in the JCT-VC codecs used for BPG images.
http://hevc.kw.bbc.co.uk/git/w/jctvc-hm.git

For me, it's just another new codec.
Old 5th October 2017, 02:12   #4  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 1,120
Quote:
Originally Posted by pandy View Post
https://jvet.hhi.fraunhofer.de/

Up to 64x more computationally complex than H.265 for encoding, and perhaps even 16x for decoding...

Any thoughts?
Where does it say that?
Old 5th October 2017, 06:12   #5  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
Quote:
The increase of computational complexity compared to HEVC is reflected by the fact that the encoder software run time increases by factors of approximately 12 and 60 in RA and AI configurations, respectively. Correspondingly, the decoder run time increases by factors of approximately 10 and 2.5 in RA and AI, respectively. These are average numbers over the entire set of sequences from [3]. The worst case complexity may even be more dramatically higher compared to HEVC.
https://show.ibc.org/__media/Technic...DEO-CODING.pdf
Old 5th October 2017, 13:12   #6  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 1,120
Quote:
Originally Posted by sneaker_ger View Post
• Submission deadline: February 2018.
• Evaluation of responses: April 2018.
• First test model: October 2018.
• First version of new video compression standard: October 2020.

Does this mean October 2020 is when it would be ratified, or does that happen at some point after the "first version"?
Old 9th October 2017, 17:00   #7  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,750
Quote:
Originally Posted by pandy View Post
https://jvet.hhi.fraunhofer.de/

Up to 64x more computationally complex than H.265 for encoding, and perhaps even 16x for decoding...

Any thoughts?
Encoding time in a reference encoder doesn't really matter. 64x generally means it has 64x more ways to do things than HEVC, but real-world encoders aren't going to do an exhaustive search of all those modes! They'll use heuristics and early exits to deliver as good quality as possible within available time. As encoders have always done since the beginning of time. x265 is >>100x faster than the HEVC HM reference encoder even with --preset slower, for example.
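To make that concrete, here is a toy Python sketch of the heuristic-plus-early-exit idea (all names, modes, and costs are hypothetical; this is not how x265 or HM is actually structured): try the statistically likely modes first and stop as soon as one is good enough, instead of exhaustively searching everything the format allows.

Code:
def rd_cost(block, predict, lam=10.0, bits=8):
    # Stand-in rate-distortion cost: distortion + lambda * rate (both fake).
    return sum((p - predict(p)) ** 2 for p in block) + lam * bits

def choose_mode(block, modes, good_enough=200.0):
    # `modes` is ordered most-likely-first -- that ordering is the heuristic.
    best_name, best_cost = None, float("inf")
    for name, predict in modes:
        cost = rd_cost(block, predict)
        if cost < best_cost:
            best_name, best_cost = name, cost
        if best_cost < good_enough:  # early exit: skip all remaining modes
            break
    return best_name, best_cost

block = [100, 102, 101, 99, 98, 100, 103, 101]
modes = [("DC-100", lambda p: 100), ("copy-left", lambda p: p - 1)]
print(choose_mode(block, modes))  # exits after "DC-100" if it's good enough

A real encoder applies this kind of pruning at every decision level (partitioning, prediction, transforms), which is how the practical cost stays far below the theoretical mode count.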

Decoder time of 16x would be a huge problem. HEVC was carefully designed to have no more than 2x the complexity of H.264 even with all the options on. No one would ever come out with a video codec standard that requires 16x the silicon area or clock speed or memory or anything. Even MPEG-2 -> HEVC was only about 4x the decoder complexity per pixel at a given quality.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 10th October 2017, 20:30   #8  |  Link
iwod
Registered User
 
Join Date: Apr 2002
Posts: 756
What sort of bitrate reduction are they expecting?
Old 10th October 2017, 21:17   #9  |  Link
sneaker_ger
Registered User
 
Join Date: Dec 2002
Posts: 5,565
At this point they don't seem to have set any public goal and instead say it's a "study" to see what may be possible with new tools.
Quote:
As of the current status, the bit rate reduction of JEM6, when compared to an HM16 software encoder implementing the HEVC Main 10 Profile, is around 30% in Random Access (RA) configuration using motion compensation in a hierarchical B picture structure, and around 20% in All Intra (AI) configuration (without motion compensation). This result was obtained when averaging the bit rate reduction comparing at same PSNR by the so-called Bjøntegaard Delta criterion.
[...]
Measuring PSNR can be misleading as a criterion for judging quality. In order to assess the subjective visual benefit, expert viewing tests were performed during the 7th JVET meeting in the context of evaluating responses to the Call for Evidence. [...] Typically, bit rate savings at same visual quality of between approximately 35 and 60% were observed.
Just read the doc from post #5 ...
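For reference, the "Bjøntegaard Delta criterion" mentioned above is computed roughly like this (a minimal sketch with made-up RD points, assuming numpy is available): fit a cubic through each codec's (PSNR, log-bitrate) points, then average the horizontal gap between the two curves over the overlapping PSNR range.

Code:
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    # Cubic fit of log-bitrate as a function of PSNR, one fit per codec.
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100  # % bitrate change at equal PSNR

# Made-up RD points (kbps, dB); a negative result means the test codec saves bitrate.
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
              [700, 1400, 2800, 5600], [34.0, 36.5, 39.0, 41.5]))  # ~ -30.0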
Old 11th October 2017, 15:30   #10  |  Link
iwod
Registered User
 
Join Date: Apr 2002
Posts: 756
Quote:
Originally Posted by sneaker_ger View Post
At this point they don't seem to have set any public goal and instead say it's a "study" to see what may be possible with new tools.


Just read the doc from post #5 ...
Great, though the web site was... ugh.

It is good that they recognize they have open-source, royalty-free codec competition. Since they have gobbled up enough money with H.265, I hope they sort out their license fees and structure BEFORE moving on.

Edit: I know that is not the purpose of the study. But they surely need to keep it in mind.

I am on the HEVC side for now, simply because it is the only choice. I don't think AV1 will go anywhere, but I hope they get AV2 right.
Old 14th October 2017, 09:44   #11  |  Link
pandy
Registered User
 
Join Date: Mar 2006
Posts: 1,049
Quote:
Originally Posted by benwaggoner View Post
Encoding time in a reference encoder doesn't really matter. 64x generally means it has 64x more ways to do things than HEVC, but real-world encoders aren't going to do an exhaustive search of all those modes! They'll use heuristics and early exits to deliver as good quality as possible within available time. As encoders have always done since the beginning of time. x265 is >>100x faster than the HEVC HM reference encoder even with --preset slower, for example.

Decoder time of 16x would be a huge problem. HEVC was carefully designed to have no more than 2x the complexity of H.264 even with all the options on. No one would ever come out with a video codec standard that requires 16x the silicon area or clock speed or memory or anything. Even MPEG-2 -> HEVC was only about 4x the decoder complexity per pixel at a given quality.
But when we compare x264 and x265, the increasing complexity of the JVET codec looks quite unpleasant... I know the overall trend is to use more threads, but still, exponentially increasing computational complexity quickly becomes a serious issue.
Those 64x and 16x figures are just a worst-case scenario, a bit exaggerated.
In the '80s, when MPEG-1 (and later MPEG-2) was born, CPU computational power almost doubled every 2-3 years; now we observe severe stagnation there, with new CPUs rarely providing more than a 20% processing gain...
Old 14th October 2017, 11:06   #12  |  Link
bstrobl
Registered User
 
Join Date: Jun 2016
Posts: 55
Quote:
Originally Posted by pandy View Post
But when we compare x264 and x265, the increasing complexity of the JVET codec looks quite unpleasant... I know the overall trend is to use more threads, but still, exponentially increasing computational complexity quickly becomes a serious issue.
Those 64x and 16x figures are just a worst-case scenario, a bit exaggerated.
In the '80s, when MPEG-1 (and later MPEG-2) was born, CPU computational power almost doubled every 2-3 years; now we observe severe stagnation there, with new CPUs rarely providing more than a 20% processing gain...
We are slowly running up against the physical limits of chip production, and codec development is in much the same position. With the huge number of coding tools and their tiny individual contributions, much of the work is now an attempt to discern whether a change is a real improvement or just noise. At some stage it simply won't be worth upgrading codecs, due to insane encoding/development costs.

The JVET folks really need to sort out their licensing mess, though, since AV1 can encode similar quality to HEVC at 75% of the bitrate. Granted, AV1 seems a bit rushed, but as long as most of the nagging issues from VP9 are fixed, more people will hop onto it, and they will stay if patent costs keep increasing with the H.26x codecs.
Old 14th October 2017, 20:06   #13  |  Link
iwod
Registered User
 
Join Date: Apr 2002
Posts: 756
Quote:
Originally Posted by pandy View Post
But when we compare x264 and x265, the increasing complexity of the JVET codec looks quite unpleasant... I know the overall trend is to use more threads, but still, exponentially increasing computational complexity quickly becomes a serious issue.
Those 64x and 16x figures are just a worst-case scenario, a bit exaggerated.
In the '80s, when MPEG-1 (and later MPEG-2) was born, CPU computational power almost doubled every 2-3 years; now we observe severe stagnation there, with new CPUs rarely providing more than a 20% processing gain...
Yes, and it also happens that the world has moved to mobile, and we don't use the CPU for decoding anymore; in most cases we do hardware-accelerated decoding on dedicated silicon or a DSP, which happens to be about 10x more efficient at it.

But I do understand the concern. If I remember correctly, x265 started off being 5x to 10x slower than x264, and the reference encoder was very slow.

I am mainly concerned about two things:

1. Real-time encoding: if you are encoding live sports, like a football match, I think we are running into some limits there with regard to speed, bitrate, and quality.

2. Decoding: as benwaggoner mentioned, decoding complexity hasn't increased much; MPEG-2 -> HEVC is only about 4x per pixel, which is a very small increase over a period in which computational power grew 10-20x. What has increased is resolution: we moved from 480p to 4K, which is roughly 24x the pixels (3840×2160 vs. 720×480).

Unless 3D, VR, or some other killer app comes along, the JVET codec or AV2 will probably be the last video codec improvement we see. Much as with audio codecs, the bandwidth increase of 5G will make most of those bitrate savings less of a concern.
Old 20th October 2017, 19:34   #14  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,750
Quote:
Originally Posted by iwod View Post
Unless 3D, VR, or some other killer app comes along, the JVET codec or AV2 will probably be the last video codec improvement we see. Much as with audio codecs, the bandwidth increase of 5G will make most of those bitrate savings less of a concern.
You know, I thought the same thing back in early 1997. We could almost deliver (analog 480i) broadcast quality on a CD-ROM! What was there left to do? I was actively planning my career pivot to film restoration.

Then Peter Jacobsen called up and asked if we could use this new RealVideo beta thing to embed some golf clips into his web site without buffering.

xHE-AAC is a big material improvement for important markets. We could have 10x the efficiency of HEVC today and deliver pristine UHD HDR over 4G networks.

Or deliver a great full-screen experience over a 2G network to people riding a bus in rural India. 720p @ 20 Kbps would still be a worthwhile improvement over 720p @ 30 Kbps.

I am confident that incremental improvements in compression efficiency will remain a multi-billion dollar market through my retirement. The main thing that could cause things to slow down is if we really hit a wall in terms of silicon process nodes, and stop getting big year-on-year MIPS per watt gains. If we were stuck on the same node, we might start running out of exciting new things to do after a decade or so.

There is still a huge market for improved MPEG-2 encoders...
Old 20th October 2017, 19:46   #15  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,750
Quote:
Originally Posted by pandy View Post
But when we compare x264 and x265, the increasing complexity of the JVET codec looks quite unpleasant... I know the overall trend is to use more threads, but still, exponentially increasing computational complexity quickly becomes a serious issue.
Those 64x and 16x figures are just a worst-case scenario, a bit exaggerated.
The real comparison should be quality at equal perf. If a codec offers a 100% improvement at 64x encoding time, but a 50% improvement at the same encoding time, it's still a big win. I doubt anyone is doing HEVC with the same coverage of theoretical options as they do with H.264, but HEVC still delivers some big real-world efficiency improvements at practical speeds.

These theoretical encoding times are really JM versus HM: encoders that are far too slow for any practical use, and really not very good psychovisually. As for decoder complexity, that's what Profiles and Levels are for. If there is a 16x worst case, probably 85% of the practical efficiency gains can be found in the first 2x of decoder complexity. Hitting the right balance between efficiency gains and decoder complexity is a huge part of designing a major bitstream format, and one of the reasons the MPEG/ITU codecs are so good: there are just so many eyeballs from so many different industry sectors pounding on them to squeeze out every last bit and MIPS.

Quote:
In the '80s, when MPEG-1 (and later MPEG-2) was born, CPU computational power almost doubled every 2-3 years; now we observe severe stagnation there, with new CPUs rarely providing more than a 20% processing gain...
Per core and for general compute, perhaps. But whenever a chip is "5-40%" faster, video encoding is the 40%, because we can go widely parallel and use SIMD instructions like AVX2 harder than almost anything else. And the good implementations get a great deal of hand-tuned assembly optimization.

The new 8th gen Core processors look to offer ~2x encoding throughput per dollar and per watt. More cores, less thermal throttling using AVX/AVX2, microarchitectural improvements, and hopefully some value in AVX512.

This is a bigger leap than most generations, but a slowdown in Moore's law will hit video the least and latest of almost any technology.
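As an aside, the "widely parallel" part is easy to see even above the SIMD level: a long encode splits into independent chunks that saturate every core. A minimal sketch (assuming an ffmpeg build with libx264 on PATH; the file name, chunk length, and settings are made up):

Code:
import subprocess
from concurrent.futures import ThreadPoolExecutor

SOURCE, DURATION, CHUNK = "input.mkv", 600, 60  # hypothetical file, seconds

def encode_chunk(task):
    index, start = task
    out = f"chunk_{index:04d}.mkv"
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(CHUNK), "-i", SOURCE,
         "-c:v", "libx264", "-preset", "slower", "-crf", "20", "-an", out],
        check=True)
    return out

tasks = list(enumerate(range(0, DURATION, CHUNK)))
with ThreadPoolExecutor() as pool:  # one ffmpeg process per chunk
    chunks = list(pool.map(encode_chunk, tasks))
# The chunks would then be concatenated in order (e.g. ffmpeg's concat demuxer).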
Old 13th April 2018, 09:04   #16  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
At the moment, the sources appear to be available with build solutions for some MSVC versions and a makefile for Linux; would it be hard to adapt the latter for MSYS/MinGW? I would ask that in their repo as well...
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
Old 13th April 2018, 09:15   #17  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Linux makefiles are what MinGW uses.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders
Old 13th April 2018, 09:50   #18  |  Link
LigH
German doom9/Gleitz SuMo
 
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,753
I just don't know whether they require some quirky details; MulticoreWare has separate build files for MSYS and Linux, though that may be related to CMake...
Old 13th April 2018, 12:07   #19  |  Link
foxyshadis
angel of death
 
 
Join Date: Nov 2004
Location: Lost
Posts: 9,558
It seems like Hikvision is the only one that has shown any interest this year, and I actually know a few people there. I should reach out and see what they're working on; aside from that, the project is essentially dead. I guess improving HEVC is still the only priority for all the big research companies.
Old 13th April 2018, 15:06   #20  |  Link
qyot27
...?
 
 
Join Date: Nov 2005
Location: Florida
Posts: 1,419
Quote:
Originally Posted by LigH View Post
I just don't know whether they require some quirky details; MulticoreWare has separate build files for MSYS and Linux, though that may be related to CMake...
They do that for 'convenience' (although whose convenience is beyond me), not because it's anywhere near necessary. I've never used those build scripts for either Linux or cross-compiled MinGW builds; there's virtually no difference in how you configure CMake, which is all those scripts do anyway. They're not actual Makefiles or MSVC solutions, only .sh and .bat scripts that tell CMake to use the Makefiles or Visual Studio generators.

The only thing that creates an actual need to configure them differently is that MinGW requires specifying a cross-compiling toolchain file, and that's only necessary when actually cross-compiling (read: from Linux, OS X, or Cygwin, not MSYS2).