Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 6th March 2021, 00:30   #21  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
You as a person can certainly join a standards body, and if you're associated with a company that's already a member, so much the better.

And yeah, it's going to be hard to prove the validity of a 512x512-only codec, as you can't run a standard test corpus through it.

How codec development normally works is that a standards body kicks off an evaluation project to see if there are potential improvements that merit a new/updated technology. Generally there are some unifying principles that can come from one person or a small group. For example, the first H.264 proposal said "let's do 4x4 blocks, multiple reference frames, and in-loop deblocking." All of those were tools that had been proposed earlier and had research validating them.

Once a project gets rolling, participants start proposing new tools with evidence demonstrating their incremental value. Often it's stuff like "if we add this new mode, we can improve PSNR by 0.2 dB on average." Tons of tools get proposed, and the group decides which are in and which are out based on efficiency improvements weighed against encoder and, particularly, decoder performance impact.
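
For readers who haven't run these comparisons: PSNR is just a log-scaled mean squared error against the reference, so a "+0.2 dB" claim is a straightforward computation. A minimal sketch with toy pixel values (not a real corpus):

```python
import math

def psnr(ref, test, peak=255.0):
    """Mean-squared-error based PSNR between two equal-length pixel sequences."""
    if len(ref) != len(test) or not ref:
        raise ValueError("inputs must be non-empty and the same length")
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical inputs
    return 10.0 * math.log10(peak * peak / mse)

# A tool proposal compares quality with and without the tool at matched bitrate:
ref     = [100, 120, 130, 140]   # original pixels (toy values)
without = [102, 118, 133, 137]   # decode without the proposed tool
with_t  = [101, 119, 131, 139]   # decode with the proposed tool
gain = psnr(ref, with_t) - psnr(ref, without)  # positive -> tool helps (on this metric)
```

In practice gains are reported as BD-rate over full rate-distortion curves on a standard test corpus, not a single delta like this.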

Coming in with a whole package of tools combined into a codec is hard when there isn't already a group working on a codec definition that can integrate those tools, or looking for a clean-room starting point. A proposal that includes clear evidence of its advantages on a standard test corpus is much easier to get considered. The people working in standards bodies are typically very busy, and aren't likely to dive deep into a novel design without that initial validation.

Individual tools or combinations of them with positive results published in peer-reviewed journals can also be very helpful.

It's really, really hard to predict the value of a given tool or format design without a lot of testing. The history of codec development is littered with many more ideas than ever make it into standards. Many of them good! But one tool might interact badly with another, or turn out not to solve its problem as well as a different one does. And bad stuff sneaks through that's only a problem in retrospect (comparing just mean luma PSNR is a very common fault).

For example, adaptive quantization (AQ). MPEG-4 part 2 had it, but it only worked via even-numbered offsets from the frame QP, which had to be calculated in advance.

So decent AQ required encoding the frame at different initial QPs and then trying the deltas. Conceptually not super complicated, but building a good functional implementation was expensive in dev and compute time.

VC-1 supported AQ much more flexibly. But VC-1 used an RLE bitmask approach for signaling intra-frame QP variations. It worked okay when a frame was mainly the base QP with a few runs at a different one. But H.264-like per-macroblock QP variations took so much signaling overhead in the bitmask that AQ wound up not being worth it at moderate-to-low bitrates. A complex RDO system that weighed the value of the AQ against its signaling overhead was certainly possible, but again, complex to develop and tune.
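
To see why the bitmask scheme broke down, consider a toy cost model (the fixed per-run bit cost here is hypothetical, not VC-1's actual syntax): RLE cost grows with the number of runs, so a mask that flips every macroblock costs far more to signal than one with a few large regions:

```python
from itertools import groupby

def rle_runs(mask):
    """Split a binary mask into runs of equal values; returns the run lengths."""
    return [len(list(g)) for _, g in groupby(mask)]

def signaling_bits(mask, bits_per_run=8):
    # Hypothetical cost model: a fixed-size code per run, just to show how an
    # RLE bitmask's cost scales with the number of runs.
    return bits_per_run * len(rle_runs(mask))

mb_count = 120                                   # macroblocks in a hypothetical slice
few_regions = [0] * 100 + [1] * 20               # base QP plus one delta region
per_mb = [i % 2 for i in range(mb_count)]        # H.264-like per-MB variation

cheap  = signaling_bits(few_regions)   # 2 runs  -> 16 bits
costly = signaling_bits(per_mb)        # 120 runs -> 960 bits
```

With per-macroblock variation, the signaling overhead can easily exceed the rate saved by the better quantization, which is the trade-off an RDO system would have to weigh.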

Very smart people worked very hard for years on both codecs. And it took some time after the standards were complete to really understand how those seemingly no-big-deal design decisions really hampered compression efficiency.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 6th March 2021, 01:04   #22  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Thank you again for your insightful answer.

So yes, it's going to be too complicated, and since NHW hasn't found interest since 2007, I have now fully accepted that it won't lead to a professional project and will just stay a spare-time hobby, nothing more. I did want to tell you that I appreciate you underlining that many good ideas never got standardized, because it's a classic remark people often make to me: if no one, no company, is interested, it must not be good work... I just think NHW is not a bad codec, but that it didn't find interest.

But again, today I am totally OK and "more relaxed" about it: NHW will just be a very good spare-time hobby for me, and that's still very important to me. When I have a new idea I will test it, but I don't really think I will adapt NHW to arbitrary image resolutions, because right now that's too boring and not fun (!)...

Cheers,
Raphael
Old 6th March 2021, 01:19   #23  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
Quote:
Originally Posted by nhw_pulsar View Post
Thank you again for your insightful answer.

So yes, it's going to be too complicated, and since NHW hasn't found interest since 2007, I have now fully accepted that it won't lead to a professional project and will just stay a spare-time hobby, nothing more. I did want to tell you that I appreciate you underlining that many good ideas never got standardized, because it's a classic remark people often make to me: if no one, no company, is interested, it must not be good work... I just think NHW is not a bad codec, but that it didn't find interest.

But again, today I am totally OK and "more relaxed" about it: NHW will just be a very good spare-time hobby for me, and that's still very important to me. When I have a new idea I will test it, but I don't really think I will adapt NHW to arbitrary image resolutions, because right now that's too boring and not fun (!)...

Cheers,
Raphael
I'm happy to help! And hobbies are great. I think a lot of people wind up not enjoying stuff they love doing because they haven't met some external definition of success. Trying to turn a hobby into professional success often means doing a lot more of the stuff required around the fun stuff, and often less time actually focusing on the fun stuff.

That said, it's a lot easier to publish papers on research than you might realize.

Still, most good ideas don't turn into real-world things. I've got 48 patents granted and plenty pending and in preparation to file. And maybe 10 of those have ever been actually implemented in a material way. I've got seven patents on optimal encoding and decoding of VR video, and haven't worked on any VR video products since the late 90's.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 13th April 2021, 18:32   #24  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Hello,

I have improved the results of the NHW codec and released a new version. This one has more neatness and more precision, and so is more interesting.

-As usual, I haven't improved the entropy coding schemes for now, so we can still save on average 2.5 KB per .nhw compressed file, and even more with Chroma from Luma, for example.-
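
For readers unfamiliar with Chroma from Luma: the idea (as used in AV1's CfL mode, for example) is to predict a chroma block as a scaled version of the co-located luma plus a DC term, so only a small residual needs coding. This is only an illustrative least-squares sketch, not NHW's actual scheme:

```python
def cfl_predict(luma, chroma):
    """Least-squares chroma-from-luma: chroma ~= alpha * (luma - mean) + chroma_mean."""
    n = len(luma)
    lmean = sum(luma) / n
    cmean = sum(chroma) / n
    lac = [l - lmean for l in luma]                      # luma AC component
    num = sum(la * (c - cmean) for la, c in zip(lac, chroma))
    den = sum(la * la for la in lac) or 1.0              # guard against flat luma
    alpha = num / den                                    # best scaling factor
    pred = [alpha * la + cmean for la in lac]
    residual = [c - p for c, p in zip(chroma, pred)]     # what would be coded
    return alpha, pred, residual

# Chroma that tracks luma compresses to a near-zero residual:
luma   = [ 60,  80, 100, 120]
chroma = [130, 140, 150, 160]   # perfectly correlated with luma here
alpha, pred, res = cfl_predict(luma, chroma)
```

When the correlation holds, only alpha and the DC need signaling, which is where the bitrate saving comes from.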

More at: http://nhwcodec.blogspot.com/

Any feedback is very welcome. Also, if you can suggest some niche use cases that AOM and MPEG (or other standards bodies) don't cover and where the ultra-fast NHW project could be interesting, that would be very helpful!...

Cheers,
Raphael
Old 7th May 2021, 16:06   #25  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Hello,

Just a quick note. Some people told me in the past that I must adapt NHW to any image size, or else nobody will consider and use it. That's certainly right, but again, I work on NHW in my spare time and this task is too exhausting for me. Also, I now feel that it is useless, as I have read that in the next few months the great AVIF and JPEG XL will be definitively adopted everywhere on the Internet.

Some people also told me to forget about image compression and turn my codec into a video codec, but that's also too much work for my free time. So I have come to the conclusion that very soon, when AVIF and JPEG XL are adopted, I will start to progressively abandon/stop the NHW Project: I have been working on it for nearly 15 years, over the last years I have seriously started to lose my energy for it, and furthermore there is now no future for it.

However, it would be great to find someone who could take over the NHW Project. Turning it into a video codec, for example: there was interest in NHW as a Motion JPEG replacement, because a company was considering MJPEG for low-power camera devices and MJPEG was too heavyweight for their processor (a dual-core 240 MHz Xtensa LX6, if I remember correctly). For now, by rough calculation based on the latest Independent JPEG Group (IJG) C source code, NHW is 1.5x faster to encode than JPEG (and 2x faster to decode), but by switching off heavy processing with very little quality loss, NHW can be 2x faster to encode than JPEG and with notably better quality/compression. This engineer told me that if NHW were 2x faster to encode than MJPEG (and with better quality, furthermore), it could be very interesting for such embedded products.

So again, as I think it will soon be pointless to work on NHW image compression while two new standards are adopted on the Internet, I wanted to let you know that it would be great to find a person/company/organization interested in NHW that would like to take over the project and continue its research and development, notably for some video codec use cases.

So I would like to start making contact as I progressively stop my work on NHW. If you think you could and want to work on and take over the NHW Project, or know such a person, do not hesitate to contact me; that would be great!

Many thanks.
Cheers,
Raphael
Old 13th May 2021, 22:11   #26  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Hello again,

Just a quick note about NHW progress: you can notice that by tweaking the pre_processing function you can get different flavors of NHW, some with more neatness, a few others with more precision. For now I am keeping this version because it seems a good balance between neatness and precision, but maybe I'll release a new version in the next weeks if I can tune the pre_processing better.

Also, I am currently (very slowly) improving a banding analyzer/detector function that I coded a couple of years ago, in order to remove banding from decoded NHW images. This (fast) function, which does its main analysis on the 128x128 wavelet DC image, now seems to detect roughly 95% of the banding in decoded NHW images, plus 5% of parts not related to banding. I don't think I have tested on enough images yet; my feeling is that detecting 95% of the banding is OK, and the 5% of non-banding parts it flags are not important in the image for me, so this function could be OK, but again I can be wrong as I have not tested on enough images. If you are interested, I can link NHW decoder binaries with my banding detector implemented, which will highlight the detected banding in the decoded images; maybe you can test it on more images and tell me if this analyzer/detector works for you. Any help would be great.
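
To give an idea of what such a detector can look for on a DC image: banding shows up as long flat runs separated by small-amplitude steps where a smooth gradient has been quantized. This is only a crude illustrative sketch on a single row, not the actual NHW detector:

```python
def banding_rows(dc_row, min_run=8, max_step=2):
    """Flag positions where two long flat runs meet across a small step --
    a crude signature of banding in smooth gradients."""
    runs = []       # list of (start, length, value)
    run_start = 0
    for i in range(1, len(dc_row) + 1):
        if i == len(dc_row) or dc_row[i] != dc_row[run_start]:
            runs.append((run_start, i - run_start, dc_row[run_start]))
            run_start = i
    flags = []
    for (s1, l1, v1), (s2, l2, v2) in zip(runs, runs[1:]):
        if l1 >= min_run and l2 >= min_run and 0 < abs(v2 - v1) <= max_step:
            flags.append(s2)    # boundary between two wide, nearly-equal bands
    return flags

smooth_gradient = [10] * 12 + [11] * 12 + [12] * 12    # quantized ramp -> banding
texture = [10, 14, 11, 15, 10, 13, 12, 16] * 4         # busy area -> no flags
```

Busy, textured regions never build up the long flat runs, which is how this kind of test avoids flagging detail as banding.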

Very quickly, to finish: I would like to test the image compression part of VVC, because someone told me that it can be extremely impressive, notably in terms of PSNR, even 15-20% better than AVIF (PSNR-wise), which would be really very impressive. But PSNR doesn't correlate well with visual perception, so I would like to test VVC and see if it is also as impressive visually. I have read that the VVC specs define a Main 10 Still Picture profile for image compression. Has anyone managed to compile this Main 10 Still Picture profile and can link the binaries? Really, any link to binaries for this VVC profile would be great and very helpful!

Cheers,
Raphael
Old 14th May 2021, 01:07   #27  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
Quote:
Originally Posted by nhw_pulsar View Post
Very quickly, to finish: I would like to test the image compression part of VVC, because someone told me that it can be extremely impressive, notably in terms of PSNR, even 15-20% better than AVIF (PSNR-wise), which would be really very impressive. But PSNR doesn't correlate well with visual perception, so I would like to test VVC and see if it is also as impressive visually. I have read that the VVC specs define a Main 10 Still Picture profile for image compression. Has anyone managed to compile this Main 10 Still Picture profile and can link the binaries? Really, any link to binaries for this VVC profile would be great and very helpful!
I'd expect that VVC's advantages in subjective quality would be even greater than in PSNR, as that's long been the trend in MPEG codecs.

Just encoding a single frame in Main or Main10 should be a good indicator of what's possible. Generally the Still Picture profiles are more about constraining features for non-video applications than actually adding anything still image specific.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 14th May 2021, 16:54   #28  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Quote:
Originally Posted by benwaggoner View Post
I'd expect that VVC's advantages in subjective quality would be even greater than in PSNR
Really looking forward to testing the VVC image compression part, then! Maybe BPG could be updated with VVC intra? That would be great!

Just a very quick question: as VVC image compression is announced to be exceptional, why didn't MPEG position itself with an image compression codec for the Internet? Is it because they don't consider this royalty-free area?

Cheers,
Raphael
Old 14th May 2021, 18:51   #29  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
Quote:
Originally Posted by nhw_pulsar View Post
Really looking forward to testing the VVC image compression part, then! Maybe BPG could be updated with VVC intra? That would be great!

Just a very quick question: as VVC image compression is announced to be exceptional, why didn't MPEG position itself with an image compression codec for the Internet? Is it because they don't consider this royalty-free area?
Royalty-bearing image codecs are a non-starter for a lot of reasons, including that Google and Mozilla would never implement them.

Of course, the myriad interframe encoding patents don't apply to still images, so it would need a different portfolio as well. But the revenue opportunities are a lot lower there, so there isn't motivation to actually do it, leaving codec-as-image solutions in an ambiguous IP state, which is scary for any company big enough to sue. I think AVIF's licensing is going to be a key driver for its broader adoption as it gets around those issues. And encode/decode perf is a lot faster and psychovisual tuning is simpler for still-only.

The main driver for video-codecs-for-images has been with devices that already have licensed decoders. iOS is HEIC (HEVC still) by default now. They implemented a SW decoder for very old devices, but iOS was >80% HW HEVC at HEIC launch due to rapid mobile replacement rates.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 14th May 2021, 19:17   #30  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Quote:
Originally Posted by benwaggoner View Post
Royalty-bearing image codecs are a non-starter for a lot of reasons, including that Google and Mozilla would never implement them.

Of course, the myriad interframe encoding patents don't apply to still images, so it would need a different portfolio as well. But the revenue opportunities are a lot lower there, so there isn't motivation to actually do it, leaving codec-as-image solutions in an ambiguous IP state, which is scary for any company big enough to sue. I think AVIF's licensing is going to be a key driver for its broader adoption as it gets around those issues. And encode/decode perf is a lot faster and psychovisual tuning is simpler for still-only.

The main driver for video-codecs-for-images has been with devices that already have licensed decoders. iOS is HEIC (HEVC still) by default now. They implemented a SW decoder for very old devices, but iOS was >80% HW HEVC at HEIC launch due to rapid mobile replacement rates.
OK, thank you for the explanations. It's a little pity (just my personal opinion), because as a fan of image compression I found the next-generation innovations of VVC intra very attractive, like the machine-learning intra prediction modes and the adaptive loop filter, for example...

Maybe HEIC will be updated with VVC? Because frankly, the last time I tested HEIC (rather quickly, however) I was actually disappointed by HEVC intra; I found that AVIF was better, and also NHW, because it has notably more neatness...

Also, a last question: maybe VVC (inside HEIC or HEIF) will be too computationally intensive for mobile phones?

Cheers,
Raphael
Old 14th May 2021, 22:30   #31  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
Quote:
Originally Posted by nhw_pulsar View Post
OK, thank you for the explanations. It's a little pity (just my personal opinion), because as a fan of image compression I found the next-generation innovations of VVC intra very attractive, like the machine-learning intra prediction modes and the adaptive loop filter, for example...
Oh, I think there's a very real chance that VVC might get traction as an image codec! But getting broad decoder support is a precondition, and is mainly driven by video use cases.

Quote:
Maybe HEIC will be updated with VVC? Because frankly, the last time I tested HEIC (rather quickly, however) I was actually disappointed by HEVC intra; I found that AVIF was better, and also NHW, because it has notably more neatness...
The quality/efficiency of HEIC will vary with the quality and configuration of the encoder being used, more than JPEG or PNG. With proper tuning of x265, I was able to get comic book style art size down 95% versus JPEG. Video codecs, even intra-only, have so many more knobs to configure.

HEIF is the current image container format. HEIC is HEVC-in-HEIF, as AVIF is AV1-in-HEIF. So perhaps VVC would be HEIV?

Quote:
Also, a last question: maybe VVC (inside HEIC or HEIF) will be too computationally intensive for mobile phones?
If there is a HW decoder, no worries at all. Sometimes codec images are faster than a SW JPEG, as the decoder can write the output bitmap straight into GPU memory.

For a SW decode, it could be an issue for some use cases and some implementations. But in general, the per-generation increases in decoder complexity are a lot smaller than the Moore's Law performance improvements over the same timeframe, and image decoding skips a lot of the more computationally expensive parts of video decoding. So HEIV should be faster on the devices it would launch on than HEIC was on the devices it launched on.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 14th May 2021, 22:57   #32  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Quote:
Originally Posted by benwaggoner View Post
Oh, I think there's a very real chance that VVC might get traction as an image codec! But getting broad decoder support is a precondition, and is mainly driven by video use cases.
You mean traction on mobile phones? Because, as you said, I fear that for the Internet there are too many obstacles...


Quote:
Originally Posted by benwaggoner View Post
The quality/efficiency of HEIC will vary with the quality and configuration of the encoder being used, more than JPEG or PNG. With proper tuning of x265, I was able to get comic book style art size down 95% versus JPEG. Video codecs, even intra-only, have so many more knobs to configure.

HEIF is the current image container format. HEIC is HEVC-in-HEIF, as AVIF is AV1-in-HEIF. So perhaps VVC would be HEIV?
OK, so that was a problem of encoder configuration and tuning for HEIC, because the one I used when I tested HEIC lacked neatness/sharpness; this is certainly adjustable and can be corrected to a certain extent.


Quote:
Originally Posted by benwaggoner View Post
If there is a HW decoder, no worries at all. Sometimes codec images are faster than a SW JPEG, as the decoder can write the output bitmap straight into GPU memory.

For a SW decode, it could be an issue for some use cases and some implementations. But in general, the per-generation increases in decoder complexity are a lot smaller than the Moore's Law performance improvements over the same timeframe, and image decoding skips a lot of the more computationally expensive parts of video decoding. So HEIV should be faster on the devices it would launch on than HEIC was on the devices it launched on.
Yes, for decoding that's certainly OK. I was thinking more of encoding, shooting a photo with the camera for example, as a forum member said that encoding a photo with VVC (the VTM software) took him 1h30 on his high-end computer!... I hope this number has decreased by now!...

Cheers,
Raphael
Old 17th May 2021, 16:54   #33  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Hello,

I just wanted to correct what I said about HEIC not being good because it lacked neatness... I could not find the HEIC version I tested with (1 or 2 years ago), but certainly my memory is wrong or my eyes were very tired, because there actually are very good HEVC intra implementations!

Quickly: I re-downloaded BPG 0.9.8 (with x265, April 2018), and I mainly tested with the -m 1 speed setting, which is very fast, and this version of BPG x265 (from April 2018!) is actually excellent! To correct what I said: I don't find that x265 really lacks neatness compared to the original image. It's true that NHW has more neatness, but x265 has extremely good precision, like AVIF.

So again sorry for my misinformation.

Cheers,
Raphael
Old 17th May 2021, 21:11   #34  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
Quote:
Originally Posted by nhw_pulsar View Post
Hello,

I just wanted to correct what I said about HEIC not being good because it lacked neatness... I could not find the HEIC version I tested with (1 or 2 years ago), but certainly my memory is wrong or my eyes were very tired, because there actually are very good HEVC intra implementations!

Quickly: I re-downloaded BPG 0.9.8 (with x265, April 2018), and I mainly tested with the -m 1 speed setting, which is very fast, and this version of BPG x265 (from April 2018!) is actually excellent! To correct what I said: I don't find that x265 really lacks neatness compared to the original image. It's true that NHW has more neatness, but x265 has extremely good precision, like AVIF.

So again sorry for my misinformation.
No problem. I generally find HEIC to do well as a JPEG superset, particularly in its ability to do nice sharp edges. Using --tskip can help a lot with mixed natural/synthetic content.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 27th May 2021, 21:36   #35  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Hi,

Just a quick message about VVC intra image compression, as I came across an image comparison (made 5 months ago, on 10 images) between VVenC 0.2.0.0 and AVIF 0.8.4 at high compression.

And in fact you were right, benwaggoner: VVenC is visually very good. AVIF has a better PSNR, but VVenC is clearly visually more pleasant (for me).

-Here is the link: https://www.reddit.com/r/VVC/comment...a_compression/ -

Otherwise, about NHW: I made many tests with the -l5 quality setting recently, and at -l5 NHW is, for me, consistently visually better than AVIF and BPG, for example. For high compression (like the -l9 and -l11 settings), it really depends on the quality of the source image. If you input a good-quality, neat image, then NHW performs well visually at high compression (and for me it is always visually better than AVIF and BPG), but if your source image is not neat but degraded, then it's true that artifacts are more visible and NHW is less efficient.

Any opinion welcome.

Cheers,
Raphael
Old 27th May 2021, 23:18   #36  |  Link
skal
Registered User
 
Join Date: Jun 2003
Posts: 132
Quote:
Originally Posted by nhw_pulsar View Post
Any opinion welcome.
Some suggestions:
Why don't you update http://nhwcodec.blogspot.com/ (and get rid of the old news from 2012?), and put the images you are referring to there, along with the command lines to reproduce your findings?

Next, you should release a tool to measure the 'neatness' of images (from PNGs, for instance), to give a better idea of what you are constantly alluding to.

my 2c.
Old 27th May 2021, 23:51   #37  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Hi Skal,

Thank you for your useful advice. Yes, you're totally right: I should make a big, major update of my old demo page with current results. I would like to tag my next release as version 0.2.0 (because I also need to update my version numbers...), and so this should be, as you rightly suggested, the occasion to make a new June 2021 demo page.

I tried to make a neatness metric, but it is unfinished for now; anyway, here is the GitHub: https://github.com/rcanut/NHW_Neatness_Metrics . It could give you a first measure of the neatness that, yes, I am constantly alluding to...
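
As a very rough idea of the kind of measure involved: one crude proxy for neatness is how hard the edges stay after coding, e.g. the maximum neighbor difference across a transition. This sketch is illustrative only, not the metric in the repository above:

```python
def edge_sharpness(row):
    """Maximum absolute neighbor difference in a pixel row -- a crude proxy
    for edge crispness: a hard edge keeps one large step, a blurred edge
    spreads it into many small ones."""
    if len(row) < 2:
        return 0.0
    return max(abs(b - a) for a, b in zip(row, row[1:]))

sharp_edge = [0] * 8 + [255] * 8                               # one hard transition
blurred = [0, 32, 64, 96, 128, 160, 192, 224, 255] + [255] * 7  # same edge, smeared
# edge_sharpness(sharp_edge) is much larger than edge_sharpness(blurred),
# even though both rows span the same total intensity range.
```

A real metric would of course aggregate this over the whole image and compare against the source, but the principle is the same.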

Many thanks.
Cheers,
Raphael
Old 28th May 2021, 00:46   #38  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 3,730
Quote:
Originally Posted by nhw_pulsar View Post
Just a quick message about VVC intra image compression, as I came across an image comparison (made 5 months ago, on 10 images) between VVenC 0.2.0.0 and AVIF 0.8.4 at high compression.

And in fact you were right, benwaggoner: VVenC is visually very good. AVIF has a better PSNR, but VVenC is clearly visually more pleasant (for me).

-Here is the link: https://www.reddit.com/r/VVC/comment...a_compression/ -
Yeah, the problem with the whole VPx series that AV1 inherited is that On2 had a PSNR-above-all ethos for their encoders, believing that was the key thing that would drive adoption of their codecs. And VP6 had a rich postprocessing system that did a good job of hiding PSNR-tuning artifacts by synthesizing grain and selective sharpening.

For software-developer centric cultures like Google and Facebook, it's really hard for them to internalize that there are NO reliable subjective quality metrics, and that the more we tune for the metrics we have, the more that metric's subjective correlation goes down. So libaom got tons of tuning around VMAF, on top of the PSNR basis of VPx, and so it has lots of subjective issues in VMAF's blindspots. VMAF can't distinguish between different adaptive quantization modes well, so it's of little use in tuning adaptive quantization!

ML is a nice tool to have for encoding optimization, but there's also lots of subject matter expertise that has to be embedded in the design in order to adapt well to the real-world variety of content.

x264 still beats VP9 in the real world because of that, even though it looks worse in a simple PSNR BD-rate comparison. x264 has tons of algorithms and tuning to adapt to real-world video attributes.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 28th May 2021, 10:22   #39  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Quote:
Originally Posted by skal View Post
Some suggestions:
Why don't you update http://nhwcodec.blogspot.com/ (and get rid of the old news from 2012?), and put the images you are referring to there, along with the command lines to reproduce your findings?
Hi Skal,

So I have started updating my demo page with a (much-needed) more recent one (and have also removed the old news from 2012).

I wanted to upload images that show the good results of NHW at the -l5 quality setting, but I must admit... that for now I mainly test with pictures of actors' and actresses' faces, because it is much easier to find high-quality pictures of that type, but I think they are copyrighted and so I am not allowed to publish them on my demo page...

Many thanks again.
Cheers,
Raphael
Old 28th May 2021, 16:09   #40  |  Link
nhw_pulsar
Registered User
 
Join Date: Apr 2017
Posts: 49
Quote:
Originally Posted by benwaggoner View Post
Yeah, the problem with the whole VPx series that AV1 inherited is that On2 had a PSNR-above-all ethos for their encoders, believing that was the key thing that would drive adoption of their codecs. And VP6 had a rich postprocessing system that did a good job of hiding PSNR-tuning artifacts by synthesizing grain and selective sharpening.

For software-developer centric cultures like Google and Facebook, it's really hard for them to internalize that there are NO reliable subjective quality metrics, and that the more we tune for the metrics we have, the more that metric's subjective correlation goes down. So libaom got tons of tuning around VMAF, on top of the PSNR basis of VPx, and so it has lots of subjective issues in VMAF's blindspots. VMAF can't distinguish between different adaptive quantization modes well, so it's of little use in tuning adaptive quantization!

ML is a nice tool to have for encoding optimization, but there's also lots of subject matter expertise that has to be embedded in the design in order to adapt well to the real-world variety of content.

x264 still beats VP9 in the real world because of that, even though it looks worse in a simple PSNR BD-rate comparison. x264 has tons of algorithms and tuning to adapt to real-world video attributes.
Yes, there seem to be no reliable subjective quality metrics, and visual quality also depends highly on the viewer's appreciation. I do think that for VVC intra there will be a consensus that it is visually excellent. But for example (and I am not joking): with PSNR metrics, I saw a BPG image with a PSNR of 38 dB and the same image with NHW at a PSNR of 30.3 dB, and visually I found the NHW image more pleasant...

But again, subjective quality is very subjective (!...): I decided to tune NHW for neatness, which I find visually more pleasant, and I was very surprised when another viewer told me he found NHW visually horrible (!)... mainly because it washes details out, and this viewer doesn't notice/value the neatness of my codec...

-Just a last remark concerning neatness with NHW: yes, there is a pre_processing/sharpening of the image in the encoder, but there is no sharpening in the decoder, so the NHW decoder is ultra-fast.-

Cheers,
Raphael