19th February 2024, 21:17 | #61
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,856
23rd February 2024, 18:55 | #62
Registered User
Join Date: Jul 2015
Posts: 761
The Intra Block Copy errors in the VVC decoder have been corrected, and the videos can now be played.
Converting VTM-encoded videos requires specifying the frame rate (-r) by hand, and playback still stutters a bit. https://github.com/ffvvc/FFmpeg/pull/198

The compiler still emits this warning: Code:
In function 'prepare_intra_edge_params_8',
    inlined from 'intra_pred_8' at extra/vvc_intra_template.c:627:5:
extra/vvc_intra_template.c:535:21: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  535 |             left[i] = top[i] = left[0];
      |             ~~~~~~~~^~~~~~~~~~~~~~~~~~
extra/vvc_intra_template.c: In function 'intra_pred_8':
extra/vvc_intra_template.c:625:21: note: at offset -105 into destination object 'edge' of size 3136
  625 |     IntraEdgeParams edge;
      |     ^~~~

Last edited by Jamaika; 23rd February 2024 at 18:58.
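For anyone trying the same conversion, the kind of invocation meant here looks something like this (just a sketch: the file names are placeholders, and it assumes the ffvvc build from the pull request above):
Code:
# a raw VTM/VVC elementary stream gives ffmpeg no frame rate to go on,
# so -r has to be supplied as an *input* option, before -i
ffmpeg -r 25 -i input.266 -c:v libx264 -crf 18 output.mp4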
17th March 2024, 06:41 | #63
Registered User
Join Date: Mar 2020
Posts: 130
Quote:
We are looking at an Apple M3, or a current top-tier x86 CPU, barely decoding 1080p FVC files at 30 fps with 100% CPU usage, and that is with a dav1d-level decoder (which we likely won't get). I don't expect this level of CPU performance to filter down to the low end any time soon, or even within the next 10 years. And we haven't even talked about 4K. So realistically, any use of FVC would require dedicated hardware acceleration. But of course, I hope I am utterly wrong.
__________________
Previously iwod
15th May 2024, 07:29 | #64
Registered User
Join Date: Mar 2020
Posts: 130
From April's meetings:
The rate reduction for natural sequences over VTM 11 in RA configuration for {Y, U, V} increased from ECM-11.0’s {-22.56%, -31.91%, -33.67%} to ECM-12.0’s {-24.01%, -33.20%, -35.34%}.
__________________
Previously iwod
29th July 2024, 08:19 | #65
Registered User
Join Date: Mar 2002
Posts: 864
I've been curious about ECM / H.267 for a while, but I could not find a working Windows build of this codec anywhere, and I don't want to beg for builds either. So, with some reluctance, I set up Visual Studio on my work PC (by the way, it's cool and all, but ~40 GB for an IDE feels a bit excessive). To my surprise, I could compile the ECM encoder without a hitch: no crashes, everything works as it should.
So I could finally try it myself. It is very slow, even slower than the AV2 encoder, so much so that testing it is kinda difficult (a 100-frame 720p clip took about 14 hours to finish). But the quality/efficiency is very impressive indeed. Nothing really comes close to it, especially at lower bitrates. It easily outclasses AV2 in its current stage.

Last edited by Tommy Carrot; 29th July 2024 at 08:23.
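For anyone else who wants to try: ECM builds with CMake much like VTM does. A minimal sketch (the repository URL and generator name are from memory, so double-check them against the JVET git):
Code:
git clone https://vcgit.hhi.fraunhofer.de/ecm/ECM.git
cd ECM
mkdir build && cd build
# pick the generator matching your Visual Studio version
cmake .. -G "Visual Studio 17 2022"
cmake --build . --config Release
# encoder/decoder binaries end up under bin/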
1st August 2024, 14:24 | #66
Artem S. Tashkinov
Join Date: Dec 2006
Posts: 374
Quote:
https://github.com/fraunhoferhhi/vvenc/discussions/389

I'm not a fan of how VVC has been deployed so far.

Last edited by birdie; 1st August 2024 at 17:47.
1st August 2024, 16:53 | #67
Registered User
Join Date: Jun 2024
Location: South Africa
Posts: 36
1st August 2024, 17:49 | #68
Artem S. Tashkinov
Join Date: Dec 2006
Posts: 374
Quote:
https://github.com/fraunhoferhhi/vvenc/discussions/388
1st August 2024, 18:08 | #69
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,856
Quote:
Are you comparing AV1 and VVC at the same bitrates? I'd expect VVC to do somewhat better than AV1 at this in general. Of course, AV1 encoders are a lot more mature at this point.

Reencoding from a source that already has video encoding artifacts is always a tricky challenge. Generally, the more similar the codecs are, the better; reencoding from H.264 to HEVC is generally cleaner than from H.264 to VP9, as HEVC can fall back to pretty much symmetrically encoding the input pixels. Similarly, I'd expect VP9 to reencode better to AV1 than to VVC, as they share so much common architecture (but I've not tested that).

As a best practice, cleaning up artifacts before reencoding is preferred. Garbage in is always at least as much garbage out, and often worse than that. The bitrates required to not come out worse than a source already encoded for distribution are often as high as or higher than the original bitrate anyway.

Reference and enterprise encoders are always tested and tuned on uncompressed sources, as that's what premium content has. Rest assured, CrunchyRoll's sources don't have those kinds of artifacts! Given Google's heavy involvement in AV1 encoder development and its use via YouTube, it would make absolute sense that libaom was tuned to handle artifacts typical of user-generated content, not just professional mezzanines or uncompressed test sources.

Last edited by benwaggoner; 1st August 2024 at 18:23.
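For a matched-bitrate test, something along these lines works (just a sketch: the file names are placeholders, and flag spellings vary a bit between ffmpeg/vvencapp versions):
Code:
# AV1 via SVT-AV1 at a fixed 800 kbps
ffmpeg -i source.y4m -c:v libsvtav1 -b:v 800k av1_800k.mkv

# VVC via vvencapp at the same rate (vvenc takes bits per second)
vvencapp -i source.y4m --preset medium -b 800000 -o vvc_800k.266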
2nd August 2024, 12:48 | #70
Registered User
Join Date: Jun 2024
Location: South Africa
Posts: 36
Memory wasn't the best, so I ran a quick test again. I would say that, at low bitrates, AV1 and VVC are on the same footing, but they could be trading one artifact for another, and that will be subjective. What's certain, though, is that the bad source was "cleaned up," leading to a better picture. At higher bitrates, both preserve the artifacts of the source, leading to a worse picture. So this supports your suggestion that higher QPs, along with concealment, are saving the day. (It could also be that denoising, if present, is scaled back as the bitrate increases.)
The source is x264-encoded, 5 Mbps, 480p anime, supposedly taken straight from the Blu-ray; it appears to be from the same master used for the DVD releases. It is not in the best shape: soft, with some subtle ringing, I'd say. I agree that one should clean up artifacts before encoding, but in this case AV1 killed two birds with one stone: more compression and a better picture (up to a limit)! Generally, though, I find that VVC is slightly ahead of AV1 and a touch sharper.
3rd August 2024, 20:04 | #71
Registered User
Join Date: Jun 2024
Location: South Africa
Posts: 36
Looks like there is some sort of denoising after all: MCTF. See birdie's link above as well as the following, p. 16:
https://ieeexplore.ieee.org/stamp/st...number=9503377
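For reference, the reference encoders expose MCTF as a normal encoder option; in VTM it looks roughly like this (the option name is from memory out of the VTM configs, so treat this as a sketch):
Code:
# enable the GOP-based motion-compensated temporal pre-filter in VTM
EncoderApp -c encoder_randomaccess_vtm.cfg \
           -i input.yuv -wdt 1920 -hgt 1080 -fr 24 -f 100 \
           --TemporalFilter=1 -b out.266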
3rd August 2024, 20:15 | #72
Broadcast Encoder
Join Date: Nov 2013
Location: Royal Borough of Kensington & Chelsea, UK
Posts: 3,020
Quote:
Anyway, from there our own mezzanine file was created (an H.264 1920x1080 level 4.1 4:2:0 8-bit + AAC in mp4, with a very, very high bitrate and almost always 23.976p) and then sent to the distribution encoder (along with the .ass subtitles), which would create the renditions automatically for the various resolutions and bitrates to populate the CDN. At that point, the publishing team would test those on the various devices to make sure everything was fine before the content was "unlocked" automatically at the scheduled time.

It feels like a lifetime ago, considering that I abandoned the streaming sector a long time ago to dedicate myself to linear broadcasting, and I've been working at Sky for 8 and a half years.
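Out of curiosity, in ffmpeg terms such a mezzanine encode would look roughly like this (a sketch with made-up file names and settings, not the actual pipeline described above):
Code:
# near-transparent H.264 1080p level 4.1 4:2:0 8-bit mezzanine + AAC in mp4
ffmpeg -i master.mxf \
       -c:v libx264 -preset slow -crf 10 -profile:v high -level:v 4.1 \
       -pix_fmt yuv420p -r 24000/1001 \
       -c:a aac -b:a 320k \
       mezzanine.mp4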
Quote:
MPEG's goal for H.266 VVC was to be at the very least 35% more efficient than H.265 HEVC (with 40% being the target), and in general VVenC tests show it to be around 13% more efficient than AV1. Mind you, VVenC is just 4 years old.

Last edited by FranceBB; 3rd August 2024 at 20:20.
5th August 2024, 15:31 | #73
Registered User
Join Date: Jun 2024
Location: South Africa
Posts: 36
Quote:
Regarding AV1 and softness, I think future codecs will increasingly take this path, because that is what many people seem to regard as high quality these days. For my part, I find it regrettable.
5th August 2024, 23:08 | #74
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,366
Quote:
Of course, the alternative to softness is blocking artifacts, like H.264 was famous for. I'd rather have softness than blocking artifacts. That's why in many cases people instantly preferred HEVC at very low bitrates: a soft image is more watchable than a blocky H.264 encode. The real solution is more bitrate, but we aren't getting that.

Of course, that's the same reason new codecs may appear a bit sharper again - at the same bitrate you get a bit more quality, so there is less need to reduce detail, i.e. the result is less soft.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 5th August 2024 at 23:10.
6th August 2024, 12:28 | #75
Registered User
Join Date: Mar 2020
Posts: 130
Quote:
Right now, ECM is showing about a 25% BD-rate gain over VTM, and up to 50% for text and graphics with motion. It also increases decoding complexity by about 8x, the highest we have seen in any codec generation. I am just not entirely sure how this will work on mobile. Maybe we could use it with LCEVC at 720p to reconstruct 1080p files?

Using VVC + LCEVC, which is what Brazil's TV 3.0 is going to be using in 2025, is proving to be high quality and extremely resource-efficient. I remember that earlier this year the Brazilian government and a university published a paper showing anywhere from -10% (i.e. using more bitrate) to a 60% reduction [1] compared to VVC alone. And this is with early-stage VVC and LCEVC encoders. Hopefully we will have other encoders to play with soon. I hope there will be a Beamr for VVC.

[1] Somewhat interestingly, LCEVC is developed by V-Nova, a British company, and true to British fashion they absolutely downplay the potential of the tech, which is quite amusing to me. XD
__________________
Previously iwod
6th August 2024, 17:23 | #76
Registered User
Join Date: Jun 2024
Location: South Africa
Posts: 36
8th August 2024, 21:15 | #77
Moderator
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,856
Quote:
It's more that the more advanced the codec, the better in-loop artifact reduction tools it has. Before in-loop deblocking, once a reference frame hit too high a QP, quality was trashed for the rest of the GOP, and you had shorter max GOP durations. Once you could get soft instead of blocky, future frames that referenced a high-QP frame could spend bits adding detail instead of trying to erase the erroneous detail of blocks. HEVC added SAO, which does similar stuff for ringing artifacts. VVC is able to do much better with motion vector artifacts than prior codecs, allowing high-QP prediction to not look as artificial.

These sorts of tools all shift codecs towards having a high QP result in just loss of detail, instead of detail loss with the introduction of wrong detail (a la MPEG-2, where 8x8 block patterns often became painfully visible). With a good encoder, that means it takes fewer bits to hit a certain level of high quality. For noisy content, it can also mean that it's not feasible to save all that many bits over prior codecs without losing some detail.

It also means that bitrates can be pushed much lower without introducing distracting artifacts. If a broadcaster is using fixed RF bandwidth, adding more, softer channels makes compelling economic sense, as you can get away with a lot more compression before customers start complaining. And even for IP streaming, bandwidth costs are a big part of the total cost of the business, so reducing them can help the bottom line a lot. Still, it's not like we were getting artifact-free high detail from those sectors before; it's pretty much premium content delivered over IP where the economics made sense for using enough bits to look consistently good.

I'll take softer over soft-with-blocks any day. Although the psychovisual factors there can get complex; the added high frequencies of DCT artifacts in MPEG-2 offered a certain "sizzle" that some customers actually preferred over the uncompressed source.

Another reason we can see early-development encoders tend towards softness is that PSNR is intrinsically biased in that direction with per-frame QP. Encoders need psychovisually optimized adaptive quantization to lower QP in flatter areas to provide a more subjectively balanced encode. SSIM and VMAF are only a little better at properly accounting for the value of preserving detail in low-detail areas.
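As a concrete illustration of that last point, production encoders expose this as adaptive-quantization switches; in x265, for example (the values here are illustrative, see its docs for what the modes do):
Code:
# psychovisually tuned AQ: spend more bits (lower QP) in flat areas
x265 --input source.y4m --crf 20 \
     --aq-mode 2 --aq-strength 1.0 --psy-rd 2.0 \
     --output out.hevc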
Tags: iso, itu, successor, vvc