Doom9's Forum > Video Encoding > High Efficiency Video Coding (HEVC)
Old 14th December 2021, 19:53   #21  |  Link
tonemapped
Video Fanatic
 
 
Join Date: Jul 2021
Location: Surrey
Posts: 89
Quote:
Originally Posted by rwill View Post
You know, Blu-Rays are copy protected and are not meant to be used as a source for encoding the content further. So I guess keep your disc ?
Why are you registered if that's your primary view? It's legal to make a copy of owned content.

Quote:
Originally Posted by rwill View Post
You also seem to have a basic misunderstanding of data processing. Please read this Wikipedia article:

Garbage in, garbage out
I didn't create this thread to debate VC-1 vs H.264 quality. I'm not sure where you got that impression from.

Quote:
Originally Posted by rwill View Post
As I wrote previously: 'My guess is that most bits are spent on compression artifacts. The Grain is stuck to the hair in this "Source" already, an encoder cannot magically unstuck it.'
I'm more than aware that encoding it isn't going to remove defects on the source. Again, that's not the problem - the problem is x265 having poor handling compared to x264 when more complex grain is involved. Based on your test file of the same scene, you've simply stripped all the grain from it. That's not a solution.

Quote:
Originally Posted by rwill View Post
Now x265 is not among the best HEVC encoders available, but it is not really bad either - I mean, at least it's free. I think using crappy sources like yours further prevents proper detail retention with x265.
This is a problem I've encountered on a number of Blu-ray sources, including the German remastered Charmed collection (which is known for its excellent quality).

Why won't you name the encoder you claim you've created? The file you shared shows GPAC. Are you claiming you created that?

Quote:
Originally Posted by rwill View Post
The quality issue you try to solve is really negligible if one takes the state of the source material into account and what an encoder can still do given such input.
The issue I'm trying to solve is x265's poor handling of grain, compared to x264 at the same bitrate.

Quote:
Originally Posted by rwill View Post
I only posted the output of a different encoder to prove that it can be done in higher 'quality' compared to x265.
Which encoder?

Quote:
Originally Posted by rwill View Post
Proving that something can be done better is very important in compression for some people. It differentiates between realistic and unrealistic demands in the ever present request for higher compression efficiency.
And the relevance of that statement is...?
__________________
PC: R9 5900X | 32GB 3600 MT/s RAM | 2*1TB NVMe | RTX 3080 | water-cooled

NAS: SM 48-bay 240TB+ storage | Xeon 1220 | 32GB DDR4 ECC

HTPC: Pentium J5005 | 16GB RAM | 256GB SSD | 15W

Last edited by tonemapped; 14th December 2021 at 20:27.
Old 14th December 2021, 20:08   #22  |  Link
tonemapped
Video Fanatic
 
Quote:
Originally Posted by Boulder View Post
At lower bitrates, you are better off with aq-mode 2 and SAO (maybe selective SAO could be used). Yes, SAO does eat grain for breakfast but you can't have both detail and a low bitrate if the source is not easy to compress. HEVC blurs while AVC creates blocking when the bitrate is not enough to keep the details. Blocking may look better in motion, depending on your viewing distance etc.

A lot of people think that HEVC is better because it is newer, but in fact the use cases are more or less related to being able to create a watchable video at a low bitrate at higher resolutions, at least for x265. You can easily see how most of the actual development is done on that part, and that is why I've always taken the developers' fancy new things with a grain of salt. I myself use x265 because of HW support for 10bit encodes and of course HDR encodes pretty much require using it.
I'm happy to use CRF, but used 6mbps for the sake of comparison - or do you mean low-bitrate sources? The annoying thing is, the majority of the encode looks great with grain at ~5mbps - apart from those types of scenes where the grain 'clings' to moving objects. This is not a source problem, but something x265 fails at (even encoding 2-pass, very slow, at 100mbps doesn't get rid of the issue), or something I'm doing wrong.

I don't use x265 for everything, but for less noisy sources it's excellent. Even with some noisy sources it's not bad. Sex and the City is one of the worst 'remasters' I've ever seen, with the usual trick of adding a thick layer of noise to help with perceived quality. Using slow (with a few changes), CRF 23, the file produced is 1.24GB (5,910 kb/s) and looks great.
Old 14th December 2021, 20:34   #23  |  Link
Boulder
Pig on the wing
 
 
Join Date: Mar 2002
Location: Finland
Posts: 5,752
Quote:
Originally Posted by tonemapped View Post
I'm happy to use CRF, but used 6mbps for the sake of comparison - or do you mean low-bitrate sources? The annoying thing is, the majority of the encode looks great with grain at ~5mbps - apart from those types of scenes where the grain 'clings' to moving objects. This is not a source problem, but something x265 fails at (even encoding 2-pass, very slow, at 100mbps doesn't get rid of the issue), or something I'm doing wrong.

I don't use x265 for everything, but for less noisy sources it's excellent. Even with some noisy sources it's not bad. Sex and the City is one of the worst 'remasters' I've ever seen, with the usual trick of adding a thick layer of noise to help with perceived quality. Using slow (with a few changes), CRF 23, the file produced is 1.24GB (5,910 kb/s) and looks great.
I mean low-bitrate encodes. Of course, that's pretty much a subjective thing as it depends on how well the source compresses. 6 Mbps might not be that much for 1080p - for example, I'm currently encoding the LOTR movies and it looks like The Two Towers will end up at around 8 Mbps at CRF 19 using my lightly denoising script. If it had an AR of 1.78:1, it would require much more.

Having said that, I still suggest testing some very slight denoising to combat your issues.
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Old 15th December 2021, 17:11   #24  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,819
Some things to try when struggling with fine grain content in x265:
  1. Use --nr-inter to reduce grain swirling and bitrate consumption. It can go up to 2000; don't be afraid to test high values.
  2. Raise --nr-intra to where I frames don't have visibly more detail
  3. Reduce --ipratio and --pbratio for less frame strobing
  4. Raise psy-rdoq to retain a more even appearance of grain detail
  5. Reduce --aq-strength to keep grain detail more consistent across the frame
  6. Use --aq-mode 4 instead of 2
  7. Use --rskip 2 instead of 1, with a lower threshold. The original rskip was terrible with grain
  8. Try --rd 4 if --rd 6 isn't working well
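
Put together, those suggestions might translate into something like the following sketch; every value here is only an illustrative starting point to A/B test against your source, not a tuned recommendation, and --rskip-edge-threshold (the mode-2 threshold knob) requires a reasonably recent x265 build:

```sh
# Sketch only: starting values for grain-heavy content, to be tuned per source.
x265 --input source.y4m --output test.hevc \
  --preset slow --crf 20 \
  --nr-inter 400 --nr-intra 100 \
  --ipratio 1.25 --pbratio 1.15 \
  --psy-rdoq 3.0 \
  --aq-mode 4 --aq-strength 0.8 \
  --rskip 2 --rskip-edge-threshold 2 \
  --rd 4
```

Change one parameter at a time on a short, representative sample so any difference can be attributed to it.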
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
Old 16th December 2021, 19:30   #25  |  Link
tonemapped
Video Fanatic
 
Quote:
Originally Posted by Boulder View Post
I mean low-bitrate encodes. Of course, that's pretty much a subjective thing as it depends on how well the source compresses. 6 Mbps might not be that much for 1080p - for example, I'm currently encoding the LOTR movies and it looks like The Two Towers will end up at around 8 Mbps at CRF 19 using my lightly denoising script. If it had an AR of 1.78:1, it would require much more.

Having said that, I still suggest testing some very slight denoising to combat your issues.
That's an impressive bitrate for a film with some complex scenes. What's the basis of your denoising script?

Quote:
Originally Posted by benwaggoner View Post
Some things to try when struggling with fine grain content in x265:
  1. Use --nr-inter to reduce grain swirling and bitrate consumption. It can go up to 2000; don't be afraid to test high values.
  2. Raise --nr-intra to where I frames don't have visibly more detail
  3. Reduce --ipratio and --pbratio for less frame strobing
  4. Raise psy-rdoq to retain a more even appearance of grain detail
  5. Reduce --aq-strength to keep grain detail more consistent across the frame
  6. Use --aq-mode 4 instead of 2
  7. Use --rskip 2 instead of 1, with a lower threshold. The original rskip was terrible with grain
  8. Try --rd 4 if --rd 6 isn't working well
  • I find intra-frame noise reduction produces horrible results no matter what I do (and change elsewhere). I conducted a test with inter-frame denoising at an absurd value (1000) and it produced a really great result. I also conducted a test with inter-frame noise reduction at 75 and SAO enabled with selective SAO at 1. That also produced a good result. Both were at CRF 21.
  • Using intra-frame NR seems to produce blurred results with a lot of detail lost (not just the perceived detail grain/noise offers). Is this something you've experienced?
  • I generally always reduce the ipratio and pbratio to 1.30 and 1.20, or if a complex source then 1.2 and 1.1 (or thereabouts).
  • I've tried psy-rdoq values from the default 1.00 up to 30.00 and, perhaps it's my eyes, not really seen much difference between them - and that's with rskip on 2.
  • Oddly, I've found an AQ of 1 producing the best results with x265, with 3 and 4 prone to a strobing effect. Not sure how else to describe it. I usually set AQ strength to 0.80 for most content, and then ~0.60 for very noisy content. However, it appears AQ acts very differently on x265 compared to x264 and I've not managed to find a paper or even a simple answer on how it works differently etc.
  • I'll try changing RD to 4. Have you experienced content where 4 produces a better result compared to 6, and if so, why do you think that occurred?

I've replied with bullet points for clarity - I don't want it to be seen as rudeness.
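
On the ipratio/pbratio values above: as I understand x264/x265 rate control, these are qscale ratios, and since qscale doubles roughly every 6 QP, a ratio r corresponds to about 6·log2(r) QP between frame types. A quick back-of-the-envelope check (the mapping is my reading of the docs, so treat the figures as approximate):

```shell
# Approximate QP gap implied by a qscale ratio r: delta_QP ~ 6*log2(r).
# The x265 default --ipratio 1.40 puts I-frames ~2.9 QP below P-frames;
# lowering it to 1.20 narrows that gap to ~1.6 QP, which is why smaller
# ratios reduce I-frame "pulsing" on grainy content.
for r in 1.40 1.30 1.20 1.10; do
  awk -v r="$r" 'BEGIN { printf "ratio %.2f -> ~%.1f QP\n", r, 6 * log(r) / log(2) }'
done
```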

Last edited by tonemapped; 16th December 2021 at 19:31. Reason: Fixed bullet points
Old 16th December 2021, 22:11   #26  |  Link
tonemapped
Video Fanatic
 
Since I managed to get the Sex and the City boxset for a bargain price, here's what some testing produces. The idea was not to be transparent since the Blu-ray isn't great, but for it to be 'close enough' at a much reduced file size.

I'm curious if anyone can tell which is which - it should be obvious. This was one of the worst examples I could find (in terms of quality). Since the grain/noise is 'bigger' on this, I'm using --aq-mode 3, which seems to work better. For finer grain, AQ 1 (or disabled) seems the best.

One is taken from the Blu-ray disc and is just over 6.31GB at 1920*1080. The other is the x265 encode and is 1.40GB at 1440*1080 (screenshots using spline36 to resize to 1920*1080). The screenshot from the encoded file is a b-frame.

Quote:
--crf 23 --preset slow --output-depth 10 --profile main10 --rd 5 --psy-rd 2.45 --psy-rdoq 3 --rskip 2 --no-rect --aq-mode 3 --aq-strength 0.7 --nr-inter 75 --ipratio 1.35 --pbratio 1.25 --subme 6 --bframes 8 --rc-lookahead 120 --deblock -4:-4 --no-sao --no-strong-intra-smoothing




Last edited by tonemapped; 16th December 2021 at 22:18. Reason: Added x265 params
Old 17th December 2021, 09:05   #27  |  Link
Boulder
Pig on the wing
 
Quote:
Originally Posted by tonemapped View Post
That's an impressive bitrate for a film with some complex scenes. What's the basis of your denoising script?
I can share the denoiser function after some cleanup. It's just similar stuff that many others do: 1) predenoise the clip used for motion search, 2) do analysis on that clip and 3) use MDegrain on the sharpened original clip for limited motion compensated denoising to avoid excessive detail loss. Does slight spatial denoising where block matching is bad (usually blocks with lots of motion).



The final average bitrate for TTT was around 8.1 Mbps. However, I must stress that there is something wrong with CTU 64 at 1080p resolution, at least when it comes to CRF encodes (EDIT: and rskip mode 2). I did a color-corrected encode of FOTR EE (cropped resolution 1920x800) and started wondering why the flat backgrounds don't look good at all in motion. I then compared CTU 64 and 32 on a shorter sample from the movie and the filesize difference was very big. I'd already noticed this earlier but thought it only happened with --limit-tu 0. I am quite sure that the encoder makes some stupid decisions with CTU 64 and rskip 2, because the avg QP differs a lot while the frame types are of course the same. I really need to test whether it applies to >1080p resolutions as well. Hope not, because it would mean some more rework for me.

https://forum.doom9.org/showthread.p...47#post1919347
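
For anyone wanting to reproduce this kind of comparison, a minimal A/B sketch (sample.y4m stands in for a short clip cut from the movie; only --ctu changes between runs):

```sh
# Encode the same sample twice, changing only the CTU size, then compare sizes.
for ctu in 64 32; do
  x265 --input sample.y4m --crf 19 --preset slow --rskip 2 \
    --ctu "$ctu" --output "ctu${ctu}.hevc"
done
ls -l ctu64.hevc ctu32.hevc
```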

Last edited by Boulder; 17th December 2021 at 20:47.
Old 17th December 2021, 09:08   #28  |  Link
Boulder
Pig on the wing
 
And forget the exotic aq-modes, they just don't work well with regular encodes. I'd say modes 1 or 2 are the ones to use, and 1 preferred in most cases. The functionality was ported from x264 and I don't think the devs have done any real work on them. That's why they may not work like in x264 because the environment is not the exact same.
Old 17th December 2021, 21:09   #29  |  Link
Boulder
Pig on the wing
 
I've pasted my function here: https://pastebin.com/0KegSZiR - I suppose you know how to handle Avisynth functions. If you don't have it already, I strongly recommend the latest official Avisynth+ build by pinterf and then possibly his latest test build from the Avisynth+ thread in the Avisynth Development section. Chances are that some of the dependencies require that.

Dogway's filter packs are used by the function, you'll probably need at least ExTools, TransformsPack, SPresso and SMDegrain. https://forum.doom9.org/showthread.php?t=182881
Get pinterf's MVTools and DFTTest as well.

What I usually do is:
1) Use debug=1 to select the prefiltering (adjusted by prefilter=1, 2 or 3) on the motion search clip. This depends on the amount of noise and grain as you want to have a rather clean clip to do the analysis on.

2) Use debug=6 to set thscd1 and thscd2 based on the amount of blocks it detects as "changed too much". In the upper left corner, if the bottom value exceeds thscd2, it triggers a scene change in the denoising. This must be set for each source as they differ a lot. Thscd1 affects which blocks are detected as "changed too much".

3) Finally, debug=2 to set limit (limits MDegrain) and blurlimit (limits spatial blur), it interleaves between the original and denoised frame. Debug=3 is the same but it shows the differences in luma amplified. Debug=4 does the same for chroma.

The defaults are set so that the denoising is very slight as I like to keep the grain. Still, it can easily shave off a nice amount of bits from the final encode without actually being noticable.

For example, my ROTK script looks like this (on a 3900X):
Code:
part1=DGSource("M:\lotr_rotk_part1.dgi",ct=144,cb=144,cl=4,cr=4).Trim(0,183541)
part2=DGSource("M:\lotr_rotk_part2.dgi",ct=144,cb=144,cl=4,cr=4).Trim(24,0)
part1+part2
MDG(limit=0.1, prefilter=1, blurlimit=0.3, thscd1=420, thscd2=90)
Neo_f3kdb(preset="medium", grainy=0, grainc=0, output_depth=16, sample_mode=4, mt=false)
AddBorders(8,0,8,0)
Prefetch(threads=24, frames=12)

Last edited by Boulder; 19th December 2021 at 18:00.
Old 19th December 2021, 19:23   #30  |  Link
tonemapped
Video Fanatic
 
Quote:
Originally Posted by Boulder View Post
... However, I must stress that there is something wrong with CTU 64 and 1080p resolution at least when it comes to CRF encodes (EDIT: and rskip mode 2). I ... started wondering why the flat backgrounds don't look good at all in motion. I then compared CTU 64 and 32 on a shorter sample from the movie and the filesize difference was very big.

https://forum.doom9.org/showthread.p...47#post1919347
Genuinely fascinating. I've also noticed that CTU 64 vs CTU 32 for 1080p encodes produces results I perceived as odd, but I thought it was normal and perhaps an oddity of x265. I've found in tests that with 3-pass (slow first pass) encodes at 7,000 kbps in 720p, backgrounds look more natural than in 1080p encodes at 16,000 kbps. This is on a source with very little noise, where backgrounds should be clean with no banding at the bitrates used.

What are your results from n-pass tests?
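
For reference, the 3-pass setup described above goes roughly like this, as I understand x265's multipass flags (filenames are placeholders; --pass 3 reads and updates the stats file before the final --pass 2 encode):

```sh
# 3-pass ABR sketch: pass 1 writes stats, pass 3 refines them, pass 2 encodes.
common="--input source.y4m --bitrate 7000 --preset slow --slow-firstpass"
x265 $common --pass 1 --stats pass.log --output /dev/null
x265 $common --pass 3 --stats pass.log --output /dev/null
x265 $common --pass 2 --stats pass.log --output final.hevc
```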


Quote:
Originally Posted by Boulder View Post
I've pasted my function here: https://pastebin.com/0KegSZiR - I suppose you know how to handle Avisynth functions.
I do.

Quote:
Originally Posted by Boulder View Post
Dogway's filter packs are used by the function, you'll probably need at least ExTools, TransformsPack, SPresso and SMDegrain. https://forum.doom9.org/showthread.php?t=182881
Get pinterf's MVTools and DFTTest as well.
I believe I have most of those, although SMDegrain is having issues on Windows 11 Pro (for me). No idea why.

Quote:
Originally Posted by Boulder View Post
What I usually do is:
1) Use debug=1 to select the prefiltering (adjusted by prefilter=1, 2 or 3) on the motion search clip. This depends on the amount of noise and grain as you want to have a rather clean clip to do the analysis on.

2) Use debug=6 to set thscd1 and thscd2 based on the amount of blocks it detects as "changed too much". In the upper left corner, if the bottom value exceeds thscd2, it triggers a scene change in the denoising. This must be set for each source as they differ a lot. Thscd1 affects which blocks are detected as "changed too much".

3) Finally, debug=2 to set limit (limits MDegrain) and blurlimit (limits spatial blur), it interleaves between the original and denoised frame. Debug=3 is the same but it shows the differences in luma amplified. Debug=4 does the same for chroma.

The defaults are set so that the denoising is very slight as I like to keep the grain. Still, it can easily shave off a nice amount of bits from the final encode without actually being noticable.

For example, my ROTK script looks like this (on a 3900X):
Code:
part1=DGSource("M:\lotr_rotk_part1.dgi",ct=144,cb=144,cl=4,cr=4).Trim(0,183541)
part2=DGSource("M:\lotr_rotk_part2.dgi",ct=144,cb=144,cl=4,cr=4).Trim(24,0)
part1+part2
MDG(limit=0.1, prefilter=1, blurlimit=0.3, thscd1=420, thscd2=90)
Neo_f3kdb(preset="medium", grainy=0, grainc=0, output_depth=16, sample_mode=4, mt=false)
AddBorders(8,0,8,0)
Prefetch(threads=24, frames=12)
Thank you for sharing your snippet. What's your view of something like TemporalDegrain2 instead of MDegrain, and perhaps GradFun3 instead of neo_f3kdb? Certainly with the latter I see better results on noisy sources.
Old 19th December 2021, 20:13   #31  |  Link
Boulder
Pig on the wing
 
Quote:
Originally Posted by tonemapped View Post
Genuinely fascinating. I've also noticed that CTU 64 vs CTU 32 for 1080p encodes produces results I perceived as odd, but I thought it was normal and perhaps an oddity of x265. I've found in tests that with 3-pass (slow first pass) encodes at 7,000 kbps in 720p, backgrounds look more natural than in 1080p encodes at 16,000 kbps. This is on a source with very little noise, where backgrounds should be clean with no banding at the bitrates used.

What are your results from n-pass tests?
I've never properly tested multipass encoding, IIRC, since I don't do any of that for production use. I would expect that it still shows the same symptoms. My earlier findings were that the encoder favours merges a lot when CTU is 64 and you use rskip mode 2.

Quote:
Thank you for sharing your snippet. What's your view of something like TemporalDegrain2 instead of MDegrain, and perhaps GradFun3 instead of neo_f3kdb? Certainly in the latter I see better results with noisy sources.
As far as I know, TemporalDegrainV2 also does similar things, i.e. MDegrain is used there too. I've noticed that neo_f3kdb does very slight debanding with my settings so I've kind of settled on that. Strong settings can easily start removing detail from the image.
Old 20th December 2021, 08:38   #32  |  Link
tonemapped
Video Fanatic
 
Off topic, but ...

Currently running three x265 jobs - including one testing TemporalDegrain2 as a substitute (as mentioned above). Still amazed after going from a 4C/8T 6700K @ ~4.5 GHz (overclocked) to a 12C/24T 5900X at ~4.3 GHz (stock). The new CPU also runs 20°C cooler (although it does have a 360mm radiator)!



Boulder, the reason I've gone off topic is because you mentioned you also have a 12C CPU. What sort of thread/pool allocation do you tend to use with x265? In x264 it's easy to work out, but with x265 it seems that the extra threads really hurt quality if allocated to a single job, hence running three jobs at the same time. I've done tests on a 32C/64T server at work, but that's different as the default setup is four simultaneous jobs.
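
Roughly, my three-job setup looks like the sketch below, using --pools to cap each job's worker pool (the thread counts are illustrative starting points, not tuned values; filenames are placeholders):

```sh
# Three concurrent jobs, each capped to an 8-thread pool so they don't
# oversubscribe a 12C/24T CPU.
for n in 1 2 3; do
  x265 --input "job${n}.y4m" --crf 21 --preset slow \
    --pools 8 --output "job${n}.hevc" &
done
wait
```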
Old 20th December 2021, 11:29   #33  |  Link
Boulder
Pig on the wing
 
Quote:
Originally Posted by tonemapped View Post
Boulder, the reason I've gone off topic is because you mentioned you also have a 12C CPU. What sort of thread/pool allocation do you tend to use with x265? In x264 it's easy to work out, but with x265 it seems that the extra threads really hurt quality if allocated to a single job, hence running three jobs at the same time. I've done tests on a 32C/64T server at work, but that's different as the default setup is four simultaneous jobs.
That is an interesting question indeed. I've not encountered any quality issues with the default settings - I believe the biggest thing with faster presets is that they use lookahead slices which can affect frame type decisions negatively. The slower ones run just one slice.

I've also tested -F 1 to -F 4 and have not noticed any negative impacts by running four frame threads as per default. I think it may affect reference frame decisions negatively, but nothing that I could see with my own eyes (frame by frame or in motion).

So in short: I just run with the default settings and also assign 24 threads to Avisynth, so the total CPU usage is usually 80-99%.
Old 20th November 2022, 00:26   #34  |  Link
funskel
Registered User
 
Join Date: May 2018
Posts: 3
Quote:
Originally Posted by benwaggoner View Post
A cool thing is that the degraining process, metadata, and grain synthesis tech is all codec agnostic. SoC vendors are implementing it generically, so any AV1 decode capable HW should be able to use the same technique with HEVC, VVC, AV2, etc.
The technique is codec agnostic, but AV1 patents are licensed only for implementations of AV1. I haven't found documentation of what patents are involved, but if ATEME's film grain patent is one (they are an AOM member), it might require a separate license to use it with another codec.
Old 31st December 2022, 20:41   #35  |  Link
BuccoBruce
Registered User
 
Join Date: Apr 2022
Posts: 28
Quote:
Originally Posted by funskel View Post
The technique is codec agnostic, but AV1 patents are licensed only for implementations of AV1. I haven't found documentation of what patents are involved, but if ATEME's film grain patent is one (they are an AOM member), it might require a separate license to use it with another codec.
Interesting. I have yet to see it implemented in a video file, though, and other than my own toying around with AOMenc's "out of band" grain synthesis tools, I don't think I've seen anyone mess with this in any meaningful way.

x265 added support for side-band grain synthesis data a while back, and multicoreware released their own utility for generating noise data.

Technically it was in the spec for H.264 too...and it looks like now it's MPEG-C/9

It looks like ffmpeg might have decode support for film grain synthesis SEI messages in HEVC bitstreams as well? https://lists.ffmpeg.org/pipermail/f...st/128333.html

ITU calls it "H.274" now as well.

TL;DR Grain synthesis sounds cool, I've played with it before with AV1, and it's still got a long way to go

Back to HEVC and grain synthesis: at least with x265, the immediate observations comparing it with the aforementioned reference/test tool that you can compile alongside AOMenc are that:
  • AOMEnc's grain tool actually has you supply two .yuv files - one with grain, and one without
  • It generates the grain parameters based on the difference between the two, which means degraining the video is on you
  • This mcw utility does the de-graining for you and generates a binary grain model file and...
  • I assume you're then meant to encode the de-grained "output.yuv" file it also produces, but there's technically nothing stopping you from telling it to send output.yuv to NUL, and then de-graining the source how you prefer and just encoding that while still supplying the generated grain model to x265 to then re-inject the grain parameters
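
If it helps anyone follow along, the AOM-side two-file workflow above goes roughly like this (flag names are from libaom's noise_model example tool as I remember them, so double-check against your build; paths are placeholders):

```sh
# Estimate grain parameters from a (grainy, your-own-degrained) pair, then
# encode the degrained file while attaching the generated grain table.
./noise_model --input=grainy.yuv --input-denoised=clean.yuv \
  --width=1920 --height=1080 --i420 \
  --output-grain-table=grain.tbl
aomenc --width=1920 --height=1080 --end-usage=q --cq-level=30 \
  --film-grain-table=grain.tbl -o out.ivf clean.yuv
```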

On paper, I liked how AOM's test implementation sounded, since it meant I could use an obnoxious chain of AviSynth filters to degrain in a way that looked pleasing to me. It also meant I could "clean up" the grainy source a little first without actually doing any de-graining - for example, trying to remove any macroblocking introduced by compressing grain from 80+ year old film at 15 Mbps using AVC - and use the difference between that and the "fully de-grained" input .yuv to "generate" the grain parameters. In practice? I only tested it once, but the grain synthesis parameters AOM's tool generated were... not great.

Speaking of grain that's compressed so much that it's turned into macroblocks... shame on you, WHV. You could've given us an extra couple of discs and bumped the bitrate; these weren't exactly cheap for 80+ year old cartoons. You could've fixed the pitch issues on "very musical" shorts like Rabbit of Seville that you introduced somewhere in the restoration process, instead of forcing me to hunt down older DVD copies and yoink the audio from there. You could've thrown in a TrueHD track as well, since DTS-HD MA 2.0, even with a smaller 768 kbps "core", would've likely wasted more space than TrueHD 2.0 plus the mono AC-3 that's already on here.

It really makes me chuckle that neither lossless audio format has any provision for actual mono sound - DTS doesn't support 1.0-channel sound at all - but at least newer TrueHD encoders seem to be a bit more clever about what I can only assume is using things like joint-stereo coding internally, saving some more bits on tracks like that. Not that I have access to one for testing, but I digress...

Was it better than trying to make the resulting ~2-4 Mbps AV1 encode (I forget) that I ended up making with the grain left intact as is from the already bit-starved ~15 Mbps Blu-Ray source? YES, no question.

I also deliberately picked a test clip that you can see "elsewhere" in any number of different places and formats. With WB uploading the same clip in a longer, presumably "unfiltered" video to YouTube, you can see how poorly YouTube's 1-2 Mbps AV1, AVC, and VP9 compression fared with grain that heavy.

In addition to places like Amazon and iTunes, I think the same short is probably on Boomerang or HBO Max or both: the former seemingly uses nearly identical files, with very similar bitrates to the Blu-rays in my experience (but usually 25 fps PAL masters meant for European terrestrial broadcast - no speed-up, just frame duplication), and the latter uses lower-bitrate x264 or x265 (they keep the SEI messages in there... lol, just like Netflix) from the same masters as either of the other two. Amazon and iTunes obviously do their own thing as well.

There's very clearly only one "newly restored master" for all of these old shorts, but how "early on" in the generational loss chain either of the three, WHV/WBHE, Boomerang, or HBO Max, got their hands on the material is only something they'd be able to tell you. benwaggoner, will you get in trouble if you hint at what kind of files WBHV might have given Amazon in the past?

Was it better than letting AOMenc try to de-noise on its own and generate grain parameters? Absolutely.

Did it look any good? Eh...it was certainly "as noisy" when comparing side-by-side with the source, but I could see..."grain macroblocks" if that makes sense. Clear boundaries in the areas of the drawn grain pattern, which make more sense when you look at the actual information the utility spits out, it's just frame/timestamps, regions, and what I think are seed numbers. You could also very easily see the "synthetic" structure of it all. I haven't actually looked at what the SEI data looks like, or if it's any different from the information the utility made.

It didn't look like natural film grain, it looked like an overlay. I mean, I know it already is technically nothing more than an overlay, but it took a long ass time to generate, so I certainly had higher expectations than that. I wouldn't be surprised if somebody writes some kind of GPGPU plug-in (please make it platform agnostic, and not bound to CUDA...) to generate its own synthetic grain in real-time that looks better than you could ever signal in tiny little sideband/SEI messages.

Did I ever use it again? No. My poor results might've been entirely on me, given how grainy the source was and how spotless the "no grain" comparison file I made this time around was. I'm not very skilled at building "good" filter chains for this kind of thing, and the comparison file was also contrasharpened. When I normally de-grain old animation like this, I tend to prefer sharper output over either completely removing all of the grain or saving the most bits in the final encode. I just want it to look "how I think it ought to."
Everybody who worked on these is long since dead, may they rest in peace, and it's really old, at times poorly preserved film. I get to decide what I like, purists pls cry elsewhere - if the "source" material available were better, then maybe I'd be more of a "purist" about it...

There was a third option I didn't explore: using two slightly different "de-grained" files, one with no additional processing to generate the grain parameters, and one with more of the filters I'd normally apply to material like this (a little bit of contrasharpening, etc.) for the encode. If I ever find the extra hard-drive space for a bunch of .yuv video and the time to play with it some more, I might try it again with the sharpened file used only for the final encode and the parameter-generation file left completely un-sharpened, but I'd probably wait for an improvement in the utilities themselves before considering it.
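For anyone who wants to try the same two-pass idea, the workflow looks roughly like this. This is a sketch from memory, not a verified recipe: the noise_model tool is built from the examples in the libaom source tree, and aomenc does accept a pre-computed table via --film-grain-table, but the exact option spellings and the input/output filenames here should be double-checked against your libaom build.

```shell
# Step 1: estimate grain parameters by comparing the noisy source
# against your own de-grained copy. noise_model writes a grain table.
./noise_model --input=noisy.y4m --input-denoised=degrained.y4m \
    --output-grain-table=grain.tbl --block-size=32

# Step 2: encode the de-grained file, signaling the pre-computed
# grain table instead of letting aomenc estimate its own.
aomenc --film-grain-table=grain.tbl --end-usage=q --cq-level=30 \
    --cpu-used=4 -o out.ivf degrained.y4m
```

The third option above would just mean feeding noise_model the un-sharpened de-grain while giving aomenc the sharpened one.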
BuccoBruce is offline   Reply With Quote
Old 3rd January 2023, 19:37   #36  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,819
AFAIK no one is using the AOM reference FGS implementation in practice. Degraining is a whole other area of research with different participants, and commercial implementations seem the most likely FGS route for the next few years. Speed is absolutely a concern for degraining as well. CUDA-based implementations can be quite good, but tons of existing encoding workflows in the cloud run on CPU-only instances, with CUDA-enabled instances a lot more expensive.

While degraining and grain synthesis are generally well understood, the middle part where the grain gets parameterized while removed is relatively new stuff, with a whole lot of implementation complexities. A mature FGS solution would remove grain that can be parameterized, but pass through other noise types that can't be parameterized well under the AV1 FGS synthesis algorithm. There are other complexities, like some seed numbers providing suboptimal results, that need to be worked around. I hope we'll see a solution with good enough quality, fidelity, and performance to safely leave on by default in the next year or two. But that'll probably be commercial, not open-source. Once a good commercial implementation exists, I imagine the open source community will work to reverse engineer the techniques.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
benwaggoner is offline   Reply With Quote
Old 2nd March 2023, 19:15   #37  |  Link
HD MOVIE SOURCE
Registered User
 
HD MOVIE SOURCE's Avatar
 
Join Date: Mar 2021
Location: North Carolina
Posts: 138
If anyone is still working on this, please let me know the success you've had, and what grain settings work best. I'd just like to see if there's anything you're doing now that's improved the look of x265 compared to x264. Thanks.
HD MOVIE SOURCE is offline   Reply With Quote
Old 4th March 2023, 23:21   #38  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,819
Quote:
Originally Posted by HD MOVIE SOURCE View Post
If anyone is still working on this, please let me know the success you've had, and what grain settings work best. I'd just like to see if there's anything you're doing now that's improved the look of x265 compared to x264. Thanks.
I am getting good results with what I listed above:
  1. Use --nr-inter to reduce grain swirling and bitrate consumption. It can go up to 2000; don't be afraid to test high values.
  2. Raise --nr-intra to where I frames don't have visibly more detail
  3. Reduce --ipratio and --pbratio for less frame strobing
  4. Raise psy-rdoq to retain a more even appearance of grain detail
  5. Reduce --aq-strength to keep grain detail more consistent across the frame
  6. Use --aq-mode 4 instead of 2
  7. Use --rskip 2 instead of 1, with a lower threshold. The original rskip was terrible with grain
  8. Try --rd 4 if --rd 6 isn't working well
I'd suggest just defaulting to --rd 4 with grainy 4K content. --rd 6 only works well at 4K with no or low grain. --rd 6 with grainy content is generally better at 720p and below, and at 1080p with moderate or less grain.
A --nr-inter to --nr-intra ratio of 2-5x has been good in my experiments. --nr-intra 100 --nr-inter 350 are probably good starting points. These can have a big impact on bitrate with --crf.
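The list above can be assembled into a single command line. Here's a quick sketch in Python; the flag names are real x265 CLI options, but the specific values are just the starting points suggested above (and the defaults noted in the comments are x265's documented defaults), so treat it as a template to tune, not a fixed recipe.

```python
# Sketch: build a grain-friendly x265 argument list from the settings above.
# Values are starting points, not a one-size-fits-all recipe.

def grain_tuned_x265_args(crf=18, nr_intra=100, nr_inter=350,
                          ipratio=1.2, pbratio=1.1,
                          psy_rdoq=4.0, aq_strength=0.8):
    # Keep the inter/intra noise-reduction ratio in the 2-5x range
    # that worked well in the experiments above.
    assert 2 <= nr_inter / nr_intra <= 5, "nr-inter should be 2-5x nr-intra"
    return [
        "x265",
        "--crf", str(crf),
        "--nr-intra", str(nr_intra),       # denoise I-frames a little
        "--nr-inter", str(nr_inter),       # denoise P/B frames harder
        "--ipratio", str(ipratio),         # lowered from the 1.4 default
        "--pbratio", str(pbratio),         # lowered from the 1.3 default
        "--psy-rdoq", str(psy_rdoq),       # keep grain energy more even
        "--aq-strength", str(aq_strength), # lowered from the 1.0 default
        "--aq-mode", "4",                  # instead of 2
        "--rskip", "2",                    # with a lower edge threshold
        "--rd", "4",                       # often beats --rd 6 on grainy 4K
    ]

print(" ".join(grain_tuned_x265_args()))
```

Feed the output to x265 along with your input/output arguments; drop --rd 4 back to the preset default if your content is 1080p with only moderate grain.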
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
benwaggoner is offline   Reply With Quote
Old 29th April 2024, 00:21   #39  |  Link
simple_simon
Registered User
 
Join Date: Feb 2003
Posts: 124
Quote:
Originally Posted by Boulder View Post
I've pasted my function here: https://pastebin.com/0KegSZiR
I know this is an old thread, but I'm trying to test your script and getting errors. I don't know if some of the terminology changed in the Dogway dependencies? Do you have an updated version?
simple_simon is offline   Reply With Quote
Old 29th April 2024, 04:44   #40  |  Link
Boulder
Pig on the wing
 
Boulder's Avatar
 
Join Date: Mar 2002
Location: Finland
Posts: 5,752
Quote:
Originally Posted by simple_simon View Post
I know this is an old thread, but I'm trying to test your script and getting errors. I don't know if some of the terminology changed in the Dogway dependencies? Do you have an updated version?
I don't think it's changed much. What errors do you get?
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is offline   Reply With Quote
Tags
grain, x265
