Go Back   Doom9's Forum > Video Encoding > High Efficiency Video Coding (HEVC)

12th February 2023, 04:25   #25
Join Date: Jan 2006
Location: Portland, OR
Posts: 4,827
Originally Posted by DTL
As I see it, modern video cameras for capturing 'real world' images have made good progress with internal denoisers, so broadcasting really is moving in this direction. When you buy a new set of lower-noise video cameras for your studio, you also get a great benefit in the quality of the MPEG-compressed content for broadcast. The same effect can be simulated by a (much cheaper) denoising hardware unit placed before your master MPEG encoder, or even by zero-cost software if you do file-based processing with a software MPEG encoder.

Also, internally, both a temporal denoiser and an MPEG encoder are based on the same idea of motion tracking (block- or object-based), so they can share the same hardware for motion estimation (and the MPEG encoder might even reuse the motion estimation from the denoiser). This may already be implemented in codecs after HEVC, such as AV1.

So 'preprocessing' between the real world and the MPEG encoder is an essential part of the overall moving-picture compression process: it removes 'real world' random data and separates the genuinely required visual information about the scene from the random noise of the photon flux that carries the scene to the camera.
An inter-frame denoiser simply simulates a video camera with a longer accumulation time per scene object than the primary frame-based camera has. The primary camera (if it has no internal inter-frame digital denoising) is limited to the inter-frame interval for accumulating photons, and it cannot track individual scene objects.

A motion-compensated denoiser can extend the accumulation time up to roughly the total visibility time of an object within the scene cut, and can track each scene object individually when it is not static relative to the primary camera. It thus simulates a massive array of 'secondary video cameras' (one per block in a block-based denoiser), each with individual tracking and a much longer data-accumulation time, without motion blur.

After this two-stage transform of the scene data (physical camera plus simulated secondary cameras), the MPEG encoder receives much cleaner scene data, and even a fairly simple encoder produces better output quality, because it can now spend its bits on encoding the real scene objects rather than on the residual noise left after incomplete motion compensation of noisy blocks.
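The motion-compensated averaging idea described in the quote can be sketched in a few lines. This is an illustrative toy, not DTL's actual implementation or any shipping denoiser: for each 8x8 block of the current frame it searches a small window in the previous frame for the best SAD match, then averages the two aligned blocks. Averaging N independently noisy, motion-aligned samples of the same content cuts additive noise power by roughly 1/N, which is the "extended accumulation time" effect. Block size and search radius are arbitrary choices here.

```python
import numpy as np

BLOCK, RADIUS = 8, 4  # block size and search radius (arbitrary for the demo)

def best_match(prev, block, y, x):
    """Find the top-left corner in `prev`, within +/-RADIUS of (y, x),
    of the candidate block with minimal sum of absolute differences."""
    h, w = prev.shape
    best, best_sad = (y, x), np.inf
    for dy in range(-RADIUS, RADIUS + 1):
        for dx in range(-RADIUS, RADIUS + 1):
            py, px = y + dy, x + dx
            if 0 <= py <= h - BLOCK and 0 <= px <= w - BLOCK:
                cand = prev[py:py + BLOCK, px:px + BLOCK].astype(np.int32)
                sad = np.abs(cand - block).sum()
                if sad < best_sad:
                    best, best_sad = (py, px), sad
    return best

def mc_temporal_denoise(prev, cur):
    """Average each block of `cur` with its motion-compensated match
    in `prev` -- a 2-frame motion-compensated temporal denoiser."""
    out = cur.astype(np.float32).copy()
    h, w = cur.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = cur[y:y + BLOCK, x:x + BLOCK].astype(np.int32)
            py, px = best_match(prev, block, y, x)
            matched = prev[py:py + BLOCK, px:px + BLOCK].astype(np.int32)
            out[y:y + BLOCK, x:x + BLOCK] = 0.5 * (block + matched)
    return out.astype(np.uint8)
```

Real denoisers (and encoders) extend this with sub-pixel search, overlapped blocks, and more than two reference frames, but the search loop above is also where the overlap with an encoder's motion estimation lives: the same SAD search could feed either the denoiser or the encoder.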
Yeah, the line between encoding and preprocessing was never all that clear, and certainly has been becoming less so as codecs and encoders advance.

Still, allowing unfettered preprocessing seems risky; techniques like the contrast boosting some encoders were using to goose VMAF scores are a real hazard.

The spirit of the test is "what can deliver output most like the source" - which has unavoidable subjective elements.

I'm open to suggestions on how best to address this in an updated version of the challenge.

I'm thinking of using StEM2 10-bit, with separate 1080p SDR and 2160p HDR targets. Thoughts?
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
benwaggoner
