Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.

Domains: forum.doom9.org / forum.doom9.net / forum.doom9.se

 

Doom9's Forum > Video Encoding > High Efficiency Video Coding (HEVC)

Old 10th November 2014, 08:42   #1  |  Link
rudyb
Registered User
 
Join Date: Aug 2014
Posts: 29
HEVC CABAC Dependency

Hi

Can someone please validate if my understanding of the following statement is correct?

Based on my reading about CABAC and the close dependency between successive bins, and on analyzing how CABAC is implemented in the HM model, it seems that within one CTU I can only process one bin per clock cycle. Everything is indeed serial inside one CTU, and parallelizing within one CTU is not trivial. For example, if I am parsing a 32x32 TU block, this will literally take 32x32 = 1024 clock cycles to process the bins. And even so, I cannot start processing a second 32x32 TU block within the same CTU until I am completely done parsing the first TU block.

In other words, parallelization within one CTU cannot happen, and the only way to get some parallelism is by using Tiles or WPP. In the case of WPP, this implies that I must finish processing two CTUs of the first row before I can start processing the first CTU of the second row.

So, parallelism cannot happen inside one CTU (or at least that is how HM does it, by serial processing of bins inside each TU). And even when I use WPP or Tiles, TU parsing is still a serial process inside each CTU.
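The serial dependency described above can be sketched with a toy context-update chain. This is NOT the normative HEVC CABAC engine (the real one uses 6-bit probability states, range subdivision, and HM's transition LUTs); the update rule here is hypothetical, chosen only to show that each decoded bin changes the context state that the next bin depends on:

```python
# Toy sketch of why CABAC bin parsing is serial (NOT the normative HEVC
# engine): decoding each bin updates the context's probability state,
# so bin N+1 cannot be processed until the update from bin N is done.

def update_context(state, mps, bin_val):
    """Hypothetical probability-state transition (not HM's real LUTs)."""
    if bin_val == mps:
        return min(state + 1, 62), mps   # MPS observed: drift toward certainty
    if state == 0:
        return 0, 1 - mps                # LPS at state 0: flip the MPS
    return state // 2, mps               # LPS observed: fall back

def decode_bins(bins, state=10, mps=0):
    """Serial loop: one bin per iteration, each feeding the next update."""
    for b in bins:
        state, mps = update_context(state, mps, b)
    return state, mps

# Order matters: the same set of bins yields different final states,
# so bins cannot be processed independently or out of order.
print(decode_bins([0, 1]))  # (5, 0)
print(decode_bins([1, 0]))  # (6, 0)
```

Because the final state differs depending on bin order, a hardware implementation that wants more than one bin per cycle has to speculate on the context update, which is exactly why intra-CTU parallelization is non-trivial.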

Can you please verify whether my understanding above is correct?

Thanks,
--Rudy
rudyb is offline   Reply With Quote
Old 10th November 2014, 14:52   #2  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,275
To my understanding, CABAC is inherently serial, because de/encoding each "bin" updates the context model and thus affects the de/encoding of subsequent "bins".

Furthermore, I think WPP was invented specifically to overcome this limitation: since each block is coded based on the context from its top-right block (rather than the context from its direct predecessor block), multiple lines of blocks can be processed in parallel. All you need in order to en/decode the next block in line N is that line N-1 is two blocks "ahead" of line N. But within each block, CABAC is presumably just as serial as it is without WPP.
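The wavefront dependency described above can be sketched as a small scheduler. Assumed simplification: CTU (r, c) needs its left neighbour (r, c-1) and the above-right CTU (r-1, c+1), which carries the inherited CABAC state; with that, the earliest step each CTU can run works out to 2*r + c, i.e. each row trails the previous one by exactly two CTUs:

```python
# Sketch of the WPP wavefront: earliest step at which each CTU can be
# processed, given the (assumed) dependencies on the left neighbour
# and the above-right CTU that hands down the CABAC context state.

def wpp_schedule(rows, cols):
    step = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            deps = []
            if c > 0:
                deps.append(step[r][c - 1])      # left neighbour, same row
            if r > 0 and c + 1 < cols:
                deps.append(step[r - 1][c + 1])  # above-right CTU (CABAC state)
            step[r][c] = max(deps) + 1 if deps else 0
    return step

s = wpp_schedule(4, 8)
print(s[0])  # [0, 1, 2, 3, 4, 5, 6, 7]
print(s[1])  # [2, 3, 4, 5, 6, 7, 8, 9]  -- row 1 starts two CTUs behind
```

Note the two-CTU stagger: row N can only start once row N-1 has finished its first two CTUs, which matches the statement above, and the maximum parallelism is bounded by the number of CTU rows.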

Anyway, an attempt to parallelize the CABAC decoding process can be found here:
http://www.rle.mit.edu/eems/wp-conte..._jssc_2012.pdf
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 10th November 2014 at 16:57.
LoRd_MuldeR is offline   Reply With Quote
Old 11th November 2014, 02:10   #3  |  Link
rudyb
Registered User
 
Join Date: Aug 2014
Posts: 29
Thanks. That is how I was thinking about it too.
--Rudy
rudyb is offline   Reply With Quote
Tags
cabac, hm model