
Straight CBR mode


DocAliG
14th February 2007, 15:33
Hi all,

As I started to use x264, I've tested some parameters to obtain a straight CBR encoded video. But the result is quite different: the instantaneous bitrate is not really constant.

I was wondering if I missed something in the RD algorithm, or simply if x264 has not been designed for straight CBR.
In the first case: can you enlighten me? :thanks:
In the second case, I was wondering (yes, again) if something has been scheduled for this, or if it's an open issue left to goodwill...

Thank you all,

Doc.

akupenguin
14th February 2007, 19:24
VBV is working.
If you mean, frames don't all have exactly the same size, then x264 doesn't try to do that and I don't know any other modern codecs that support it either, because it's just not useful.

DarkZell666
14th February 2007, 20:54
I've tested some parameters to obtain a straight CBR encoded video

Which parameters exactly? Apart from turning --qcomp and --ratetol right down to 0, there isn't much you can do.

Setting --vbv-maxrate and --vbv-bufsize to the same value as your target bitrate would help (though I'm not too sure about the exact meaning of the buffer size, in fact :o).

Anyhow, as akupenguin said, this will only size the frames so that they use <your_bitrate_here> kbits per second, not <your_bitrate_here>/<video_fps> kbits per frame.

Edit: setting --qpstep to a higher value (e.g. 10) will help you get closer to CBR, but it is likely either to be useless or to introduce artifacts.
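
For reference, here is roughly how those options might be combined (just a sketch wrapped in Python; the filenames and the exact values are placeholders, so check x264 --help for your build):

    import subprocess

    # Hypothetical single-pass encode aiming at a "flat" target bitrate.
    cmd = [
        "x264",
        "--bitrate", "500",       # your target average bitrate, in kbps
        "--vbv-maxrate", "500",   # never exceed the channel rate
        "--vbv-bufsize", "500",   # roughly one second of buffering at that rate
        "--ratetol", "0.1",       # low tolerance on the average bitrate
        "--qcomp", "0.0",         # flatter bit allocation across frames
        "-o", "out.264",          # placeholder output name
        "input.y4m",              # placeholder input name
    ]
    subprocess.run(cmd, check=True)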

DocAliG
15th February 2007, 16:14
VBV is working.
If you mean, frames don't all have exactly the same size, then x264 doesn't try to do that and I don't know any other modern codecs that support it either, because it's just not useful.

Ahhh... Sometimes applications require such a strange constraint (I agree). I know that the first goal is storage, but x264 is such a good codec (also because AVC/H.264 is a very good standard, eh eh) that people want to use it for all kinds of applications.

On the other hand, some modern codecs do have straight CBR... I won't go into details here.
Thanks anyway for the tips. I'll try them, and if that's not sufficient, implement my own strange RDO mode :)
:thanks:

Doc.

DocAliG
15th February 2007, 16:23
Which parameters exactly ? Apart from turning --qcomp and --ratetol right down to 0, there isn't much you can do.

Setting --vbv-maxrate and --vbv-bufsize to the same value as your target bitrate would help (though I'm not too sure about the buffer's size meaning in fact :o).

Thank you DarkZell666, the VBV options are precisely what I tried to use... Anyway, even when we try to get closer to a straight CBR line, it seems that locally the bitrate still varies quite a lot.

Thanks anyway, I'll try setting --qpstep to higher values.

Doc.

akupenguin
15th February 2007, 21:53
But why would you want a "straight CBR line", and are you sure it's varying more than you asked it to?

DocAliG
16th February 2007, 12:54
But why would you want a "straight CBR line", and are you sure it's varying more than you asked it to?

Actually, when I say "straight CBR line" it's to avoid a "sawtooth" behaviour, because my application works on a particular network that makes everything above a given line quite costly. And it's always better to use the available bitrate at 100%. But in my case, not 105%. And because the given bitrate is not that much, I don't want to use only 95% of it either... In other words, I want to optimize the global video encoding from beginning to end to get "the best", or at least "one of the best possible", tradeoffs. I've checked some IEEE papers, but still, I think the best way is to ask first and get my hands dirty afterwards. No need to reinvent the wheel...

From my point of view, this type of encoding will become more and more popular... :) And it would promote x264 a little more.

When I tried to use x264 with very constrained VBV parameters, the encoder's behaviour started to look fuzzy and illogical: it can over- or under-consume bits, which looks more like a very constrained VBR than a pure CBR.

If you could find a few minutes just to estimate the time it would take to modify the x264 source to achieve such an RDO mode, that would be a great help for me...

Anyway, thank you again,

BR,
Doc.

kwtc
16th February 2007, 13:49
What you are asking for is CBR. In CBR, if you start streaming at the right moment (depends on your vbv settings), you will always be able to stream (no underflow) at the given bitrate (no overflow).

DarkZell666
16th February 2007, 14:07
The bottom line is here: because my application works on a particular network that makes everything above a given line quite costly.

But the streaming process and the encoding process aren't the same thing at all. Streaming a quasi-CBR video stream at a constant bitrate (say 64 kBytes/s over a 512 kbps ADSL line) is better than trying to encode the video at precisely 20.48 kbits/frame and streaming it as-is. You'll have to explain a bit more of your restrictions if you want further advice, since your case is rather particular.

DocAliG
16th February 2007, 16:49
To kwtc: yes, that's exactly what I'm asking for: CBR. And my case, it's true, is rather particular.
Let's forget encoding and decoding for a while and focus on the network. If the network has no tolerance for sending data faster, or for carrying more data, not because it's limited but rather because any supplementary bit you transmit above the given line costs a lot, then you have to take advantage of all the available resources without wasting anything. That's where the CBR mode is relevant. And after getting a constrained encoded file from x264 and passing it through the instantaneous bitrate graph, it is clear that sometimes the rate is under the line and sometimes above it. My question is about this behaviour.


The bottom line is here:

But the streaming process and the encoding process aren't the same thing at all.

True. But I've never mentioned anything about streaming, have I?... ;)


Streaming a quasi-CBR video stream at a constant bitrate (say 64 kBytes/s over a 512 kbps ADSL line) is better than trying to encode the video at precisely 20.48 kbits/frame and streaming it as-is. You'll have to explain a bit more of your restrictions if you want further advice, since your case is rather particular.

I know, and I would really like to explain everything so as to be much clearer rather than just generic. But I have no choice ;)

Let's say it's not streaming; it's a particular network (I didn't say anything, but those are clues...) that requires a straight CBR. And I would rather stream something using x264 as-is, because it's a great encoder and much more efficient that way, but it's simply not feasible. The only way to get this "simulated CBR" is to encode the video in constrained chunks (with the VBV options). That's not efficient... But that's the only way not to cross the (dead)line :)

Doc

Manao
16th February 2007, 17:03
The problem is, you don't know what CBR (as defined by the standard) is.

CBR doesn't mean every frame will have the same size. CBR means that, provided you can buffer a definite amount of data prior to streaming it, you can stream that data at a perfectly constant rate without the buffer either emptying or filling completely.

CBR in x264 allows the buffer to empty, but should prevent it from filling.

DocAliG
16th February 2007, 17:26
The problem is, you don't know what CBR ( as defined by the standard ) is.

CBR doesn't mean every frame will have the same size.

Oh, did I say that?... ;) Indeed, I really do know what CBR is.

CBR means that, provided you can buffer a definite amount of data prior to streaming it, you can stream that data at a perfectly constant rate without the buffer either emptying or filling completely.
CBR in x264 allows the buffer to empty, but should prevent it from filling.

Yes, that's the idea. We can make it more schematic by imagining the data as water... ;)

Doc.

kwtc
16th February 2007, 17:32
When I tried to use x264 with very constrained VBV parameters, the encoder's behaviour started to look fuzzy and illogical: it can over- or under-consume bits, which looks more like a very constrained VBR than a pure CBR.


Well, perhaps x264's CBR has some bugs?

If not (I mean, if it complies with the H.264 standard), then it is what you need. There is no need to add other strange CBR modes.

Hellworm
16th February 2007, 20:22
That x264 is locally (for a few frames) above or below your desired bitrate doesn't mean that it will consume more or less bandwidth than you requested. Over a period of multiple frames the bitrate is what you asked for, so if you (to be absolutely sure not to go over your limit) restrict the bandwidth of your streaming application to a hard limit, it is no problem to receive the stream with the appropriate buffer size (the one you specified at the encoder end).
That's what the buffer is for: to smooth out the local fluctuations in the bitrate, so that in practice you send a pure CBR stream.

DarkZell666
16th February 2007, 20:37
DocAliG: you'll have to show a graph of what bitrate distribution you get with your settings (since it isn't constant enough for you), and compare it to another graph which represents what you're expecting (you'll have to invent that one).

Doesn't each second of video consume <bitrate> kbits? All you keep saying is "this is not CBR, this is not what I want". What are you really expecting? You don't seem to be expecting CBR, but some other obscurely restricted bitrate distribution that you aren't capable of explaining in technical terms. And what other use of a network could you possibly be studying/deploying if it isn't streaming or distributed encoding? It doesn't even matter, but you're sort of turning down every attempt to understand what you're going on about. If you want help, you'll have to make sure everyone understands what you mean first.

Actually, how are you measuring the obtained bitrate? If by any chance you're using ffdshow's overlaid information, the bitrate value it shows isn't what I would call accurate.

akupenguin
16th February 2007, 20:41
Doesn't each second of video consume <bitrate> kbits ?
That isn't guaranteed either, though it will be close if you set the buffer to smaller than 1 second.
With a 1 second buffer, 1 second of video could use twice the max bitrate, while still being VBV compliant. i.e. the client's buffer is full before and empty afterwards.
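
To put rough numbers on that (a back-of-the-envelope sketch, not x264 code): the most a compliant stream can consume over a window is about the buffer size plus maxrate times the window length, i.e. a full buffer drained to empty while data keeps arriving at maxrate.

    # Rough upper bound on what a VBV-compliant stream can use in one window:
    # a full decoder buffer drained to empty while being refilled at maxrate.
    def worst_case_window_kbits(maxrate_kbps, bufsize_kbits, window_s):
        return bufsize_kbits + maxrate_kbps * window_s

    # 1-second buffer: up to 600 kbits can be consumed in one second,
    # i.e. twice the 300 kbps max bitrate, exactly as described above.
    print(worst_case_window_kbits(300, 300, 1.0))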

foxyshadis
17th February 2007, 08:43
If you set the buffer to 1/framecount*bitrate, would that force all frames to be equal size? (Or a negligible difference, at least.)

Manao
17th February 2007, 08:48
That would be bitrate / framecount, and yes, it would. However, I doubt x264 has the required accuracy (meaning it would do its best, but its best would mean the CPB being violated on almost every frame).

And the only use of such a short CPB is low delay. He doesn't care about low delay, so he should set the CPB as big as possible, buffer the data, and send it at a constant rate.

foxyshadis
17th February 2007, 08:55
I know, I was just wondering if I was interpreting the values right. At first I was looking for an argument to specify the averaging delay, until I realized that buffer size encompasses that and it just has to be set right. Thanks.

DarkZell666
17th February 2007, 09:06
That isn't guaranteed either, though it will be close if you set the buffer to smaller than 1 second.
With a 1 second buffer, 1 second of video could use twice the max bitrate, while still being VBV compliant. i.e. the client's buffer is full before and empty afterwards.

Basically this means that setting the buffer size to half a second (<bitrate>/2 ?) solves his problem imho.

akupenguin
18th February 2007, 00:01
Setting the buffer size to 0.5 second means that the max bitrate over any 1 second period is 1.5x vbv_maxrate.

If each <timeperiod> uses exactly <timeperiod>'s worth of maxrate, that is equivalent to saying that each frame uses exactly 1 frame's worth of maxrate. Consider: start with a window of size <timeperiod> that fulfills the constraint. Slide the window forward by 1 frame. In order to still fulfill the constraint, the frame that slid out of the window must have exactly the same size as the frame that got added at the other end of the window. (OK, so constant isn't the only solution to that recurrence, but a perfectly periodic allocation isn't any better.)

Just don't even think about "max bitrate over <timeperiod>", because that's not what matters, and not what VBV limits.
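
A little toy check of that sliding-window argument (made-up frame sizes; this has nothing to do with x264 internals): only a constant or perfectly periodic allocation survives a per-window "exactly the budget" test.

    # Toy check of the sliding-window argument above (made-up sizes).
    def windows_all_equal(sizes, k):
        # True if every k-frame window uses exactly the same number of bits.
        totals = [sum(sizes[i:i + k]) for i in range(len(sizes) - k + 1)]
        return len(set(totals)) == 1

    constant = [100] * 12                       # same size every frame
    periodic = [150, 50, 100] * 4               # perfectly periodic, period 3
    typical = [300, 80, 60, 280, 90, 70,
               310, 75, 65, 290, 85, 60]        # a more realistic I/P pattern
    print(windows_all_equal(constant, 3))       # True
    print(windows_all_equal(periodic, 3))       # True
    print(windows_all_equal(typical, 3))        # False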

DocAliG
18th February 2007, 23:13
Thank you all for all the comments, this is very helpful for my x264 understanding.

To DarkZell666: I'll try to provide a more accurate explanation of what I need; but trust me, I know exactly what I want... :)

Actually, I can even provide curves to be more precise... I'll do that tomorrow.

Thanks again,

Doc.

P.S.: Actually, a good piece of software (not free, of course, but with a complete 2-week trial version), Semaphore 2, can analyze the stream.
It lets you define multi-purpose alerts, one of them being on the bitrate...

DocAliG
20th February 2007, 17:21
/me back again
Here is the typical distribution (attached) we get on a short 1 min 20 s video (an ad available on the net, used for test purposes).
The graph shows the bandwidth used over time when simulating sending the video to a client. The video was encoded in 2-pass constrained mode with VBV. The bitrate is 338 kbps.

The red highlighted part of the curve is the area where the bitrate exceeds the straight line.

Comments are welcome.....

Thanks...

Doc.

Manao
21st February 2007, 19:03
Can you upload the screenshots somewhere? The mod(s) seem to be MIA currently.

Manao
22nd February 2007, 08:28
@mods : thanks !

@DocAliG: can you tell us:
- what is the size of the VBV
- what is the initial occupancy of the VBV

What you show is meaningless if those two aren't known.

Also, it would help if you could do a graph showing the state of the VBV (the VBV starts at the initial occupancy; every time you encode a frame, you increase the state by the size of the frame, then you decrease the state by the bitrate times the duration of the frame).

If the state never goes over the size of the VBV, then the VBV isn't violated and the problem is that you aren't streaming properly. If it does go over, then it's x264's fault.
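
If it helps, that check can be sketched in a few lines following the recipe above (the frame sizes and parameters are made up; this is not x264's code, and the clamp at zero is my own assumption that the channel simply idles when there is nothing left to send):

    # Follow the recipe above: start at the initial occupancy, add each frame,
    # then drain bitrate * frame_duration; report any frame where the state
    # exceeds the buffer size.
    def vbv_trace(frame_sizes_bits, fps, bitrate_bps, bufsize_bits, init_bits):
        state = init_bits
        drain = bitrate_bps / fps              # bits removed per frame interval
        violations = []
        for i, size in enumerate(frame_sizes_bits):
            state += size                      # the encoded frame is added
            state -= drain                     # the channel drains at the constant rate
            state = max(state, 0.0)            # assumption: the channel idles when empty
            if state > bufsize_bits:           # more data queued than the buffer holds
                violations.append(i)
        return violations

    sizes = [60000, 9000, 8000, 55000, 9500, 8500]   # hypothetical frame sizes, in bits
    print(vbv_trace(sizes, 25, 338000, 300000, 0))   # an empty list means no violation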

akupenguin
22nd February 2007, 09:25
Also, it would help if you could do a graph showing the state of the VBV (the VBV starts at the initial occupancy; every time you encode a frame, you increase the state by the size of the frame, then you decrease the state by the bitrate times the duration of the frame).
x264's terminology is the reverse: it models the decoder's buffer. i.e. --vbv-init 1.0 means the first few frames can be huge if needed, while --vbv-init 0.0 gives them no budget to work with.

DocAliG
22nd February 2007, 11:33
@DocAliG: can you tell us:
- what is the size of the VBV
- what is the initial occupancy of the VBV


Bitrate 338 kbps.

VBV Buffer size to 300
VBV Maxbitrate to 300
VBV Initial Buff to 1.0
Bitrate Variance to minimum (0.1)

The main problem happens after 40 seconds of video... At one point the RDO has an inflection point that bypasses the limitation, no matter how hard I constrain the encoder... That's why, for now, we are chunking and stitching, to avoid this kind of problem after a while...

akupenguin
22nd February 2007, 22:39
It is meaningless to ask for Bitrate > Max Bitrate. What were you thinking? The result can't be VBV compliant, so x264 doesn't even try.

DocAliG
23rd February 2007, 11:18
It is meaningless to ask for Bitrate > Max Bitrate. What were you thinking? The result can't be VBV compliant, so x264 doesn't even try.

Oh really... Indeed.
I'm quite surprised, because the first time I ever tried this RD opt, I thought like you. I thought: it's meaningless to have a VBV maxrate < bitrate. Attached: same encoding, same bitrate, the only thing that changes is the max bitrate (set to 338 kbps, the same as the nominal bitrate).
Almost 89% of the bitstream exceeds the line... (see graph)

And I've found out that there is a lower limit... Same parameters, but when the maxrate is set to 150 kbps while the nominal bitrate is set to 338 kbps, only 25% of the bitstream exceeds the line. Weird, innit?

I don't know what I was thinking, but... ;)

I'll try to be clear:
No offense, but I'm not a geek (at least I hope not) trying to encode the latest Desperate Housewives episode... I'm trying to understand x264's RDO, and to offer to HELP correct something if something has to be corrected. Lots of people and companies are using x264 now; I think it's important to know if something is wrong.

The best thing is maybe to continue this offline.

Best regards,

Doc.

akupenguin
23rd February 2007, 19:59
Ratecontrol != RDO.
In pm you said something about JM's "Search Range Restriction" and "RD Optimization" options. Those are also unrelated to ratecontrol.

DocAliG
27th February 2007, 16:15
Ratecontrol != RDO.
In pm you said something about JM's "Search Range Restriction" and "RD Optimization" options. Those are also unrelated to ratecontrol.

I definitely don't agree with that. Rate control and R-D optimization are intrinsically correlated. Modifying the rate will modify the way the encoder does its R-D.
In the same way, the finer the motion vectors are, the more data it takes to encode them ==> modifying the R-D.

I don't know exactly how x264 works, but that's the way we do it for the MPEG JM.

akupenguin
27th February 2007, 17:50
The following is definitely how x264, xvid, and libavcodec work, and afaik also applies to JM. Though JM has a rather dysfunctional ratecontrol (it doesn't do VBR nor VBV-compliant CBR, only some nebulous approximation of CBR), so maybe JM is even more brain-damaged than I thought and doesn't work this way.

Ratecontrol picks lambda. If RDO is enabled then lambda is used in RDO (ssd+lambda*bits), otherwise lambda is used in an analogous way in the non-RDO decisions (satd+lambda*bits). So you see, ratecontrol doesn't have to care which way lambda will be used, it only needs one algorithm to guess what value of lambda will produce the desired frame sizes.
(The reason one is called "rate-distortion optimization" and the other isn't, is that the latter skips dct/quant/dequant/idct (it just treats the whole residual as distortion) and uses only an approximation of mb-type and mv bits. Every single option in the encoder modifies rate and/or distortion, but that doesn't make them all RDO. If it did, then the term would be useless.)

Ratecontrol's job is to remove any influence of other options from the resulting bitrate. If you ask for CBR then you'll get that CBR, regardless of the values of any non-ratecontrol options. Yes, modifying e.g. the motion search algorithm may change the bits and distortion resulting from any given value of lambda, but that just means ratecontrol will pick a different lambda to compensate. The end result is that ratecontrol controls bitrate, and all other options control only quality and speed.
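
As a toy illustration of that split (made-up costs, not x264 code): ratecontrol hands the analysis a lambda, and the analysis just minimizes distortion + lambda*bits, whichever way those distortion and bit counts were produced.

    # Toy mode decision: ratecontrol has already picked lmbda; the analysis
    # simply minimizes distortion + lmbda * bits over the candidate modes.
    def pick_mode(candidates, lmbda):
        return min(candidates, key=lambda c: c[1] + lmbda * c[2])

    # (mode, distortion, bits) for one hypothetical macroblock
    mb = [("skip", 5200, 2), ("inter16x16", 1800, 60), ("intra4x4", 900, 190)]
    print(pick_mode(mb, lmbda=5))    # small lambda (high quality): intra4x4 wins
    print(pick_mode(mb, lmbda=100))  # large lambda (tight rate): the cheap skip wins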

DocAliG
28th February 2007, 17:39
The following is definitely how x264, xvid, and libavcodec work, and afaik also applies to JM. Though JM has a rather dysfunctional ratecontrol (it doesn't do VBR nor VBV-compliant CBR, only some nebulous approximation of CBR), so maybe JM is even more brain-damaged than I thought and doesn't work this way.

Yes, JM uses a Lagrangian for RDO as well, in much the same way. But JM doesn't support VBR, only CBR, for rate control. See JVT-Q042, on Karsten's IP page. And it conforms to the HRD described in the standard.

... The end result is that ratecontrol controls bitrate, and all other options control only quality and speed.

That's the point where we cannot agree. Lambdas are chosen specifically (via the Lagrangian equations) in RDO with respect to the rate. And that's quite logical; after all, that's Shannon.

But anyway, we are getting quite far from the point. I was just asking for comments on the two results: why, when I set the maxrate to 300 kbps and the nominal bitrate to 338 kbps, do I get 39% of the bitstream exceeding the max, and when I set the maxrate and the nominal rate both to 338 kbps, more than 88% of the bitstream exceeding the max? ;)

:thanks:

Manao
28th February 2007, 18:54
That's the point where we cannot agree. Lambdas are chosen specifically (via the Lagrangian equations) in RDO with respect to the rate.
Do you understand what you're saying?

But anyway, we are getting quite far from the point.
You still haven't told us whether, when making the graphs, you took the initial VBV occupancy into account. If you do it properly, it will offset the diagonal line upward, and the line should end up above the size curve.

DocAliG
1st March 2007, 11:29
Do you understand what you're saying ?

I hope so.

You still haven't told us whether, when making the graphs, you took the initial VBV occupancy into account. If you do it properly, it will offset the diagonal line upward, and the line should end up above the size curve.

The graph shows the bandwidth occupancy, i.e. how the data fluctuates inside the network. I do agree that if we take the VBV occupancy into account, the line will be shifted and end up slightly higher, but one of our constraints is not to buffer a large amount of data. It's true that I forgot to mention this point.

akupenguin
1st March 2007, 22:40
And it conforms to the HRD described in the standard.
Then what parameter in encoder.cfg lets me specify the VBV buffer size (or Coded Picture Buffer in h264's terminology)?

That's the point where we cannot agree. Lambdas are chosen specifically (via the Lagrangian equations) in RDO with respect to the rate. And that's quite logical; after all, that's Shannon.
Consider two encodes, one with "bitrate=300, RDO=off" and the other with "bitrate=300, RDO=on". Would you expect them both to be 300 kbps? If so, then RDO doesn't affect bitrate. If not, how can you call it anything other than a bug in ratecontrol?

and when I set the maxrate and the nominal rate both to 338 kbps, then I have more than 88% of the bitstream exceeding the max.
What Manao said.
But even if you do take initial VBV occupancy into account, that will only detect whether or not a stream is compliant. If you want a metric by which to measure how much it exceeds the limit, "% of the bitstream" is still meaningless. Consider: start with a stream that's just barely at the limit. If you increase the size of one frame near the beginning (and don't reduce the size of any other frame to compensate), then the stream will be 99% above the diagonal. If the frame you increase is near the end, then it'll be 1% above the diagonal. But both violated the limit by the same amount: 1 frame. (BTW, is this a common misconception? MSU made the same mistake in their H.264 codec comparison papers.)
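
To make that concrete, a toy version of the metric (made-up frame sizes, nothing taken from the attached graphs, and initial occupancy ignored just as in the disputed graphs):

    # "% of frames above the constant-rate diagonal" for a list of frame sizes.
    def percent_above_diagonal(sizes, rate_per_frame):
        cum, above = 0, 0
        for i, s in enumerate(sizes, start=1):
            cum += s
            if cum > i * rate_per_frame:
                above += 1
        return 100.0 * above / len(sizes)

    early = [10, 20] + [10] * 98     # one oversized frame near the start
    late = [10] * 99 + [20]          # the same oversized frame near the end
    print(percent_above_diagonal(early, 10))  # 99.0
    print(percent_above_diagonal(late, 10))   # 1.0, yet the same one-frame violation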

but one of our constraints is not to buffer a large amount of data.
VBV is required. There is no such thing as CBR without it.

DarkZell666
1st March 2007, 23:12
I'm wondering if he isn't confusing what we usually do to compare options using cqp or crf encodes: changing rdo options and measuring the filesize difference. Because obviously, target bitrate means target bitrate ... I wouldn't expect my 50MB-targeted encode to end up being 55MB without RDO options, and 45MB with RDO options :/

Hmm 2nd thoughts going on ...

In fact, RDO does modify the effectively obtained bitrate (by 0.1% or so), because any given frame can only be coded at certain sizes (those that map to the quantizers that are candidates for encoding that specific frame), and ratecontrol can't always compensate for those slight fluctuations efficiently by redistributing bits from one frame to another: the bits left unused by one frame (thanks to RDO) need to be given out somewhere else (to reach the target bitrate), but not all of those unused bits can effectively be distributed. If the quantity of bits saved is a tad smaller than what is necessary to encode the next frame at QP-1 (compared to the non-RDO encode), I guess x264 still gives those bits away (plus some pre-reserved bits it'll try to take back from later frames).

Am I somewhere near the truth here or not ? ^^ I mean ... ratecontrol isn't bit-precise between an RDO encode and a non-RDO encode. But I'm not sure if this very natural error margin is enough to explain what DocAliG is fighting against ...

akupenguin
1st March 2007, 23:15
Not bit-precise, but the error is much less than the size of one frame, so make that .0001% over a whole movie. And CBR modifies QP within a frame, so it's even closer. And that applies to any option, not just RDO.
And you're not supposed to measure filesize difference, unless quality is constant. So CABAC-vs-CAVLC is the only option where a pure filesize comparison is valid. Anything else has to compensate for a difference in quality, and it's just easier to let 2pass compensate the size instead.

DocAliG
2nd March 2007, 10:45
Oh, thanks everybody for the feedback. No fighting, just trying to understand a couple of things.

@akupenguin: You cannot set your own VBV buffer size, because JM calculates it automatically. This is another way to think about it. The DPB is the "mbuffer.c" file in the JM.

@DarkZell666: nothing like comparing file size... :) But thanks.

Anyway, many thanks to akupenguin and DarkZell666 for the discussion and the time you took to explain. Manao ran a couple of tests yesterday that may tell me a little more about my own tool and this CBR problem. So a big thank you to him for the time he spent on those tests.

@all: If you want, I can provide you with a token to test Joost TV (I need your email for that) ;).

Sergey A. Sablin
2nd March 2007, 13:49
Oh, thanks everybody for the feedback. No fighting, just trying to understand a couple of things.

@akupenguin: You cannot set your own VBV buffer size, because JM calculates it automatically. This is another way to think about it. The DPB is the "mbuffer.c" file in the JM.

It still looks to me like you're mixing up different buffers.
The DPB is the decoded picture buffer, and no one here is talking about it. That buffer is used for reference pictures and has nothing to do with bitrate, transmission, etc.
The CPB is the coded picture buffer: the buffer used in the decoder to hold arriving data that is waiting for its decoding time.

JM has neither strict HRD compliance nor CBR, as it doesn't write any padding. JM doesn't write HRD parameters either, so to verify a stream you have to manage all the parameters yourself.
And finally, rate control which produces a random CPB size is useless, since you need a definitely known size to transmit your data through a network and to correctly decode the signal on fixed-function chip decoders. There is no other way to think about it: the user has to specify the coded picture buffer size, stream rate and initial delay, and the encoder, using rate control, has to produce a stream with the required parameters; otherwise it is not RC, it is BS.

DocAliG
2nd March 2007, 16:06
It still looks to me like you're mixing up different buffers.
The DPB is the decoded picture buffer, and no one here is talking about it. That buffer is used for reference pictures and has nothing to do with bitrate, transmission, etc.
The CPB is the coded picture buffer: the buffer used in the decoder to hold arriving data that is waiting for its decoding time.

It's simply not comparable, because x264 and JM are two different codecs, built in two different ways.

JM has neither strict HRD compliance nor CBR, as it doesn't write any padding. JM doesn't write HRD parameters either, so to verify a stream you have to manage all the parameters yourself.

JM should only have CBR, and it should have HRD compliance as well.

And finally, rate control which produces a random CPB size is useless, since you need a definitely known size to transmit your data through a network and to correctly decode the signal on fixed-function chip decoders. There is no other way to think about it: the user has to specify the coded picture buffer size, stream rate and initial delay, and the encoder, using rate control, has to produce a stream with the required parameters; otherwise it is not RC, it is BS.

That's why the NAL was created. But it's true that in JM the user has to define everything himself. That's sometimes quite tough.

Sergey A. Sablin
2nd March 2007, 16:30
It's simply not comparable, because x264 and JM are two different codecs, built in two different ways.
They're quite comparable, as these two are just two different implementations of the same ISO/ITU-T specification, MPEG-4 Part 10 AVC/H.264. You are talking absolutely odd things.

JM should only have CBR, and it should have HRD compliance as well.

Maybe it should, but there is no overflow prevention (using filler SEI), so even if it is CBR, it is not compliant with the HRD model. I don't see any underflow checking either.

That's why the NAL was created. But it's true that in JM the user has to define everything himself. That's sometimes quite tough.

NAL units have nothing to do with HRD and any rate control. It is just packetizing.

DocAliG
2nd March 2007, 16:46
They're quite comparable, as these two are just two different implementations of the same ISO/ITU-T specification, MPEG-4 Part 10 AVC/H.264. You are talking absolutely odd things.

A standard is made to describe a BITSTREAM and a DECODER, never an encoder. You can devise any encoding you want, provided that the bitstream is formed correctly and the reference decoder can decode it. I don't know who is talking odd things, but...

NAL units have nothing to do with HRD and any rate control. It is just packetizing.

I know. I was just reacting to your sentence "...since you need a definitely known size to transmit your data through a network..."; that's why I said that the NAL was created: to adapt the stream and prepare it for the network. Or at least that's one reason why.

squid808
2nd March 2007, 16:56
DocAliG: the DPB (Decoded Picture Buffer) is part of the H.264 decoder specification and has nothing to do with rate control. And this is the meaning of DPB in JM. You are, indeed, talking very odd things.

DocAliG
2nd March 2007, 17:05
DocAliG: the DPB (Decoded Picture Buffer) is part of the H.264 decoder specification and has nothing to do with rate control. And this is the meaning of DPB in JM. You are, indeed, talking very odd things.

I wrote "The DPB is the "mbuffer.c" file in the JM.". Never talked about rate control having a link with DPB. Thanks.

squid808
2nd March 2007, 17:14
I wrote "The DPB is the "mbuffer.c" file in the JM.". Never talked about rate control having a link with DPB. Thanks.

OK. That makes me kind of curious: how is your sentence "The DPB is the "mbuffer.c" file in the JM." related to this thread?

DarkZell666
2nd March 2007, 19:20
And CBR modifies QP within a frame, so it's even closer.
Oops, I had almost forgotten that one; it makes my point less valid than I thought :) And I'll take it that by CBR you meant "ABR + activated VBV" too ^^ (you almost scared me off ;)).

DocAliG
8th March 2007, 14:54
After great help from Manao (and akupenguin), we finally sorted this CBR problem out (or at least, I understood why x264 works this way :D).

For those who are interested in CBR: the VBV is only compliant in single-pass mode, which was verified by Manao's small patch (and the little tool I used to produce the curves). When using two passes or more, the x264 VBV is no longer compliant, but that's not a problem, because the gain from CBR over several passes would be negligible.

If I have some spare time, I'll try to implement this compliance for 2 passes, just to check whether it can slightly increase the quality (out of curiosity...). Some bitrates are so small that even a small enhancement can make the difference.

A big thank you again to Manao and akupenguin for your time.