24th May 2003, 19:19   #1
crusty
Understanding a Quantization Matrix

Ok Folks, here is an explanation of what a matrix really does.

With XviD you can not only use the different built-in quantization matrices, H.263 and MPEG, but also custom matrices, made either by yourself or by someone else.

Since few people actually understand what a matrix does, I will try to give an explanation here that you can follow even if you're not a math expert.

You will all have heard about macroblocks by now. These are the 16x16 blocks of which a single MPEG-4 frame is composed.
Each macroblock is in turn made up of four 8x8 blocks grouped together, and these 8x8 blocks form the basis of MPEG-4 compression.

Instead of being composed of proper pixels, like a normal bitmap or a film still, a block is really a representation of a formula that tries to mimic the content of the original picture as closely as it can.

The human eye is much more sensitive to changes in brightness, or luminance, than it is to changes in color.
So MPEG-4 uses a color space in its file structure that assigns fewer bits to color changes than to brightness changes.

An 8x8 block is not made up of pixels. Instead it holds a single value that represents the average brightness (or color) of the block, while all the remaining values are mathematical representations of how much the block varies from that average.
To put it in other words: you have one basic mean, or average, value, and all the variation, or detail, of the picture is represented by the result of a certain formula.
The other places in the 8x8 block, which we invariably but inaccurately call pixels, represent different types of detail, or more precisely the amount of variation of that detail.

Let's take a look at one block:

X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X

The first place, in the upper left corner, represents the average, or mean, value. So if the whole block is dark brown or light red on average, this place says so.
Going from this place to the right or down, we get representations of the amount of variation from this value.
Now this is hard to grasp, so pay attention.
Going from left to right, or from top to bottom, the amount of detail each position stands for gets higher.
If you take the original picture of 8x8 pixels, the detail is transformed into values according to the frequency at which that detail is present.
Finer detail is represented by higher frequencies.
So, the further you go to the right (or down) in the block, the higher the frequency, and the finer the detail.
Let's take three examples:
An 8x8 picture with one big iron bar in it has little detail, so it has a very low frequency.
An 8x8 picture with four broomsticks in it has some detail, so it has a medium frequency.
An 8x8 picture with rain, a cornfield or hair in it has lots of detail, so it has a very high frequency.
As you can guess by now, the values in a block represent both the horizontal and vertical frequency in the original picture.
The formula that does the translation, or transformation, from detail to frequency is called the Discrete Cosine Transform, or DCT.

So when looking at a block in the way we have just come to understand it, we can see the following:

Code:
Average brightness and color of the whole block
                        I
                        I   Low frequency (bigger)detail
                        I   I
                        I   I     Medium frequency (normal)detail
                        I   I      I
                        I   I      I         Higher Frequency (fine)
                        I   I      I         detail
                        I   I      I         I
                        X--X--X--X--X--X--X--X
                        X--\--X--X--X--X--X--X
Low Frequency detail--- X--X--\--X--X--X--X--X
                        X--X--X--\--X--X--X--X
Medium Frequency detail X--X--X--X--\--X--X--X
                        X--X--X--X--X--\--X--X
                        X--X--X--X--X--X--\--X
High Frequency detail---X--X--X--X--X--X--X--A
                                             I
                                             I
                           Mathematical representation of the finest
                           detail, both horizontally and vertically
EDIT: Finally used the code tag...
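If you prefer to see this in numbers rather than as a diagram, here is a small sketch (Python with NumPy/SciPy, purely for illustration; the encoder itself of course uses its own DCT code) that runs two 8x8 blocks through a 2D DCT: a completely flat one (the "iron bar" case) and one with detail flipping on every single pixel (the "hair" case):

Code:
import numpy as np
from scipy.fft import dctn

# A flat 8x8 block: the same brightness everywhere (the "iron bar" case).
flat = np.full((8, 8), 128.0)

# A finely detailed 8x8 block: brightness flipping on every single pixel
# (think rain, a cornfield or hair).
y, x = np.indices((8, 8))
fine = 128.0 + 50.0 * ((x + y) % 2)

for name, block in (("flat", flat), ("fine", fine)):
    coeffs = dctn(block, norm="ortho")  # 2D Discrete Cosine Transform
    print(name)
    print(np.round(coeffs).astype(int))

The flat block comes out as one single value in the top-left (average) corner with zeros everywhere else; the checkerboard block puts most of its energy toward the bottom-right (high-frequency) corner, just as the diagram above suggests.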

Ok, now we understand how a block is built.
So where do quantization matrices come into play?

A quantization matrix (QM from now on) looks something like this:

08 16 19 22 26 27 29 34
16 16 22 24 27 29 34 37
19 22 26 27 29 34 34 38
22 22 26 27 29 34 37 40
22 26 27 29 32 35 40 48
26 27 29 32 35 40 48 58
26 27 29 34 38 46 56 69
27 29 35 38 46 56 69 83

Now there is actually a rather complex process behind this, but I'll try to describe it in a simple manner:
Every value in the QM acts as a threshold for the corresponding DCT frequency.
All detail below the threshold will not be regarded as detail and will NOT be encoded. It is simply discarded.
Now you can understand why XviD is a so-called lossy codec.
It throws detail away. How much detail is thrown away is determined by the QM.
As you can see, the farther to the right and the farther down you go, the higher the threshold gets.
The finer the detail, the more it has to stand out from the rest of the picture to be encoded rather than thrown away.
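To put some numbers on that threshold idea, here is a toy sketch (Python/NumPy again, purely illustrative, using a simple divide-and-truncate view of quantization; the real MPEG-4 rules add the frame quantizer and extra scaling and rounding on top of this) that runs a made-up block of DCT coefficients through the example QM above:

Code:
import numpy as np

# The example quantization matrix from above.
QM = np.array([
    [ 8, 16, 19, 22, 26, 27, 29, 34],
    [16, 16, 22, 24, 27, 29, 34, 37],
    [19, 22, 26, 27, 29, 34, 34, 38],
    [22, 22, 26, 27, 29, 34, 37, 40],
    [22, 26, 27, 29, 32, 35, 40, 48],
    [26, 27, 29, 32, 35, 40, 48, 58],
    [26, 27, 29, 34, 38, 46, 56, 69],
    [27, 29, 35, 38, 46, 56, 69, 83],
])

def quantize(dct_coeffs, qm):
    # Divide each DCT coefficient by its matrix entry and truncate.
    # Anything smaller (in magnitude) than its matrix entry becomes 0
    # and is never encoded -- that is the "threshold" effect.
    return np.fix(dct_coeffs / qm).astype(int)

def dequantize(levels, qm):
    # The decoder can only multiply back; the discarded remainder is gone.
    return levels * qm

# A made-up block of DCT coefficients: strong low frequencies (60),
# weak high frequencies (20).
dct = np.where(np.add.outer(np.arange(8), np.arange(8)) < 4, 60, 20)

levels = quantize(dct, QM)
print(levels)                  # everything outside the low-frequency corner is zero
print(dequantize(levels, QM))  # what the decoder can reconstruct

Every 20 in that block falls below its matrix value and simply vanishes, while the bigger low-frequency values survive (and even those come back slightly off). Higher matrix values mean more positions forced to zero, which means better compressibility at the price of detail.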

So say you have a picture of a girl with long blond hair standing in front of a very light grey wall and behind two prison bars (don't we all love to see that now).
Of course it's hard to portray that in an 8x8 picture, but bear with me for a minute.

The blond girl and the wall have little contrast between them, so the difference between the average brightness and color values and the extremes will not be that large.
The values will be lower and will not go over the threshold that often. That means less difference has to be encoded and the picture will have high compressibility.
If the wall had been black, the contrast would be much higher and the difference between the average value (sort-of-grey) and the extremes (blond and black) would be much bigger. So many more values would have gone over the threshold and would have had to be encoded. So higher-contrast scenes result in lower compressibility, which we already knew of course.
Now the two black prison bars in front of the girl are detail, albeit not very fine detail. So they get a low frequency, and they will be encoded if their difference from the average goes over the threshold.
So assuming they're not blond, they will be encoded.
Now the girl has hair, and the texture of hair is of course very fine. So the hair has a lot of detail and gets a very high frequency.
As you can see in our QM, the threshold for high frequencies is much higher than for low frequencies. So the fine detail (the hair) has to differ much more from the average values to be encoded.
So unless the difference is very high, which in this case we assume it isn't, the details of the hair won't be encoded.
The matter would be different if she had something like coloured streaks in her hair, which would increase the contrast.

So the end result is that the finer the detail, the bigger the contrast between this detail and the average values of the picture needs to be for it to be encoded.
This is of course on a per-block basis and not for the picture as a whole, which generally consists of more than one 8x8 block. Let's hope you can see through the simplification.

Now you can understand why some matrices soften the picture, like H.263, while others, like MPEG, produce a sharper picture.
A sharper matrix simply gives finer detail a lower threshold and is therefore more likely to encode finer detail, at the price of compressibility.
You can also see that the QM I took as an example isn't a very high-compressibility matrix; the values are rather low in general.

Some other points:
-End credits usually have very little detail, so you could design a QM especially for them, with VERY high compressibility.

-A heavy-compression matrix simply ups all the finer-detail values so that less of that detail gets encoded.

-You could design specific matrices for specific types of content.
You could design matrices especially for sci-fi space adventures, anime and animation movies, or jungle scenes.

-If you knew the exact frequency of interlacing artifacts, you could up their threshold to filter them out!

-The same might work for other types of noise and artifacts.

-I don't know whether the one QM is meant for both luminance and color information (I assume it is), but separate QMs for luminance and color would give more tweakability. (I don't know if that would be MPEG-4 compliant, though, or whether it's possible at all.)

--------------------------------------------------------------------
Well that's it folks!
Hope I didn't make too many errors, and please correct me if and where I'm wrong.

Cheers,
Crusty

EDIT:
Edited some spelling errors
Removed part about Modulated QM
Reedit: altered macroblock to block. A macroblock is 4 8x8 blocks grouped together.
24th May 2003, 20:37   #2
mf
Try using the [code] tag for nonproportional text.
24th May 2003, 21:15   #3
Acaila
Quote:
Modulated QM uses different matrices for different types of scenes within one clip. For different parts of the clip, it takes the QM with the highest compression for that particular clip.
- Modulated uses the MPEG matrix when a frame's quantizer is <=3, and the H.263 matrix when the quantizer is >=4.

- New Modulated HQ uses the exact opposite, so the H.263 matrix when a frame's quantizer is <=3, and the MPEG matrix when the quantizer is >=4.

So it's not as smart as you might have hoped, but it's still quite effective.
24th May 2003, 21:20   #4
crusty
Ok, removed that part...
Rest of it is OK?
24th May 2003, 21:30   #5
Acaila
As far as I know everything else is exactly correct.
25th May 2003, 00:27   #6
snowbeach
Very good explanation, crusty! Maybe I should add this to my guide, if I'm allowed?
25th May 2003, 01:07   #7
MrBunny
Re: Understanding a Quantization Matrix

Quote:
Originally posted by crusty
Some other points:
-End credits usually have very little detail, so you could design a QM especially for them, with VERY high compressibility.
As Syskin says in this post: http://forum.doom9.org/showthread.ph...827#post305827, changing quant types (in this case to a new quant matrix for the credits) is not MPEG-4 compliant, though it is XviD-compliant if it happens on an I-frame. Just a little something to note.

Other than that, it's a really nice comprehensive guide. You also might want to make a quick reference to quantization error (from quantization rounding). It doesn't have much to do with compressibility, but might affect image quality.

Keep up the good work
25th May 2003, 01:44   #8
crusty
@Snowbeach:
Permission granted
Have fun with it.

MrBunny
Quote:
You also might want to make a quick reference to quantization error (from quantization rounding). It doesn't have much to do with compressibility, but might affect image quality.
I would if I understood it. I know that rounding exists, but I don't know its effect.

BTW, is your first name Energizer?
25th May 2003, 03:00   #9
MrBunny
From what I recall of quantization and inverse quantization:

Quantization: quantized_coeff = floor(DCT_coeff / QM_coeff)
Inverse quantization: reconstructed_coeff = (quantized_coeff + 0.5) * QM_coeff

The problem arises with the rounding error involved in the floor and using the midpoint value of each coefficient to reconstruct.

Consider the DCT coefficients 75, 88, and 99 with a QM coefficient of 25. Each will result in a quantized coefficient of 3 and each will reconstruct to 87.5. As you can see, there is significant error.
However, I don't visualize well in the frequency domain and don't know how much, nor exactly what type of, effect it would have in the spatial domain (after iDCT).
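To put actual numbers on that, here is a little Python sketch of the floor/midpoint formulas above (which is the JPEG-style view of quantization; MPEG's actual rounding rules differ a bit):

Code:
import math

QM_COEFF = 25

def quantize(dct_coeff, qm_coeff):
    # Floor-style quantization as in the formula above.
    return math.floor(dct_coeff / qm_coeff)

def dequantize(level, qm_coeff):
    # Reconstruct with the midpoint of the quantization bucket.
    return (level + 0.5) * qm_coeff

for dct_coeff in (75, 88, 99):
    level = quantize(dct_coeff, QM_COEFF)
    rec = dequantize(level, QM_COEFF)
    print(dct_coeff, "->", level, "->", rec, "(error:", rec - dct_coeff, ")")

All three land in the same bucket and come back as 87.5, with errors ranging from half a unit to more than twelve.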

I guess at least one of the lessons that should be taken away is that you should be really careful about putting really high coefficients in a QM, since large coefficients will result in large error (when the QM fails to zero that particular coefficient). If you do want to simply zero a certain number of the highest-frequency values, you might want to have a look at the AviSynth-based DCTfilter (http://forum.doom9.org/showthread.php?s=&threadid=38539 and http://forum.doom9.org/showthread.php?s=&threadid=45695) and then use a "normal" quant type.

I think quantization error is one of the reasons QMs look like they do (smaller values for low freq and increasingly larger for higher freq). You need to have the most accurate reconstruction possible of the most important coefficients (thus small quants/low error for low freq), whereas it is less important for the less important high freq coeffs (high quants for high freq with hopes of zeroing).

I hope this isn't too confusing
25th May 2003, 10:42   #10
Acaila
@MrBunny:

IIRC JPEG uses flooring, but MPEG uses rounding after quantization. Mainly because rounding to nearest integer results in a smaller quantization error.

Quote:
...is that you should be really careful with putting really high coefficients in a QM, since large coeffs will result in large error
But high values in the matrix will drive more coefficients to zero. Because quantization matrix values are 8-bit values, I believe the highest you can put in there is 255. If you want a certain coefficient to become zero, why shouldn't you just put 255 in that position?

25th May 2003, 17:49   #11
MrBunny
@Acaila

I said to be careful about doing it, not to never do it at all.
And I was thinking in terms of JPEG compression; I was unaware that MPEG handled quantization differently.

Can DCT'ed coefficients be larger than 255 (or is it just 128 if it rounds)? If they can, then even a QM with a 255 coefficient isn't guaranteed to zero them. In that case I still think that using the DCTfilter to zero coefficients is a better alternative than using a custom quant matrix. Otherwise, putting 255 there should be sufficient.
25th May 2003, 18:33   #12
Acaila
Yes, in theory DCT coefficients can easily be larger than that; however, except for the top-left DC coefficient, almost all other AC coefficients will be quite low in practice. So I don't see anything wrong with using that value in a matrix, but since I've never tried it out myself I could be wrong, of course.
25th May 2003, 19:42   #13
crusty
So higher QM values will result in more variation being thrown away in the first place, but will work with a bigger margin of error when they don't throw the variation away.
But as long as you use the higher values only for finer detail, it really doesn't matter that much, now does it?
I don't know for sure, but I assume that higher rounding errors are less of a problem for finer detail.
The end effect would be that finer detail, if allowed through by the QM value, would simply have a bigger luminance error.
Is the human eye less sensitive to the luminance of fine detail?

The only type of scene that I can think of, off the top of my head, that would be affected would be dark skies with stars in them, or 'proper' space scenes.
This is because a real night sky with stars in it has high-contrast fine detail.
The stars are basically points, but there is a difference in luminance that is very easily spotted. They vary easily from the average value, which would be the mean value of the sky (very dark blue), so they would easily get above even a high threshold, but they would then have a pretty large margin of error in luminance.
If you wanted to portray this type of scene realistically, you would have to find out the DCT frequency of the stars and then lower the QM value to allow for a smaller margin of error.
Otherwise the stars will look washed out and you might lose the 'twinkling' effect that a real-life night sky has.

Mind you, DVDs use MPEG-2 compression, which also uses DCT and macroblocks, so unless they specifically used a smaller QM value for the conversion from movie to DVD, this type of scene will look washed out on DVD anyway.
Also, most artificially created (special effects) space scenes don't come close to realistically portraying stars. When astronauts first went up into space, they found the stars to be very bright and to have none of the 'twinkling' effect whatsoever, which of course is created by the atmosphere.
But in space the stars are so bright compared to the darkness around them that no TV or monitor can come close to portraying that kind of contrast.

Another thing:
I think that especially for anime movies many of the QM values could be higher, because the contrast is usually much higher. I don't know for sure because I have never done an anime movie before, but I'll probably do Akira somewhere this month and I'll try it with different custom matrices.
I suspect a QM with lower values for low frequencies and higher values for high frequencies will give a better result. And because anime in general has much sharper contrast than real-life footage, all the values could probably be higher to start with.
It might also be possible to find a 'sweet spot' for each different anime movie or series, because one movie could use finer lines than another.
To all: please try custom matrices based on this and report the effect.


Question to the developers:
Is mosquito noise around sharp edges created by using too high frequencies for these edges?
I can imagine a very bright thin line getting a higher frequency than a very bright thick line, ergo the block with a thin line getting additional thin lines in decoding because of the higher frequency used.
Is this correct, or is it just plain nonsense?
If it's correct, then I assume some form of post-processing filter could be made to detect this type of noise and remove it, not just in XviD but in all MPEG-4.
25th May 2003, 20:34   #14
mf
Quote:
Originally posted by crusty
Question to the developers:
Is mosquito noise around sharp edges created by using too high frequencies for these edges?
I can imagine a very bright thin line getting a higher frequency than a very bright thick line, ergo the block with a thin line getting additional thin lines in decoding because of the higher frequency used.
Is this correct, or is it just plain nonsense?
If it's correct, then I assume some form of post-processing filter could be made to detect this type of noise and remove it, not just in XviD but in all MPEG-4.
Hah! Finally I can engage in this conversation as well. Ringing around sharp edges is created by dropping coefficients.
26th May 2003, 01:30   #15
crusty
Dropping coefficients?

Could you elaborate on that?
26th May 2003, 02:42   #16
MrBunny
I believe he means zeroing coefficients (as we've been calling it).

It makes sense. Edges are typically not very "average" with respect to their surroundings. Thus they are not well covered by the DC coefficient and the other lower-frequency components, and they generally require a number of the higher-frequency components to be represented correctly. When some of these components get zeroed, the edge gets eroded (less sharp, less defined, and it unsettles the surrounding area), resulting in the mosquito noise we all hate.
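A quick way to see that erosion in numbers (a Python/SciPy toy example that just zeroes coefficients by hand instead of going through a real quantizer): take a block with one sharp edge, throw away its higher-frequency coefficients, and transform back:

Code:
import numpy as np
from scipy.fft import dctn, idctn

# An 8x8 block with one sharp vertical edge: dark left half, bright right half.
edge = np.zeros((8, 8))
edge[:, 4:] = 200.0

# Transform, keep only the low-frequency 4x4 corner, transform back --
# a crude stand-in for heavy quantization of the high frequencies.
coeffs = dctn(edge, norm="ortho")
coeffs[4:, :] = 0
coeffs[:, 4:] = 0
back = idctn(coeffs, norm="ortho")

print(np.round(back[0]).astype(int))  # one row of the reconstructed block

Instead of 0 0 0 0 200 200 200 200 you get a row that dips below zero on the dark side and overshoots 200 on the bright side; that over- and undershoot smeared around the edge is exactly the ringing/mosquito haze.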
26th May 2003, 09:24   #17
Didée
A little important point

After so much elaboration about which cells of the quantization table correspond to which frequencies, and to what kind of image detail, I feel the need to remind you of this:

Everything written above is generally correct for intra-frames (resp. intra-blocks) only.
In I-frames the actual image gets coded, and therefore all the frequency stuff is directly related to image detail.

Now, in the usual way we all encode MPEG-4, most of the video stream (~98%) is P- and/or B-frames. And for these, it's quite a different story.
What gets DCT'ed is the image after motion compensation, and therefore it is not possible to directly draw the conclusion "fine detail" -> "high frequency", or the other way round. For example, it is perfectly possible that you have a block with pretty fine detail, but that block gets caught very nicely by ME, so the result that gets DCT'ed consists only of low frequencies!

Because of that, the relation between quantization coefficients and image detail is not as direct as our intuition would suggest. Keep that in mind!

Another point in this context:
For the above reasons, it makes a big difference whether you "zero out" some frequencies with the encoder's quantization table (so: after ME) or with Tom's DCTfilter (before ME).
Zeroing by DCTfilter will directly remove image detail, whereas zeroing by the encoder's matrix will only remove the differences that are left over after ME. Those are two entirely different pairs of shoes!
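To make that difference concrete, here is a tiny toy sketch (same Python/SciPy illustration style as above, with a made-up block full of detail that ME happens to predict perfectly):

Code:
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)  # a block full of fine detail
prediction = block.copy()                           # ...which ME catches perfectly

# What DCTfilter works on: the picture itself. Plenty of high-frequency
# coefficients here, so zeroing them removes real image detail.
print(np.round(dctn(block, norm="ortho")).astype(int))

# What the encoder's matrix works on: the residual left after motion
# compensation. Here it is all zeros, so there is nothing left to throw away.
print(np.round(dctn(block - prediction, norm="ortho")).astype(int))

Same block, same detail, but the two "zeroing" steps never see the same data.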


Regards

Didée
26th May 2003, 09:46   #18
kilg0r3
So what do we learn from this regarding the relation between an intra cell and the corresponding P-frame cell?

After all, there are matrices that distinguish between these two. Others, however, have the same value for I- and P-cells. Any general suggestion as to which might be the wiser way to go?
26th May 2003, 18:38   #19
crusty
MrBunny:
Quote:
When some of these components get zeroed, the edge gets eroded (less sharp, less defined and unsettles the surrounding area) resulting in the mosquito noise we all hate.
So, if higher QM values for higher frequencies result in less accurate edge compression and more mosquito noise, I suggest the following to the developers:
Insert some form of 'edge detection algorithm' between the DCT and quantization, at least for the higher frequencies. That way, if normal picture content is below the threshold it will not be encoded, but if there is an edge in the picture, the resulting DCT value would get an additional bonus, so it would go over the threshold more easily, thereby reducing noise.
It would probably hurt compressibility a bit, but it would result in less mosquito noise. It would be part of the encoding process only, so it would not break MPEG-4 compatibility.
Call it 'DCT edge detection' or something and allow the user to adjust its 'DCT edge bonus'.
The question is of course whether the improvement in quality would outweigh the cost in bitrate, but there's only one way to find out: program it and let us test it.
Nic, SysKin, Koepi and others... any thoughts on this?

Didée:
Quote:
Zeroing by DCTfilter will directly remove image detail, whereas zeroing by encoder's matrix will remove the differences that are left over by ME. That are two different pairs of shoes!
This leads me to conclude that, since QPel works with smaller motions than HPel, QPel will be less efficient when using high QM values, because the effect of QPel will be nullified by the high thresholds of the QM. The minimum number of bits used by QPel will remain the same, though.
Am I right in this?

Quote:
What gets DCT'ed is the image after the motion compensation, and therefore it is not possible to directly draw the conclusion "fine detail" -> "high frequency", or the other way round. For example, it is perfectly possible that you have a block with pretty fine detail, but that block gets very nicely catched by ME, so the result that gets DCT'ed consists only out of low frequencies!
Isn't it the case that when a picture's content is caught by ME, the corresponding macroblock doesn't contain ANY texture bits, just motion vectors?

Kilg0r3:
Quote:
After all, there are matrices that distinguish between these two. Others, however, have the same value for I- and P-cells. Any general suggestion as to which might be the wiser way to go?
If the DCT in P- and B-frames generally results in low frequencies, low settings for the high frequencies seem to me to have little or no influence on the end result. Maybe a higher threshold for these could lead to higher compressibility without too much quality loss.
Also, if high QM values for high frequencies counter the effect of QPel, I would suggest not using QPel with high-compression matrices, since the number of bits required by the QPel technique may not be outweighed by its beneficial effect.
The higher rounding errors created by high QM values would then also make QPel less interesting.

So, to sum up:
1 - Higher QM values will lead to bigger rounding errors, nullifying subtle luminance variations in keyframes.
2 - QPel 'might' be useless with high QM values for high frequencies. Don't use QPel with high-compression QMs, at least not with high-compression P/B-frame QMs.
3 - Using high values for high frequencies in the B/P-frame QM could improve compressibility. Lowering the high-frequency QM values for I-frames could be used to give more bits to fine detail in keyframes, possibly creating better values for the B/P-frame process to work with.
4 - Using the same QM for both inter- and intra-frames is less efficient than using different QMs, because of the difference in the methods used.
Any discussion on these points would be greatly appreciated.

Also, does anybody have the QMs for DivX 3 Low-motion and Fast-motion?
I wonder if they are any different.
It would also be very interesting to see the effects of different QMs on low- and high-motion content.

Too bad I'm trying to flush the contents of my hard drive because I'm installing a new motherboard this week... otherwise I would be experimenting with this stuff all day long.
Only 30 movies to go...
26th May 2003, 18:57   #20
Acaila
Quote:
This leads me to conclude that, since QPel works with smaller motions than HPel, QPel will be less efficient when using high QM values, because the effect of QPel will be nullified by the high thresholds of the QM. The minimum number of bits used by QPel will remain the same, though.
Am I right in this?
I don't know why you brought QPel into this, for it has next to nothing to do with quantization. QPel increases the sensitivity of motion estimation. Once that process is finished, the motion-estimated frame is subtracted from the current frame, with the result being a residual frame. It is on this residual frame that quantization is applied. QPel works before the QM, so there's no way that a high QM could nullify the effect of QPel (more like the other way around, if anything).
QPel helps to catch more motion, so it decreases the size of the residual frame, and thus also decreases the amount of effect the QM has. If ME were to work 100%, the residual frame would be empty and no quantization (of the residual frame, anyway) could take place.
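A little made-up illustration of that last point (a Python/NumPy sketch; the "rough" and "good" predictions are just random stand-ins for what worse and better ME might leave behind, not real half-pel/quarter-pel data, and the flat matrix of 20s is invented for the demo):

Code:
import numpy as np
from scipy.fft import dctn

def nonzero_after_quant(residual, qm):
    # Quantize the DCT of a residual block (simple divide-and-truncate view)
    # and count how many coefficients survive.
    return np.count_nonzero(np.fix(dctn(residual, norm="ortho") / qm))

qm = np.full((8, 8), 20.0)          # a made-up flat matrix, just for the demo
rng = np.random.default_rng(1)
current = rng.integers(0, 256, (8, 8)).astype(float)

rough_prediction = current + rng.normal(0, 30, (8, 8))  # ME misses by a lot
good_prediction  = current + rng.normal(0, 5, (8, 8))   # ME misses by very little

for name, pred in (("rough ME", rough_prediction), ("good ME", good_prediction)):
    print(name, "-> coefficients left to encode:",
          nonzero_after_quant(current - pred, qm))

The better the prediction, the smaller the residual, and the fewer coefficients are left for the matrix to act on at all.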

Quote:
3 - Using high values for high frequencies in the B/P-frame QM could improve compressibility.
And that's exactly what all matrices already do...
And since using values that are too high causes ringing, as mentioned before, I believe the standard matrices are already quite well made. The only reason I could see for anyone to use a custom QM is to increase the amount of detail/size of a video (since for higher compression you could just as well use a higher quant; no need for a custom matrix).

Quote:
Also, does anybody have the QMs for DivX 3 Low-motion and Fast-motion?
As far as I know the two flavors of DivX 3 have identical quantization matrices. The only difference is the quant ranges they can use. I believe low-motion could use anything from 2-31, while the fast-motion codec could only use 4-31, but I'm not sure those ranges are correct.
