Old 4th January 2012, 09:40   #18
NikosD
Quote:
Originally Posted by nevcairiel View Post
The speed of the card is of little importance for progressive video.
The Video decoder is separate from the 3D engine, and always runs at the same speed. It's not faster in faster cards.
Assuming there is enough memory bandwidth, every card using VP5 should run at the same speed.
This may be true for the video decoder hardware inside Nvidia and ATI cards (with the exception of my Frankenstein (6)750 card), but it's not true for Intel video hardware.

For the ATI 5xxx series, I know the UVD2.2 clocks are 400 MHz for the video processor and 900 MHz for the memory.

For the AMD 6xxx series (including the 67xx cards), I assume UVD2.2/UVD3 is clocked at the same 400/900 MHz, judging from the performance I see from UVD3.

On my GT440 I see that VP4 is clocked at the card's maximum 3D clocks.
I know nothing about other VP4 cards, but I believe nevcairiel when he says they all run at the same clock speed (frequency).

So for progressive video (not interlaced), the pure video decoding performance will be the same for every AMD/Nvidia graphics card using the same video processor, assuming there are no BIOS/driver tricks from AMD/Nvidia for specific models and cards (for example, favoring some models over others by clocking the video processor at different speeds).

For interlaced video, the deinterlacing is executed by the GPU shaders (or computational units, or whatever name you prefer).
So for interlaced video, the performance of the GPU in general matters, not the video processor alone.
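Deinterlacing really is just per-pixel arithmetic, which is why it maps onto the shaders rather than the fixed-function decoder. As a toy illustration (plain Python, not any driver's actual code), a simple "bob" deinterlacer looks like this:

```python
# Toy bob deinterlacer: a minimal sketch of the kind of per-pixel work
# the GPU shaders (not the fixed-function video decoder) perform when
# deinterlacing. Frames are plain 2D lists of luma values here; real
# drivers run this kind of loop across thousands of shader cores.

def bob_deinterlace(field, parity):
    """Expand one field (every other line of a frame) to a full frame
    by interpolating the missing lines. parity 0 = top field."""
    height = len(field) * 2
    width = len(field[0])
    frame = [[0] * width for _ in range(height)]
    for y, line in enumerate(field):
        frame[2 * y + parity] = list(line)      # copy the real lines
    for y in range(parity ^ 1, height, 2):      # fill the missing lines
        above = frame[y - 1] if y > 0 else frame[y + 1]
        below = frame[y + 1] if y < height - 1 else frame[y - 1]
        frame[y] = [(a + b) // 2 for a, b in zip(above, below)]
    return frame

top_field = [[10, 20], [30, 40]]   # lines 0 and 2 of a 4-line frame
print(bob_deinterlace(top_field, 0))
# -> [[10, 20], [20, 30], [30, 40], [30, 40]]
```

Real deinterlacers (motion adaptive, vector adaptive) are far more sophisticated, but they are the same kind of embarrassingly parallel per-pixel work, which is why a faster GPU with more shaders deinterlaces faster.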

GPU performance also matters for video post-processing like de-noise, de-blocking, edge enhancement, etc.
These post-processing filters, and many more, are executed by the GPU shaders at the hardware/driver level on AMD cards.
Of course, filters of this kind can also run in software, for example via ffdshow on the CPU, if you have a weak graphics card and a powerful processor.

For Intel, the QuickSync video engine runs at the same speed as the GPU inside the processor, which varies from 650 MHz on the low-end processors up to 1350 MHz on the upper-class processors running in GPU turbo mode.
For example, my Core i5-2400 runs QS from 850 MHz up to 1100 MHz, while the Core i7-2600K runs the same QS from 850 MHz up to 1350 MHz.
So for Intel hardware, the choice of CPU/GPU does make some difference to video decoder performance.
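If you assume QuickSync decode throughput scales roughly linearly with the GPU clock (an assumption, not something I've measured), the clocks quoted above give a quick back-of-the-envelope spread between Sandy Bridge models:

```python
# Back-of-the-envelope only: this assumes decode throughput scales
# linearly with the QuickSync/GPU turbo clock, which is an assumption,
# not a measurement.

clocks_mhz = {                  # max turbo clocks quoted in the post
    "low-end SNB": 650,
    "Core i5-2400": 1100,
    "Core i7-2600K": 1350,
}

reference = clocks_mhz["Core i7-2600K"]
for name, mhz in clocks_mhz.items():
    print(f"{name}: {mhz} MHz -> {mhz / reference:.0%} of a 2600K")
```

Even under that simple linear assumption the spread is only about 2x between the lowest and highest SNB parts, and much less between typical desktop models, so the Intel decoder differences are real but not dramatic.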

Quote:
Originally Posted by nevcairiel View Post
Anyhow, NVIDIA said that VP5 would be double the speed of VP4 approximately.
Sadly, that's still way below the Intel decoder. And IVB, which is coming in 3-4 months, will once again increase that performance.
From Wanezhiling's figures it seems that for "easy" clips - meaning clips with a low bitrate, like 4. Girls - VP5 has less than double the performance of VP4.
But for "heavy" clips with huge bitrates, like 9. Ducks, VP5 has more than 3 times the performance of VP4, which is great.
VP5 seems to be a very well balanced video processor, pushing the performance where it is needed - at the large bitrates required by 4K x 2K and specially encoded clips, like clips 7 to 10 in my collection.
And for those clips - 7 to 10 - the performance of VP5 is close to 1st generation QS.

Of course, Ivy Bridge will widen the gap even more, in order to support multiple 4K x 2K streams simultaneously as well as 4K x 4K (square resolution).
__________________
Win 10 x64 (19042.572) - Core i5-2400 - Radeon RX 470 (20.10.1)
HEVC decoding benchmarks
H.264 DXVA Benchmarks for all

Last edited by NikosD; 4th January 2012 at 10:40.