4th August 2010, 20:02   #39
ForceX
Registered User
 
Join Date: Oct 2006
Posts: 150
Quote:
Originally Posted by TheImperial2004 View Post
Totally agree. But!

I believe the major issue here is syncing the data between two different entities. What if the GPU is simply too fast for the CPU to keep up with? We will, of course, still need the CPU to do some of the calculations. If the CPU is 9x slower than the GPU, then what's the point? In that case, the GPU has to wait for the CPU to respond and complete its part of the job, and *only* then can the GPU continue doing its part. Lag is the major issue here. Feel free to correct me, though.
Of course, if you pair up the latest and greatest GPU with an obsolete, slow CPU you're going to hit a bottleneck. One assumes you're not going to buy the best of one component and pair it with the lowest-end versions of everything else.

One way to avoid hitting a bottleneck would be to program the encoder to perform most of the (decoding and) encoding on the GPU, with the CPU used only for I/O and task scheduling. I don't see that happening anytime soon due to technical constraints. "Running out of work" during thread waits is already a big problem in the multicore CPU world, and the people involved are investing huge amounts of time in improving caching, branch prediction and so on. It's just that they've never tried to coordinate that tightly with the GPU. But as CPUs and GPUs are getting "fused" (lol, math co-processor redux), using GPUs to accelerate compression is only a matter of time.
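To make that concrete, here's a minimal sketch of the kind of pipeline I mean (CUDA, with a dummy kernel standing in for the real encoding work; encode_frame and read_frame are made-up placeholders, not any actual encoder's API). Two pinned host buffers and two streams ping-pong, so the CPU's only jobs are I/O and queueing work, and the GPU only stalls if the CPU genuinely can't feed it:

Code:
// Hypothetical sketch: double-buffered CPU->GPU frame pipeline.
// The CPU only reads frames and queues work; the GPU does the "encoding".
#include <cuda_runtime.h>
#include <cstdio>
#include <cstring>

#define FRAME_BYTES (1920 * 1080)   // one 8-bit luma plane, for illustration
#define NUM_FRAMES  16

// Stand-in for a real encoding stage; motion search etc. would go here.
__global__ void encode_frame(const unsigned char *in, unsigned char *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] >> 1;        // dummy transform
}

// Stand-in for disk/demuxer I/O on the CPU side.
static void read_frame(unsigned char *dst, int frame)
{
    memset(dst, frame & 0xff, FRAME_BYTES);
}

int main(void)
{
    unsigned char *h_in[2], *d_in[2], *d_out[2];
    cudaStream_t stream[2];

    for (int b = 0; b < 2; b++) {
        cudaMallocHost(&h_in[b], FRAME_BYTES);   // pinned memory for async copies
        cudaMalloc(&d_in[b],  FRAME_BYTES);
        cudaMalloc(&d_out[b], FRAME_BYTES);
        cudaStreamCreate(&stream[b]);
    }

    for (int f = 0; f < NUM_FRAMES; f++) {
        int b = f & 1;                           // ping-pong between two buffers
        cudaStreamSynchronize(stream[b]);        // wait only for this buffer's last use
        read_frame(h_in[b], f);                  // CPU does I/O while the other stream runs
        cudaMemcpyAsync(d_in[b], h_in[b], FRAME_BYTES,
                        cudaMemcpyHostToDevice, stream[b]);
        encode_frame<<<(FRAME_BYTES + 255) / 256, 256, 0, stream[b]>>>(
            d_in[b], d_out[b], FRAME_BYTES);
    }
    cudaDeviceSynchronize();
    printf("encoded %d frames\n", NUM_FRAMES);
    return 0;
}

As long as read_frame (the CPU side) finishes before the kernel running on the other stream does, the GPU never runs out of work; if it doesn't, you're back to exactly the bottleneck TheImperial2004 describes.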