4th August 2010, 18:35   #38
mariush
Registered User
 
Join Date: Dec 2008
Posts: 589
Quote:
Originally Posted by TheImperial2004
Totally agree. But!

I believe that the major issue here is synchronizing the data between two different entities. What if the GPU is just too fast for the CPU to keep up with? Of course we will need the CPU to do some calculations. If the CPU is 9x slower than the GPU, then what's the point? In that case, the GPU would have to wait for the CPU to respond and complete its part of the job; *only* then would the GPU continue doing its part. Lag is the major issue here. Feel free to correct me though.
GPUs are indeed much faster than CPUs for this kind of parallel work, but the trend is for CPUs to gain more cores, and there is already a push to integrate the GPU and CPU on a single die, which gives much faster communication paths between them.

As for what you're describing, I'm not sure it can be improved much as long as the encoder keeps "thinking" that it's supposed to receive a series of frames and must process them as they arrive.

Sure, that model is needed for real-time encoding and streaming, but in lots of cases the whole content to be encoded is already physically there.

Imagine, for example, that you have a 10 GB video that needs re-encoding and an 8-core processor. You could quickly parse the first 512 MB of the content, split it into 8 smaller chunks, and upload the data to the video card. The GPU does its calculations on that window while 8-12 CPU threads work on the 8 chunks; if the CPU lags behind, you just store the GPU's results somewhere and upload the next 512 MB window to the card. When everything is done, you run a final pass to glue the chunks together.
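To make that concrete, here is a rough sketch of such a pipeline in Python. It is purely illustrative: the 512 MB window and the 8 workers come from the paragraph above, but analyze_on_gpu, encode_on_cpu and glue are placeholder names I made up, not anything x264 or a GPU framework actually exposes.

Code:
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

CHUNK_BYTES = 512 * 1024 * 1024    # the 512 MB window described above
CPU_WORKERS = 8                    # one worker per core on an 8-core CPU

def read_windows(path):
    """Yield successive 512 MB windows of the input file."""
    with open(path, "rb") as f:
        window_id = 0
        while True:
            data = f.read(CHUNK_BYTES)
            if not data:
                break
            yield window_id, data
            window_id += 1

def analyze_on_gpu(window_id, data):
    """Placeholder for a GPU pass (say, motion search) over one whole window."""
    return {"window": window_id, "analysis": None}

def encode_on_cpu(window_id, sub_chunk, analysis):
    """Placeholder for a CPU pass over one of the 8 sub-chunks of a window."""
    return {"window": window_id, "bytes_in": len(sub_chunk)}

def glue(pieces):
    """Placeholder for the final pass stitching the independently encoded chunks."""
    return pieces

def gpu_producer(path, results):
    # The GPU runs ahead of the CPU; finished analyses are simply parked in
    # the queue until the CPU pool is ready for them.
    for window_id, data in read_windows(path):
        results.put((window_id, data, analyze_on_gpu(window_id, data)))
    results.put(None)              # sentinel: no more windows

def encode_file(path):
    results = queue.Queue()
    threading.Thread(target=gpu_producer, args=(path, results), daemon=True).start()

    pieces = []
    with ThreadPoolExecutor(max_workers=CPU_WORKERS) as pool:
        while (item := results.get()) is not None:
            window_id, data, analysis = item
            # split the window into 8 sub-chunks, one per CPU worker
            step = max(1, len(data) // CPU_WORKERS)
            sub_chunks = [data[i * step:(i + 1) * step] for i in range(CPU_WORKERS - 1)]
            sub_chunks.append(data[(CPU_WORKERS - 1) * step:])
            futures = [pool.submit(encode_on_cpu, window_id, c, analysis)
                       for c in sub_chunks]
            pieces.extend(f.result() for f in futures)
    return glue(pieces)

The queue is just the "store the GPU's results somewhere" part: the GPU can keep running ahead on later windows while the CPU pool catches up.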

With almost all cards nowadays having at least 512 MB of memory, and many having 768 MB-1 GB, that's (I think) plenty of space to keep filled with data to crunch through while the CPU glues everything up.

I'm not sure how large these GPU results would be on disk or in normal memory, or whether they could be dumped to disk fast enough for this to beat just doing it all on the CPU. Otherwise I don't really see a problem with using a lot of disk space to encode something - the mbtree file already uses about 200 MB to encode 3 GB of content.
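For a rough sense of scale, here is a back-of-envelope check; every number in it is my own assumption for illustration, not a measurement from x264 or any GPU encoder.

Code:
# Back-of-envelope estimate of how much GPU analysis data would hit the disk.
# All figures below are assumptions, not measured values.
WIDTH, HEIGHT, FPS = 1920, 1080, 24            # assumed 1080p source
MACROBLOCKS = (WIDTH // 16) * (HEIGHT // 16)   # 8160 macroblocks per frame
BYTES_PER_MACROBLOCK = 8                       # assume a motion vector + cost per macroblock
rate = MACROBLOCKS * BYTES_PER_MACROBLOCK * FPS
print(rate / 1e6, "MB/s of GPU results")       # roughly 1.6 MB/s

Even if the per-macroblock payload were ten times bigger than assumed here, the write rate would still be well below what an ordinary hard disk can sustain, so dumping the GPU results to disk doesn't look like the bottleneck.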

Last edited by mariush; 4th August 2010 at 18:38.