7th June 2011, 07:18, #15
nevcairiel
Registered Developer
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 10,344
Quote:
Originally Posted by Mikey2
That is where I disagree: at least in my experience, I have had excellent luck simultaneously running software [CPU] decoding and hardware [GPU] rendering. The proof is that not only do I see no dropped frames, but I can also multi-task fairly smoothly. When I used DXVA, CUDA, or CPUID for HW decoding, naturally the CPU usage was nil; however, GPU-Z was showing my graphics cards [remember I have SLI] maxed out. Furthermore, I ran a DPC latency checker and it was... spotty, to say the least.
DXVA decoding (or any other kind of hardware decoding) does not benefit from SLI, and I'm not sure video rendering does either.
The 8600 is a pretty old and slow card (and severely memory-limited), so running the same content on a fast CPU is not a fair comparison.

If you have a recent GPU, and assuming there are no bugs in the decoder or the GPU driver, GPU decoding is just as smooth as CPU decoding, and it frees up resources for other tasks, for example the frame interpolation some people like to do (and every percent of CPU is valuable there, I've been told).
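To make that concrete, here is a minimal sketch of how a player can request hardware decoding and fall back to software. It uses the modern FFmpeg hwaccel API, which postdates this post, and picks DXVA2 and H.264 purely as example choices; LAV's actual DirectShow decoder code looks different.

[code]
/* Sketch: try to open an H.264 decoder with DXVA2 hardware acceleration,
 * falling back to software decoding if the device or the decoder
 * configuration is unavailable. Error handling is kept minimal. */
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
                                        const enum AVPixelFormat *fmts)
{
    /* Pick the DXVA2 surface format if the decoder offers it,
     * otherwise fall back to the first (software) format. */
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_DXVA2_VLD)
            return *p;
    return fmts[0];
}

int main(void)
{
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    AVBufferRef *hw_dev = NULL;

    /* Check that this decoder actually advertises a DXVA2 hwaccel
     * configuration before trying to use it. */
    int has_dxva2 = 0;
    for (int i = 0;; i++) {
        const AVCodecHWConfig *cfg = avcodec_get_hw_config(dec, i);
        if (!cfg)
            break;
        if (cfg->device_type == AV_HWDEVICE_TYPE_DXVA2 &&
            (cfg->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX))
            has_dxva2 = 1;
    }

    if (has_dxva2 &&
        av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_DXVA2,
                               NULL, NULL, 0) == 0) {
        ctx->hw_device_ctx = av_buffer_ref(hw_dev);
        ctx->get_format = get_hw_format;
        printf("Using DXVA2 hardware decoding\n");
    } else {
        printf("Falling back to software decoding\n");
    }

    if (avcodec_open2(ctx, dec, NULL) < 0)
        return 1;

    /* ... feed packets with avcodec_send_packet() and receive frames
     * with avcodec_receive_frame() as usual ... */

    avcodec_free_context(&ctx);
    av_buffer_unref(&hw_dev);
    return 0;
}
[/code]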

Personally, I use GPU decoding mostly to keep my CPU cool; CPU decoding on my HTPC makes the poor thing heat up so much that the fan gets noisy. The GPU doesn't have that problem.
On top of that, with LAV CUVID I also get excellent hardware deinterlacing together with madVR; otherwise I would have to do that in software as well, probably at lower quality. A rough illustration of the idea follows below.
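The sketch below assumes FFmpeg's later h264_cuvid wrapper is a fair stand-in for LAV CUVID: it drives the same NVIDIA hardware decoder and exposes its on-GPU deinterlacing through a private "deint" option. None of this is LAV's own API.

[code]
/* Sketch: open FFmpeg's h264_cuvid decoder and ask the NVIDIA GPU to
 * deinterlace during decode, so the renderer receives progressive frames. */
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

int main(void)
{
    /* The CUVID decoders are standalone decoders, looked up by name. */
    const AVCodec *dec = avcodec_find_decoder_by_name("h264_cuvid");
    if (!dec) {
        fprintf(stderr, "FFmpeg was built without CUVID support\n");
        return 1;
    }

    AVCodecContext *ctx = avcodec_alloc_context3(dec);

    /* "deint" is a private option of the cuvid decoders:
     * weave (off), bob, or adaptive. Adaptive is the high-quality mode. */
    av_opt_set(ctx->priv_data, "deint", "adaptive", 0);

    if (avcodec_open2(ctx, dec, NULL) < 0) {
        fprintf(stderr, "Failed to open h264_cuvid (no NVIDIA GPU?)\n");
        avcodec_free_context(&ctx);
        return 1;
    }

    /* Decoded frames now come back already deinterlaced on the GPU,
     * leaving the CPU free and the renderer (e.g. madVR) with
     * progressive input. */
    printf("h264_cuvid opened with adaptive deinterlacing\n");

    avcodec_free_context(&ctx);
    return 0;
}
[/code]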
__________________
LAV Filters - open source, ffmpeg-based media splitter and decoders