I ran a quick decode-speed test on a CRF encode of 1280x720 material.
E8500 @ 3.8 GHz
NV GPU: 27.9 fps
CoreAVC via DSS: 28.4 fps
Apparently CoreAVC is fast enough that a full software decode costs about as much CPU as just the overhead of driving the GPU decoder (host<->device memory transfers, etc.). That's hard to believe, so I'll check whether I've missed something.
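FWIW, a CPU-vs-GPU decode timing like the one above can be sketched with ffmpeg's built-in benchmark mode (the filename and the hwaccel name here are assumptions, not my actual harness):

```shell
# Pure software decode: decode to a null sink, no display, print timing stats
ffmpeg -benchmark -i input.mkv -f null -

# Same clip through the GPU decode path (the available hwaccel name
# depends on your ffmpeg build and driver)
ffmpeg -benchmark -hwaccel cuda -i input.mkv -f null -
```

The `-f null -` sink matters: it keeps display and file I/O out of the measurement, so you're timing the decoder itself.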
I'm beginning to think this is a dead end: software decoders can keep improving, while the GPUs' decode logic is "cast in silicon" and can't.
Maybe I'll update the regular versions with the latest libavcodec code while we wait for a CoreAVC API/SDK.