Quote:
Originally Posted by LigH
I already wondered ... Neural Networks in a GPU implementation?! Want!
Wut? Most neural networks rely on GPUs to run at any reasonable speed, because GPUs are actually pretty good at that: the work can be parallelized really well. Not to mention even newer hardware with specialized units just for that task, e.g. Tensor Cores.
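To illustrate why that parallelizes so well, here's a minimal sketch (plain Python, made-up toy weights, not anything from madVR): a dense neural-network layer is just a pile of independent dot products, one per output neuron, and since no neuron depends on another, a GPU can compute them all at once.

```python
def dense_layer(inputs, weights, biases):
    # Each output neuron = dot(inputs, its weight row) + its bias.
    # Every iteration of this comprehension is independent of the others,
    # which is exactly the kind of work that maps onto the thousands of
    # parallel cores in a GPU (or onto Tensor Cores as batched matrix math).
    return [
        sum(i * w for i, w in zip(inputs, row)) + b
        for row, b in zip(weights, biases)
    ]

# Toy example: 3 inputs -> 2 output neurons (hypothetical values)
x = [2.0, 4.0, 6.0]
W = [[1.0, 0.0, 0.5],   # weights for neuron 0
     [0.5, 0.5, 0.0]]   # weights for neuron 1
b = [0.0, 1.0]
print(dense_layer(x, W, b))  # -> [5.0, 4.0]
```

On a CPU that loop runs serially; a GPU evaluates all the neurons (and all the pixels, for a scaler like NNEDI3) concurrently, which is where the speedup comes from.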
We had NNEDI3 in madVR for a while, a neural-network-based scaler. But it was removed because it was ultimately decided that the added complexity in madVR was not worth it (it was the only component to use OpenCL, IIRC), and other algorithms could flat-out replace it at higher speed and quality.
Neural networks can be really powerful, and who knows, maybe one will return to madVR some day. But the real challenge with neural networks is training them: you need absolutely massive compute power. All those fancy NVIDIA demos you see for image processing were trained on supercomputers, large clusters with hundreds if not thousands of NVIDIA GPUs. Of course no mere mortal has that sort of resources to train such a network.