There's only one step where I'd need another sampler with the source image, but more importantly, that is still not quite enough to implement SuperRes. SuperRes more or less requires being able to create new samplers and to send multiple samplers to a single shader. It's technically possible to do SuperRes for one of the channels by stashing intermediate results in the alpha channel, but that's not ideal.
The way this is achieved in MPDN is by building a chain of so-called 'filters', which keeps track of allocating textures and sending the right textures to the right shaders. From the script's point of view you can use the results of all previous stages, but under the hood it tries to allocate as few textures as possible. It also won't calculate results that aren't used, and since recently it can even optimize away unnecessary conversions (so if you have X -> ConvertToYUV -> ConvertToRGB -> Y, it will simply do X -> Y).
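To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names, not MPDN's actual API) of such a filter graph. Because evaluation is demand-driven from the output node, filters whose results are never used are simply never visited, and a rewrite pass can cancel out inverse colour-space conversions:

```python
class Filter:
    """A node in the filter graph; `kind` is e.g. 'Source', 'Shader',
    'ToYUV' or 'ToRGB', and `inputs` are the upstream filters."""
    def __init__(self, kind, *inputs):
        self.kind = kind
        self.inputs = list(inputs)

def optimize(output):
    """Return an equivalent graph with ToYUV -> ToRGB (and the reverse)
    pairs collapsed.  Only nodes reachable from `output` are visited,
    so unused intermediate results never get computed at all."""
    inverse = {'ToRGB': 'ToYUV', 'ToYUV': 'ToRGB'}

    def simplify(f):
        f.inputs = [simplify(i) for i in f.inputs]
        # X -> ToYUV -> ToRGB collapses to plain X
        if f.kind in inverse and f.inputs and f.inputs[0].kind == inverse[f.kind]:
            return f.inputs[0].inputs[0]
        return f

    return simplify(output)

# X -> ConvertToYUV -> ConvertToRGB -> Y becomes X -> Y:
src = Filter('Source')
y = Filter('Shader', Filter('ToRGB', Filter('ToYUV', src)))
assert optimize(y).inputs[0] is src
```

A real implementation would additionally pool textures, handing an intermediate texture back for reuse once every filter that reads it has run; the graph structure above is what makes that bookkeeping possible.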