Quite possible. As long as the curtain isn't pulled, the illusion that there is something special underneath can persist.

- Still, asking wouldn't hurt ... perhaps someone will have a weak moment and actually leak some insight.
Also, what I gathered from the internet: the video engine automatically uses the "highest" method available, according to the capabilities of the card as well as the actual content. A particular example: the GT220/240 is said to use "vector adaptive" deinterlacing only up to SD resolution; for HD resolution it falls back to "temporal" deinterlacing only. Such behaviour is understandable, given that the usual application is real-time playback. For offline processing, though, it would be interesting if one could choose the deinterlacing method freely. If you contact Nvidia again, maybe you could ask about that, too.
Okay, the comparison ...
Here's a quickly produced sample from a real-world source: "Lord of the Dance", native PAL DVD. To demonstrate clearly, the content was first bobbed, then upscaled 200% with PointResize, and slowed down from 50 fps to 12.5 fps, just to make it easy to see what's really going on.
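For reference, the preparation described above would look roughly like this in AviSynth (a sketch only; the actual script wasn't posted, and the source filter and file name are assumed):

```
# Hypothetical reconstruction of the sample preparation.
MPEG2Source("lotd.d2v")          # assumed source filter; native PAL DVD, 25i
Bob()                            # bob-deinterlace: 25i -> 50 fps progressive
PointResize(width*2, height*2)   # 200% nearest-neighbour upscale, keeps artifacts crisp
AssumeFPS(12.5)                  # slow playback from 50 fps to 12.5 fps for inspection
```

PointResize and AssumeFPS are used deliberately: nearest-neighbour scaling doesn't blur the deinterlacing artifacts away, and slowing the clip down makes them easy to spot frame by frame.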
<sample> (MediaFire, ~9 MB)
Three of them look more or less the same. It's hard to tell why any one of the three should be preferable to the other two; they are interchangeable.
Hence ... if you're in a hurry, it doesn't matter too much which of those you pick. But if you're after quality, even at the cost of longer processing, there's not much of a choice.