Yes, that can come into play, which makes saying things like "the GTX 580 is a faster (or equivalent) cruncher in all projects at all times" a bit silly. However, that will be true 90% of the time, since only 10% of the projects have actually bothered coding in CUDA to take advantage of higher CC ratings. Most have not. Many are actually coding in OpenCL, which doesn't rely on CC ratings so much as on which driver version you have.
If it means anything, the Titan will have a CC of 3.5, though you can be pretty sure no one will create an application requiring it. The only differences are that 3.5 includes something called Dynamic Parallelism and Funnel Shift, both of which no other CC version has, and the maximum number of 32-bit registers per thread is increased to 255 (CC 2.0 through 3.0 is 63; CC 1.0 to 1.3 is 127). How useful these 3 things would be to us, I don't know, but they are the only 3 differences from 3.0.
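For anyone curious what Funnel Shift actually does: it is not CUDA code below, just a plain-Python sketch of what CUDA's `__funnelshift_l(lo, hi, shift)` intrinsic computes, i.e. concatenate two 32-bit words into a 64-bit value, shift left, and keep the most significant 32 bits. On CC below 3.5 you'd have to emulate it with two shifts and an OR, like this does.

```python
# Plain-Python sketch of CUDA's __funnelshift_l(lo, hi, shift) semantics:
# concatenate {hi, lo} into 64 bits, left-shift by (shift & 31),
# return the most significant 32 bits of the result.
MASK32 = 0xFFFFFFFF

def funnelshift_l(lo, hi, shift):
    concat = ((hi & MASK32) << 32) | (lo & MASK32)  # 64-bit {hi, lo}
    return ((concat << (shift & 31)) >> 32) & MASK32

# Shifting by 4 pulls the top 4 bits of lo into the bottom of hi:
print(hex(funnelshift_l(0xF0000000, 0x00000001, 4)))  # 0x1f
```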
The 600 series is definitely more power-efficient than the 500 series, no doubt. But measuring GFLOPS/Watt between them is extremely difficult, mainly because the 600-series cards all claim higher GFLOPS than their 500-series counterparts, yet real-world compute testing shows the 600 series to actually be slower in some cases. Even with something as simple as sorting, the GTX 580 can be ~28% faster than a 680. As for personal experience, I replaced a plain GTX 460 with an overclocked EVGA GTX 660 a few months ago. The 460 had a core clock of 675 MHz and the 660 had a core clock of 1123 MHz, but it was only 13% faster, when it should've been more than double according to the GFLOPS specs. Even though it was still a tad faster and used less power, I gave it to my wife so she could play games with it, put my 460 back in, and once again swore off buying Nvidia only to be a victim of their intentional scams. Essentially, if you check the GFLOPS ratings on wiki for the 600 series, to compare them properly to every other Nvidia GPU, divide them exactly in half.
The GTX 680 claims it can do 3090.4 GFLOPS. Pretend it says exactly half that (1545.2), and then you can compare it to the other GPUs (making the GTX 580 ~36 GFLOPS faster than the 680). The reason for this is the architecture change in Kepler, which pretty much reduced the powerful cores to an imitation of AMD's weak stream processors. There are more of them, but they're smaller, simpler and slower. Of course, if anyone ever codes specifically for the 600 series, meaning the exact same app wouldn't work on any previous generation, then we may see better results than half.
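The "divide in half" rule of thumb is just arithmetic on the wiki spec numbers. A quick sketch (the 1581.1 GFLOPS figure for the GTX 580 is my assumption from the wiki tables, but it matches the ~36 GFLOPS gap quoted above):

```python
# Rule-of-thumb comparison: halve the claimed rating for 600-series (Kepler)
# cards before comparing against earlier generations.
def effective_gflops(spec_gflops, is_600_series):
    """Return the spec rating, halved if the card is a 600-series (Kepler)."""
    return spec_gflops / 2 if is_600_series else spec_gflops

gtx_680 = effective_gflops(3090.4, is_600_series=True)   # 1545.2
gtx_580 = effective_gflops(1581.1, is_600_series=False)  # wiki rating, assumed

print(round(gtx_680, 1))            # 1545.2
print(round(gtx_580 - gtx_680, 1))  # 35.9 -> the "~36 GFLOPS" gap
```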
If you're looking at saving on your power bill, then yes, the 600 series is better than the 500 series. However, you'll have to keep the 600-series cards for years and years to save as much on your power bill as the difference in price between the cards.
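A back-of-the-envelope payback calculation makes the "years and years" point concrete. All the numbers below are hypothetical placeholders, not figures from this thread; plug in your own cards and electricity rate:

```python
# Back-of-the-envelope payback time for a more efficient card, assuming
# 24/7 crunching. The $150 / 50 W / $0.12 per kWh figures are made-up
# examples, not quotes from the thread.
def payback_years(price_diff_usd, watts_saved, usd_per_kwh):
    """Years of 24/7 running until power savings cover the price difference."""
    kwh_saved_per_year = watts_saved / 1000 * 24 * 365
    return price_diff_usd / (kwh_saved_per_year * usd_per_kwh)

# e.g. a card costing $150 more that draws 50 W less, at $0.12/kWh:
print(round(payback_years(150, 50, 0.12), 1))  # roughly 2.9 years
```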
Considering GFLOPS/Watt:
AMD 7970: 15.155
GTX 680: 15.85
GTX Titan: 18.816
Nvidia does win the GFLOPS/W contest.
http://media.bestofmicro.com/5/G/348...%20Luxmark.png FP64 is taken into account here.