More parallelism at slower clocks... on top of Maxwell's already efficient architecture. Just throwing in an uneducated guess based on what they do with dual-GPU cards.
20 nm is hopeless. 16nm next year.
Nvidia should officially announce something made with the GM200 at the GPU Technology Conference (March 17-20). Most likely they'll announce the Quadro M6000, but with any luck they'll also announce a 980Ti, or maybe the Titan 2 (or whatever they're going to call it), at the same time or at least soon afterwards.
Not sure if the new Titan will be all that great compared to the original. Nvidia has refused to make Tesla cards with Maxwell because of the lack of FP64 units.
Expect huge FP32 gains, though. Comparing the 780Ti to the 980Ti, the 980Ti gets only 1 GFLOPS more FP64 (210 vs. 211) but roughly 34% more FP32 (5046 vs. 6758 GFLOPS). This assumes an 1100MHz clock and that all 3072 cores are active, which is reasonable since the 780Ti also had all of its cores active.
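For anyone who wants to check the arithmetic, peak FP32 throughput is just cores × 2 (one fused multiply-add per cycle) × clock. A quick sketch using the figures from this thread (the 780Ti number works out to its ~876MHz boost clock; the 980Ti number is the assumed 1100MHz with all 3072 cores active):

```python
# Peak FP32 throughput estimate: cores * 2 FMA ops/cycle * clock in GHz -> GFLOPS.
def peak_fp32_gflops(cores, clock_ghz):
    return cores * 2 * clock_ghz

gtx_780ti = peak_fp32_gflops(2880, 0.876)  # ~5046 GFLOPS at ~876 MHz
gtx_980ti = peak_fp32_gflops(3072, 1.1)    # ~6758 GFLOPS at assumed 1100 MHz
gain_pct = 100 * (gtx_980ti / gtx_780ti - 1)
print(round(gtx_780ti), round(gtx_980ti), round(gain_pct))  # ~34% gain
```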
To put it in perspective for me: which projects require the FP64?
PG Genefer, Milkyway, and any GPU project that says only some AMD cards are supported.
Milkyway, some Primegrid, and GPUGrid (which uses it just a tad, though they don't bother mentioning it because it's insignificant). I remember hearing that Einstein began using FP64 maybe a year ago. That's about it, really, though there are likely many projects that use it in insignificant amounts like GPUGrid does. On Primegrid, though, if you're going for your GFN or PSA badge and you're not using a Titan or something from AMD, then I feel bad for you.
Also the Asteroids@home GPU app requires FP64
I've wondered about this. What happens if you use a card that doesn't support any DP? Will it just run those calculations in some sort of emulation mode on the CPU or something? Or will the task just flat out fail? I don't think I've ever had a GPU without at least some DP.
Not sure about now, but a few years ago it would just fail. I remember the big thing used to be you needed an Nvidia GPU with a compute capability of at least 1.3, which meant it was capable of double precision. Now that doesn't really apply anymore since it seems you can get GPUs with a higher CC rating that don't support FP64 (like the GT 705 through GT 730). I'm thinking it may be emulated, though GPUGrid did drop support for older GPUs that don't have FP64.
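The old rule of thumb described above can be sketched as a trivial check. Note this is just the threshold mentioned in this thread, not a guarantee: as the post points out, some newer low-end cards may break the pattern, and real apps query the device directly (e.g. via CUDA's `cudaGetDeviceProperties`):

```python
# Sketch of the old rule of thumb from this thread: Nvidia GPUs needed
# compute capability >= 1.3 to run double-precision (FP64) kernels.
def supports_fp64(cc_major, cc_minor):
    return (cc_major, cc_minor) >= (1, 3)

print(supports_fp64(1, 1))  # older card, e.g. G92: no FP64
print(supports_fp64(1, 3))  # first CC level with FP64
print(supports_fp64(3, 5))  # e.g. GK110 (original Titan)
```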