Can't... Newegg is still my go-to place for hardware. Don't want to piss them off... :P
If you get an EVGA Titan, here's a nifty link to control the LED lights which illuminate the logo on top of the GPU http://www.evga.com/forums/tm.aspx?m=1869730
Also works on the 690.
Titan on eBay ~$1,500 (+|-)
http://compare.ebay.com/like/3210781...Types&var=sbar
Newegg - Out of stock.....
http://www.newegg.com/Product/Produc...%20GTX%20TITAN
Man, for that price I would so much rather have a 7990! :p
A new, cheaper Titan is being released, called the Titan LE: 5GB GDDR5, 320-bit bus, 2304 shader cores, same clocks as the current Titan. Expected to be released in Q3 at ~$800.
*sigh* And now new info surfacing...
The Titan LE now equals the GTX 780, but it seems they decided on 2496 cores with only 2GB of memory, bringing the price down to $600. Also, there will be all the crippled FP64 we've come to love (or despise with murderous contempt) :) As with the 600 series, divide GFLOPS ratings in half when comparing them to a 500-series or earlier GPU.
The 700 series is of course nothing but a refresh/renumbering of the 600 series. The GTX 770 will be the GTX 680, so feel free to roll your eyes hard when someone tells you the 770 is 23% faster than the 670, which will be renumbered as the 760 Ti. Slightly higher clocks may be thrown in here or there, but nothing to be impressed over.
How soon can we start yawning at these new GPUs? Mid-May.
Just wanted to chime in with this as it had come up between a friend and me...
In the "research" and reading I've done on the Titan, specs aside, just looking at the performance, it seems that there is an obvious boost with CUDA apps based on rendering and such. But with DC projects I've yet to find any reliable data that puts the Titan anywhere near the price point.
I'm sorry, I can't remember where, but I found a Folding benchmark that had the Titan finishing WUs about 10% faster than a GTX 580. If that's true, I have to say it's quite disappointing. Yes, I know, hearsay, not evidence, just going by memory here :) Please confirm/correct me on this if you have the info.
Sure, the 580 is one of the best crunchers imo, but a $1k card that just barely outperforms it is ridiculous.
Now with regard to gaming, my understanding is that the Titan shines, in many cases walking away from the 690 with substantial leads depending on titles/settings. My impression has always been that Kepler is a gaming architecture first and a cruncher as an afterthought. Conversely, the Fermi chips pound for pound out-crunch Kepler cards but won't live up to their gaming performance.
When I built my rig, it was with gaming in mind. I found a good middle ground on price/performance by throwing in 2x 660Ti cards. It's an excellent setup and I run all modern games at full settings without issue. OTOH, I have a secondary box with a single 550Ti that can knock out a WU just about as fast as one of the 660Ti cards does.
To me, the Titan is far from a Tesla. From what I've seen, the card is for gaming and will also excel at CUDA-based rendering. It does not, however, seem to be the best or even a good choice for DC work, and Nvidia's CUDA marketing is focused on rendering apps; there seems to be no mention of DC work when they market these cards to us.
Edit:
----------
This is not all encompassing, but I wanted to point out some benchmark specs within Einstein with regard to performance. In this case, the Titan vs the GTX 580. Each system has 2 of each card.
The Titan first;
Attachment 1307
Now the 580 setup;
Attachment 1308
Discuss?
Well, the *ONLY* benefit to having a Titan is that it offers the best FP64 performance you'll get out of any Nvidia card, by far. FP32 results are OK, but not worth throwing $1k at, as you said. Of course, this also only applies to projects that don't offer AMD GPU apps; there's never any reason to buy anything made by Nvidia if you can crunch using a Radeon. Primegrid is one example of a project where you must use Nvidia (I think only 1 app was adapted to AMD). The Titan would do some serious stomping there, as most Primegrid GPU apps require FP64 (a 100% requirement for all PSA CUDA apps). For Milkyway, you might as well stick with AMD, since performance/$ is much more in your favor. For GPUGrid, the boost from the Titan's FP64 would be nominal, but still something. Since they give bonus points for returning completed WUs early, it may or may not have an effect there occasionally.
Hey John, I understand all of that. AMD exceeds in many ways. I'm just kind of looking at it from an upgrade perspective, i.e. Titan vs. what Nvidia already offers.
Well, if you need FP64, upgrading to a Titan is the way to go. If not, stick with the GTX 500 series - better at crunching than the 600 series, and also cheaper. If you do opt for the Titan, be aware that the only other Nvidia GPU you can add to that computer is another Titan, due to the drivers. If you are also a gamer, as some of us are, a GTX 690 would get you better performance on average than a Titan for the same price, and it will also be a faster cruncher than the Titan at FP32. However, if you need FP64, you will be crippled with the 690. Or, for the same $1k, you could get an AMD 7990 (or two 7970s) and excel at both FP32 and FP64. The decision would have to be made on your personal choice of projects/apps/games, as I don't know what your preferences are :p
Thanks for that, John.
Yeah, first and foremost I use my rigs for gaming, so that has to be at the forefront of video card choices for me. The problem I have with the 690 is that it's a single-card, multi-GPU setup, which makes it a pass for me, for intricate reasons I won't bore you with right now.
The next most important thing to me is CUDA rendering. I make videos in Sony Vegas and can say that GPU video rendering has been close to 500% faster for me on my setup. I love it.
Last in place is DC crunching. Considering these things, I pass on Fermi cards since Kepler cards are better gaming performers. The Titan definitely shines in games, but by simply adding a third 660Ti, I'd actually outperform the Titan in games by a substantial amount. So from that perspective, on all fronts it's a pass.
I'm really surprised to see the lack of performance increase that the Titan has in most DC applications vs. the 580. As someone had mentioned earlier in this thread, Nvidia likes to rebrand cards and often new products are only marginally better than what they already have to offer. Really, the Titan is what the 690 should have been all along.
In summary, I'm satisfied with my current setup and won't be upgrading any time soon. For games, my i7-3770 with 32GB RAM and 2x 660Ti is plenty good. When and if I do upgrade, however, I just can't justify going for the Titan. If I were to upgrade today, hypothetically speaking, I'd simply add the third 660Ti instead of a 690 or a Titan. Aside from the circumstances you mention, for the cost of an additional 660Ti (about $300) I'd have gaming and crunching performance that actually exceeds the Titan's. Plus, when I'm done with the setup, I'd have 3 cards that can be put in other boxes to upgrade their aging components (I have two systems with a 550Ti and one with a GTX 260).
Of course it's not ideal to set out to have 3x mid range video cards, but from where I'm at now with 2 already, it makes the most sense cost/performance wise.
Microstuttering perhaps? :p AMD is (far) worse with that than Nvidia, admittedly.
Actually, the Titan was originally supposed to be the GTX 680. The current 680 was supposed to be the 660Ti. What happened was Nvidia saw the 660Ti's performance was about the same as AMD's 7970 (in games), so the 660Ti was released as the GTX 680 for $500. Imagine those profits...

Quote:
I'm really surprised to see the lack of performance increase that the Titan has in most DC applications vs. the 580. As someone had mentioned earlier in this thread, Nvidia likes to rebrand cards and often new products are only marginally better than what they already have to offer. Really, the Titan is what the 690 should have been all along.
Anyway, if you check GFLOPS specs on GPUs before making a purchase, or just out of curiosity, I'll explain why the 500 series is still great for crunching. Starting with the 600 series, Nvidia dropped the shader clock, which used to run at 2x the core clock. This is why, when you compare the GFLOPS of a 600 series card to a 500 (or earlier) series card, you must cut the 600 series card's rating in half. Comparing a 600 series GPU to an AMD GPU is essentially even. The same will be true of the 700 series.
For example, the 660Ti is rated at 2459.52 GFLOPS FP32. Divide by 2 and the actual DC performance is 1229.76 GFLOPS. The 560Ti is rated at 1311.7 GFLOPS, making it several percentage points faster. And it costs less. And it has better FP64, though still crappy. Those numbers are based on reference speeds; on OC versions, the gap is a bit wider.
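The halving rule above is easy to sanity-check, since peak FP32 GFLOPS is just cores x clock x 2 (one fused multiply-add per core per cycle). Here's a minimal sketch; note I'm assuming the 1311.7 GFLOPS figure quoted above corresponds to the 448-core edition of the 560Ti at its 1464 MHz shader clock, since that's what the arithmetic recovers.

```python
# Peak FP32 GFLOPS = shader cores * clock (MHz) * 2 flops/cycle (FMA).
# Fermi ran its shaders at 2x the core clock; Kepler dropped that,
# which is why a Kepler card's headline rating gets halved here.

def peak_gflops(cores, clock_mhz, flops_per_cycle=2):
    return cores * clock_mhz * flops_per_cycle / 1000.0

# GTX 560 Ti (Fermi, 448-core edition): 1464 MHz shader clock
gtx560ti = peak_gflops(448, 1464)     # ~1311.7 GFLOPS, as quoted above
# GTX 660 Ti (Kepler): 1344 cores at the 915 MHz base clock
gtx660ti = peak_gflops(1344, 915)     # ~2459.5 GFLOPS headline rating
gtx660ti_dc = gtx660ti / 2            # ~1229.8 GFLOPS after halving

print(round(gtx560ti, 1), round(gtx660ti, 2), round(gtx660ti_dc, 2))
```

Run it and the 560Ti does indeed come out a few percent ahead of the halved 660Ti figure, matching the comparison in the post.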
It does sound like you need to stick with Nvidia for your specific uses, though. Just remember to keep your power bill in mind when adding that many mid-range cards to a system to match the performance of a single high-end card. Also remember to subtract ~30% from the combined performance for 3-way SLI (Crossfire as well) when making comparisons. It'll never scale linearly :)
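The ~30% subtraction above is a rule of thumb from this thread, not a measured figure, but it's simple to apply when comparing setups:

```python
# Rough sketch of the multi-GPU scaling penalty mentioned above.
def effective_perf(single_card, n_cards, scaling_loss=0.30):
    """Combined performance of n identical cards in SLI/Crossfire,
    assuming a flat ~30% loss versus perfect linear scaling
    (the thread's rule of thumb, not a benchmark)."""
    return single_card * n_cards * (1 - scaling_loss)

# e.g. three 660Tis at ~1229.76 effective GFLOPS each (halved rating)
print(round(effective_perf(1229.76, 3), 1))  # well short of 3x
```

So three 660Tis land around 2582 effective GFLOPS rather than the naive 3689, which is the kind of comparison to make against a single high-end card.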
Thanks again for that John.
Yeah, I'm aware of the difference between Kepler and Fermi when it comes to crunching. I've tried to explain some of these things to people who were touting the Kepler cards as Fermi's replacement and could never get through to them. Then there were others who said Kepler was a huge step backwards because it didn't perform as well as the Fermi 500 series. But Kepler was not designed first and foremost with DC in mind. You're absolutely right that the 500 series is a much better performer in DC applications. In gaming, though, Kepler romps the Fermi series (and by a wide margin), and rightly so; it's what it was primarily designed for. It surprises me that the Titan is of Kepler architecture. I don't know enough to really understand why at this point, but with a card like that, and Nvidia touting CUDA as the reason for it, I question why they didn't just build a super awesome Fermi-based card.
I did not know about the whole 680 was supposed to be the 660ti and such. Thanks for the info.
I'm not for the 690, but not because of microstuttering. I used to have the stutter with my 660Ti SLI setup, but some EVGA patch eliminated it. Without going into great detail, I'll just say that SLI-on-a-stick has its issues. The 690 can be very finicky with certain motherboards, and some apps don't like it much at all.
When looking at 3-way SLI, I'm just looking at benchmarks of that setup, not trying to figure out what boost I'd get by adding a third card. It's pretty surprising, but on a lot of titles a 3-way 660Ti setup beats the Titan on frame rates, usually by around 20%, but sometimes (such as in Battlefield 3) by as much as 45%. Granted, it's 3 cards vs. 1, so pound for pound the Titan is obviously stronger, but for someone with my current setup, that's above-Titan gaming performance for $300.
Electric bill is an issue that most have to contend with, but not me. Fortunately, my wife has all of our bills paid through her employment so we never even get an electric bill :) I feel guilty about it at times because we're not very green and should be making at least some attempt to conserve power, but instead we've got 4 systems running 24/7, multiple televisions, we grow tomatoes indoors... hehe we're just huge electricity consumers.
Something to note:
Overclocking the Titan is ridiculously simple due to GPU Boost 2.0, which raises the boost clock above stock settings whenever the Titan decides the temps are low enough to do so.
Reference clocks for the Titan are 837MHz base and 876MHz boost. By doing *nothing* more than increasing the manual fan speed to the maximum setting of 85%, the Titan will settle in at 993MHz. This puts the Titan's rating at 5337 GFLOPS FP32 for a standard edition card. Of course, it might be a bit loud with the fan that high :p If that bothers you, just pick up one of these EVGA Titan Hydro Coppers :) That would let you settle the Titan in at 1137MHz without even trying very hard, and would also be quiet. It would also push the rating to over 6100 GFLOPS, over a 33% increase from stock.
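Those GFLOPS figures can be reproduced (to within rounding) from the Titan's 2688 CUDA cores with the same cores x clock x 2 formula as earlier in the thread:

```python
# Back-of-envelope check of the boost-clock ratings above,
# assuming the Titan's 2688 CUDA cores and FP32 peak = cores * MHz * 2.
CORES = 2688

def fp32_gflops(clock_mhz):
    return CORES * clock_mhz * 2 / 1000.0

base  = fp32_gflops(837)   # ~4500 GFLOPS at the reference base clock
fan85 = fp32_gflops(993)   # ~5338 GFLOPS with the fan pinned at 85%
water = fp32_gflops(1137)  # ~6113 GFLOPS on a water block

print(f"{base:.0f} -> {fan85:.0f} -> {water:.0f} GFLOPS "
      f"(+{(water / base - 1) * 100:.0f}% over stock)")
```

The water-cooled figure comes out about 36% above stock, consistent with the "over 33%" claim.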
These guys did, if no one here has yet: http://www.hardocp.com/article/2013/...cking_review/2
Good news. I hadn't looked at Nvidia's most recent driver release, but now you CAN have a Titan and other Nvidia cards in the same computer at the same time. The Titan's driver is no longer separate from the rest.
Nice!
Edit: any thoughts on the GTX 780? About half the cost of a Titan, and 90% of the performance.
http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units#GeForce_700_Series
It's fine for FP32 but your DP drops from 1310 GFLOPS with the Titan to 165.7 GFLOPS on the 780. Other than that, it's definitely worth the money for FP32 in comparison. From a power usage viewpoint, TDP is the same on both. The Titan has more cores, but the 780's base/boost clock is higher.
Edit: Though a 7970 GHz Edition beats the 780 by about 100 GFLOPS in FP32 for about $200 less, and you would also get 1024 GFLOPS of DP.
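Putting the FP64 numbers from the last two posts side by side makes the value argument concrete. A small sketch; the prices here are my rough launch-price assumptions ($1000 / $650 / $450), not figures from the thread:

```python
# FP64-per-dollar comparison using the GFLOPS figures quoted above.
# Prices are approximate assumptions for illustration only.
cards = {
    #              (FP64 GFLOPS, approx. price $)
    "GTX Titan":   (1310.0, 1000),
    "GTX 780":     ( 165.7,  650),
    "7970 GHz Ed": (1024.0,  450),
}

for name, (fp64, price) in cards.items():
    print(f"{name:12s} {fp64:7.1f} GFLOPS FP64 -> {fp64 / price:.2f} GFLOPS/$")
```

On these assumptions the 7970 GHz Edition wins double-precision value easily, the Titan sits in the middle, and the 780 is barely in the race, which matches the advice in the post above.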
Well, I have one now.
http://www.newegg.com/Product/Produc...82E16814130899
FWIW, it is running at 993MHz, and I have changed no settings.
Question: Where is the setting in the nvidia control panel to change the DP setting? I can't find it.
Nvidia Control Panel
Manage 3D settings>>Global settings>>CUDA - Double precision
Well, that was obvious. Thanks, Nvidia.
Thanks JPM!!
FWIW, with fan on auto, the card is running at 1045 MHz.
Sweet! So how does the output compare? Was it worth it?:confused:
I haven't done any benchmarking yet. But I can tell you this: I have two Nvidia machines on Bitcoin Utopia. (I know, not the best place for Nvidia.) One has a 590 and a 580, three GPUs total. The other has the Titan. The Titan does about half the credits/day of the other machine.
So it's probably fairly good performance per watt then. :) That machine with BOTH 590 & 580 must be sucking some serious juice! :D
DrPop, I was thinking the same thing. (Like-minded people think alike; only fools never differ.) Then again, if those GPUs were each in their own machine, wouldn't they consume the same power as the two in one machine? Regardless, these GPUs consume too much power and cost me heaps when the power bill comes around...
zombie67, are you using the software that came with your card, or a third-party package like MSI Afterburner? I'm asking because I recently installed the EVGA model in my system, and the software shows the clock at a little over 550MHz when the card should be closer to 837MHz. This has me concerned because it isn't even trying to boost while I'm running BOINC. I'll post a screenshot of the software this evening and maybe we can figure this out. I'm hoping it's not a faulty card :(
No software was required to make it run at that speed.
However, I am using EVGA Precision X to play around with OC.
Same here with Precision X. The card made a liar out of me when I checked it during lunch, mid burn test: 1056MHz boosted at auto fan speed :D I'm much happier. I don't know if it just wasn't engaging during the SETI WUs, or if the Primegrid WUs are beefier and force it to spin up to full burn.
I'll post a screen of my setup with Precision X and see if maybe we can squeeze a bit more out of the card. It seems like it's getting plenty of power, but I did get a V-core warning during some gaming last night, so I'll see what's going on with the CPU power.
Question: Water cooling, will it ultimately be worth it? I see in most of the reviews it does the obvious and keeps the temps down, but I'm still not sold on it making the card perform significantly better. Most of the reviews only show about a 10% difference in performance. Would it be worth the extra 200 bucks?
First, SETI doesn't work the Titan hard enough half the time for it to run at full speed. Second, my Titan is water cooled: 1071MHz standard speed out of the box. The highest temp it's ever reached was 39C on DiRT. The catch is that some of the RAM is on the back of the card, so you'll still need a gentle breeze blowing across there.
Yes, I have to run 2-3 setiathome_v7 tasks at a time to get load up to 100%.
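For anyone wondering how to run several tasks per GPU like that: BOINC's per-project app_config.xml supports it via gpu_usage. A minimal sketch, assuming the stock setiathome_v7 app name; the file goes in the project's folder under the BOINC data directory:

```xml
<!-- Sketch of an app_config.xml for running 3 tasks per GPU.
     gpu_usage is the fraction of a GPU one task reserves, so
     0.33 allows three concurrent setiathome_v7 tasks per card.
     The cpu_usage value here is an illustrative assumption. -->
<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving it, use the BOINC Manager's "Read config files" option (or restart the client) for the change to take effect.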
Any advice for overclocking the Titans? There's very little out there that I can find for the Titan Black (no real surprise there). I figure the two Titan models aren't really all that different, so please share your wisdom/experience. Articles on how to OC Titans are the most varied I've read for any card, mainly because of the way the card manages boost and temps, and folks' desire to increase voltage.
Currently running the Titan Black at 1212MHz core, 3703MHz memory, stock voltage, +106% power, and a custom fan profile, crunching on GPUGrid. I should probably tune it in on a project like Collatz first so I don't lose an 8+ hour WU, but what the heck.
Where did you get the Titan Black? I haven't been able to find it in stock anywhere... I have the original Titan, and I am overclocking it some, but it sounds like you already have the basic idea. I use MSI Afterburner as my overclocking tool.
I got it from evga.com the first day they released; I kept an eye on the main sites until they appeared, then pondered it for a while. When they started to disappear from the sites later in the day, I pulled the trigger.