View Full Version : Nvidia Titan



litehouse43
02-07-13, 01:01 PM
http://www.tomshardware.com/news/GeForce-Titan-GK110-GTX690,20797.html

Anyone waiting to go out and pick one up?

We're getting close it looks like. Time to buy up all the GTX 6XX's going on ebay when the gamers update. =P~

John P. Myers
02-07-13, 06:12 PM
I had actually started a thread about this here: http://www.setiusa.us/showthread.php?4593-Big-Kepler-GK110-is-Coming-Afterall

But i like your title better now that the release date is much closer. :)

So the new info i've got is that this card was listed for pre-order on a Danish hardware website, and Nvidia allowed Asus to put their stickers on this one :p Converting the listed price to USD equates to ~$1200; however, European taxes are far higher than here. I still expect the card to retail for $899.

Expected release date is Feb. 24th-26th

The GPU supposedly has two DVI, one HDMI and one DisplayPort video output.

Quad-SLI will be supported.

8+2 VRMs for the power phases. VRAM is expected to be found on both the front and back of the PCB, since it takes 24 2Gb chips to come up with 6GB, and the card will have a backplate.

Also it will have 1 8-pin + 1 6-pin power connectors, as i mentioned in the other thread.

The PCB appears to only have 1 plug for a fan, so expect the standard single blower which is found on most reference GPUs.

Performance, clocks and the number of cores are still unverified, but it should easily outperform a K20X in FP32. No word on FP64 yet.

In other news, it has been verified AMD is NOT launching the 8000 series anytime soon, perhaps not until Q4 as i mentioned in the other thread.

"I cannot say anything more than what I've already said, really. AMD and its partners are happy with the HD 7000 Series, and it will continue to be our emphasis in the channel for the foreseeable future. I should note that HD 8000 Series has never been so much as hinted at for a channel release. Anything to the contrary is an unsubstantiated rumor fabricated to drive traffic."

zombie67
02-07-13, 07:19 PM
Any chance for a dual chip version? Like a 590 or 690?

John P. Myers
02-07-13, 07:38 PM
Any chance for a dual chip version? Like a 590 or 690?

I'd have to say no, since the single GPU already has to use the back of the card to find space for VRAM. The only way they could make a dual-GPU card would be:
a) Extend the PCB to 16-17" in length
b) Tie 2 separate PCBs together like they did with the GTX 295 (the more realistic option, though it also hasn't been done since the GTX 295)

Again, OEMs are forbidden by Nvidia from altering this GPU's design in any way, meaning if there ever is a Titan x2, Nvidia will have to make it, unlike AMD, who just threw their hands in the air and let anyone make anything they wanted when it came to an HD7970 x2 (7990).

As it happens, AMD may still release a 7990 of their own eventually.

John P. Myers
02-10-13, 06:40 PM
Finally a realistic benchmark:

[Attachment 1257]

*Seems* to be the most realistic benchmark to surface so far. Still no guarantee it's true.

John P. Myers
02-11-13, 08:46 PM
Possible actual specs have maybe possibly surfaced possibly. The problem i have is that everything from the 512-bit DDR5 to the base and boost clocks mimics the 690, yet it's listed for sale as the Titan.


ASUS GTX TITAN PCI-E 3.0 6GB 512-bit DDR5, Base:915 boost:1019 / 6008 MHz, DVI-I x 2, DVI-D x 1, Mini DP x 1, Fan

http://www.austin.net.au/catalog/product/view/id/8772/s/asus-gtx-titan-pci-e-30-6gb-512-bit-ddr5-base/

zombie67
02-11-13, 10:16 PM
Possible actual specs have maybe possibly surfaced possibly. The problem i have is that everything from the 512-bit DDR5 to the base and boost clocks mimics the 690, yet it's listed for sale as the Titan.

I can't follow what you are saying. The base and boost clocks are the same. And is listed as such. Okay. What is the problem then? They should be different or something? I am missing something obvious, I think.




http://www.austin.net.au/catalog/product/view/id/8772/s/asus-gtx-titan-pci-e-30-6gb-512-bit-ddr5-base/

The link isn't working for me. Maybe you have to have an account?

John P. Myers
02-12-13, 05:29 AM
I can't follow what you are saying. The base and boost clocks are the same. And is listed as such. Okay. What is the problem then? They should be different or something? I am missing something obvious, I think.
Just seems to be too much of a coincidence. Not impossible, but i don't recall it ever happening before. It's possible i'm overly paranoid :p



The link isn't working for me. Maybe you have to have an account?

They probably had to take it down like the other 2 sites that listed it early. Here's what the listing looked like: http://www.legitreviews.com/news/15115/

zombie67
02-12-13, 11:25 AM
Ah. Got it. Yeah, I can't believe the clocks are exactly the same.

Duke of Buckingham
02-12-13, 04:00 PM
:o My clock is much better. :D

[Attachment 1258]

John P. Myers
02-12-13, 08:18 PM
This benchmark seems to be correct: http://www.setiusa.us/attachment.php?attachmentid=1257&d=1360539612

As suspected, the 512bit interface seems to be incorrect.


Finally, we have some solid information about the upcoming flagship model from NVIDIA. Our sources confirm that the almighty Titan is set to be launched on February 18th in very limited quantities.

Forget about the 512-bit interface though. The GTX Titan will be based on the GK110 GPU with 2688 CUDA cores. There will be 224 texture mapping units and 48 raster operation units. The reference board will almost without a doubt be equipped with 6GB of GDDR5 memory across a 384-bit interface.

First leaks suggested that the core will be clocked at 732 MHz. DonanimHaber has reported reliable (as reliable as can be at this stage) information about the texture fill rate, which apparently comes in at 288 GT/s. That's faster than the GTX 690's 234 GT/s. Furthermore, the site is reporting that the GTX Titan would have computing power of 4.5 TFLOPS. If the provided numbers are correct, we are in the range of an 800-900 MHz core clock.

The card will definitely look better than the GTX 690, although the design will mirror its dual-GPU brother. The GeForce GTX Titan will be covered with a magnesium alloy, while the whole card will be metallic silver. There will only be a reference design, so no custom models. Additionally, in the first batch there may only be ASUS and EVGA cards available.

Expect reviews on the 19th and realistic availability around the 24th

Crazybob
02-13-13, 01:49 PM
So do you think I could get a couple to try out? Drooooolllll.................=P~

John P. Myers
02-13-13, 09:06 PM
The company castrated the Double Precision, and you can expect great Single Precision performance (2,688 CUDA cores times 875 MHz should result in around 4.5 TFLOPS SP from a single chip). Double-precision follows the Kepler tradition of 1/24 Single Precision performance. Yes, 4.5 TFLOPS SP and 196 GFLOPS DP, nicely protecting Tesla K20/K20X and the upcoming Quadro K6000 products.


Welp, nevermind then. :mad: Keep buying AMD.

DrPop
02-13-13, 11:47 PM
Welp, nevermind then. :mad: Keep buying AMD.

What the heck? Why on earth does the marketplace let them get away with this crap? Crippled to the tune of 1/24th its true power???????????:confused: Arrrrrgggghhhhh.:mad:

zombie67
02-13-13, 11:55 PM
Welp, nevermind then. :mad: Keep buying AMD.

Help me out. Can we get the equivalent numbers, or ratios, or whatever for the 580 and 680? I have no frame of reference here.

Also, at 1/24, how does that compare to (say) a 7970 at DP performance?

Finally, what projects require DP, or are significantly sped up with DP? Yes, MW does. GPUGRID uses almost no DP. Others?

John P. Myers
02-14-13, 04:05 PM
Help me out. Can we get the equivalent numbers, or ratios, or whatever for the 580 and 680? I have no frame of reference here.

Also, at 1/24, how does that compare to (say) a 7970 at DP performance?

Finally, what projects require DP, or are significantly sped up with DP? Yes, MW does. GPUGRID uses almost no DP. Others?

While you're right that DP isn't used for many projects, that was really the only selling point the Titan had for me - finally having an Nvidia GPU that didn't suck at double precision. Otherwise the $900 price tag for this thing is unfounded. Here's why:

Nvidia GTX Titan:
SP: 4.5 TFLOPS
DP: 196 GFLOPS
Price: $899
SP/$: $0.20 per SP GFLOPS
DP/$: $4.59 per DP GFLOPS

AMD HD 7970 GE:
SP: 4.096 TFLOPS
DP: 1024 GFLOPS
Price: $450
SP/$: $0.11 per SP GFLOPS
DP/$: $0.44 per DP GFLOPS

Without the DP, the novelty of the Titan wore off for me and i'm left with nothing but logic, which dictates there's no reason whatsoever to pay 2x the price of a 7970 GHz Edition for a GPU that only gets 400 more GFLOPS SP and less than 1/5 the DP. What are we actually paying for? Meh...

FWIW, the DP of the Titan *would* have been 1.5 TFLOPS if it wasn't crippled, blowing away the AMD 7970 GE.

Edit: As far as the 580 goes, it gets 197.6 GFLOPS DP - yes, even faster than the Titan and faster than the 680 which only has 128.8 GFLOPS DP, and the 580 costs less. You'll also get better SP compute results from the 580 than the 680.
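
If anyone wants to rerun this math when real prices show up, here's the same calculation as a quick Python sketch. The prices and GFLOPS figures are just the rumored numbers quoted above, so treat them as placeholders:

# Rough $/GFLOPS math using the (rumored, not official) numbers above.
cards = {
    "GTX Titan":  {"price": 899, "sp": 4500, "dp": 196},
    "HD 7970 GE": {"price": 450, "sp": 4096, "dp": 1024},
}

for name, c in cards.items():
    print(f"{name}: ${c['price'] / c['sp']:.2f} per SP GFLOPS, "
          f"${c['price'] / c['dp']:.2f} per DP GFLOPS")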

zombie67
02-14-13, 06:06 PM
Thanks! That is exactly what I was looking for. Very clear now.

Fire$torm
02-14-13, 06:13 PM
So the 580 is the only CUDA card worth buying. The logic behind nVidia's marketing practices leaves me wondering how they managed to stay in business this long.

DrPop
02-15-13, 02:38 AM
So the 580 is the only CUDA card worth buying. The logic behind nVidia's marketing practices leaves me wondering how they managed to stay in business this long.

They aren't too worried about the crunchin' market, I'm sure. Either gamers (for things like the 680 or Titan) or of course "professionals" using their uber expensive, non-crippled "pro" Teslas. Very, very lame for what we like to do with them.
I think the 570 is a good sweet spot card; the 580 is good, and of course the big dog double 590...but those are the last of the good Nvidia cards for crunching. Of course that's looking at it from a $/performance perspective like JPM was saying. And I don't see how to justify any other perspective on it! :)) ;)

zombie67
02-15-13, 09:13 PM
Edit: As far as the 580 goes, it gets 197.6 GFLOPS DP - yes, even faster than the Titan and faster than the 680 which only has 128.8 GFLOPS DP, and the 580 costs less. You'll also get better SP compute results from the 580 than the 680.


JPM: Maybe there is one more angle to look at, power consumption per GFLOPS?

I was reading this thread (http://www.gpugrid.net/forum_thread.php?id=3238&nowrap=true#27994), and I wonder how titan compares to the previous generations?


The CC describes the GPU's abilities, and the researchers use this to determine the GPU type. The top CC2.0 GPU's (Geforce GTX 590, 580, 570, 480, 560 Ti (448), 470, 465) are less power efficient than the top CC3.0 GPU's (GTX690, GTX680, GTX670, GTX660Ti, GTX660, GTX650Ti, GTX650) making the CC3.0 cards preferable. The GTX570 and GTX580 are roughly as powerful as the GTX660Ti, GTX670 and GTX680, but being less power efficient and an older generation they are slightly down the recommended list.

John P. Myers
02-16-13, 12:32 AM
JPM: Maybe there is one more angle to look at, power consumption per GFLOPS?

I was reading this thread (http://www.gpugrid.net/forum_thread.php?id=3238&nowrap=true#27994), and I wonder how titan compares to the previous generations?

Yes, that can come into play, which makes saying things like "the GTX 580 is a faster (or equivalent) cruncher in all projects at all times" a bit silly. However, that will be true 90% of the time, since only about 10% of the projects have actually bothered coding in CUDA to take advantage of higher CC ratings. Most have not. Many are actually coding in OpenCL, which doesn't rely on CC ratings so much, but more upon which driver version you have.

If it means anything, the Titan will have a CC of 3.5, though you can be pretty sure no one will create an application requiring it. The only differences are that 3.5 includes something called Dynamic Parallelism and Funnel Shift, both of which no other CC version has, and the "Maximum number of 32-bit registers per thread" is increased to 255 (CC 2.0 through 3.0 is 63; CC 1.0 to 1.3 is 127). How useful these 3 things would be to us, i don't know, but they are the only 3 differences from 3.0.

The 600 series is definitely more power-efficient than the 500 series. No doubt. But to measure GFLOPS/Watt between them is extremely difficult, mainly because the 600 series all claim higher GFLOPS than their 500 series counterparts, but real-world compute testing shows the 600 series to actually be slower in some cases. Even with something as simple as sorting, the GTX 580 can be ~28% faster than a 680. As for personal experience, i replaced a plain GTX 460 with an overclocked EVGA GTX 660 a few months ago. The 460 had a core clock of 675 MHz and the 660 had a core clock of 1123 MHz, but it was only 13% faster, when it should've been more than double according to the GFLOPS specs. Even though it was still a tad faster and even though it used less power, i gave it to my wife so she could play games with it, put my 460 back in, and once again swore off buying Nvidia only to be a victim of their intentional scams :) Essentially, if you check the GFLOPS ratings on wiki for the 600 series, to compare them properly to every other Nvidia GPU, divide them exactly in half.

The GTX 680 claims it can do 3090.4 GFLOPS. Pretend it says exactly half that (1545.2) and then you can compare it to the other GPUs (making the GTX 580 ~36 GFLOPS faster than the 680). The reason for this is the architecture change in Kepler which pretty much reduced the powerful cores to an imitation of AMD's weak stream processors. There's more of them, but they're smaller, simpler and slower. Of course if anyone ever codes specifically for the 600 series, meaning the exact same app wouldn't work on any previous gen., then we may see better results than half.
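
If you want to sanity-check a particular pair of cards yourself, here's that rule of thumb as a tiny Python sketch. The halving is my own rough correction for Kepler, not anything official, and the ratings are the paper numbers from wiki:

def effective_gflops(rated_gflops, is_600_series):
    # Halve the 600 series paper rating before comparing it to a
    # 500 series (or earlier) card, per the rule of thumb above.
    return rated_gflops / 2 if is_600_series else rated_gflops

print(effective_gflops(3090.4, True))   # GTX 680 -> 1545.2
print(effective_gflops(1581.1, False))  # GTX 580 -> 1581.1, ~36 GFLOPS more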

If you're looking at saving on your power bill, then yes the 600 series is better than the 500 series, however you'll have to keep the 600 series cards for years and years to save as much on your power bill as the difference in price for the cards.

Considering GFLOPS/Watt:
AMD 7970: 15.155
GTX 680: 15.85
GTX Titan: 18.816

AMD still wins *and* you get DP as a free bonus :p



http://media.bestofmicro.com/5/G/348820/original/0103%20Luxmark.png FP64 is taken into account here.

Mumps
02-16-13, 09:22 AM
Considering GFLOPS/Watt:
AMD 7970: 15.155
GTX 680: 15.85
GTX Titan: 18.816

AMD still wins *and* you get DP as a free bonus :p

Ummm. Am I misreading this? I thought in a GFLOPS/Watt rating, the higher numbers are better. So the Titan is the best of the three listed cards.

Now this may be completely wrong, but let's take a stab at costing this.

With a difference of about 55 watts in the TDP rating between the 7970 (195) and the GTX 680 (250), doesn't that equate to roughly 40 kWh of power consumption monthly? (55 * 720 hours in a 30-day month) Just taking a stab at a 12 cents/kWh price for electricity, that's about $17.00 monthly to run the 7970. And $5 a month more to run the 680. Wasn't the TDP of the Titan supposed to be about 235? Which makes it about $3.50 a month more expensive to run than the 7970.
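
Here's that estimate spelled out in Python in case I fat-fingered something. The wattages, the 720 hours, and the 12 cents/kWh rate are all just my guesses:

def monthly_cost_usd(watts, cents_per_kwh=12, hours=720):
    # watts -> kWh over a 30-day month -> dollars
    return watts * hours / 1000.0 * cents_per_kwh / 100.0

print(monthly_cost_usd(195))  # ~$16.85 a month
print(monthly_cost_usd(250))  # ~$21.60 a month, about $5 more
print(monthly_cost_usd(235))  # ~$20.30 a month, about $3.50 more than 195W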

John P. Myers
02-16-13, 09:41 AM
Ummm. Am I misreading this? I thought in a GFLOPS/Watt rating, the higher numbers are better. So the Titan is the best of the three listed cards.


Bah! You're right. Brain fart :p Nvidia does win the GFLOPS per watt race. #-o

Also the TDP is expected to be closer to 250W for the Titan. The 235W rating was based on what the K20X draws, but it's only clocked at 732MHz while the base Titan clock would be 875MHz, with the Asus possibly being released at 915MHz which might push TDP to ~260W

zombie67
02-16-13, 11:10 AM
Okay, so nothing compelling in the power consumption department either. Thanks!

Slicker
02-19-13, 10:54 AM
If it means anything, the Titan will have a CC of 3.5, though you can be pretty sure no one will create an application requiring it. The only differences are that 3.5 includes something called Dynamic Parallelism and Funnel Shift, both of which no other CC version has, and the "Maximum number of 32-bit registers per thread" is increased to 255 (CC 2.0 through 3.0 is 63; CC 1.0 to 1.3 is 127). How useful these 3 things would be to us, i don't know, but they are the only 3 differences from 3.0.

True, but then again, shouldn't nVidia be adjusting their OpenCL compiler so that if the hardware is 3.5, it would use all the 3.5 features? Since OpenCL is extremely similar to CUDA (especially compared to CAL), you would think that any changes nVidia does for their CUDA compiler optimization could also be put into their OpenCL optimizer.

John P. Myers
02-19-13, 06:53 PM
True, but then again, shouldn't nVidia be adjusting their OpenCL compiler so that if the hardware is 3.5, it would use all the 3.5 features? Since OpenCL is extremely similar to CUDA (especially compared to CAL), you would think that any changes nVidia does for their CUDA compiler optimization could also be put into their OpenCL optimizer.

That is an option, but it would still have to be coded for specifically. Some projects do write separate OpenCL Nvidia and OpenCL AMD apps, but not all. Those that do have the option of being that hardware-specific, which of course prevents the same OpenCL app from working on AMD, or vice versa. It would also prevent the same app from working on an Nvidia GPU with only CC 3.0 or lower. It's similar to the SSE instructions from Intel: if you code for SSE 2, but your CPU and compiler support SSE 4.2, you still only get SSE 2.

Regarding the Titan though, there is a glimmer of hope after all. New "official" specs have been released showing the Titan getting 1.3 TFLOPS FP64; however, it seems the release price has climbed to $999. While 1.3 TFLOPS DP would be nice, for that price you can get a 7990 (7970 x2) and get pretty close to 2 TFLOPS. I don't understand why they're only claiming 1.3 when it should be 1.5 since this architecture has DP working at 1/3 SP which is rated at 4.5 TFLOPS. The Titan is also 10.5 inches long, which is 0.5 inches shorter than the 690.

TDP is confirmed at 250W which is ~50W less than a 690. The 384-bit 6GB VRAM is clocked at 6008MHz. Base clock is 837MHz with boost at 876MHz. The Boost clock is now based on GPU temp (GPU Boost 2.0), rather than the power range of the core as it is on the 600 series. OverVoltage will also be hardware supported, but the companies that put their stickers on it (Asus, EVGA, etc) have the option of preventing you from using it if they feel like it. You can adjust the target temp of the GPU Boost (default is 80C). Increasing it will raise the Boost frequency.

The NDA on performance results will be lifted thursday.

trigggl
02-19-13, 07:20 PM
The NDA on performance results will be lifted thursday.

I assume that means Non-Disclosure Agreement. I thought I would spell that out for those of us who are TLA challenged.

John P. Myers
02-19-13, 08:53 PM
I assume that means Non-Disclosure Agreement. Correct :)
I thought I would spell that out for those of us who are TLA challenged. Us geeks love our Three-Letter Acronyms ;)

John P. Myers
02-19-13, 09:46 PM
I don't understand why they're only claiming 1.3 when it should be 1.5 since this architecture has DP working at 1/3 SP which is rated at 4.5 TFLOPS.

AHA! Reason found. Though 1.3 TFLOPS still beats a 7970, i have to still say boo @ Nvidia for this new gimmick of theirs. By default FP64 is set to run at standard Kepler speeds (1/24 FP32). You have to go into Nvidia's configuration menu and enable FP64 yourself. But! When you do, it disables the Boost clock and makes it likely the base clock will drop from 837MHz to 725MHz.


Titan, as we briefly mentioned before, is not just a consumer graphics card. It is also a compute card and will essentially serve as NVIDIA’s entry-level compute product for both the consumer and pro-sumer markets.

The key enabler for this is that Titan, unlike any consumer GeForce card before it, will feature full FP64 performance, allowing GK110’s FP64 potency to shine through. Previous NVIDIA cards either had very few FP64 CUDA cores (GTX 680) or artificial FP64 performance restrictions (GTX 580), in order to maintain the market segmentation between cheap GeForce cards and more expensive Quadro and Tesla cards. NVIDIA will still be maintaining this segmentation, but in new ways.

zombie67
02-19-13, 10:11 PM
I really like AnandTech reviews. But I don't read a lot of reviews, across many different review sites. How does AnandTech compare? What is the general consensus?

But that last post confuses me (easily done).

JPM says: By default FP64 is set to run at standard Kepler speeds (1/24 FP32). You have to go into Nvidia's configuration menu and enable FP64 yourself.

Okay, but when you enable it, what is the result? 8/24 FP32?

AnandTech says: The key enabler for this is that Titan, unlike any consumer GeForce card before it, will feature full FP64 performance, allowing GK110’s FP64 potency to shine through.

What is full performance? Does that mean it will match the Tesla K20 or K20X at DP? So then, what is the point of Tesla, if this performance can just be turned on at will?

Fire$torm
02-19-13, 10:28 PM
I think it means any form of clock boosting, i.e. Turbo mode, will be disabled or locked at some much-reduced clock speed. So nVidia is still hamstringing the Titan, but doing it in a different way. Sorta making the Titan a teaser for the high-end Tesla/Quadro lines.

John P. Myers
02-19-13, 10:43 PM
JPM says: By default FP64 is set to run at standard Kepler speeds (1/24 FP32). You have to go into Nvidia's configuration menu and enable FP64 yourself.

Okay, but when you enable it, what is the result? 8/24 FP32?
No, FP32 will still run at 4.5 TFLOPS (~3.9 TFLOPS @ 725 MHz). The SP and DP cores are separate (though both groups are within the same SMX, which is basically Nvidia's term for a cluster of cores; each of the 14 SMXs consists of 192 FP32 cores and 64 FP64 cores) and both can run at the same time, which is why clocks are reduced because of the increased TDP and heat output.
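
To put rough numbers on that, here's a back-of-the-envelope Python sketch using the rumored core counts and clocks. Each core does 2 floating-point ops per clock via fused multiply-add, so these are theoretical peaks, not measured results:

SMX_COUNT, FP32_PER_SMX, FP64_PER_SMX = 14, 192, 64
OPS_PER_CLOCK = 2  # fused multiply-add counts as 2 FLOPs per core per clock

def peak_tflops(cores, clock_mhz):
    return cores * OPS_PER_CLOCK * clock_mhz / 1e6

print(peak_tflops(SMX_COUNT * FP32_PER_SMX, 837))  # ~4.5 TFLOPS FP32 at base clock
print(peak_tflops(SMX_COUNT * FP32_PER_SMX, 725))  # ~3.9 TFLOPS FP32 if it drops to 725 MHz
print(peak_tflops(SMX_COUNT * FP64_PER_SMX, 837))  # ~1.5 TFLOPS FP64 at the full 1/3 rate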


AnandTech says: The key enabler for this is that Titan, unlike any consumer GeForce card before it, will feature full FP64 performance, allowing GK110’s FP64 potency to shine through.

What is full performance? Does that mean it will match the Tesla K20 or K20X at DP? So then, what is the point of Tesla, if this performance can just be turned on at will?

It's very similar to the K20X, but the K20X supports ECC while the Titan doesn't. The Titan also won't support direct memory access to other 3rd-party PCIe devices often found in servers/supercomputers, among other things not relevant to desktops or most workstations, like HyperQ, which allows multi-core CPUs to simultaneously utilize the CUDA cores on a single K20X. Also, there's no programming tech support.

zombie67
02-19-13, 11:06 PM
Okay, but when you enable it, what is the result? 8/24 FP32?


No, FP32 will still run at 4.5 TFLOPS (~3.9 TFLOPS @ 725 MHz).

Er, that's not what I was asking. Or at least what I thought I was asking. Let me try again. When you enable it, what happens? If not-enabled, DP is 1/24 of FP32. If enabled DP is 1/3, so that is 8/24 of FP32, right? Now I've confused myself.

Okay, let's try this a different way. With the Titan DP enabled, how will it compare to the 7970 at a DP project like milkyway (assuming they had a similar app for both)?

John P. Myers
02-20-13, 06:38 AM
With the Titan DP enabled, how will it compare to the 7970 at a DP project like milkyway (assuming they had a similar app for both)?

Assuming the MW app is coded equally efficiently for both Nvidia and AMD and that both apps require the completion of the exact same amount of work, the Titan would be ~30% faster than the 7970.

John P. Myers
02-21-13, 08:51 PM
[Attachment 1268]

Not bad at all. It's about time Nvidia allowed DP.

Edit: Larger image: http://hothardware.com/articleimages/Item1992/comp1.png

zombie67
02-21-13, 08:58 PM
Maybe I will buy one after all...if the cost comes down.

Wish they had included the 580 in that chart.

John P. Myers
02-21-13, 09:08 PM
Maybe I will buy one after all...if the cost comes down.

Wish they had included the 580 in that chart.

The 580 would've been within 1% above the DP-disabled Titan. Unless you want the Titan for bragging rights (which i do, personally :D ) you're better off getting another 7990 for the price/performance/slots used.

Fire$torm
02-22-13, 04:43 AM
Looks like for the moment the only way to get a Titan is to buy a complete pre-built system ---> http://www.forbes.com/sites/jasonevangelho/2013/02/21/nvidias-geforce-gtx-titan-available-today-what-to-know-and-where-to-buy/

Edit: You're gonna love this.....
The Newegg overview video (Here (http://www.newegg.com/Product/Product.aspx?Item=N82E16814121724&Tpk=nvidia%20titan)) puts a true spin-doctor lie into the reason for the automatic downclocking when DP is enabled. Somewhere in the first 4~6 minutes of the video, it is stated that downclocking helps to maintain compute accuracy in DP mode. (B.F.S.)

Crazybob
02-22-13, 09:34 AM
Received an E-mail this morning from EVGA regarding the new Titan. $999.00. Not available yet though.

http://www.evga.com/Products/Product.aspx?pn=06G-P4-2790-KR

DrPop
02-22-13, 06:59 PM
...You're gonna love this.....
The Newegg overview video (Here (http://www.newegg.com/Product/Product.aspx?Item=N82E16814121724&Tpk=nvidia%20titan)) puts a true spin-doctor lie into the reason for the automatic downclocking when DP is enabled. Somewhere in the first 4~6 minutes of the video, it is stated that downclocking helps to maintain compute accuracy in DP mode. (B.F.S.)

Hey F$ - that is some real funny stuff - I dare you to call them on it! :D :-bd

Fire$torm
02-22-13, 07:57 PM
Hey F$ - that is some real funny stuff - I dare you to call them on it! :D :-bd

Can't.... Newegg is still my go-to place for hardware. Don't want to piss them off... :P

John P. Myers
02-25-13, 07:17 PM
If you get an EVGA Titan, here's a nifty link to control the LED lights which illuminate the logo on top of the GPU http://www.evga.com/forums/tm.aspx?m=1869730

Also works on the 690.

Fire$torm
02-25-13, 07:29 PM
Titan on eBay ~$1,500 (+|-)
http://compare.ebay.com/like/321078158032?var=lv&ltyp=AllFixedPriceItemTypes&var=sbar

Newegg - Out of stock.....
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130897&Tpk=EVGA%20GTX%20TITAN

DrPop
02-25-13, 09:28 PM
Man, for that price I would so much rather have a 7990! :p

zombie67
02-25-13, 11:31 PM
Man, for that price I would so much rather have a 7990! :p

Been there. Been through two of them. Both completely unstable at load. The replacement, with the same instability problems, is sitting on a shelf. What a waste.

Better would be 3x 7970 GHZ cards.

DrPop
02-26-13, 12:49 AM
Been there... What a waste...Better would be 3x 7970 GHZ cards.

Ah, OK. Well, I've never had one, they look so good on paper. ;) Guess that's just marketing hype for you?:confused: :o

Fire$torm
02-26-13, 10:29 AM
Been there. Been through two of them. Both completely unstable at load. The replacement, with the same instability problems, is sitting on a shelf. What a waste.

Better would be 3x 7970 GHZ cards.

Hey Z,

I did not realize the 7990 was giving you such headaches. If I may ask, is it unstable only when crunching or just plain unstable?

John P. Myers
04-08-13, 09:46 PM
A new cheaper Titan is being released, called the Titan LE. 5GB GDDR5, 320-bit, 2304 shader cores, same clocks as the current Titan. Expected to be released in Q3 and cost ~$800

zombie67
04-08-13, 10:05 PM
A new cheaper Titan is being released, called the Titan LE. 5GB GDDR5, 320-bit, 2304 shader cores, same clocks as the current Titan. Expected to be released in Q3 and cost ~$800

Gotta do something with "only slightly flawed" chips...

John P. Myers
04-23-13, 10:29 PM
A new cheaper Titan is being released, called the Titan LE. 5GB GDDR5, 320-bit, 2304 shader cores, same clocks as the current Titan. Expected to be released in Q3 and cost ~$800

*sigh* And now new info surfacing...

The Titan LE now equals the GTX 780, but it seems they decided on 2496 cores with only 2GB of memory, bringing the price down to $600. Also there will be all the crippled FP64 we've all come to love (or despise with murderous contempt) :) As with the 600 series, divide GFLOPS ratings in half when comparing them to a 500 series or earlier GPU.

The 700 series is of course nothing but a refresh/renumbering of the 600 series. The GTX 770 will be the GTX 680, so feel free to roll your eyes hard when someone tells you the 770 is 23% faster than the 670, which will be renumbered as the 760 Ti. Slightly higher clocks may be thrown in here or there, but nothing to be impressed over.

How soon can we start yawning at these new GPUs? Mid-May.

dmike
04-28-13, 11:32 AM
Just wanted to chime in with this as it had come up between a friend and me...

In the "research" and reading I've done on the Titan, specs aside, just looking at the performance, it seems that there is an obvious boost with CUDA apps based on rendering and such. But with DC projects I've yet to find any reliable data that puts the Titan anywhere near the price point.

I'm sorry, I can't remember where, but I found some benchmark with regard to Folding that had the Titan taking out WUs about 10% faster than a GTX 580. If that's true, I have to say it's quite disappointing. Yes I know, hearsay, not evidence, just going by memory here :) Please someone confirm/correct me on this if you have the info.
Sure, the 580 is one of the best crunchers imo, but a $1k card that just barely outperforms it is ridiculous.

Now with regard to gaming, my understanding is that the Titan shines, in many cases walking away from the 690 in performance with substantial leads depending upon titles/settings. My impression has always been that Kepler is a gaming-based architecture first, and a cruncher as an afterthought. Conversely, the Fermi chips pound for pound out-crunch Kepler cards but won't live up to the gaming performance.

When I built my rig it was with gaming in mind. I found a good middle ground on price/performance to throw in 2x 660Ti cards. It's an excellent setup and I run all modern games at full specs without issue. OTOH, I have a secondary box with a single 550Ti in it that can knock out a WU just about as fast as one of the 660Ti cards does.

To me, the Titan is far from a Tesla. Just from what I've seen, the card is for gaming and will also excel at CUDA-based rendering. It does not, however, seem to be the best or even a good choice for DC work, and it seems that Nvidia's CUDA marketing is focused upon rendering apps, as there seems to be no mention of DC work when they market these cards to us.

Edit;
----------
This is not all-encompassing, but I wanted to point out some benchmark specs within Einstein with regard to performance. In this case, the Titan vs the GTX 580. Each system has 2 of each card.
The Titan first;
[Attachment 1307]

Now the 580 setup;
[Attachment 1308]

Discuss?

John P. Myers
04-28-13, 12:22 PM
Well the *ONLY* benefit to having a Titan is that it has the best FP64 performance you'll get out of any Nvidia card, by far. FP32 results are ok, but not worth throwing $1k at, as you said. Of course this also only applies to projects that don't offer AMD GPU apps. There's never any reason to buy anything made by Nvidia if you can crunch using a Radeon. Primegrid is one example of a project where you must use Nvidia, however (i think only 1 app was adapted to AMD). The Titan would do some serious stomping here, as most Primegrid GPU apps require FP64 (a 100% requirement on all PSA CUDA apps). For Milkyway, you might as well stick with AMD since performance/$ is much more in your favor. For GPUGrid the boost from the Titan's FP64 would be nominal, but still something. Since they give bonus points for returning completed WUs early, it may or may not have an effect there occasionally.

dmike
04-28-13, 12:49 PM
Hey John, I understand all of that. AMD excels in many ways. I'm just kind of looking at it from an upgrade perspective, i.e. Titan vs. what Nvidia already offers.

John P. Myers
04-28-13, 01:18 PM
Hey John, I understand all of that. AMD excels in many ways. I'm just kind of looking at it from an upgrade perspective, i.e. Titan vs. what Nvidia already offers.

Well if you need FP64, upgrading to a Titan is the way to go. If not, stick with the GTX 500 series - better at crunching than the 600 series, and also cheaper. If you do opt for the Titan, be aware the only other Nvidia GPU you can add to that computer is another Titan, due to the drivers. If you are also a gamer, as some of us are, a GTX 690 would get you better performance on average than a Titan for the same price, and will also be a faster cruncher than the Titan on FP32. However if you need FP64 you will be crippled with the 690. Or, for the same $1k, you could get an AMD 7990 (or 2 7970s) and excel at both FP32 and FP64. The decision would have to be made on your personal choice of projects/apps/games as i don't know what your preferences are :p

dmike
04-28-13, 02:33 PM
Thanks for that, John.

Yeah first and foremost I use my rigs for gaming so that has to be at the forefront of video card choices for me. The problem I have with the 690 is in it being a single card multi GPU setup which makes it a pass decision, for intricate details and reasons I won't bother boring you with at this time.

The next most important thing to me is CUDA rendering. I make videos in Sony Vegas and can say that GPU video rendering has been close to 500% faster for me on my setup. I love it.

Last in place is DC crunching. Considering these things, I pass on Fermi cards as Kepler cards are better gaming performers. The Titan definitely does shine with games, but by simply adding a third 660Ti, I'd actually be outperforming the Titan on games by a substantial amount. So from that perspective, on all fronts it's a pass decision.

I'm really surprised to see the lack of performance increase that the Titan has in most DC applications vs. the 580. As someone had mentioned earlier in this thread, Nvidia likes to rebrand cards and often new products are only marginally better than what they already have to offer. Really, the Titan is what the 690 should have been all along.

In summary, I'm satisfied with my current setup and won't be upgrading any time soon. For games, my i7 3770 with 32GB RAM and 2x 660Ti is plenty good. When and if I do upgrade however, I just can't justify going for the Titan. If I were to upgrade today, hypothetically speaking, I'd go with simply adding the third 660Ti instead of a 690 or a Titan. Aside from the circumstances that you mention, for the cost of an additional 660Ti (about $300) I will have gaming and crunching performance that actually exceeds the Titan. Plus, when I'm done with the setup, I have 3 cards that can be put in other boxes to upgrade their aging components (I have two systems with a 550Ti and one with a GTX 260).

Of course it's not ideal to set out to have 3x mid range video cards, but from where I'm at now with 2 already, it makes the most sense cost/performance wise.

John P. Myers
04-28-13, 04:33 PM
Thanks for that, John.

Yeah first and foremost I use my rigs for gaming so that has to be at the forefront of video card choices for me. The problem I have with the 690 is in it being a single card multi GPU setup which makes it a pass decision, for intricate details and reasons I won't bother boring you with at this time. Microstuttering perhaps? :p AMD is (far) worse with that than Nvidia, admittedly.



I'm really surprised to see the lack of performance increase that the Titan has in most DC applications vs. the 580. As someone had mentioned earlier in this thread, Nvidia likes to rebrand cards and often new products are only marginally better than what they already have to offer. Really, the Titan is what the 690 should have been all along.

Actually the Titan was originally supposed to be the GTX 680. The current 680 was supposed to be the 660Ti. What happened was Nvidia saw the 660Ti's performance was about the same as AMD's 7970 (in games), so the 660Ti was released as the GTX 680 for $500. Imagine those profits...

Anyway, if you check GFLOP specs on GPUs before making a purchase, or just out of curiosity, i'll explain something about why the 500 series is still great for crunching. Starting with the 600 series, Nvidia dropped the shader clock which ran 2x the core clock. This is why when you compare the GFLOPS of the 600 series to the 500 (or earlier) series, you must cut the rating for the 600 series card in half. Comparing a 600 series GPU to an AMD GPU is essentially even. The same will be true of the 700 series.

For example the 660Ti is rated at 2459.52 GFLOPS FP32. Divide by 2 and the actual DC performance is 1229.76 GFLOPS. The 560Ti is rated at 1311.7 GFLOPS making it several percentage points faster. And it costs less. And it has better FP64, but still crappy. Those numbers are based on reference speeds. On OC versions, the gap is a bit wider.

It does sound like you need to stick with Nvidia though for your specific uses. Just remember to keep your power bill in mind when adding so many mid-range cards to a system to meet the performance of a single high-end card. Also remember to subtract ~30% of the combined performance for 3-way SLI (Xfire as well) when making comparisons. It'll never scale linearly :)

dmike
04-28-13, 04:53 PM
Thanks again for that John.

Yeah, I'm aware of the difference between Kepler and Fermi when it comes to crunching. I've tried to explain some of these things to people who were touting the Kepler cards as the Fermi's replacement and could never get through to them. Then there were others who said that Kepler was a huge step backwards because the cards didn't perform as well as the Fermi 500 series. But Kepler was not designed first and foremost with DC in mind. You're absolutely right that the 500 series is a much better performer with DC applications. With gaming though, Kepler romps the Fermi series (and by a wide margin), and rightly so; it's what they were primarily designed for. It surprises me that the Titan is of Kepler architecture. I don't know enough about it to really understand why at this point, but with a card like that, and Nvidia touting CUDA as the reason for it, I question why they didn't just build a super awesome Fermi-based card.

I did not know about the whole 680 was supposed to be the 660ti and such. Thanks for the info.

I'm passing on the 690, but not because of microstuttering. I used to have the stutter with my 660Ti SLI setup but some EVGA patch eliminated it. Without going into great detail, I'll just say that SLI on a stick has its issues. The 690 can be very finicky with certain motherboards and some apps don't like it much at all.

When looking at 3-way SLI, I'm just looking at benchmarks of that setup, not trying to figure what boost I'll get by adding a third card. It's pretty surprising, but on a lot of titles a 3-way 660Ti setup beats out the Titan on frame rates by around 20%, and in some cases (such as Battlefield 3) by as much as 45%. Granted, it's 3 cards vs. 1, so pound for pound obviously the Titan is stronger, but for someone with my current setup I'm looking at getting above-Titan performance in games for $300.

Electric bill is an issue that most have to contend with, but not me. Fortunately, my wife has all of our bills paid through her employment so we never even get an electric bill :) I feel guilty about it at times because we're not very green and should be making at least some attempt to conserve power, but instead we've got 4 systems running 24/7, multiple televisions, we grow tomatoes indoors... hehe we're just huge electricity consumers.

John P. Myers
04-29-13, 03:01 PM
Something to note:

Overclocking the Titan is ridiculously simple due to the characteristics of GPU Boost 2.0 which raises the boost clock above stock settings if the Titan feels the temps are low enough to do so.

Reference clocks for the Titan are 837MHz base and 876MHz boost. By doing *nothing* more than increasing the manual fan speed to the max setting of 85%, the Titan will settle in at 993MHz. This puts the Titan's rating at 5337 GFLOPS FP32 for a standard edition card. Of course it might be a bit loud with the fan that high :p If that bothers you, just pick up one of these EVGA Titan Hydro Coppers (http://www.evga.com/Products/Product.aspx?pn=06G-P4-2794-KR) :) This would allow you to settle the Titan in at 1137MHz without even trying very hard, and would also be quiet. It would also push the rating to over 6100 GFLOPS, over a 33% increase from stock.
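
Since the FP32 rating just scales linearly with the core clock (2688 cores x 2 ops per clock), the numbers above are easy to reproduce with a quick Python sketch; keep in mind the clock any individual card settles at will vary:

def titan_fp32_gflops(core_mhz, cores=2688, ops_per_clock=2):
    return cores * ops_per_clock * core_mhz / 1000.0

print(titan_fp32_gflops(876))   # ~4709 GFLOPS at the stock boost clock
print(titan_fp32_gflops(993))   # ~5338 GFLOPS with the fan pegged at 85%
print(titan_fp32_gflops(1137))  # ~6113 GFLOPS on water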

zombie67
05-21-13, 01:06 AM
Something to note:

Overclocking the Titan is ridiculously simple due to the characteristics of GPU Boost 2.0 which raises the boost clock above stock settings if the Titan feels the temps are low enough to do so.

Reference clocks for the Titan are 837MHz base and 876MHz boost. By doing *nothing* more than increasing the manual fan speed to the max setting of 85%, the Titan will settle in at 993MHz. This puts the Titan's rating at 5337 GFLOPS FP32 for a standard edition card. Of course it might be a bit loud with the fan that high :p If that bothers you, just pick up one of these EVGA Titan Hydro Coppers (http://www.evga.com/Products/Product.aspx?pn=06G-P4-2794-KR) :) This would allow you to settle the Titan in at 1137MHz without even trying very hard, and would also be quiet. It would also push the rating to over 6100 GFLOPS, over a 33% increase from stock.

Any titans on the team yet? Anyone tried this oc trick?

John P. Myers
05-21-13, 04:51 PM
Any titans on the team yet? Anyone tried this oc trick?

These guys did, if no one here has yet: http://www.hardocp.com/article/2013/04/29/nvidia_geforce_gtx_titan_overclocking_review/2

John P. Myers
06-23-13, 02:36 PM
Good news. I hadn't looked at Nvidia's most recent driver release, but now you CAN have a Titan and other Nvidia cards in the same computer at the same time. The Titan's driver is no longer separate from the rest.

zombie67
06-23-13, 04:36 PM
Nice!

Edit: any thoughts on the 780 GTX? About half the cost of a Titan, and 90% of the performance.

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units#GeForce_700_Series

John P. Myers
06-24-13, 09:40 AM
Nice!

Edit: any thoughts on the 780 GTX? About half the cost of a Titan, and 90% of the performance.

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units#GeForce_700_Series

It's fine for FP32 but your DP drops from 1310 GFLOPS with the Titan to 165.7 GFLOPS on the 780. Other than that, it's definitely worth the money for FP32 in comparison. From a power usage viewpoint, TDP is the same on both. The Titan has more cores, but the 780's base/boost clock is higher.

Edit: Though a 7970 GHz Edition beats the 780 by about 100 GFLOPS in FP32 for about $200 less, and you would also get 1024 GFLOPS of DP.

zombie67
07-30-13, 10:31 PM
Something to note:

Overclocking the Titan is ridiculously simple due to the characteristics of GPU Boost 2.0 which raises the boost clock above stock settings if the Titan feels the temps are low enough to do so.

Reference clocks for the Titan are 837MHz base and 876MHz boost. By doing *nothing* more than increasing the manual fan speed to the max setting of 85%, the Titan will settle in at 993MHz. This puts the Titan's rating at 5337 GFLOPS FP32 for a standard edition card. Of course it might be a bit loud with the fan that high :p If that bothers you, just pick up one of these EVGA Titan Hydro Coppers (http://www.evga.com/Products/Product.aspx?pn=06G-P4-2794-KR) :) This would allow you to settle the Titan in at 1137MHz without even trying very hard, and would also be quiet. It would also push the rating to over 6100 GFLOPS, over a 33% increase from stock.

Well, I have one now.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814130899

FWIW, it is running at 993MHz, and I have changed no settings.

Question: Where is the setting in the nvidia control panel to change the DP setting? I can't find it.

John P. Myers
07-31-13, 12:49 AM
Nvidia Control Panel

Manage 3D settings>>Global settings>>CUDA - Double precision

zombie67
07-31-13, 01:09 AM
Well, that was obvious. Thanks, nVidia.

Thanks JPM!!

zombie67
08-09-13, 11:03 PM
FWIW, with fan on auto, the card is running at 1045 MHz.

DrPop
08-10-13, 02:16 PM
Sweet! So how does the output compare? Was it worth it?:confused:

zombie67
08-11-13, 05:55 PM
I haven't done any benchmarking yet. But I can tell you this: I have two nvidia machines on Bitcoin Utopia (I know, not the best place for nvidia). One has a 590 and a 580, three GPUs total. The other has the Titan. The Titan does about half the credits/day of the other machine.

DrPop
08-11-13, 10:57 PM
So it's probably fairly good performance per watt then. :) That machine with BOTH 590 & 580 must be sucking some serious juice! :D

Cruncher Pete
08-12-13, 02:50 AM
So it's probably fairly good performance per watt then. :) That machine with BOTH 590 & 580 must be sucking some serious juice! :D

DrPop, I was thinking the same thing. (Like-minded people think alike; only fools never differ.) Then again, if those GPUs were on their own in different machines, would they not consume the same power as the two in one machine? Irrespective, these GPUs consume too much power and cost me heaps when the power bill comes around...

Sarge104
08-12-13, 01:16 PM
zombie67, are you using the software that came with your card or a third-party software package like MSI Afterburner? The reason I'm asking is I've recently installed the EVGA model in my system and the software has the clock at a little over 550MHz when the card itself should be closer to 837MHz. This has me concerned, due to the fact it isn't even trying to boost when I'm running boinc. Will post a screen shot this evening of the software screen and maybe figure this out. I'm hoping it's not a faulty card :(

zombie67
08-12-13, 01:37 PM
No software was required to make it run at that speed.

However, I am using the EVGA precision x to play around with OC.

Sarge104
08-12-13, 03:01 PM
No software was required to make it run at that speed.

However, I am using the EVGA precision x to play around with OC.

Same here with the Precision X. The card made a liar out of me when I checked it during lunch while running the burn test: 1056MHz boosted at auto fan speed :D I'm much happier. I don't know if it was just not engaging during the seti WUs or if the Primegrid WUs are just beefier, requiring it to spin up to full burn.

Will post a screen of my setup with Precision X and see if maybe we can squeeze a bit more out of the card. It seems like it's getting plenty of power, but I did get a V-core warning during some gaming last night, so I'll see what is going on with the CPU power.

Sarge104
08-12-13, 05:14 PM
Question: Water cooling, will it ultimately be worth it? I'm seeing in most of the reviews that it does the obvious: keeps the temp down. I'm still not sold on it making the card perform significantly better...most of the reviews only show about a 10% difference in performance, so would it be worth the extra 200 bucks?

John P. Myers
08-12-13, 06:03 PM
First, seti doesn't work the Titan hard enough half the time for it to run at full speed. 2nd, my Titan is water cooled. 1071 MHz standard speed out of the box. Highest temp it's ever reached was 39C on Dirt. The catch is some of the RAM is on the back of the card, so you'll still need a gentle breeze blowing across there.

Sent from my Galaxy S4 using Tapatalk 4 beta

zombie67
08-12-13, 06:18 PM
Yes, I have to run 2-3 setiathome_v7 tasks at a time to get load up to 100%.

EmSti
02-25-14, 10:03 PM
Any advice for overclocking the Titans? Very little out there that I can find for the Titan Black (no real surprise there). I figure both Titan models are not really all that different, so please share your wisdom/experience. Articles on how to O.C. Titans are the most varied I have read for any card, mainly because of the way the card manages boost and temps and folks' desire to increase voltage.

Currently running the Titan Black at 1212 MHz core, 3703 MHz memory, stock voltage, +106% power, and a custom fan profile; crunching on GPUGrid. I should probably tune it in on a project like Collatz first so I don't lose an 8+ hour wu, but what the heck.

Tank_Master
02-25-14, 10:13 PM
Where did you get the Titan Black? I haven't been able to find it in stock anywhere... I have the original Titan, and I am overclocking it some, but it sounds like you already have the basic idea. I use MSI Afterburner as my overclocking tool.

EmSti
02-25-14, 10:29 PM
I got it from evga.com the first day they released, kept an eye on the main sites until they appeared, then pondered it for a while. When they started to disappear from the sites later in the day, I pulled the trigger.

EmSti
02-25-14, 11:40 PM
Looks like you can order from amazon.com if you are willing to wait a few weeks. Base and SC models

EmSti
02-26-14, 10:04 AM
Currently running the Titan Black at 1212 MHz core, 3703 MHz memory, stock voltage, +106% power, and a custom fan profile; crunching on GPUGrid. I should probably tune it in on a project like Collatz first so I don't lose an 8+ hour wu, but what the heck.

Pulled back to a much more modest 1187 core, stock memory (3499) and +104% power. Last night the core clock kept dropping to something like 546 or 564. I would have to reboot to get it to come back up; restarting the activity on the card didn't help. I wasn't hitting a temp max, so maybe it was memory heat, a power limit, or some other issue. It was interesting, because I would be reading and typing away on the machine and notice that the core clock had dropped, but there was no driver warning or screen blips. It ran overnight at stock speed just fine. I just need to work out that sweet spot.

EmSti
02-26-14, 04:00 PM
Memory overclock may have been the problem. Ran with 1187 and 1212 core clock with no issues today. Currently running at a 1257 core clock now and that might be the sweet spot for me. I have been doing multiple activities beyond boinc with no downclocking-to-549 issues (sucks that you have to reboot to get it off that number again). Haven't tried video yet. 1270 was a near-instant failure, so I'm somewhere near the right core clock for GPUGrid. I will know for sure when the current wus get through and validate.