
Thread: Nvidia Titan

  1. #21
    Platinum Member
    John P. Myers's Avatar
    Join Date
    January 13th, 2011
    Location
    Jackson, TN
    Posts
    4,502

    Re: Nvidia Titan

    Quote Originally Posted by zombie67
    JPM: Maybe there is one more angle to look at, power consumption per GFLOPS?

    I was reading this thread, and I wonder how the Titan compares to the previous generations?
    Yes, that can come into play, which makes saying things like "the GTX 580 is a faster (or equivalent) cruncher in all projects at all times" a bit silly. However, it will be true 90% of the time, because only about 10% of projects have actually bothered coding in CUDA to take advantage of higher CC ratings. Most have not. Many are actually coding in OpenCL, which doesn't rely on CC ratings so much as on which driver version you have.

    If it means anything, the Titan will have a CC of 3.5, though you can be pretty sure no one will create an application requiring it. The only differences from 3.0 are that 3.5 adds something called Dynamic Parallelism and Funnel Shift, both of which no other CC version has, and raises the maximum number of 32-bit registers per thread to 255 (CC 2.0 through 3.0 is 63; CC 1.0 to 1.3 is 127). How useful these three things would be to us, I don't know.
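    For illustration, here's a minimal CUDA sketch of those two 3.5-only features; the kernels and values are hypothetical, and the device-side launch would need something like nvcc -arch=sm_35 -rdc=true -lcudadevrt to build.

    Code:
    // Hypothetical kernels illustrating the two CC 3.5-only features.
    // Build sketch (assumption): nvcc -arch=sm_35 -rdc=true demo.cu -lcudadevrt
    __global__ void child(int *out)
    {
        out[threadIdx.x] = threadIdx.x;              // trivial placeholder work
    }

    __global__ void parent(int *out)
    {
    #if __CUDA_ARCH__ >= 350
        // Dynamic Parallelism: a running kernel launches another kernel,
        // with no round-trip to the CPU.
        if (threadIdx.x == 0)
            child<<<1, 32>>>(out);

        // Funnel Shift: shift the 64-bit pair hi:lo left by 4 bits and keep
        // the most-significant 32 bits, in one instruction.
        unsigned int hi = 0xDEADBEEFu, lo = 0x12345678u;
        out[32] = (int)__funnelshift_l(lo, hi, 4);
    #endif
    }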

    The 600 series is definitely more power-efficient than the 500 series, no doubt. But measuring GFLOPS/watt between them is extremely difficult, mainly because the 600 series cards all claim higher GFLOPS than their 500 series counterparts, yet real-world compute testing shows the 600 series to actually be slower in some cases. Even with something as simple as sorting, the GTX 580 can be ~28% faster than a 680.

    As for personal experience, I replaced a plain GTX 460 with an overclocked EVGA GTX 660 a few months ago. The 460 had a core clock of 675 MHz and the 660 had a core clock of 1123 MHz, but the 660 was only 13% faster, when it should have been more than double according to the GFLOPS specs. Even though it was still a tad faster and used less power, I gave it to my wife so she could play games with it, put my 460 back in, and once again swore off buying Nvidia only to be a victim of their intentional scams. Essentially, if you check the GFLOPS ratings on wiki for the 600 series, to compare them properly to every other Nvidia GPU, divide them exactly in half.

    The GTX 680 claims it can do 3090.4 GFLOPS. Pretend it says exactly half that (1545.2) and then you can compare it to the other GPUs (making the GTX 580 ~36 GFLOPS faster than the 680). The reason for this is the architecture change in Kepler, which pretty much reduced the powerful cores to an imitation of AMD's weak stream processors: there are more of them, but they're smaller, simpler and slower. Of course, if anyone ever codes specifically for the 600 series, meaning the exact same app wouldn't work on any previous generation, then we may see better results than half.
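    As a quick sanity check of that rule, here's a back-of-the-envelope sketch in plain host code (the GTX 580's 1581.1 GFLOPS rating is my assumption from the same wiki tables):

    Code:
    // Back-of-the-envelope check of the "divide Kepler GFLOPS in half" rule.
    #include <cstdio>

    int main()
    {
        const double gtx680_rated = 3090.4;          // advertised GFLOPS
        const double gtx580_rated = 1581.1;          // Fermi rating (assumed)
        const double gtx680_effective = gtx680_rated / 2.0;

        printf("GTX 680 effective: %.1f GFLOPS\n", gtx680_effective);  // 1545.2
        printf("GTX 580 lead: %.1f GFLOPS\n",
               gtx580_rated - gtx680_effective);                       // ~35.9
        return 0;
    }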

    If you're looking at saving on your power bill, then yes, the 600 series is better than the 500 series; however, you'd have to keep a 600 series card for years and years before the power savings made up for the difference in price between the cards.

    Considering GFLOPS/Watt:
    AMD 7970: 15.155
    GTX 680: 15.85
    GTX Titan: 18.816

    Nvidia does win the GFLOPS/W contest.
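    Those figures appear to be rated GFLOPS divided by TDP; here's a sketch of the arithmetic (the Titan input assumes 2688 cores at the 875MHz boost clock and 250W, which is my guess at the numbers used):

    Code:
    // Reverse-engineering the GFLOPS/Watt figures above (inputs are assumptions).
    #include <cstdio>

    int main()
    {
        printf("HD 7970:   %.3f\n", 3788.8 / 250.0);            // 15.155
        printf("GTX 680:   %.3f\n", 3090.4 / 195.0);            // 15.848
        printf("GTX Titan: %.3f\n", 2688 * 2 * 0.875 / 250.0);  // 18.816
        return 0;
    }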



    http://media.bestofmicro.com/5/G/348...%20Luxmark.png FP64 is taken into account here.
    Last edited by John P. Myers; 02-16-13 at 09:43 AM. Reason: I'm an idiot


  2. #22
    Platinum Member
    Mumps's Avatar
    Join Date
    October 28th, 2010
    Location
    Milwaukee, WI
    Posts
    3,994

    Re: Nvidia Titan

    Quote Originally Posted by John P. Myers
    Considering GFLOPS/Watt:
    AMD 7970: 15.155
    GTX 680: 15.85
    GTX Titan: 18.816

    AMD still wins *and* you get DP as a free bonus
    Ummm. Am I misreading this? I thought in a GFLOPS/Watt rating, the higher numbers are better. So the Titan is the best of the three listed cards.

    Now this may be completely wrong, but let's take a stab at costing this.

    With a difference of about 55 watts in the TDP rating between the GTX 680 (195) and the 7970 (250), doesn't that equate to roughly 40 kWh of energy monthly? (55 W × 720 hours in a 30-day month.) Just taking a stab at a 12 cents/kWh price for electricity, that's about $17.00 monthly to run the 680, and $5 a month more to run the 7970. Wasn't the TDP of the Titan supposed to be about 235? That would make it about $1.30 a month cheaper to run than the 7970.
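    A rough sketch of that arithmetic in host code (12 cents/kWh and a 720-hour month assumed, as above):

    Code:
    // Monthly electricity cost at TDP (12 cents/kWh, 720-hour month assumed).
    #include <cstdio>

    static double monthly_cost_usd(double watts)
    {
        const double hours_per_month = 720.0;        // 30 days x 24 hours
        const double usd_per_kwh = 0.12;
        return watts * hours_per_month / 1000.0 * usd_per_kwh;
    }

    int main()
    {
        printf("GTX 680 (195 W): $%.2f\n", monthly_cost_usd(195.0));  // ~$16.85
        printf("HD 7970 (250 W): $%.2f\n", monthly_cost_usd(250.0));  // ~$21.60
        printf("Titan   (235 W): $%.2f\n", monthly_cost_usd(235.0));  // ~$20.30
        return 0;
    }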

  3. #23
    Platinum Member
    John P. Myers's Avatar
    Join Date
    January 13th, 2011
    Location
    Jackson, TN
    Posts
    4,502

    Re: Nvidia Titan

    Quote Originally Posted by Mumps
    Ummm. Am I misreading this? I thought in a GFLOPS/Watt rating, the higher numbers are better. So the Titan is the best of the three listed cards.
    Bah! You're right. Brain fart. Nvidia does win the GFLOPS-per-watt race.

    Also, the TDP is expected to be closer to 250W for the Titan. The 235W rating was based on what the K20X draws, but that card is only clocked at 732MHz, while the base Titan clock will be 875MHz, with the Asus version possibly being released at 915MHz, which might push TDP to ~260W.
    Last edited by John P. Myers; 02-16-13 at 09:46 AM.


  4. #24
    Diamond Member
    zombie67's Avatar
    Join Date
    October 24th, 2010
    Location
    Reno, NV
    Posts
    7,290

    Re: Nvidia Titan

    Okay, so nothing compelling in the power consumption department either. Thanks!
    "Don't confront me with my failures, I had not forgotten them" - Jackson Browne



  5. #25
    Gold Member
    Slicker's Avatar
    Join Date
    October 25th, 2010
    Location
    South of Cheeseland
    Posts
    1,253

    Re: Nvidia Titan

    Quote Originally Posted by John P. Myers
    If it means anything, the Titan will have a CC of 3.5, though you can be pretty sure no one will create an application requiring it. The only differences from 3.0 are that 3.5 adds something called Dynamic Parallelism and Funnel Shift, both of which no other CC version has, and raises the maximum number of 32-bit registers per thread to 255 (CC 2.0 through 3.0 is 63; CC 1.0 to 1.3 is 127). How useful these three things would be to us, I don't know.
    True, but then again, shouldn't nVidia be adjusting their OpenCL compiler so that if the hardware is 3.5, it would use all the 3.5 features? Since OpenCL is extremely similar to CUDA (especially compared to CAL), you would think that any changes nVidia makes for their CUDA compiler optimization could also be put into their OpenCL optimizer.
    Spring 2008 Race: (1st Place)

  6. #26
    Platinum Member
    John P. Myers's Avatar
    Join Date
    January 13th, 2011
    Location
    Jackson, TN
    Posts
    4,502

    Re: Nvidia Titan

    Quote Originally Posted by Slicker
    True, but then again, shouldn't nVidia be adjusting their OpenCL compiler so that if the hardware is 3.5, it would use all the 3.5 features? Since OpenCL is extremely similar to CUDA (especially compared to CAL), you would think that any changes nVidia makes for their CUDA compiler optimization could also be put into their OpenCL optimizer.
    That is an option, but it would still have to be coded for specifically. Some projects do write separate OpenCL Nvidia and OpenCL AMD apps, but not all. Those that do have the option of being that hardware-specific, which of course prevents the same OpenCL app from working on AMD, and vice versa. It would also prevent the same app from working on an Nvidia GPU with only CC 3.0 or lower. It's similar to Intel's SSE instructions: if you code for SSE2 but your CPU and compiler support SSE4.2, you still only get SSE2.
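    The CUDA version of that situation, as a hypothetical sketch: an app shipped compiled for -arch=sm_30 has the 3.5 path compiled out, so even a Titan runs the fallback.

    Code:
    // Compile-target gating: what you build for is what you get.
    __global__ void shift64(unsigned int *out, unsigned int lo, unsigned int hi)
    {
    #if __CUDA_ARCH__ >= 350
        // Only compiled in when built with -arch=sm_35 or higher.
        *out = __funnelshift_l(lo, hi, 8);
    #else
        // Built for CC 3.0 (-arch=sm_30): this emulation is what runs,
        // even on a Titan.
        unsigned long long v = ((unsigned long long)hi << 32) | lo;
        *out = (unsigned int)((v << 8) >> 32);
    #endif
    }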

    Regarding the Titan, though, there is a glimmer of hope after all. New "official" specs have been released showing the Titan getting 1.3 TFLOPS FP64; however, it seems the release price has climbed to $999. While 1.3 TFLOPS DP would be nice, for that price you can get a 7990 (7970 x2) and get pretty close to 2 TFLOPS. I don't understand why they're only claiming 1.3 when it should be 1.5, since this architecture has DP working at 1/3 SP, which is rated at 4.5 TFLOPS. The Titan is also 10.5 inches long, which is 0.5 inches shorter than the 690.

    TDP is confirmed at 250W, which is ~50W less than a 690. The 384-bit 6GB VRAM is clocked at 6008MHz. Base clock is 837MHz with boost at 876MHz. The Boost clock is now based on GPU temp (GPU Boost 2.0) rather than on the power range of the core, as it is on the 600 series. Overvoltage will also be hardware-supported, but the companies that put their stickers on it (Asus, EVGA, etc.) have the option of preventing you from using it if they feel like it. You can adjust the target temp of the GPU Boost (default is 80C); increasing it will raise the Boost frequency.

    The NDA on performance results will be lifted Thursday.


  7. #27
    Gold Member
    trigggl's Avatar
    Join Date
    November 6th, 2010
    Location
    Arkansas
    Posts
    2,077

    Re: Nvidia Titan

    Quote Originally Posted by John P. Myers
    The NDA on performance results will be lifted Thursday.
    I assume that means Non-Disclosure Agreement. I thought I would spell that out for those of us who are TLA challenged.
    6r39 7r199



  8. #28
    Platinum Member
    John P. Myers's Avatar
    Join Date
    January 13th, 2011
    Location
    Jackson, TN
    Posts
    4,502

    Re: Nvidia Titan

    Quote Originally Posted by trigggl
    I assume that means Non-Disclosure Agreement.
    Correct
    I thought I would spell that out for those of us who are TLA challenged.
    Us geeks love our Three-Letter Acronyms


  9. #29
    Platinum Member
    John P. Myers's Avatar
    Join Date
    January 13th, 2011
    Location
    Jackson, TN
    Posts
    4,502

    Re: Nvidia Titan

    Quote Originally Posted by John P. Myers
    I don't understand why they're only claiming 1.3 when it should be 1.5, since this architecture has DP working at 1/3 SP, which is rated at 4.5 TFLOPS.
    AHA! Reason found. Though 1.3 TFLOPS still beats a 7970, I still have to say boo @ Nvidia for this new gimmick of theirs. By default, FP64 is set to run at standard Kepler speeds (1/24 FP32). You have to go into Nvidia's configuration menu and enable FP64 yourself. But when you do, it disables the Boost clock and makes it likely the base clock will drop from 837MHz to 725MHz.
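    That clock drop accounts for the number; a sketch of the arithmetic (assuming 2688 CUDA cores, 2 FLOPs per core per clock, and FP64 at 1/3 of FP32):

    Code:
    // Why "1.3 TFLOPS" instead of 1.5: FP64 rate at base vs. reduced clock.
    #include <cstdio>

    static double fp64_tflops(double mhz)
    {
        const double cores = 2688.0;             // Titan CUDA cores (assumed)
        const double flops_per_clock = 2.0;      // one FMA = 2 FLOPs
        return cores * flops_per_clock * (mhz * 1e6) / 3.0 / 1e12;  // DP = SP/3
    }

    int main()
    {
        printf("FP64 at 837 MHz: %.2f TFLOPS\n", fp64_tflops(837.0));  // ~1.50
        printf("FP64 at 725 MHz: %.2f TFLOPS\n", fp64_tflops(725.0));  // ~1.30
        return 0;
    }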

    Quote Originally Posted by AnandTech
    Titan, as we briefly mentioned before, is not just a consumer graphics card. It is also a compute card and will essentially serve as NVIDIA’s entry-level compute product for both the consumer and pro-sumer markets.

    The key enabler for this is that Titan, unlike any consumer GeForce card before it, will feature full FP64 performance, allowing GK110’s FP64 potency to shine through. Previous NVIDIA cards either had very few FP64 CUDA cores (GTX 680) or artificial FP64 performance restrictions (GTX 580), in order to maintain the market segmentation between cheap GeForce cards and more expensive Quadro and Tesla cards. NVIDIA will still be maintaining this segmentation, but in new ways.


  10. #30
    Diamond Member
    zombie67's Avatar
    Join Date
    October 24th, 2010
    Location
    Reno, NV
    Posts
    7,290

    Re: Nvidia Titan

    I really like AnandTech reviews, but I don't read a lot of reviews across many different review sites. How does AnandTech compare? What is the general consensus?

    But that last post confuses me (easily done).

    JPM says: By default FP64 is set to run at standard Kepler speeds (1/24 FP32). You have to go into Nvidia's configuration menu and enable FP64 yourself.

    Okay, but when you enable it, what is the result? 8/24 FP32?

    AnandTech says: The key enabler for this is that Titan, unlike any consumer GeForce card before it, will feature full FP64 performance, allowing GK110’s FP64 potency to shine through.

    What is full performance? Does that mean it will match the Tesla K20 or K20X at DP? So then, what is the point of Tesla, if this performance can just be turned on at will?
    "Don't confront me with my failures, I had not forgotten them" - Jackson Browne



