
View Full Version : NVIDIA 680 v 670



STE\/E
09-22-12, 03:43 PM
Anybody have a 680 & a 670 NVIDIA card, and how do they compare at POEM speed-wise, or on any GPU project speed-wise? Thanks

DrPop
09-25-12, 01:59 PM
I don't think either of the higher-end 600 series cards is worth the money, honestly. The high-end 500 series will match or in some cases beat them, and costs considerably less! They are not crunching cards; their compute performance has been crippled. There is "Big Kepler" due on the scene sometime soon, I think, and that is the next real crunching card from Nvidia.

STE\/E
09-25-12, 03:31 PM
There is "Big Kepler" that is due on the scene sometime soon I think, and that is the next real crunching card from Nvidia.

I'll probably just wait for them to come out, since I've waited this long to upgrade any of my NVIDIA cards... I had a chance to sell all 4 of the 580's I have left a month or so ago, but decided to keep them for now...

Mad Matt
09-27-12, 06:54 AM
In that sense I am probably about to make a really lousy investment. ;)

In fact, the bang for buck of the 5xx series on POEM really is great, and since POEM's power draw is low, the 28nm Kepler's low-TDP advantage doesn't really come into play. Yet it does do more. So as an addition it makes sense; as a replacement, rather not. Not sure what they get on Dirt, but I will learn soon. ;) What I noted (just as for the 5xx series) is that the 680 has the best GFLOPS/watt ratio. Maybe for POEM you could scrap that again, but this time I opted for the big 680 for the first time.

Other than that, I guess they cannot compete with AMD when it comes to the uber-high-paying projects. So I think for folks trying to switch between e.g. POEM, Dirt and Donate, a 7970 is a great buy.

Slicker
09-27-12, 11:15 AM
My new laptop has a GTX 670M, but I don't think it uses exactly the same technology as the 670. It is good for about 220K per day on POEM, although I'm still tweaking it.

John P. Myers
09-27-12, 12:44 PM
A month ago, word on the street was the GTX 780 would be released by March. Obviously Nvidia already has a cubic meter of GK110 chips available, since they are pumping out the K20 professional cards left and right, so they could release the 780 tomorrow if they wanted to. The reason they pushed it back to March, though, was that at the time there was nothing from the AMD side to compete against the GTX 690, so they weren't in any hurry. Now there is a (fake) 7990 on the market, but they sell out in a matter of hours, so there's no real stock to pose a threat. Also, AMD does not and will not officially make a 7990, which is a bit upsetting and worrisome. Still, I believe if just one more company can release a few more "7990s" into the market by the middle of next month, it may prompt Nvidia to release the 780 by January.

Also, if AMD moves up the release date of their 8000 series, I believe Nvidia will release the 780 a bit sooner as well.

As for the 780 itself, I'm not sure what Nvidia has done to keep it from being as impressive as it should have been. What I mean is, from the specs I've seen, the 780 has 2880 cores, which is 87.5% more than the 680. The 780 is also expected to have a clock speed of 1100MHz, about 10% higher than the 680. So, 87.5% more cores that each run ~10% faster should equal ~2x faster. Am I right? Make sense? :p Well, it seems the 780 is only expected to be 30% faster than the 680 at most. Maybe George W. could give me a lesson in fuzzy math to make that make sense.
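The scaling arithmetic above can be checked numerically. A quick sketch, using the rumored 780 figures quoted in this thread and the published GTX 680 base clock of 1006 MHz:

```python
# Rough theoretical speedup from the rumored GTX 780 specs quoted above.
gtx680_cores, gtx680_clock = 1536, 1006   # published GTX 680 specs (MHz)
gtx780_cores, gtx780_clock = 2880, 1100   # rumored figures from this thread

core_gain  = gtx780_cores / gtx680_cores - 1   # ~0.875 (87.5% more cores)
clock_gain = gtx780_clock / gtx680_clock - 1   # ~0.093 (~10% faster)

# Naive peak-throughput scaling: the gains multiply, they don't add.
speedup = (1 + core_gain) * (1 + clock_gain)
print(f"{core_gain:.1%} more cores, {clock_gain:.1%} higher clock "
      f"-> ~{speedup:.2f}x theoretical peak")
```

That gives roughly 2.05x on paper; the gap down to the rumored "30% faster" would have to come from somewhere else, such as memory bandwidth, power limits, or a lower real-world clock.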

DrPop
09-27-12, 01:30 PM
...from the specs I've seen, the 780 has 2880 cores, which is 87.5% more than the 680. The 780 is also expected to have a clock speed of 1100MHz, about 10% higher than the 680. So, 87.5% more cores that each run ~10% faster should equal ~2x faster. Am I right? Make sense? :p Well, it seems the 780 is only expected to be 30% faster than the 680 at most...

That's it! It's a conspiracy man! :D We love those around here. ;)

Mad Matt
09-27-12, 05:26 PM
So, 87.5% more cores that each run ~10% faster should equal ~2x faster. Am I right? Make sense? :p Well, it seems the 780 is only expected to be 30% faster than the 680 at most. Maybe George W. could give me a lesson in fuzzy math to make that make sense.

I reckon it's another case of performance capped at the expense of the 'average' consumer, which helps them sell the pure computing cards at horrific prices. Not really a way to convince customers. No clue what the 680 could do in DP if it weren't capped; it's not that far away from AMD in SP...

John P. Myers
09-27-12, 05:57 PM
I reckon it's another case of performance capped at the expense of the 'average' consumer, which helps them sell the pure computing cards at horrific prices. Not really a way to convince customers. No clue what the 680 could do in DP if it weren't capped; it's not that far away from AMD in SP...

Nvidia's DP has always been faster than AMD's since the 300 series. By rule, whatever Nvidia can do in SP, it can do at exactly half that rate in DP, which blows AMD out of the water. But like you said, Nvidia cripples it, so AMD actually appears to have better DP.

Fire$torm
09-27-12, 08:38 PM
nVidia's business model looks more and more like Apple's. Or is that the other way around...??? Anyhoo, if both companies keep it up, I can see them both losing significant market share within 2+ years.

I mean, when the co-founder of a corporation uses the competitor's product, it's a glimpse of things to come...

Mad Matt
09-28-12, 12:26 PM
Nvidia's DP has always been faster than AMD's since the 300 series. By rule, whatever Nvidia can do in SP, it can do at exactly half that rate in DP, which blows AMD out of the water. But like you said, Nvidia cripples it, so AMD actually appears to have better DP.

You see, I actually did not know this, since before DC I never looked at SP/DP. And I never needed any of those cards. ;) So I always thought Nvidia just plain sucks here and is technologically behind. Hey Nvidia, if you have a PR bot, take that! :D

STE\/E
09-28-12, 07:29 PM
**

pinhodecarlos
09-29-12, 07:15 PM
If you look at it in terms of energy efficiency (ratio 3DMark/TDP) you have:

GTX 670 is 3.1% better than GTX 680
GTX 670 is 24.4% better than GTX 690
GTX 670 is 45.6% better than GTX 590

For the calculations I used data from Nvidia's pages, such as TDP and 3DMark DirectX 11 results.

670>680>690>590

In terms of price I don't know.

I don't own any CUDA-capable card, but when I am looking to buy a CPU I usually look for the most energy-efficient one. This means I need to know the ratio of work to power. For me it is not important to have the fastest card or CPU, but rather the one that uses the least energy for the same amount of work. Only then do I look at the price.

For example, I would buy two GTX 670 instead of one GTX 590.
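Carlos's metric can be written as a small helper. The TDPs below are Nvidia's published figures, but the 3DMark scores are placeholders I chose so the ratio lands near his 3.1% result; his actual source numbers are not in the thread:

```python
# Efficiency as benchmark score per watt of TDP, per Carlos's metric.
# NOTE: the 3DMark scores are made-up placeholders for illustration;
# only the TDPs are Nvidia's published figures.
cards = {
    "GTX 670": {"score": 9440,  "tdp": 170},   # placeholder score
    "GTX 680": {"score": 10500, "tdp": 195},   # placeholder score
}

def efficiency(card):
    """Benchmark points delivered per watt of TDP."""
    return cards[card]["score"] / cards[card]["tdp"]

def pct_better(a, b):
    """How much more efficient card a is than card b, in percent."""
    return (efficiency(a) / efficiency(b) - 1) * 100

print(f"GTX 670 is {pct_better('GTX 670', 'GTX 680'):+.1f}% vs GTX 680")
```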

DrPop
09-30-12, 01:42 AM
Hmmm... interesting way to do it. I think you would get a more accurate result by using the GFLOPS rating for each card instead of the 3DMark score. I would base my choice on GFLOPS/watt. That 590 will out-crunch the 670 and 680 by a huge margin, because it is a dual-GPU card.

Mad Matt
09-30-12, 01:51 AM
If you look at it in terms of energy efficiency (ratio 3DMark/TDP) you have:

GTX 670 is 3.1% better than GTX 680
GTX 670 is 24.4% better than GTX 690
GTX 670 is 45.6% better than GTX 590



Surprising point of view. So far I thought SP/DP per watt would be the measure. How close is 3DMark to DC performance? :-? AFAIK it's a benchmark that isn't all that far from it, project specifics aside. E.g. the 560 Ti was the bang-for-buck king on PG and the 570 on GPUGRID.

John P. Myers
09-30-12, 02:01 AM
Hmmm... interesting way to do it. I think you would get a more accurate result by using the GFLOPS rating for each card instead of the 3DMark score. I would base my choice on GFLOPS/watt. That 590 will out-crunch the 670 and 680 by a huge margin, because it is a dual-GPU card.

Correct. GFLOPS/watt is what's important for crunching. DirectX benchmarks measure purely graphics capability and do not give an accurate representation of what cards can do in the BOINC world.

GFLOPS/W
690 - 18.74
680 - 15.85
670 - 14.47
590 - 6.82
580 - 6.48
570 - 6.41
480 - 5.38
470 - 5.06

For us, clearly the 690 is far more efficient.

690>680>670>590
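The table above is consistent with Nvidia's published peak single-precision GFLOPS divided by TDP. A sketch that reproduces the ranking; the spec figures are the published SP peaks as I recall them, so verify against Nvidia's pages (small rounding differences, e.g. the 570, are expected):

```python
# Peak single-precision GFLOPS and TDP (watts), per Nvidia's published specs.
specs = {
    "GTX 690": (5622, 300),
    "GTX 680": (3090, 195),
    "GTX 670": (2460, 170),
    "GTX 590": (2488, 365),
    "GTX 580": (1581, 244),
    "GTX 570": (1405, 219),
    "GTX 480": (1345, 250),
    "GTX 470": (1089, 215),
}

# Rank cards by crunching efficiency, most GFLOPS per watt first.
ranked = sorted(specs, key=lambda c: specs[c][0] / specs[c][1], reverse=True)
for card in ranked:
    gflops, tdp = specs[card]
    print(f"{card}: {gflops / tdp:.2f} GFLOPS/W")
```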

Mad Matt
09-30-12, 02:17 AM
JPM, thanks for confirming the 'GFLOPS' perspective. Yet I am wondering what difference individual projects may make on top of their degree of optimization, especially regarding the new 28nm GPUs. Probably there is always a compromise solution.

No clue when projects will learn to use 28nm chips correctly; probably a long time after 22nm or 18nm is out...

John P. Myers
09-30-12, 03:04 AM
No clue when projects will learn to use 28nm chips correctly; probably a long time after 22nm or 18nm is out...

Probably, since projects still don't even use the advanced instruction sets on CPUs, such as AVX or SSE4.2 (or even SSE4.1).

Mad Matt
09-30-12, 03:11 AM
Probably, since projects still don't even use the advanced instruction sets on CPUs, such as AVX or SSE4.2 (or even SSE4.1).

Some folks in DC act as if there were time and power to waste. ;) Interestingly, DA is the guy who has the most time of them all. Then again, they need time to adapt, and most projects except WCG (yes IBM, you deserve the credit... no matter why you are doing this) are not really well supported economically. In other words, they just spend all or large chunks of their idle time and money, just like us. :P

pinhodecarlos
09-30-12, 03:26 AM
Mad Matt,

Although we are all here as a hobby, we need to have some environmental concern, at least a tiny one... lol

John P. Myers,

GFLOPS/watt is more accurate than my ratio, but we can go even further; there must be an even more accurate way to measure the work done, maybe the number of work units completed per unit of energy per day. Of course, these calculations must take into account the project you will be concentrating on.

Carlos
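Carlos's proposed metric, work units per energy consumed, can be sketched like this; all the card figures below are hypothetical, purely for illustration:

```python
# Work units completed per kWh: Carlos's suggested project-specific metric.
def wu_per_kwh(wu_per_day, avg_watts):
    """Work units delivered per kilowatt-hour of energy drawn."""
    kwh_per_day = avg_watts * 24 / 1000   # energy consumed in one day
    return wu_per_day / kwh_per_day

# Hypothetical: card A does 400 WUs/day at 170 W, card B 500 WUs/day at 300 W.
a = wu_per_kwh(400, 170)
b = wu_per_kwh(500, 300)
print(f"A: {a:.0f} WU/kWh, B: {b:.0f} WU/kWh")
```

In this made-up example the slower card A is the more efficient choice, which is exactly the distinction raw speed rankings hide.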

Mad Matt
09-30-12, 12:03 PM
Mad Matt,

Although we are all here as a hobby, we need to have some environmental concern, at least a tiny one... lol

John P. Myers,

GFLOPS/watt is more accurate than my ratio, but we can go even further; there must be an even more accurate way to measure the work done, maybe the number of work units completed per unit of energy per day. Of course, these calculations must take into account the project you will be concentrating on.

Carlos

+1 +1 Carlos. :-bd

It's a heckload of work; I did some of those calculations for POEM vs. other projects, and I found a pretty clear result (for now). We'll see how long this lasts as chips and apps progress. :D

Mike029
09-30-12, 08:45 PM
+1 +1 Carlos. :-bd

It's a heckload of work; I did some of those calculations for POEM vs. other projects, and I found a pretty clear result (for now). We'll see how long this lasts as chips and apps progress. :D

OT: Carlos, any plans on a GPU version for NFS@home?

pinhodecarlos
10-01-12, 03:46 AM
OT: Carlos, any plans on a GPU version for NFS@home?

For the sieve stage, I don't think so. The algorithm is very difficult; the people at mersenneforum.org can't even port the code to 64-bit Windows. If I had the skills I would help, but I don't. There's also the memory issue: memory requirements grow roughly with the square root of the sieving time. A paper to read (http://lacal.epfl.ch/files/content/sites/lacal/files/papers/ecdl2.pdf).

For the polynomial stage there already is one. The same client that does the post-processing stage can take advantage of the GPU for the polynomial search. If you look at NFS@Home's details/status: when a number is SNFS, no polynomial search is needed; if it is GNFS, msieve is used to search for the best polynomial.
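The square-root relationship Carlos mentions means sieve memory grows much more slowly than runtime. An illustrative sketch; the baseline figures are hypothetical, only the scaling law comes from his post:

```python
from math import sqrt

# If sieve memory grows roughly with the square root of sieving time,
# scaling the time by k scales the memory by only sqrt(k).
# Baseline figures below are hypothetical, for illustration only.
base_time_h, base_mem_gb = 10, 1.0

for factor in (1, 4, 16, 100):
    mem = base_mem_gb * sqrt(factor)
    print(f"{base_time_h * factor:>5} h of sieving -> ~{mem:.1f} GB")
```

So a 100x longer sieving job would need only about 10x the memory under this model; for very large factorizations, even that slower growth eventually becomes the binding constraint.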

Mike029
10-01-12, 12:14 PM
For the sieve stage, I don't think so. The algorithm is very difficult; the people at mersenneforum.org can't even port the code to 64-bit Windows. If I had the skills I would help, but I don't. There's also the memory issue: memory requirements grow roughly with the square root of the sieving time. A paper to read (http://lacal.epfl.ch/files/content/sites/lacal/files/papers/ecdl2.pdf).

For the polynomial stage there already is one. The same client that does the post-processing stage can take advantage of the GPU for the polynomial search. If you look at NFS@Home's details/status: when a number is SNFS, no polynomial search is needed; if it is GNFS, msieve is used to search for the best polynomial.

Thank you.