Anybody have a 680 & a 670 NVIDIA card & how do they compare at POEM speed-wise, or any GPU project speed-wise...??? Thanks
I don't think either of the higher-end 600 series cards is worth the money, honestly. The high-end 500 series will match or in some cases beat them, and costs considerably less! They are not crunching cards; their compute performance has been crippled, or something like that. There is a "Big Kepler" due on the scene sometime soon, I think, and that is the next real crunching card from Nvidia.
In that sense I am probably about to make a really lousy investment. ;)
In fact, the bang-for-buck of the 5xx series on POEM really is great, and since the TDP is low anyway, the 28nm Kepler advantage does not really come into play. Yet it does do more. So as an addition it makes sense; as a replacement, rather not. Not sure what they get on Dirt, but I will learn soon. ;) What I noted (just as for the 5xx series) is the 680 having the best GFLOPS/watt ratio. Maybe for POEM you could again scrap that, but this time I opted for the big 680 for the first time.
Other than that, I guess they cannot compete with AMD when it comes to the uber-high-paying projects. So I think for folks trying to switch between e.g. POEM, Dirt and Donate, a 7970 is a great buy.
My new laptop has a GTX 670M, but I don't think it uses exactly the same technology as the desktop 670. It is good for about 220K per day on POEM, although I'm still working on tweaking it.
A month ago, word on the street was that the GTX 780 would be released by March. Obviously Nvidia already has a cubic meter of GK110 chips available, since they are pumping out K20 professional cards left and right, so they could release the 780 tomorrow if they wanted to. The reason they pushed it back to March, though, was that at the time there was nothing from the AMD side to compete against the GTX 690, so they weren't in any hurry. Now there is a (fake) 7990 on the market, but they sell out in a matter of hours, so there's no real stock to pose a threat. Also, AMD does not and will not officially make a 7990, which is a bit upsetting and worrisome. Still, I believe if just one more company can release a few more "7990s" into the market by the middle of next month, it may prompt Nvidia to release the 780 by January.
Also, if AMD moves up the release date of their 8000 series, I believe Nvidia will release the 780 a bit sooner as well.
As for the 780 itself, I'm not sure what Nvidia has done to keep it from being as impressive as it should've been. What I mean is, from the specs I've seen, the 780 has 2880 cores, which is 87.5% more than the 680. The 780 is also expected to have a clock speed of 1100 MHz, about 10% higher than the 680. So 87.5% more cores running ~10% faster should equal ~2x faster, right? Make sense? :p Well, it seems the 780 is only expected to be at most 30% faster than the 680. Maybe George W. could give me a lesson in fuzzy math to make that make sense.
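A quick back-of-the-envelope check of that scaling argument, assuming the GTX 680's published figures (1536 CUDA cores at a 1006 MHz base clock) and the rumored 780 specs quoted above:

```python
# Theoretical scaling from core count and clock speed alone.
# GTX 680 numbers are its published specs; the 780 numbers are the
# rumored ones from the post above.
gtx680 = {"cores": 1536, "clock_mhz": 1006}
gtx780 = {"cores": 2880, "clock_mhz": 1100}

core_ratio = gtx780["cores"] / gtx680["cores"]            # 1.875 -> 87.5% more cores
clock_ratio = gtx780["clock_mhz"] / gtx680["clock_mhz"]   # ~1.09 -> ~10% faster

speedup = core_ratio * clock_ratio
print(f"theoretical speedup: {speedup:.2f}x")             # ~2.05x, not the ~1.3x expected
```

Which is exactly the "should be ~2x" intuition; the rumored 30% figure would mean the real-world clocks or per-core throughput end up well below these paper specs.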
I reckon it's another case of performance capped at the cost of the 'average' consumer, which helps them sell the pure computing cards at horrific prices. Not really a way to win over customers. No clue what the 680 could do on DP if it weren't capped; it's not that far away from AMD in SP...
nVidia's business model looks more and more like Apple's model. Or is that the other way around...??? Anyhoot, if both companies keep it up, I can see them both losing significant market share in 2+ years.
I mean, when the co-founder of a corp. uses the competitor's product, it's a glimpse of things to come........
If you look at it in terms of energy efficiency (the ratio 3DMark/TDP) you have:
The GTX 670 is 3.1% better than the GTX 680
The GTX 670 is 24.4% better than the GTX 690
The GTX 670 is 45.6% better than the GTX 590
For the calculations I used data from the Nvidia page: the TDP and the 3DMark DirectX 11 results.
670>680>690>590
In terms of price I don't know.
I don't own any CUDA-capable card, but when I am looking to buy a CPU I usually look for the most energy-efficient one. This means I need to know a work/power ratio. For me it is not important to have the fastest card or CPU, but rather the one that uses the least energy for the same amount of work. Only then do I look at the price.
For example, I would buy two GTX 670 instead of one GTX 590.
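That comparison boils down to a ratio-and-sort. As a sketch: the TDP figures below are Nvidia's published ones, but the 3DMark scores are made-up placeholders, since I don't have the original benchmark numbers at hand:

```python
# Rank cards by energy efficiency (benchmark score per watt of TDP).
# TDPs are Nvidia's published figures; the 3DMark scores are
# hypothetical placeholders for illustration only.
cards = {
    "GTX 670": {"score": 9500,  "tdp_w": 170},   # hypothetical score
    "GTX 680": {"score": 10500, "tdp_w": 195},   # hypothetical score
    "GTX 690": {"score": 13500, "tdp_w": 300},   # hypothetical score
    "GTX 590": {"score": 14000, "tdp_w": 365},   # hypothetical score
}

ranked = sorted(cards.items(),
                key=lambda kv: kv[1]["score"] / kv[1]["tdp_w"],
                reverse=True)
for name, c in ranked:
    print(f"{name}: {c['score'] / c['tdp_w']:.1f} points/W")
```

With realistic scores plugged in, this reproduces the 670>680>690>590 ordering above: the big dual-GPU cards score highest in absolute terms but pay for it in TDP.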
Hmmm... interesting way to do it. I think you might get a more accurate result by using the GFLOPS rating for each card instead of the 3DMark score. I would base my choice on GFLOPS/watt. That 590 will out-crunch the 670 and 680 by a huge margin, because it is a dual-GPU card.
Surprising point of view. So far I thought SP/DP per watt would be the measure. How close is 3DMark to DC performance? :-? AFAIK it's a benchmark not really that far away from it, project specifics aside. E.g. the 560 Ti was the bang-for-buck king on PG, and the 570 on GPUGRID.
Correct. GFLOPS/watt is what's important to crunching. DirectX benchmarks are purely for graphics capabilities and do not give an accurate representation of what they can do in the BOINC world.
GFLOPS/W
690 - 18.74
680 - 15.85
670 - 14.47
590 - 6.82
580 - 6.48
570 - 6.41
480 - 5.38
470 - 5.06
For us, clearly the 690 is far more efficient.
690>680>670>590
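The table above can be reproduced from published specs: peak single-precision GFLOPS is cores × clock × 2 (one FMA = 2 FLOPs per core per cycle), using the core clock on Kepler and the shader clock on Fermi, divided by TDP:

```python
# Peak SP GFLOPS = cores * clock(GHz) * 2, divided by TDP.
# Kepler cards use the core clock; Fermi cards use the shader clock
# (which runs at 2x the core clock on that architecture).
specs = {
    #           cores  clock_mhz  tdp_w
    "GTX 690": (3072,   915,      300),   # dual GK104, core clock
    "GTX 680": (1536,  1006,      195),   # core clock
    "GTX 670": (1344,   915,      170),   # core clock
    "GTX 590": (1024,  1215,      365),   # dual GF110, shader clock
    "GTX 580": ( 512,  1544,      244),   # shader clock
}

efficiency = {}
for name, (cores, mhz, tdp) in specs.items():
    gflops = cores * mhz / 1000 * 2       # peak single-precision GFLOPS
    efficiency[name] = gflops / tdp
    print(f"{name}: {efficiency[name]:.2f} GFLOPS/W")
```

This matches the list above (690 ≈ 18.74, 680 ≈ 15.85, 670 ≈ 14.47, 590 ≈ 6.82, 580 ≈ 6.48) and makes the Kepler efficiency jump over Fermi obvious.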
JPM, thanks for the confirmation of the 'GFLOPS' perspective. Yet I am wondering what difference individual projects make on top of that, depending on their degree of optimization, especially regarding the new 28nm GPUs. Probably there is always a compromise solution.
No clue when projects will learn to use the 28nm chips correctly; probably a long time after 22nm or 18nm is out...
Some folks in DC act as if there were time and power to waste. ;) Interestingly, DA is the guy who has the most time of them all. Then again, they need time to adapt, and most projects except for WCG (yes IBM, you deserve the credit... no matter why you are doing this) are not really well supported economically. Or in other words, they just spend all or large chunks of their idle time and money - just like us. :P
Mad Matt,
Even though we are all here as a hobby, we need to have some environmental concern, at least a tiny one... lol
John P. Myers,
GFLOPS/watt is more accurate than my ratio, but we can go even further; there must be an even more accurate way to measure the work done, maybe the number of work units completed per unit of energy consumed per day. Of course, these calculations must take into account the project you will be concentrating on.
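That per-project metric could be sketched like this; the throughput and power-draw numbers plugged in are hypothetical examples, not measurements:

```python
# Work units per kWh for a given project: daily WU output divided by
# daily energy use. Both example inputs are hypothetical.
def wu_per_kwh(wu_per_day: float, avg_power_w: float) -> float:
    kwh_per_day = avg_power_w * 24 / 1000
    return wu_per_day / kwh_per_day

# e.g. a card completing 400 POEM WUs/day while drawing ~200 W on average
print(f"{wu_per_kwh(400, 200):.1f} WU/kWh")
```

Unlike GFLOPS/watt, this captures how well a given project's application actually uses the card, so it would have to be measured separately per project.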
Carlos
For the sieve stage, I don't think so. The algorithm is very complex; the people at mersenneforum.org can't even port the code to 64-bit Windows. If I had the skills I would help, but I don't have them. Also there's the memory issue: memory requirements grow roughly with the square root of the sieving time. A paper to read.
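A tiny sketch of that square-root relationship: quadrupling the sieving time only doubles the memory requirement (the constant factor here is an arbitrary illustrative number, not a real NFS figure):

```python
import math

# Memory requirement growing roughly with the square root of sieving time.
# The constant c is an arbitrary illustrative factor, not a real NFS value.
def sieve_memory_gb(sieving_time_hours: float, c: float = 1.0) -> float:
    return c * math.sqrt(sieving_time_hours)

print(sieve_memory_gb(100))  # 10.0
print(sieve_memory_gb(400))  # 20.0 -> 4x the time, only 2x the memory
```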
For the polynomial stage there's already one. The same client that does the post-processing stage can take advantage of the GPU for the polynomial search. If you look at the NFS@Home details/status: when a number is SNFS, no polynomial search is needed; if it is GNFS, then msieve is used to search for the best polynomial.