
View Full Version : PrimeGrid: Official release of tpsieve for PPS (Sieve)



RSS
12-01-10, 11:34 AM
PrimeGrid is excited to finally announce the official release of the tpsieve application in the PPS (Sieve) project. Linux, Mac, and Windows platforms are all supported in 32- and 64-bit versions, as well as CUDA and OpenCL (ATI); only Mac OpenCL is NOT currently available. Also, OpenCL requires an app_info file to run due to a scheduling request issue, but we hope to have that problem resolved before the next Challenge (18-21 December). NOTE: Windows GPU users should update to .NET Framework 3.5.

tpsieve, based on Geoff Reynolds' original code, is a revolution in k*2^n+1/-1 sieving. Thanks in large part to Ken Brazier's updates and modifications, which added simultaneous +1/-1 dat-less sieving as well as ports to the GPU, tpsieve has changed the landscape of sieving and is now providing incredible support to the prime finding community. Thank you Ken!!!

More... (http://www.primegrid.com/forum_thread.php?id=2852)
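The app_info requirement mentioned in the announcement refers to BOINC's "anonymous platform" mechanism, where an app_info.xml in the project directory tells the client which local executable to run. As a rough illustration only: the app name, executable file name, and version number below are placeholders, and the real values must come from PrimeGrid's apps page and forum instructions.

```xml
<app_info>
    <app>
        <name>pps_sr2sieve</name>  <!-- placeholder: use PrimeGrid's actual app name -->
    </app>
    <file_info>
        <name>tpsieve_opencl_example.exe</name>  <!-- placeholder executable file name -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>pps_sr2sieve</app_name>
        <version_num>100</version_num>  <!-- placeholder version number -->
        <coproc>
            <type>ATI</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>tpsieve_opencl_example.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

The file goes in the project's directory under the BOINC data folder, and the named executable must be present there as well.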

DrBob
12-01-10, 11:56 AM
...NOTE: Windows GPU users should update to .NET Framework 3.5....

Hmmm, I don't have .NET 3.5 on any machines and all GPUs are currently working correctly.

trigggl
12-01-10, 01:19 PM
PrimeGrid is excited to finally announce the official release of the tpsieve application in the PPS (Sieve) project. Linux, Mac, and Windows platforms for 32/64 bit are all supported...

I'd love to test the 64 bit CPU app, but I'm busy with WCG.

Fire$torm
12-02-10, 01:34 AM
Hmmm, I don't have NET 3.5 on any machines and all GPUs are currently working correctly.

If you have any Win7 boxes then you have .NET 3.5; not sure about Vista. It's a wild guess, but the apps might crunch faster under 3.5. That's probably just wishful thinking, though. I mean, when was the last time M$ updated their software and actually improved efficiency?

DrBob
12-02-10, 08:16 AM
If you have any Win7 boxes then you have .net 3.5. Not sure about Vista...

Running XP Pro 64 & 32 here; .NET 3.5 apparently not needed.

STMahlberg
12-09-10, 05:37 AM
I found this post of run times; you've probably seen it already. Here is the original post (http://www.primegrid.com/forum_thread.php?id=2870&nowrap=true#28829) from Primegrid by John (http://www.primegrid.com/show_user.php?userid=2449).

My ATI HD5830's came in right at 27:00 minutes.


NVIDIA


Fermi

o 330s (5:30) : GTX 580 (standard clocks), Windows 7 64-bit, Core I7 930 (HT enabled)
o 386s (6:26) : GTX 470 (OC 750MHz GPU), Win 7 x64, X5650 @ 3.52GHz
o 395s (6:35) : GTX 480 (standard clocks), Windows 7 64-bit, dual Xeon 5345
o 400s (6:40) : GTX 480, Windows 7 64 + I7 930
o 515s (8:35) : GTX 470 (standard clocks), Windows 7 64-bit, Dual Xeon 5520 (HT enabled)
o 555s (9:15) : GTX 460 @ 900/1800/2000 MHz, Q9550 @ 3.4 GHz
o 829s (13:49) : GTX 460 (Stock 675/900/1350) Win 7 x64 Pro C2Q Q6600 @2.4 GHz
o 966s (16:06) : GTS 450 (Factory OC, 1850 shader clock), Windows 7 64-bit, Core I7 860 (HT enabled)
o 983s (16:23) : GTS 450 OC 888-1000-1776, Win7-64, Q6600@2.4GHz


GT 3xx

o 7500s (2:05:00) : GT 330M (255MB) on Darwin 10.5.0, CUDA 3.2


GTX 2xx, GTS 2xx, GT 2xx

o 1030s (17:10) : GTX 285 (1024MB - 720/1639/1242 MHz) driver: 260.99, Windows 7 Pro x64
o 1150s (19:10) : GTX 275 (std clock, 877MB), Win 7 x64, Q9400 @ 2.66GHz
o 1161s (19:21) : GTX 285 (no OC), Windows 7 x64 pro + Q9650@3.6GHz (0.66 CPU + 1.00GPU)
o 1360s (22:40) : GTX 280 (Factory OC), Windows 7 x64 Pro, C2Q Q6600 @2.4 GHz
o 1370s (22:50) : GTX 280 (Factory OC) Win 7 x64 Pro C2Q Q6600 @2.4 GHz
o 1500s (25:00) : GTX 260 - 216 (standard clocks), Windows Server 2008 64-bit (compare to Vista), dual Xeon 5520 (HT enabled)
o 1521s (25:21) : GTX 260 (216 Shader ): Cuda Driver 3.2, Darwin 10.6.4 -> (powered by 2x Xeon 5520 2,33Ghz)
o 2118s (35:18) : GTS 240 (standard clocks), Windows 7 64-bit, Core I7 975 (HT enabled)
o 3105s (51:45) : GT 240 (standard clocks), Windows Vista 32-bit, Q6600
o 3140s (52:20) : GT 240 (std clock, 474MB), Win 7 x64, Q9550 @ 2.83GHz
o 6100s (1:41:40) : GT 220 on a i7-920, W7-64, 8G
o 9600s (2:40:00) : GT 120, Mac OS X 10.6.5, CUDA 3.2.17


9xxx

o 1924s (32:04) : 9800 GTX+ (standard clocks), Windows Vista Ultimate 64-bit, Core2 E8400
o 2332s (38:52) : 9600 GSO (Factory OC, 1750 shader clock), Windows XP 32-bit, Pentium D 965 Extreme Edition (HT enabled)
o 6600s (1:50:00) : 9700M GTS, Linux 64bit
o 6897s (1:54:57) : 9600 GS (standard clocks), Windows 7 64-bit, Core I7 860 (HT enabled)
o 6962s (1:56:02) : 9500 GT (Factory OC, 1750 shader clock), Windows Vista Ultimate 64-bit, Core2 E8400
o 8600s (2:23:20) : 9500 GT 550-400-1400, XP32, E6600@2.7GHz
o 9354s (2:35:54) : 9400 GT (standard clocks, 32 shader version), Linux 64-bit, Q6700
o 17000s (4:43:20) : 9400 GT, XP
o 22900s (6:21:40) : 9300 / nForce 730i, XP 64bit


8xxx

o 2070s (34:30) : 8800 GT, XP 64bit
o 2664s (44:24) : 8800 GS (Manual OC, 1530 shader clock), Windows XP 32-bit, Pentium D 965 Extreme Edition (HT enabled)
o 9460s (2:37:40) : 8600 GT (standard clocks), Windows XP 32-bit, Pentium D 830
o 27262s (7:34:22) : 8400 GS (Manual OC, 1014 shader clock), Windows XP 32-bit, Pentium 4 3.6Ghz (HT enabled)
o 30082s (8:21:22) : 8400M GS (standard clocks), Windows Vista 32-bit, T8100


Other

o 25400s (7:03:20) : NVIDIA ION LE, Linux


ATI


HD 5xxx

o 950s (15:50) : HD 5870, XP 64bit
o 1270s (21:10) : HD 5850 @825Mhz GPU, RAM @ 1000Mhz drivers 10.10, Windows 7 64 bits
o 1480s (24:40) : HD 5850 (std clock, 1024MB), Win 7 x64, Q9450 @ 3.04GHz
o 1989s (33:09) : HD 5770 (900/1200) driver: 10-11, Windows 7 Home x86 Edition - powered by E5200 3,4Ghz
o 2075s (34:35) : HD 5770 on a i7-930, W7-64, 12G


HD4xxx

o 1800s (30:00) : HD 4870, XP 64bit
o 3800-4100s (1:03:20 - 1:08:20) : HD 4770
o 7552s (2:05:52) : HD 4670 (standard clocks), Windows XP 32-bit, Athlon 64 x2 4200+ (socket 939)


CPU


* 28000s (7:46:40) : Q9550 @ 3.4 GHz (4 cores)
* 45500s (12:38:20) : X5650 (OC 3.52GHz, HT on - 12threads simultaneously), Win 7 x64
* 60120s (16:42:00) : i5 2.53GHz on Darwin 10.5.0
* 61433s (17:03:53) : Intel Core2 Duo T7300 @ 2.00GHz
* 68183s (18:56:23) : Intel Core2 T5300 @ 1.73GHz
* 202800s (56:20:00) : P4 2.8GHz on XP SP3 (32-bit)
* 211556s (58:45:56) : AMD Sempron 3000+
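For anyone checking the figures above, the seconds-to-clock conversions are plain arithmetic; a small Python helper reproduces the list's format (m:ss under an hour, h:mm:ss above):

```python
def fmt_runtime(seconds):
    """Format a run time in seconds the way the list above does:
    m:ss for times under an hour, h:mm:ss otherwise."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    if h:
        return f"{h}:{m:02d}:{s:02d}"
    return f"{m}:{s:02d}"

print(fmt_runtime(330))     # GTX 580      -> 5:30
print(fmt_runtime(950))     # HD 5870      -> 15:50
print(fmt_runtime(211556))  # Sempron 3000 -> 58:45:56
```

Running the entries through a helper like this is how the handful of mistyped conversions in the original post show up.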

DrPop
12-09-10, 06:50 AM
Ouch! This next challenge may be painful guys. We better give it all we got just to hang in there. So a 5870 does nearly 3x worse than a GTX 470. Wow; that is some seriously inefficient code for the ATI app. :(

trigggl
12-09-10, 07:17 AM
Ouch! This next challenge may be painful guys. We better give it all we got just to hang in there. So a 5870 does nearly 3x worse than a GTX 470. Wow; that is some seriously inefficient code for the ATI app. :(
Also, I think it's a good idea for everyone to test the speed of their GPU with and without the CPU's crunching. It may be more efficient without the CPU. I'm currently calculating the runtime of the GPU tasks with and without the CPU cores crunching PPS Sieve tasks. I may keep one core free for the challenge to maximize the GPU efficiency if necessary.
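The test described above boils down to simple throughput arithmetic: compare GPU tasks per day with all cores crunching against tasks per day with one core left free, then weigh the GPU gain against the lost core's output. A quick sketch, using hypothetical per-task runtimes rather than measured values:

```python
def gpu_tasks_per_day(gpu_task_seconds):
    """Number of GPU tasks completed in 24 hours at a given per-task run time."""
    return 86400 / gpu_task_seconds

# Hypothetical runtimes: GPU task time with all CPU cores loaded vs. one core freed.
busy = gpu_tasks_per_day(1500)   # 1500 s per task, all cores crunching
freed = gpu_tasks_per_day(1360)  # 1360 s per task, one core idle

print(f"GPU tasks/day, all cores crunching: {busy:.1f}")   # 57.6
print(f"GPU tasks/day, one core freed:      {freed:.1f}")  # 63.5
print(f"GPU gain from freeing a core: {100 * (freed / busy - 1):.1f}%")  # 10.3%
```

Whether freeing the core is worth it then depends on whether that GPU gain outweighs the CPU work the idle core would have produced, which varies by machine, as the replies below suggest.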

Bryan
12-09-10, 08:38 AM
On my machines I found that whether or not the i7 was crunching had little effect on the GPU WU times. On the quad Q6600 at 3GHz there was a marked improvement from not using the CPU for crunching.

Both machines were crunching WCG and PG at the same time.

DrBob
12-09-10, 01:50 PM
On my machines I found that the I7 crunching or not had little or no effect on the GPU wu times. On the quad Q6600 at 3G there was a marked improvement by not using the CPU for crunching.

Both machines were crunching WCG and PG at the same time.

Good info, since 3 of my machines are E6600s.
Which GPU?
I just checked CPU usage and the ATI (4870) app is using a constant 3-6% CPU while the Nvidia (9800GT) cards are all using <1%.
I'm leaning towards running both CPU and GPU in the PG Challenge.

DrPop
12-09-10, 01:54 PM
Sometimes I wish I had an i7. :D Just goes to show that Intel wasn't blowing smoke when they said the i7 technology was all about feeding those PCI-Express pipelines more efficiently, so as not to bog down the system when at full GPU load.

I'm actually glad to hear something wasn't all marketing HYPE for a change! =)):p

Fire$torm
12-09-10, 02:42 PM
It must be my P4 CPU being generations older, because the system with the P4 and two 8800 GT cards crunches GPU WUs faster when the CPU is idle. The credit difference is approx. 5K for a full day on PG.