
View Full Version : Looking for Multi CPU and Multi GPU projects...



DAD
10-25-11, 06:28 PM
Like the title says.. here are some examples of 2 projects I know of

Milkyway@home: Can generate WUs designed for however many cores you set BOINC to allow, and they run very fast and can rack up credit. Unfortunately they don't let you configure their project very well, and you get single-core CPU WUs mixed in.

MooWrapper: Can generate multi-GPU WUs that use more than one GPU at a time.

If anyone knows of any other "multi" resource projects like that, please let me know.

Fire$torm
10-25-11, 10:47 PM
Like the title says.. here are some examples of 2 projects I know of

Milkyway@home: Can generate WUs designed for however many cores you set BOINC to allow, and they run very fast and can rack up credit. Unfortunately they don't let you configure their project very well, and you get single-core CPU WUs mixed in.

MooWrapper: Can generate multi-GPU WUs that use more than one GPU at a time.

If anyone knows of any other "multi" resource projects like that, please let me know.

The only project that I know of was Aqua@Home, but they shut down after the blowback they incurred when DA's CreditNew infiltrated their server......

DAD
10-26-11, 12:00 AM
ClimatePrediction wrote their own in-house credit system... Can't projects choose not to adopt CreditNew?

Fire$torm
10-26-11, 12:43 AM
ClimatePrediction wrote their own in-house credit system... Can't projects choose not to adopt CreditNew?

Unfortunately in the near future that will no longer be possible. In earlier iterations of the server code DA included options for alternate credit schemes. He is currently in the process of removing any such option. Soon it will all be CreditNew and ONLY CreditNew. :mad:

DAD
10-26-11, 03:51 AM
But since ppl often compile the source for their particular OS, couldn't they choose to rewrite the credit code? (unless he plans not to release the source and limit the platforms the server can run on)

Example, what would happen if I started a project with his new creditnew server, and just rewrote his code back to the old system?

Fire$torm
10-26-11, 11:10 AM
But since ppl often compile the source for their particular OS, couldn't they choose to rewrite the credit code? (unless he plans not to release the source and limit the platforms the server can run on)

Example, what would happen if I started a project with his new creditnew server, and just rewrote his code back to the old system?

The code is open source, so you are correct. The thing is, DA is in the process of eliminating the possibility of adding an alternate credit scheme by making various other essential elements of the code dependent on the CreditNew code.

Bottom line: In the near future, if a project does not want to use CreditNew then they will have to do a serious rewrite of a good portion of the server code. Since many projects do not have the time/patience/skill to do that, CreditNew will be the de facto standard.

Beerdrinker
10-26-11, 01:25 PM
The code is open source, so you are correct. The thing is, DA is in the process of eliminating the possibility of adding an alternate credit scheme by making various other essential elements of the code dependent on the CreditNew code.

Bottom line: In the near future, if a project does not want to use CreditNew then they will have to do a serious rewrite of a good portion of the server code. Since many projects do not have the time/patience/skill to do that, CreditNew will be the de facto standard.

I am sorry to say it, but when that happens, I am gone. I will then spend my money on something else.

Fire$torm
10-26-11, 02:42 PM
I am sorry to say it, but when that happens, I am gone. I will then spend my money on something else.

Unfortunately I share your sentiment. I think many teams will suffer much as credit mongers/point chasers decide BOINC isn't worth the time and money and move on to other hobbies. That will be a sad day indeed.

c303a
10-26-11, 03:37 PM
I will be gone as well. I hate it when one person decides that they want to control how other projects run. I can put the money that I spend on electricity to good use elsewhere......:p:p:p MORE BEER!

DAD
10-26-11, 04:40 PM
Well, it depends on why you are using BOINC. Yes, I like to chase credits, but I'm also in it for the science. It won't stop me from crunching, but it will change how I do it and how many resources I devote to it. I don't want to penalize projects for DA's moronic thinking. However, projects can choose NOT to upgrade to his new code, but then those projects may eventually fall behind or become incompatible with newer clients :(

However, if I ever ran a server, I WOULD rewrite his code as I do have the programming and math knowledge and time to do so.

... hmmm.. idea.... lol.. rewrite the server code and make that code available for public use ;) but that may violate some legal things. I need to look into the EULA for the BOINC client/server. If it can legally be done, then I'll snag his new, mandatory CreditNew server code, rewrite it, and put the files up for free public use, and then each project can recompile it.

In the end, if projects start really losing people over his new system, they will pressure him to undo it or seek/design another boinc-like client/server solution.

Slicker
10-26-11, 04:41 PM
No, the code is open source so you are correct. The thing is DA is in the process of eliminating the possibility of being able to add an alternate credit schema to the code by making other various essential elements of the code dependent on the CreditNew code.

Bottom line: In the near future if a project does not want to use CreditNew then they will have to do some serious rewrite of a good portion of the server code. Since many projects do not have the time/patience/skill to do that, CreditNew will be the de-facto standard.

For Collatz, I just let all the CreditNew code run; right before it updates the database, I change the credit value. The project thinks it is running CreditNew, but it actually uses the fixed credit. Unless DA changes it so that all credit is stored centrally, I don't think there is anything he can do to stop that from working. Projects that award credit according to GFLOPS and not some type of fixed credit will require a lot more code, since DA has removed some of the estimated and client-reported GFLOPS values returned with the workunits. CreditNew uses the time crunched along with the benchmarks to calculate the credit. Grabbing the GFLOPS from the database for that host and calculating the GFLOPS using the compute time would work, but it would be messy and require quite a few code changes.
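The bypass described above boils down to overwriting CreditNew's result just before the row is written back. Here is a minimal sketch; all the names (`Result`, `override_credit`, the fixed value) are illustrative, not the real BOINC server symbols:

```cpp
// Sketch of a fixed-credit override hook. Hypothetical types/names:
// the real BOINC validator code differs.
struct Result {
    double claimed_credit;   // what CreditNew computed
    double granted_credit;   // what actually reaches the database
};

// Project-chosen constant (illustrative value).
const double FIXED_CREDIT_PER_WU = 3055.0;

// Runs after CreditNew has filled in claimed_credit, right before
// the database update, and simply replaces the value.
void override_credit(Result& r) {
    r.granted_credit = FIXED_CREDIT_PER_WU;  // ignore r.claimed_credit
}
```

The project still "runs" CreditNew end to end; only the stored number changes.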

Since AMD has stated that CAL/Brook will no longer work in future releases even though OpenCL favors nVidia, Collatz will need to support OpenCL even though OpenCL performance sucks compared to CAL. OpenCL either runs 50% slower, uses 100% CPU even when doing asynchronous calls, or both. OpenCL support will require upgrading to the latest server code, so I'll find out then how hard it is to still bypass creditNew at that time.

Slicker
10-26-11, 04:50 PM
I have an OpenMP version of Collatz for CPUs that I tested. It is configurable from 1-N cores via the app_info.xml file, but it doesn't play very well with GPU apps and is a couple percent slower than running one WU per core (e.g. if a quad-core box can run 4 WUs - one per core - in 20 minutes, the OpenMP version should be able to run one WU on all four cores in 5 minutes; in reality, it takes about 5 minutes and 6-8 seconds). So, you lose 6-8 seconds. On the other hand, you only have 1 WU to upload and report, so it may balance out, since BOINC doesn't grant any credit for time spent uploading, downloading, and reporting completed work - that's done by the BOINC client and not the app.
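The one-WU-across-N-cores idea can be sketched with OpenMP like this. This is a toy stand-in, not the actual Collatz app; if compiled without OpenMP support the pragma is simply ignored and the loop runs serially with the same result:

```cpp
#include <cstdint>

// Collatz step count for a single starting value.
static int collatz_steps(uint64_t n) {
    int steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}

// One WU spread over ncores threads: each thread takes a slice of the
// search range, and the longest chain found is reduced across threads.
int max_steps_in_range(int64_t lo, int64_t hi, int ncores) {
    int best = 0;
    #pragma omp parallel for num_threads(ncores) reduction(max : best)
    for (int64_t i = lo; i < hi; ++i) {
        int s = collatz_steps((uint64_t)i);
        if (s > best) best = s;
    }
    return best;
}
```

The few seconds of overhead mentioned above come from thread startup and the final reduction, which a one-WU-per-core setup avoids.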

I haven't attempted to create a multi-GPU app yet since the fast GPUs can crunch a Collatz WU so quickly and since trying to keep GPUs of different speeds all busy is not easy. If they were all the same speed, one could divide the WU by the number of GPUs and run part on each. When the GPUs are different speeds and/or the PCIe slots don't transfer data at the same speed, one GPU will finish before another and then there is wasted idle time which means fewer credits than running one WU per GPU.

Fire$torm
10-26-11, 06:06 PM
Well it depends on why you are using boinc. Yes I like to chase credits, but I'm also in it for the science. It won't stop me from crunching, but it will change how I do it, and how many resources I devote to it. I dont want to penalize projects for DA's moronic thinking. However, projects can chose NOT to upgrade to his new code, but then projects may eventually fall behind or become non compatible with newer clients :(

However, if I ever ran a server, I WOULD rewrite his code as I do have the programming and math knowledge and time to do so.

... hmmm.. idea.... lol.. rewrite server code.. make that code available for public use ;) but that may violate some legal things. I need to look into the EULA for the boinc client/server. If it can legally be done, then I'll snag his new, mandatory, credit new server code, re write it, and put the files up for free for public use, and then each project can recompile it

In the end, if projects start really losing people over his new system, they will pressure him to undo it or seek/design another boinc-like client/server solution.

Nope, you would not violate DA's license agreement. It's the Lesser GPL, which allows third parties to change the code. The only caveat is that DA gets the rights to your code. Trust me, if I had your skills or Buffet's money this whole thing would be a non-issue. :P

Fire$torm
10-26-11, 06:12 PM
Nope, you would not violate DA's license agreement. It's the Lesser GPL, which allows third parties to change the code. The only caveat is that DA gets the rights to your code. Trust me, if I had your skills or Buffet's money this whole thing would be a non-issue. :P

Edit: @Slicker: I really hope what you say remains true. Fixed credit per WU is better than what is granted by CreditNew. At least IMHO.

DrPop
10-27-11, 02:49 PM
Edit: @Slicker: I really hope what you say remains true. Fixed credit per WU is better than what is granted by CreditNew. At least IMHO.

Agreed. That is the way it should be -- always! 1 WU is worth X credit. If my rig can crunch 1 WU per day, I get X credit. If your rig can crunch 2 WUs per day, you get 2(X) credits. If Buffet's personal Cray :D can crunch 250 WUs per day, he gets 250(X) credits.
How much SIMPLER can we make this?

@Slicker - I applaud your efforts. Please continue to battle for us. I think I will crunch some Collatz instead of Prima today just because of this.

Beerdrinker
10-27-11, 02:55 PM
Agreed. That is the way it should be -- always! 1 WU is worth X credit. If my rig can crunch 1 WU per day, I get X credit. If your rig can crunch 2 WUs per day, you get 2(X) credits. If Buffet's personal Cray :D can crunch 250 WUs per day, he gets 250(X) credits.
How much SIMPLER can we make this?

@Slicker - I applaud your efforts. Please continue to battle for us. I think I will crunch some Collatz instead of Prima today just because of this.

+ 1


I am going for my 1 Mill on Prima, and then really considering going all-in Collatz

DrPop
10-27-11, 03:00 PM
+ 1


I am going for my 1 Mill on Prima, and then really considering going all-in Collatz

Sweet! I will meet you there. :cool:

spingadus
10-27-11, 03:03 PM
Wow, I had no idea that Collatz was run by one of our own members. Cool!

Fire$torm
10-27-11, 08:00 PM
+ 1


I am going for my 1 Mill on Prima, and then really considering going all-in Collatz


Sweet! I will meet you there. :cool:

Ha! I'm ahead of the two of you. The 4850 I slipped into my Uncle's new computer has been crunching Collatz since going online :D Catch me if you can............



Wow, I had no idea that Collatz was run by one of our own members. Cool!


Yep. Very, very cool indeed. :-bd

DrPop
10-28-11, 01:53 PM
Ha! I'm ahead of the two of you. The 4850 I slipped into my Uncle's new computer has been crunching Collatz since going online :D Catch me if you can...

Alright, I will take that challenge. Oh, wait...the 5870 is in the shop! :D How about I will try to hang with you off of my CPU credits until the replacement arrives??? LOL! :D

Fire$torm
10-28-11, 02:02 PM
Alright, I will take that challenge. Oh, wait...the 5870 is in the shop! :D How about I will try to hang with you off of my CPU credits until the replacement arrives??? LOL! :D

OK, you're on!

DrPop
10-28-11, 02:13 PM
OK, you're on!

Nice! ;) Nothing like a good ole SETI.USA rivalry. I'm running in "crippled mode" but let's see what I can put up.
@Beer and Sping - you guys in too? :D

spingadus
10-28-11, 05:15 PM
How's the credit for nvidia vs amd? It's been a while since I crunched at Collatz.

spingadus
10-28-11, 05:33 PM
Sticking my foot in the water to test. Going to run collatz on my nvidia card to see what I get. Time to chase some MM's instead of maximizing my credit anyways.

DrPop
10-28-11, 05:57 PM
How's the credit for nvidia vs amd? It's been a while since I crunched at Collatz.

Slicker can give us the real answer, ;) but last time I checked, ATI/AMD was better credit. It's been a while since I tried an Nvidia card on Collatz though.
It's actually decent credit on NVIDIA, just not what PG was giving, so we all crunched that. However, now that PG dropped its credit, and DiRT seems to have issues half the time, Collatz just might be one of the better bets for both GPUs now. I'm sure some of the guys can give us some comments as well.

spingadus
10-28-11, 08:25 PM
Here is what I'm getting on 2 cards:




Run time CPU time Credit Application

1,398.44 1,396.29 3,055.05 collatz v2.09 (ati13ati) AMD 6970
2,248.85 357.18 3,125.08 collatz v2.03 (cuda23) GTX 590

DrPop
10-28-11, 08:55 PM
OK, so someone correct me if I'm wrong, but the GTX 590 should be at least double the HD 6970, and it's not (credit / sec) so that means it still slightly (maybe 30% or so?) favors ATI/AMD GPUs for credit...

I think the real question is, how does that credit / sec compare to what your GTX 590 can do on PG or DiRT?:confused:

The HD 6970 could certainly do much better on Moo! if you also dedicate 1 CPU core along with that. But, with DNETC gone, of course that's not the point - the point was to crunch a little Collatz and support Slicker.:cool:

DAD
10-28-11, 10:22 PM
It looks like the ATI is one GPU and the 590 is 2... that's one WHOPPING ATI card LOL..

ATI:


Up to 880MHz Engine Clock
2GB GDDR5 Memory
1375MHz Memory Clock (5.5 Gbps GDDR5)
176 GB/s memory bandwidth (maximum)
2.7 TFLOPs Single Precision compute power
683 GFLOPs Double Precision compute power
TeraScale 3 Unified Processing Architecture

1536 Stream Processors (1536:96:32)
96 Texture Units
128 Z/Stencil ROP Units
32 Color ROP Units
Dual geometry and dual rendering engines



NVIDIA GTX 590
GPU Engine Specs:
1024 CUDA Cores
607 MHz Graphics Clock
1215 MHz Processor Clock
77.7 billion/sec Texture Fill Rate
Memory Specs:
1707 MHz Memory Clock
3072MB (1536MB per GPU) Standard Memory Config
GDDR5 Memory Interface
768-bit (384-bit per GPU) Memory Interface Width
327.7 GB/sec Memory Bandwidth
Processors: 1024:128:96
GFLOPS: 2488.3

Note that for the 590 the GFLOPS rating is for BOTH GPUs, so each does only about 1244... and that's just single precision; double would be about 1/2 that. The ATI card wins out because it has more processors (1536:96:32) vs the NV card (1024:128:96) (http://en.wikipedia.org/wiki/GeForce_500_Series#cite_note-GTX590-techarp-leaks-6) and runs 200MHz faster. But one thing to consider: far more BOINC projects seem to support Nvidia over ATI, though I'm sure that will change. Where the 590 will outshine this ATI card is in DP math: that single ATI GPU can only do about 600 GFLOPS of DP math, while on the 590 each GPU can do about 500-600 GFLOPS... and most BOINC apps can use DP, and more and more *require* DP cards or they just won't run.

I don't like ATI cards, but that's for gaming. I don't like the way it renders textures and handles FX. To me, graphics on an ATI card look a bit "fuzzy" and it drives me up a wall... but I'd have no bias against them for crunching :) It looks like ATI FAR FAR wins out in crunching vs Nvidia with this card and higher models. I'm sure NV will respond soon with something that trumps ATI... then ATI will trump NV... it's a never-ending cycle :)

In closing: for SP math crunching that ATI card wins HANDS DOWN... for DP crunching, it can't come close to the 590, as each 590 GPU can do 500-600 GFLOPS of DP math.

ATI source: http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units#IGP_.28HD_6xxx.29
NV source: http://en.wikipedia.org/wiki/GeForce_500_Series

spingadus
10-28-11, 11:01 PM
Nvidia is better if you are crunching Distr or PG.



Run time CPU time Credit Application

695.41 1.00 4,220.00 Distributed Rainbow Table Generator (distrrtgen) v3.45 (cuda23) GTX 590
= 6.07 credits/sec

721.03 63.96 2,411.00 PPS (Sieve) v1.39 (cuda23) PG GTX 590
= 3.34 credits/sec

1,398.44 1,396.29 3,055.05 collatz v2.09 (ati13ati) AMD 6970
= 2.18 credits/sec

2,248.85 357.18 3,125.08 collatz v2.03 (cuda23) GTX 590
=1.39 credits/sec



I could get more credits crunching PG or Distr, but I'm putting in some time since the project is run by our own Slicker :) Plus, I want to get my MM up for this project.
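The credits/sec figures quoted above are just credit divided by wall-clock run time; a trivial helper (illustrative only, using the numbers from the table) reproduces them:

```cpp
// Credit rate: credits earned per second of wall-clock run time,
// the figure used above to compare projects across cards.
double credits_per_sec(double credit, double run_time_sec) {
    return credit / run_time_sec;
}
// e.g. 4220.00 credits in 695.41 s is about 6.07 credits/sec,
// while 3125.08 credits in 2248.85 s is about 1.39 credits/sec.
```

Note this uses run time, not CPU time, so a GPU app that barely touches the CPU (like distrrtgen's 1.00 s above) still gets rated on elapsed seconds.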

spingadus
10-28-11, 11:09 PM
The cards are an EVGA Classified GTX 590 running stock at 630/1260/1728 (core/shader/memory). The other card is an HIS Radeon HD 6970 IceQ. I don't know the stock settings for it though.

DrPop
10-29-11, 12:36 AM
It is as I remember then. All projects favor one architecture over another. For example, AQUA favored Intel a bit, and Primaboinca seems to favor AMD a little. Collatz crunches faster on AMD, but PrimeGrid pretty massively favors NVIDIA. DiRT is the best NVIDIA credits now that PG has dropped theirs.
DNETC is gone, so Moo! is the best paying AMD project, followed by MilkyWay and then Collatz. If you factor in all the downtime MilkyWay has, and the fact that Slicker has almost no down time with Collatz, you will probably come out about even points wise.

Beerdrinker
10-29-11, 01:51 AM
Back in the day, ABC@home was king of the hill on CPUs running an x64 OS. As I remember it... ABC had very little downtime too.

DAD
10-29-11, 04:50 PM
ya for most any projects that use or req DP math... nvidia wins hands down... ati rocks the world in sp math though... For a time, one company always outshines the other, or has some pros and cons... then the other company responds and makes something better.. never ending cycle.


1) At the moment, nv rocks the socks off ATI for DP math boinc projects
2) At the moment, more projects support Nvidia/CUDA vs ATI (OpenCL). OpenCL to me is like OpenGL... it's open and easier to cross platforms, but it's slower and not as efficient. IMO ATI needs to have some sort of "thing" like CUDA (maybe they do, but I'm not an ATI fan lol). Using OpenCL is really wasting a lot of GPU performance imo.
3) more and more boinc gpu WUs are moving from SP to DP math
4) If you are a fan of ATI or Nvidia, stick with the cards you like - especially if the rig is used for gaming. If it's a crunching rig ONLY, then I'd go with the best performing card for DP math.

DrPop
10-30-11, 03:11 AM
ATI had Stream, which was way more efficient than OpenCL. I do not know why they are dropping it to go OpenCL, because like you said, it is way less efficient. Look at the PG GPU apps for an example. The CUDA app is very efficient and rocks. The ATI app, not so much, because it's a port of the CUDA code to OpenCL then run on an ATI GPU, which is very inefficient. ATI boards like the 5970 can put out amazing numbers. Even my 5870 has pretty sweet GFLOPS... but the code has to be efficient or the credit sucks. :p

Fire$torm
10-30-11, 05:37 PM
ATI had Stream, which was way more efficient than OpenCL. I do not know why they are dropping it to go OpenCL, because like you said, it is way less efficient. Look at the PG GPU apps for an example. The CUDA app is very efficient and rocks. The ATI app, not so much, because it's a port of the CUDA code to OpenCL then run on an ATI GPU, which is very inefficient. ATI boards like the 5970 can put out amazing numbers. Even my 5870 has pretty sweet GFLOPS... but the code has to be efficient or the credit sucks. :p

Well OpenCL was ATI's way of conceding to nVidia's CUDA. Stream never attracted the attention CUDA received. Unfortunately OpenCL was very poorly conceived and was rushed into production. They should be able to improve OpenCL considerably but that depends entirely on R&D funding. I don't know if AMD is committed to being OpenCL's primary benefactor. Time will tell.

DrPop
10-30-11, 06:42 PM
So if AMD is dropping Stream, if they don't back OpenCL, then what have they got? I mean, they're kind of stuck being the promoter now, right? Or maybe I'm missing something?

Fire$torm
10-30-11, 07:49 PM
So if AMD is dropping Stream, if they don't back OpenCL, then what have they got? I mean, they're kind of stuck being the promoter now, right? Or maybe I'm missing something?

Nope, that's it alright. Notice that AMD still isn't promoting Multi-GPU solutions for R&D the way nVidia is. OpenCL might look like a monster sinkhole to them. Also notice how much they ARE promoting Fusion!

I might be completely wrong, but to me it looks like AMD wants to kill the add-on GPU market. Doing so would put them way ahead of nVidia & Intel in the on-chip GPU segment. If they were to succeed, that would relegate PCIe GPUs to the $1,000+ units for the esoteric markets.

DrPop
10-31-11, 12:52 AM
Oh. Does that mean my 5870 is going to be a paper weight someday? Or is that waaaaaay in the distant future when nothing supports Stream or OpenCL and by then I'll have way better GPUs?

Fire$torm
10-31-11, 04:19 AM
Oh. Does that mean my 5870 is going to be a paper weight someday? Or is that waaaaaay in the distant future when nothing supports Stream or OpenCL and by then I'll have way better GPUs?

It will not affect the immediate future of the GPU market. I see it as a slow, long-term movement. Also keep in mind that the primary driving force for high-end GPUs is gamers, and that segment of the market has seen major declines in numbers. The modding/high-end gamer generation is fragmenting due to the economy, waning interest, and the realization that the increase in game performance with each new generation of GPU has reached the point of diminishing returns.

DAD
11-02-11, 02:00 PM
I would not be a fan of moving to an integrated GPU. The whole point of GPUs being an "add in" card is that you can upgrade them. CPU upgrades are usually rare due to socket changes by intel, or the bios of the mobo only supporting certain CPUs.

I REALLY don't want the GPU market going that way, and TBH, I don't think it will. I think lower end systems will keep with the integrated GPUs, but mid-high end need add in GPU cards for better gaming, science, cad, video and photo editing, etc

If ATI tries to "kill" the add on GPU market, IMO, that will harm them to no end as people would FLOCK to buy nvidia cards.

DrPop
11-02-11, 02:36 PM
I agree with this from the standpoint of heat dissipation. It is just physically impossible to get the same performance out of an integrated, on-die CPU/GPU as what we can get today with an add-in GPU. One of the reasons is that there is no cooling system efficient enough for that sucker even if they could ever build it! :D

I am sure the low-end, office or budget computers will have integrated graphics from now on - people have been getting by with them in laptops for years, after all. I mean, look at the average person; they're happy with the graphics they get on an iPod.

Hopefully for us power users, there will always be a relatively "reasonable cost" add-in GPU solution!:cool:

DAD
11-02-11, 05:28 PM
hehe, every laptop I have has had an add-in GPU. My older Dell gaming laptop has an "add-on" PCIe 2.0 Nvidia 8800M, which has 2 GPUs on 1 card.

but I don't have need for laptops anymore now that I have an iPad 2. If I travel, I don't really need to game, and if I do, I have my iPad.