Standardized BOINC Credits...revisited



DrPop
05-02-12, 06:37 PM
I've been doing some considering on this here "problem" we've all been going round and round about for the last several years. How does a project award its credits, and could there ever be cross-project parity in the credits awarded?

Problem: Today we are adding up our total score, and saying things like "my 200 oranges + 56 lemons + 39 Grapefruits is bigger than your 300 tangerines + 42 kiwis + 97 apples." In other words, it is a little wonky.

Postulate: How do we go from a bag of mixed nuts to something that makes sense? We need projects to award credit based on "actual work done" rather than any other mechanism currently in place.

Solution: There is a way to measure how much "work" a given CPU or GPU can do. The measure is FLOPS - floating point operations per second (a FLOP being a single floating point operation). We actually know this rating for virtually all modern GPUs - the manufacturers use the number (now in GFLOPS) as a selling point for new models. We also know the rating for many CPUs, and where we don't, we can measure it ourselves with easily obtained benchmarking software.
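
For CPUs, even a crude home-grown benchmark gets you in the ballpark. Here's a minimal Python sketch (my own throwaway code, not any official benchmark - and a pure interpreted loop will badly understate what the hardware can really do; proper tools like LINPACK use optimized native code):

import time

def estimate_flops(n=10_000_000):
    # Time a tight loop of multiply-adds; 2 floating point ops per pass.
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # operations per second

print(f"~{estimate_flops() / 1e9:.3f} GFLOPS (interpreter overhead included)")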

How could this apply to BOINC? First, a group would design their project, then code that project's application for the various platforms they wish it to run on.
Then a test computer with a known GFLOPS rating for both its CPU and GPU would crunch a given number of WUs, and the average completion time would be recorded. If the WUs come in differing sizes, there is the extra step of normalizing each crunch time by the WU's size before averaging - a rough sketch of this follows.
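
In code, that calibration step could look something like this quick Python sketch (function and variable names are mine, not anything out of BOINC):

def gflops_per_wu(cpu_gflops, run_times_sec, wu_sizes=None):
    # Normalize each run time by its WU's relative size, if sizes differ.
    if wu_sizes:
        run_times_sec = [t / s for t, s in zip(run_times_sec, wu_sizes)]
    avg_time = sum(run_times_sec) / len(run_times_sec)
    # GFLOPS rating * seconds of crunching = total GFLOPs of work per WU.
    return cpu_gflops * avg_time

# A 1 GFLOPS test box averaging ~100 s per unit-size WU:
# gflops_per_wu(1.0, [98, 101, 100, 102])  ->  ~100 GFLOPs per WU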

This test would then have to be repeated for EACH platform the project will run on, as there may be timing variances between them (Windows, Mac, Linux, etc.). The credits awarded could then have a correction factor applied for each operating system. This is not hard for the database to figure out - your operating system and hardware are already reported to each project server. An example: let's say a CPU capable of 1 GFLOPS spits out a WU in 100 seconds on Windows 7 x64, but the exact same hardware running Linux takes 95 seconds due to improved efficiency. WUs would therefore carry roughly a 5% higher credit yield when returned from a Linux box vs. a Windows one.
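
The correction factor falls straight out of those same calibration runs. A quick sketch (the 100 s and 95 s figures are just the hypothetical numbers from the example):

def os_correction(reference_time_sec, platform_time_sec):
    # Faster platforms (shorter crunch time) get a factor above 1.0.
    return reference_time_sec / platform_time_sec

# Windows 7 x64 reference at 100 s, Linux at 95 s:
print(os_correction(100, 95))  # -> ~1.053, i.e. roughly the 5% bump above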

When this time-per-WU figure is paired with the GFLOPS rating, we arrive at the total GFLOPs of computation required per WU (a GFLOPS rating multiplied by seconds of crunching gives billions of operations). EVERY project could then share the same base rate of credits per GFLOP of work done.
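To make the units concrete with the numbers above: the 1 GFLOPS CPU running for 100 seconds performs 1 * 100 = 100 GFLOPs of total work, so that WU gets booked as a 100-GFLOP unit no matter whose machine crunches it later.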

From there, it is a matter of each project plugging in the time factor for their WUs to yield a similar amount of credit per GFLOP.
I will do one example:
Let us say it takes 10 minutes on a 20 GFLOPS CPU to crunch 1 WU in MilkyWay. Let us also say the base rate is 10 points per GFLOPS per minute of crunching (for all projects). That 1 MilkyWay work unit is now worth (10 * 20 * 10) = 2,000 points.
Let us now compare an Einstein WU that may take 15 minutes to crunch on the same 20 GFLOPS CPU. That Einstein WU is now worth (15 * 20 * 10) = 3,000 points.
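
For anyone who wants to poke at the numbers, the whole award rule boils down to one multiplication. A quick Python sketch (names are mine) that reproduces both figures above:

def credit_for_wu(minutes, cpu_gflops, base_rate=10):
    # base_rate: shared points per GFLOPS per minute across ALL projects
    return minutes * cpu_gflops * base_rate

print(credit_for_wu(10, 20))  # MilkyWay WU -> 2000
print(credit_for_wu(15, 20))  # Einstein WU -> 3000

Swap in any project's measured crunch time and the same shared base rate still applies.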
And now you have cross-project parity. If the projects would simply share the SAME BASE RATE of points awarded per GFLOP used to crunch any WU, every project would award similar credit per hour of crunch time on any given computer, and newer, faster computers would still be rewarded for their higher performance.

Alright....hit me. :D