WuProp App hours



Bok
07-13-13, 04:02 PM
I asked for WuProp to publish the app hours for users that you see on your homepage as an xml file which I could download and put into my database so I can publish and track it. They've implemented it, but have made it an option in your preferences for whether your data is published.

See http://wuprop.boinc-af.org/forum_thread.php?id=200 for details.

Hopefully I'll get the scripting done in the next few days.

Bok

zombie67
07-13-13, 04:52 PM
Very cool! Already set mine to public.

Bok
07-13-13, 05:29 PM
I've written the script to parse into a table and will let it run for a while, then I'll do the php code. Yours are in already :)


mysql> select project,app,running0,pending0 from wuprop_hours where id = 797;
+---------+--------------------------------------------------------------------------------------+------------+----------+
| project | app                                                                                  | running0   | pending0 |
+---------+--------------------------------------------------------------------------------------+------------+----------+
| | ABC sieving finder | 3511.6667 | 0 |
| | ABC Eval Tool | 8833.5667 | 0 |
| | Binary Radio Pulsar Search (Arecibo) | 5098.6 | 0 |
| | Binary Radio Pulsar Search (Perseus Arm Survey) | 228.1167 | 0 |
| | Gamma-ray pulsar search #2 | 1850.0333 | 0 |
| | Gravitational Wave S6 LineVeto search (extended) | 1560.0167 | 0 |
| | BioMedial Genome Correlations for Research and Healt Care | 5269.9667 | 0 |
| | BioMedial Genome Correlations for Research and Healt Care | 1499.7833 | 0 |
| | Statistical correlization analysis for genome research | 1523.7 | 0 |
| | Period Search Application | 8722.2833 | 4.3 |
| | The Attic Proxy | 5123.15 | 0 |
| | Crawl Internet New Media Infomation | 66.6333 | 0 |
| | Spider Application | 107.0333 | 0 |
| | Bitcoin Utopia Project 0 | 591.75 | 0 |
| | Bitcoin Vault for Energy and Environment Awards (longer workunits: 100 shares) | 103.4 | 0 |
| | Bitcoin Vault for Energy and Environment Awards (shorter workunits: 20 shares) | 19.1833 | 0 |
| | Bitcoin Vault for Exploration Awards (longer workunits: 100 shares) | 52.15 | 0 |
| | Bitcoin Vault for Exploration Awards (shorter workunits: 20 shares) | 16.75 | 0 |
| | Bitcoin Vault for Global Development Awards (longer workunits: 100 shares) | 16.5333 | 0 |
| | Bitcoin Vault for Learning Awards (longer workunits: 100 shares) | 57 | 0 |
| | Bitcoin Vault for Life Sciences Awards (longer workunits: 100 shares) | 111.65 | 0 |
| | BOINCSIMAP simap application | 6922.4667 | 0 |
| | Blender | 1012.45 | 13.2 |
| | SunflowerBlender | 973.9667 | 70.9167 |
| | SunflowerBlender (Linux) | 1825.1333 | 860.4 |
| | ICT Protein Structure Prediction(2nd Generation) | 5158.1833 | 0 |
| | Tsinghua Nano Tech Research | 373.45 | 0 |
| | Mr.Wilson | 522.95 | 0 |
| | modeE of Climate at Home | 618.5333 | 0 |
| | UK Met Office Coupled Model Full Resolution Ocean | 7945.0167 | 0 |
| | UK Met Office HADAM3P European Region | 2705.2167 | 0 |
| | UK Met Office HADAM3P Pacific North West | 1355.4333 | 219.6 |
| | collatz | 5203.3167 | 0 |
| | mini_collatz | 5021.9667 | 0 |
| | solo_collatz | 4800.6 | 18.8833 |
| | TrackJack | 13537.5667 | 7.3833 |
| | Tycho | 6513.3167 | 0 |
| | Tycho Betatest | 48.0167 | 0 |

<snip> because it's too big to list here :)

186 rows in set (0.00 sec)
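(For reference, a minimal sketch of how such a feed could be parsed into a table like that. The column names (id, project, app, running0, pending0) come from the query above; the types, the key and the layout of WuProp's XML export are assumptions, not the actual schema or script.)

-- Hypothetical table matching the columns shown above; types and keys are guesses.
CREATE TABLE wuprop_hours (
    id       INT UNSIGNED NOT NULL,     -- stats user id
    project  VARCHAR(64),
    app      VARCHAR(128),
    running0 DECIMAL(12,4) DEFAULT 0,   -- confirmed app hours
    pending0 DECIMAL(12,4) DEFAULT 0,   -- pending app hours
    KEY (id)
);

-- MySQL can load an XML file directly, assuming one <row> element per record
-- with fields named after the columns (the real feed may be shaped differently).
LOAD XML LOCAL INFILE '/tmp/wuprop_hours.xml'
INTO TABLE wuprop_hours
ROWS IDENTIFIED BY '<row>';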

Al
07-13-13, 09:57 PM
Great idea Bok! Thanks for doing that. I've gone public too.

Duke of Buckingham
07-13-13, 10:00 PM
Very nice Bok.

Slicker
07-13-13, 11:08 PM
So I have to ask.... I'm normally a SQL Server or Oracle guy; only for Collatz do I use MySQL. I tried hosting it on a Windows server and it didn't work well at all (due to BOINC never closing its connections, which meant thousands and thousands of connections to the database, while Windows limits it to 1024 and requires a huge makeover to allow more than that -- which I consider a major BOINC bug, but since it supposedly can use any database on any platform -- NOT!). Anyway, do you use MySQL for all the BOINC stats? If so, how do you keep it from disk thrashing, since MySQL seems to perform well only when the entire database is loaded into memory? I had started a stats app a while back and decided that inserts were soooo slow after the table had a couple hundred thousand rows that I just gave up. Are there MySQL performance tips that everyone is keeping secret? Or does using SSD drives eliminate the IO bottleneck?

Bok
07-14-13, 09:26 AM
I do use MySQL, though I'm intending to switch to MariaDB fairly soon, just because Oracle doesn't seem to be maintaining MySQL very much these days.

I have no bottlenecks at all, and when it comes to inserts, parsing projects like Seti generates a few million of them. I'm surprised you've had problems, but I'm pretty sure they are caused by incorrect settings in your my.cnf. I spent a lot of time tweaking this to get it right, the wait timeout and connect timeout especially where connections are concerned. You shouldn't really need to close connections anyway; MySQL should re-use them if you have it configured right. For raw insert speed I actually push the data into a file and use 'load data infile $filename into table xxxx' - MUCH much faster than doing single insert statements.
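(As a rough illustration of those two pieces, the connection timeouts and the bulk load; the values, path and column list below are made up, not the real configuration.)

-- Timeouts can be set in my.cnf or at runtime; values here are examples only.
SET GLOBAL wait_timeout    = 600;   -- seconds an idle connection is kept before being dropped
SET GLOBAL connect_timeout = 10;    -- seconds allowed for the connection handshake

-- Bulk load from a flat file instead of issuing one INSERT per row.
LOAD DATA INFILE '/tmp/wuprop_hours.csv'
INTO TABLE wuprop_hours
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(id, project, app, running0, pending0);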

On linux, the right filesystem certainly helps, though these days I tend to just use ext4.

SSDs certainly help, but even then it's only now got to the point where I think I have it figured out :)

I've considered using Oracle before, but honestly I don't think it would be as fast as what I have right now, and there are tons of small things that MySQL allows above and beyond the SQL standard that I use (INSERT IGNORE, REPLACE INTO).
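(For readers who haven't run into them, those two MySQL-specific statements look like this. The row values are made up, and both only behave differently from a plain INSERT when the table has a PRIMARY or UNIQUE key for the new row to collide with.)

-- If a row with the same unique key already exists, skip the new row
-- silently instead of raising a duplicate-key error.
INSERT IGNORE INTO wuprop_hours (id, project, app, running0, pending0)
VALUES (797, '', 'Blender', 1012.45, 13.2);

-- Delete any existing row with the same unique key, then insert the new one.
REPLACE INTO wuprop_hours (id, project, app, running0, pending0)
VALUES (797, '', 'Blender', 1025.00, 0);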



Bok
07-20-13, 05:35 PM
Added some basic sorting when you click the headers. Gopher is working on some more advanced client-side JavaScript sorting options for all tables. That wouldn't work on anything where the result set is too large to pass to the client, but anything with, say, less than a few hundred or even a thousand rows would then allow sorting asc/desc on any column.
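(Presumably the header click just maps to a sorted query on the server side; a minimal sketch of that kind of query, with an example column and direction, since the PHP that builds it isn't shown.)

-- e.g. the user clicked the running0 header to sort descending
SELECT app, running0, pending0
FROM wuprop_hours
WHERE id = 797
ORDER BY running0 DESC;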

Fire$torm
07-21-13, 01:09 PM
Nice! That will be a welcome addition to the Free-DC stats. Thx Bok.