
Who cares to help with the left overs on NFS@Home?



pinhodecarlos
04-02-13, 02:50 PM
When WUs are aborted or left behind, it sometimes delays the completion of an NFS sieving job.
Greg, the administrator of NFS@Home, can redirect this type of work to individual users rather than to an entire team. His question, and also my question, is aimed mainly at 64-bit Linux users, although Windows users are welcome too: who is willing to help clean up the leftovers?
Please post your NFS@Home nickname here so I can give Greg a list. Also tell me whether you will stay active for the duration of the two challenges.
The way BOINC cleans up the leftovers is very simple: you will get new and old WUs to finish as fast as you can, while BOINC itself aborts the WUs already completed by others faster than you. With this strategy we can easily clean up the sieve of 2,1049+.
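
I don't know exactly what Greg runs on his side, but as a rough sketch against the standard BOINC server schema (the column names and the bit value are from memory, so don't take them as gospel), cancelling a workunit comes down to setting the WU_ERROR_CANCELLED bit in its error_mask; clients still holding a redundant copy are then told to abort it at their next scheduler contact:

-- hedged sketch, standard BOINC schema, not Greg's actual script:
-- cancel every 2,1049+ workunit that already has a validated
-- (canonical) result, so nobody wastes time recrunching it.
-- WU_ERROR_CANCELLED is 16 in the server headers, if I remember right.
update workunit
set error_mask = error_mask | 16
where name like 'S2p1049%'
  and canonical_resultid != 0;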

Thank you.

Carlos

DrPop
04-02-13, 04:58 PM
Hi Carlos, I may as well stay on it. I only have Win x64 though. Anyway, whatever he wants to send to the Drpop account, I will crunch it. :)

Slicker
04-02-13, 05:41 PM
I know BOINC is supposed to have a setting that directs work to machines that return results quickly, so jobs that have to be resent go to hosts that not only complete tasks but complete them within a set time frame. I've never used it on Collatz, so I can't give specifics, but the logic is in the server code, so it's probably documented somewhere on the BOINC wiki.
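
If memory serves, it is the "reliable host" mechanism, configured in the project's config.xml. Something along these lines, with the option names recalled from the wiki and the values made up, so check before copying:

<!-- hedged sketch: treat hosts as "reliable" when their average
     turnaround is under 21 hours (75600 s) and their error rate is
     very low, then resend timed-out or errored jobs to those hosts
     with a shortened deadline and higher priority -->
<reliable_max_avg_turnaround>75600</reliable_max_avg_turnaround>
<reliable_max_error_rate>0.001</reliable_max_error_rate>
<reliable_reduced_delay_bound>0.5</reliable_reduced_delay_bound>
<reliable_priority_on_over>5</reliable_priority_on_over>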

pinhodecarlos
04-02-13, 06:04 PM
I know BOINC is supposed to have a setting that directs work to machines that return results quickly, so jobs that have to be resent go to hosts that not only complete tasks but complete them within a set time frame. I've never used it on Collatz, so I can't give specifics, but the logic is in the server code, so it's probably documented somewhere on the BOINC wiki.

If I am not mistaken, Greg uses a script he wrote himself for NFS@Home.
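
I have never seen the script itself, so this is only a guess at the mechanism: the BOINC server has an assigned-jobs feature that can target a single user. Roughly, with <enable_assignment/> set in config.xml, something like the line below would do it (the app name and user ID are placeholders I made up):

# guesswork, not Greg's actual script: create a workunit addressed
# to one specific user instead of the whole pool
bin/create_work --appname nfs_lasieve --target_user 12345 wu_input_file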

kmanley57
04-02-13, 06:59 PM
I am currently running two Linux 64-bit boxes on NFS, but I am sure nobody noticed! I will be running both challenge parts as well. So I could help clean up; I've run resent WUs on a couple of other projects. Getting a WU to finish in minutes is kind of exciting sometimes! :p

same alias: kmanley57

If I forget this alias, they better kick the dirt in on me.

denim
04-03-13, 11:47 AM
Killing me here, now I have to get an x64 Linux box.

Duke of Buckingham
04-03-13, 02:05 PM
Killing me here, now I have to get an x64 Linux box.

:-o:D=))^:)^

Hello denim, my friend, that is the spirit.

The team spirit ...

pinhodecarlos
04-03-13, 02:30 PM
Killing me here, now I have to get an x64 Linux box.

You can help too, as I said. Windows users are welcome.
Also, a new Win64 build will be available soon... you can read the thread here: http://www.mersenneforum.org/showthread.php?t=18043

denim
04-04-13, 02:02 PM
You can help too, as I said. Windows users are welcome.
Also, a new Win64 build will be available soon... you can read the thread here: http://www.mersenneforum.org/showthread.php?t=18043

For sure, would love to do it.

pinhodecarlos
04-06-13, 07:39 AM
I will also help clean up the leftovers after I finish the 13-day post-processing job I am running for NFS@Home. The job is listed here: http://escatter11.fullerton.edu/nfs/crunching_e.php and here is the log output from msieve running on an Intel® Core™ i7-3630QM with 16 GB of DDR3 1600 MHz memory:



Sat Apr 6 09:09:52 2013 commencing linear algebra
Sat Apr 6 09:09:53 2013 read 11995861 cycles
Sat Apr 6 09:10:16 2013 cycles contain 38293262 unique relations
Sat Apr 6 09:26:59 2013 read 38293262 relations
Sat Apr 6 09:28:07 2013 using 20 quadratic characters above 2147483238
Sat Apr 6 09:30:44 2013 building initial matrix
Sat Apr 6 09:38:17 2013 memory use: 5019.5 MB
Sat Apr 6 09:38:22 2013 read 11995861 cycles
Sat Apr 6 09:38:24 2013 matrix is 11995684 x 11995861 (5191.2 MB) with weight 1489055082 (124.13/col)
Sat Apr 6 09:38:24 2013 sparse part has weight 1240874977 (103.44/col)
Sat Apr 6 09:40:59 2013 filtering completed in 2 passes
Sat Apr 6 09:41:02 2013 matrix is 11995169 x 11995346 (5191.1 MB) with weight 1489035519 (124.13/col)
Sat Apr 6 09:41:02 2013 sparse part has weight 1240866428 (103.45/col)
Sat Apr 6 09:42:28 2013 matrix starts at (0, 0)
Sat Apr 6 09:42:30 2013 matrix is 11995169 x 11995346 (5191.1 MB) with weight 1489035519 (124.13/col)
Sat Apr 6 09:42:30 2013 sparse part has weight 1240866428 (103.45/col)
Sat Apr 6 09:42:30 2013 saving the first 48 matrix rows for later
Sat Apr 6 09:42:32 2013 matrix includes 64 packed rows
Sat Apr 6 09:42:34 2013 matrix is 11995121 x 11995346 (4996.9 MB) with weight 1264920585 (105.45/col)
Sat Apr 6 09:42:34 2013 sparse part has weight 1189965917 (99.20/col)
Sat Apr 6 09:42:34 2013 using block size 262144 for processor cache size 6144 kB
Sat Apr 6 09:42:58 2013 commencing Lanczos iteration (8 threads)
Sat Apr 6 09:42:58 2013 memory use: 6648.9 MB
Sat Apr 6 09:45:21 2013 linear algebra at 0.0%, ETA 301h28m
Sat Apr 6 09:46:06 2013 checkpointing every 40000 dimensions
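
A quick sanity check on those figures, in case anyone wonders how they fit together (plain arithmetic, nothing project-specific):

1489055082 nonzeros / 11995861 columns ≈ 124.13 per column, the density msieve prints
301h28m / 24 ≈ 12.6 days, which squares with the "13 days" I mentioned above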


Carlos

pinhodecarlos
04-10-13, 05:24 PM
I suppose the users who wanted to help are now receiving the leftovers. By mistake I gave Steve's ID, so I apologize. When I was grabbing the IDs from the Free-DC stats page with the mouse cursor, I copied the wrong one and gave Steve's ID instead of denim's.

Carlos

kmanley57
04-10-13, 06:04 PM
I suppose the users who wanted to help are now receiving the leftovers. By mistake I gave Steve's ID, so I apologize. When I was grabbing the IDs from the Free-DC stats page with the mouse cursor, I copied the wrong one and gave Steve's ID instead of denim's.

Carlos

Well I know I have at least one, since I do not have that application type selected! :p

Or at least not for the last week or so! \m/

pinhodecarlos
04-10-13, 06:16 PM
Well I know I have at least one, since I do not have that application type selected! :p

Or at least not for the last week or so! \m/

Well, I gave the IDs to Greg so that, using a script, he could send jobs to a specific user, but strangely you were awarded only one WU!
I still need 198 hours to finish the post-processing job, so in about a week I will turn BOINC back on so that Greg can send me the leftovers. I think by then, at the current speed, we will need only 2-3 days.

Carlos

kmanley57
04-10-13, 07:56 PM
I just checked and I have a bunch more now, and getting more! :p

Just have to run the 450+ V5's in front of them. ^#(^

pinhodecarlos
04-11-13, 04:57 AM
I just checked and I have a bunch more now, and getting more! :p

Just have to run the 450+ V5's in front of them. ^#(^

Don't worry about it; first finish the 16e V5 WUs. You will notice the long names of the WUs that need to be processed quickly. What will happen is this: either you crunch them faster than the others, or a few will be cancelled by the server. It's like a fun race to see who finishes first. I like it.
I posted in the other thread the number of WUs still left to be done for the 2,1049+ sieving job. At the current pace they are being finished, I think I will be able to join this fun too, but cleaning up the leftovers of the 16e V5 WUs instead.

When I helped clean up the 2,1037- leftovers, Greg sent:

Steve: 2299
Carlos: 645
Greg: 725
Total WUs left: 3669

Of the 645, I did about 80%; since the WUs were also sent to normal clients, the other 20% were processed faster by someone else. It was a fun race. I was determined to finish them all.
Anyway, you are doing an important job; otherwise these WUs would only get crunched at the end of their 7-day deadline. We are aiming for the end of the month to start the next phase of factoring 2,1049+: the post-processing stage on a big cluster.

Thank you.

Carlos

DrPop
04-14-13, 10:26 PM
Carlos, how long would you like my rigs to stay on NFS for the cleanup? Also, do I need to get all the WUs again, or should I leave just the last two checked (via our previous PM conversation)?
Thanks!

pinhodecarlos
04-15-13, 04:25 AM
Carlos, how long would you like my rigs to stay on NFS for the cleanup? Also, do I need to get all the WUs again, or should I leave just the last two checked (via our previous PM conversation)?
Thanks!

These are the remaining WUs:

Query: select * from workunit where ( name like 'S2p1049_%' and assimilate_state!=2 and appid=8 ) order by id desc limit 20
396 records match the query.

Query: select * from workunit where ( name like 'S2p1049%' and assimilate_state!=2 and appid=9 ) order by id desc limit 20
8362 records match the query.
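
For anyone reading along who does not know the BOINC schema, my reading of those two queries (field meanings taken from the stock server code, so treat this as a sketch):

-- assimilate_state = 2 means ASSIMILATE_DONE: the workunit's canonical
-- result has already been handed off, so "!= 2" counts work still
-- outstanding. appid picks the application; which of 8 and 9 maps to
-- which sieving binary is NFS@Home-specific.
select appid, count(*) as remaining
from workunit
where name like 'S2p1049%' and assimilate_state != 2
group by appid;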

I saw your computers are all Windows-based, so with fewer than 400 WUs left you don't need to stay on the project anymore; please first finish or abort all the 2,1049+ WUs that you have downloaded so far. If you were on Linux, your help would be precious for the 16e V5 cleanup, because of its remaining 8362 WUs. Thank you.


Carlos