
View Full Version : NFS@Home: Updates



RSS
03-28-19, 05:38 PM
It's been quite a while since the last news post, but work has been continuing. On the status pages you can follow the many completed factorizations. Also, yesterday I updated the BOINC server code to the latest version.

More... (https://escatter11.fullerton.edu/nfs/forum_thread.php?id=740)

pinhodecarlos
02-03-20, 02:43 AM
A boost is needed to push forward 2,2158L. Apps: lasievef and lasieve5f. 700 MB/thread. WU names are S2L2158_something. Each WU is credited 130 points.

pinhodecarlos
02-05-20, 05:38 PM
I've started to set the example and I'm running for my dearest Scottish team. Anyone want to join me?

zombie67
02-05-20, 09:21 PM
I'm kinda busy with some other goals for the next several weeks. Maybe next time!

pinhodecarlos
02-06-20, 04:27 AM
Will be busy trying to reach 10M.

pinhodecarlos
02-22-20, 03:17 AM
After we process the current 2.8M WUs, we will add a difficult job that will require at least 2 GB/thread. I've asked the admin to post some feedback in the news, and I also mentioned considering an increase in the WU credit, currently set at 130, since for this job we will need dedicated crunchers. At the current pace we're forecasting 3-4 months for this job. I'll keep you all updated on the credit question; it can be an opportunity to reach goals.
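The 3-4 month forecast above is a simple rate calculation: queued work divided by daily throughput. A minimal sketch of that arithmetic; the daily throughput figure below is purely illustrative, not a project statistic:

```python
# Rough forecast of how long a work-unit queue takes to clear at a
# constant daily throughput (illustrative numbers, not project stats).

def days_to_clear(queued_wus: int, wus_per_day: int) -> float:
    """Days needed to drain the queue at a constant completion rate."""
    return queued_wus / wus_per_day

# 2.8M WUs at an assumed ~30,000 WUs/day lands in the forecast range.
days = days_to_clear(2_800_000, 30_000)
print(f"{days:.0f} days (~{days / 30:.1f} months)")
```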

zombie67
02-22-20, 03:35 AM
Make the job use a new app, with explicit memory requirements on the project preferences page, opt-in only. As you say, tasks should be rewarded with extra credits.

pinhodecarlos
02-22-20, 03:39 AM
Make the job use a new app, with explicit memory requirements on the project preferences page, opt-in only. As you say, tasks should be rewarded with extra credits.
Thank you for your feedback, much appreciated.

zombie67
02-22-20, 03:52 AM
FWIW, I have 40M in credits at NFS, and the only app where I have less than 100k hours is 16e Lattice Sieve, which is dead (I think). So the only goal I have left to work toward is credits, and another 10M is a big mountain to climb.

pinhodecarlos
02-22-20, 05:34 AM
Greg needs to fix the badge levels...the ones in place don’t make sense.

John P. Myers
02-22-20, 11:36 AM
Personally I like projects that have high RAM requirements, and I hate when projects water down an app to make it more accessible. It takes the fun out of it, because it means we're chasing just another common badge instead of an elite one. I'm aware that dumbing down the RAM requirements allows for more participation, but it also hurts the project, because then the app doesn't do what it needs to. So if you need 2 GB/thread for your app to run optimally, then require it and give it its own badge. I'll be excited to run something like that.

pinhodecarlos
02-23-20, 03:14 AM
Personally I like projects that have high RAM requirements, and I hate when projects water down an app to make it more accessible. It takes the fun out of it, because it means we're chasing just another common badge instead of an elite one. I'm aware that dumbing down the RAM requirements allows for more participation, but it also hurts the project, because then the app doesn't do what it needs to. So if you need 2 GB/thread for your app to run optimally, then require it and give it its own badge. I'll be excited to run something like that.

As a matter of fact, since the beginning this project has tweaked some of the client's parameters to get memory down to 1 GB/thread, to please everybody. When we started more than 10 years ago, people didn't even have 2 GB/thread.

zombie67
02-23-20, 02:15 PM
I can't remember... Does the app use AVX, AVX2, or AVX-512? If not, any chance to add that with the new app?

pinhodecarlos
02-26-20, 04:57 AM
After all, this job will be run on the current app, since the parameters were wrong in the beginning.

All applications don’t use avx, etc and no foreseen to be added since it’s very hard to play around the code, at least we can’t find anyone skilled enough to do it.

Username is valid
02-26-20, 06:00 PM
After we process the current 2.8M WUs, we will add a difficult job that will require at least 2 GB/thread. I've asked the admin to post some feedback in the news, and I also mentioned considering an increase in the WU credit, currently set at 130, since for this job we will need dedicated crunchers. At the current pace we're forecasting 3-4 months for this job. I'll keep you all updated on the credit question; it can be an opportunity to reach goals.


After all, this job will be run on the current app, since the parameters were wrong in the beginning.

All applications don’t use avx, etc and no foreseen to be added since it’s very hard to play around the code, at least we can’t find anyone skilled enough to do it.
Hi Carlos,

There are currently 3.2M in the queue according to the server status, I guess they will take a while to clear.

If you are going to introduce new work units that need up to 2 GB each, can you (Greg) please let us know with at least a week's notice, if not more? Some of my machines will not be able to run the new work units without reducing the number that run concurrently, and I need the notice to get that done in time.

Thanks
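For anyone in the same situation, BOINC's per-project app_config.xml can cap how many tasks of an app run at once, which bounds total RAM use without detaching from the project. A sketch, assuming the app's short name is lasievef (check the client logs or client_state.xml for the exact name); the file goes in the project's directory under the BOINC data folder:

```xml
<!-- app_config.xml: limit concurrent lasievef tasks so total RAM stays bounded,
     e.g. 4 tasks x 2 GB/task = 8 GB reserved for NFS@Home. -->
<app_config>
  <app>
    <name>lasievef</name>
    <max_concurrent>4</max_concurrent>
  </app>
</app_config>
```

The client picks it up after "Options → Read config files" in BOINC Manager, or after a client restart.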

pinhodecarlos
02-27-20, 03:45 PM
Hi Carlos,

There are currently 3.2M in the queue according to the server status, I guess they will take a while to clear.

If you are going to introduce new work units that need up to 2 GB each, can you (Greg) please let us know with at least a week's notice, if not more? Some of my machines will not be able to run the new work units without reducing the number that run concurrently, and I need the notice to get that done in time.

Thanks

The 3.2M WUs already include the new job, which will be run on the same application, as previously stated in my last message; there's no need for 2 GB/thread, only 1 GB/thread. Thank you all.

zombie67
03-06-20, 11:40 AM
I just added 72 threads to NFS. I have about 11M to go to get my 50M MM.

zombie67
03-08-20, 11:05 AM
I just added another 160 threads.

pinhodecarlos
03-08-20, 12:43 PM
Slowly closing in on the 10M mark...