GPU72 is a subproject for GPU trial factoring of Mersenne primes. The project will run on OpenCL for AMD and Nvidia cards, 64-bit only.
There is still some further work to do in setting up all the server infrastructure.
Cool. New wuprop app.
You guys had better check this; it's in production mode. Read the FAQ.
I am running a few tasks just to try it out. Windows works; Linux does not. I'll bet there is some additional library that needs to be installed. In any case, I won't have time to play with it further until after SETI.
Step 1: install the CUDA 10.1 toolkit (exactly 10.1)
https://developer.nvidia.com/cuda-10..._type=deblocal
Step 2: upgrade the driver via Software & Updates (Ubuntu) to v430, then reboot.
Step 3: run the selftest with ./mfaktc.exe -st (Selftest passed!)
Now that mfaktc.exe runs standalone, using it in BOINC should not pose a problem.
Also, new badges!
A good stream of work has been deployed, so if you guys are hunting for badges and want some Formula BOINC points, now is the time to do it.
This GPU project really seems to shine with the Nvidia 20 Series. A 2080 Super is about twice as fast as a 1080 Ti.
GPU              Avg Time (s)      PPD
1050 Ti          2,257          57,421
GTX 970          1,475          87,864
GTX 980          1,372          94,461
GTX 1660 Ti        944         137,288
GTX 1070           872         148,624
GTX 1080 Ti        513         252,632
RTX 2070 Super     324         400,000
RTX 2080           281         461,210
RTX 2080 Super     253         512,253
RTX 2080 Ti        208         623,077
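The PPD column looks consistent with a flat credit per WU of roughly 1,500 (that value is inferred from the table, not stated in the thread). A quick sketch:

```python
# Estimate points-per-day from average WU time, assuming a fixed
# ~1,500 credits per WU (inferred from the table above -- an assumption).
CREDIT_PER_WU = 1_500
SECONDS_PER_DAY = 86_400

def ppd(avg_seconds: float) -> int:
    """Points per day for a card finishing one WU every avg_seconds."""
    return round(CREDIT_PER_WU * SECONDS_PER_DAY / avg_seconds)

print(ppd(324))   # RTX 2070 Super -> 400000
print(ppd(2257))  # 1050 Ti        -> 57421
```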
Yes, trial factoring is faster on Nvidia than on AMD/ATI
Has anyone managed to run the client on Linux machines? I can't understand why the standalone version works but not under the BOINC wrapper.
Can I have a small list of BOINC GPU project wu length vs GPU type? TIA
The next work batch will have credit raised by 3.3x, but the current one needs a boost.
The current batch is now being credited at 165,550 instead of 49,500. An RTX 2080 processes one WU in 10,000 seconds; that's 1.4M credit per day. It will take a while to clean the queue, since a lot of people stopped running these huge WUs, but for the tough hosts this is an opportunity to climb the stats.
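Those credit figures can be sanity-checked with a quick sketch (all numbers taken from the post):

```python
# Sanity-check the quoted credit numbers.
OLD_CREDIT = 49_500
NEW_CREDIT = 165_550
RTX2080_SECONDS_PER_WU = 10_000
SECONDS_PER_DAY = 86_400

raise_factor = NEW_CREDIT / OLD_CREDIT                          # ~3.3x, as stated
daily_ppd = NEW_CREDIT * SECONDS_PER_DAY / RTX2080_SECONDS_PER_WU

print(f"credit raise: {raise_factor:.2f}x")   # 3.34x
print(f"RTX 2080 PPD: {daily_ppd:,.0f}")      # 1,430,352 -> ~1.4M/day
```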
If you have spare GPU cycles, please support the project to troubleshoot the Linux version. I'm wary of running the Windows client on my old laptop with an ATI GPU, because I ran some tests with the standalone client in the past and can't remember what I did to make it work. Multi-GPU support in the app would also help.
http://srbase.my-firewall.org/sr5/fo...p?id=1335#6102
SRBase progress at Mersenne.
https://www.mersenne.org/report_top_500/
We need a benchmark for the Radeon VII. http://srbase.my-firewall.org/sr5/fo...stid=6106#6106
Also, Collatz pays 30k per 120 seconds; I think it is time to ramp up the credits on SRBase.
I have a VII, but I need way more instructions than I can follow in that thread. Seriously, they don't even mention which OS they are using.
Check with davidBAM or Dirk Broer on the TSBT forum. We have a thread here...https://tsbt.co.uk/forum/viewtopic.php?f=157&t=19995
I'd send him here, but he had problems posting here and I don't think that got resolved.
Do you want to give it a go? New wrapper, just be aware checkpoints are not being read properly with this new version.
I'm running it on a GTX 1080 and GTX 1070. My Radeon VII is in a Win8.1 only system.
EDIT: Oh, I didn't realize this app ran on Windows. Running it on my Win8.1 Radeon VII system now. How long do those WUs run? It's been sitting at 100% progress for 10+ minutes.
The new, longer WUs avg...
9.6 hours on GTX 1070 linux
7.0 hours on GTX 1080 linux
6.0 hours on Radeon VII Win
Steve, there's an issue with the wrapper for checkpoints, but WUs will get completed. Top Nvidia cards process these long WUs in 10,000 seconds. This is a call to arms for Windows users to clean the queue, so the admin can update the background name of the project due to a legal threat from the admin of the GPU72 site.
Quick question, can anyone get in touch with EG to see if he can support us on SRBase?
I've managed to get it running on one GTX 970 under Win7 without an immediate computational error. At this point it is at 10h 20m, showing 100% complete, and is currently at 100% GPU usage.
Edit: Completed and validated 55,896 seconds for 165,000 credits...almost 3 credits/sec.
I've been running it for a couple weeks, using a 960, 2 1050ti's, 1050, 1070ti, 2 rx560's, and two 1650 supers with very few errors.
I would advise doubling or even tripling the credits. Collatz credits 30k per 2 minutes.
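To put that comparison in numbers (using the GTX 970 result quoted earlier and the Collatz rate quoted here):

```python
# Compare credit rates quoted in the thread.
srbase_rate = 165_000 / 55_896   # GTX 970 result above, credits/sec
collatz_rate = 30_000 / 120      # Collatz: 30k per 2 minutes

print(f"SRBase:  {srbase_rate:.2f} credits/sec")   # ~2.95
print(f"Collatz: {collatz_rate:.0f} credits/sec")  # 250
```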
Hey Steve, I need one of those config files set up to run this GPU at only 80% of capacity. How do I do that, and where should I save the files in my Windows BOINC folder? My GPU is a slow laptop one, and while running the TF app my laptop lags uncontrollably; it takes a lot of time to do anything else.
Tested it and limited by old GPU. Thank you anyway.
I can't see any option to change GPU usage for my AMD HD 7600M series. What I want is an option to allocate only 70-80% of the GPU to BOINC; I don't want to underclock. Is this possible?
The app_config.xml setting for gpu_usage doesn't actually control the amount of GPU resources; it's just a way to run more than one WU on a GPU. Here's an app_config.xml with gpu_usage set to .5: it will only run 1 WU at a time, but that WU will still use all the GPU resources it can.
Now, I've not used this program, but it says it will allow you to throttle CPU or GPU usage. It's called TThrottle by eFMer, the same folks who provide BoincTasks: https://efmer.com/tthrottle/

Code:
<app_config>
<project_max_concurrent>1</project_max_concurrent>
<app>
<name>GPU72</name>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
</app_config>
My CPU doesn't have an integrated GPU. Back in 2013 I remember running Collatz without any lagging issues; I just can't remember if the client has an option flag to reduce GPU usage.
Will try now that software, thank you.
One last thing to try. I've never used this, so I don't know how well it will work either, but here's a Windows cmd script that will toggle the BOINC client's GPU use on and off for however many seconds you want or whatever works.
This will set GPU mode to always, wait 2 seconds, set GPU mode to never, wait 2 seconds, and loop back around. Understand that it doesn't literally run the GPU WU for 2 secs and pause it for 2 secs. All it can do is send the BOINC client a request to run with the GPU, wait 2 secs, then send another request to stop using the GPU. I don't know what effect lag time or other variables will have, but give it a try if you want. Copy and paste into a text file and name it yoyo_gpu.cmd. To run it, enter yoyo_gpu. Press <ctrl><c> to stop it. Adjust the timeouts if needed.

Code:
:top
"C:\Program Files\BOINC\boinccmd" --set_gpu_mode always
timeout /T 2
"C:\Program Files\BOINC\boinccmd" --set_gpu_mode never
timeout /T 2
goto top
Steve, I haven't tested the above, but I really appreciate your support. I think I'll just run full-on GPU and live with the screen lag. I just need to know how much I can produce daily; I believe mine can process one WU every 4 hours, which would be 30k/day. NFS@Home gives 15k/day on the CPU side.
Running my GPU on it, I'm taking 1,300 seconds per WU, but an RTX 2080 is 50x faster, so expect WUs to take less than 30 seconds for 500 credits. Also, as we move to larger exponents within the same bit depth, timings will go down drastically. For example, on the current range (300M to 400M, 70 to 71 bits), at the start of the range I was doing a WU every 1,500 seconds; at 360M, every 1,300 seconds.
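That speedup is roughly what you'd expect if WU time within one bit level scales as 1/exponent (the number of candidate factors in a fixed bit range shrinks as the exponent grows). A rough sketch, with that scaling model being my assumption rather than anything stated in the thread:

```python
# Rough model: for trial factoring one bit level (70->71 bits here),
# the candidate count, and hence WU time, scales roughly as 1/exponent.
# (This scaling assumption is mine, not stated in the thread.)

def estimated_wu_seconds(time_at_start: float,
                         start_exp: float,
                         current_exp: float) -> float:
    return time_at_start * start_exp / current_exp

est = estimated_wu_seconds(1500, 300e6, 360e6)
print(f"predicted at 360M: {est:.0f} s (observed: ~1300 s)")  # predicts 1250 s
```

The prediction (1,250 s) is in the same ballpark as the observed 1,300 s, so the trend the poster describes is plausible.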
Work progress available here: https://www.mersenne.ca/status/tf/0/0/1/10000