I'm surprised that the guy who was touted as a "boy genius" can't figure out how to issue same-sized WUs on a single work request. Brilliant, absolutely brilliant. I too have stopped running the project.
Got one going now that's expected to finish in 2 hours, for a total runtime of 10 hours. Also noticed it's taking about 256 MB of RAM to run, where a single Poem GPU app uses half that. This will be my last FH.
Hmm. I don't see any slowdown running FH. To me, it's all just free upside.
There are two different-sized WUs being issued by the project. If you get seven of the shorter ones and one or more of the longer ones, all the cores that crunched the short WUs sit idle for a couple of hours waiting for the last WU to finish. It won't ask for more work until ALL the WUs have completed.
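To put rough numbers on it (just my own illustration, using the 2-hour and 10-hour runtimes mentioned in this thread, not anything published by the project):

    # Rough sketch of the idle time from one mixed batch on an 8-core box.
    # Runtimes and core count are illustrative assumptions, not measured values.
    short_h = 2.0                          # assumed runtime of a short WU
    long_h = 10.0                          # assumed runtime of a long WU
    runtimes = [short_h] * 7 + [long_h]    # 7 short + 1 long, one per core

    batch_end = max(runtimes)              # no new work until the last WU finishes
    idle_core_hours = sum(batch_end - r for r in runtimes)
    print(idle_core_hours)                 # 7 cores * 8 h = 56.0 idle core-hours

So in a case like that, most of the box can sit idle for the bulk of the batch.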
Yes, somewhat free credits, but nowhere near what you could get if the WU sizes were matched on each download.
I understand. But free is free. It's not like I'm losing out on credits from other projects by running FH, even with mixed-length tasks. It's all upside. For me, it's ~2,500 per day and over 14M in total, helping out the team's total score.
I'm surprised that this isn't a priority for the folks who want us to be #1 in overall credits.
I don't know what the problem is, but the FreeHAL tasks are getting bigger and bigger. My other computer stopped running them after a short while and was "waiting for memory" all the time, and this one starts losing time when running NFS 16e, maybe because of the large amount of swap needed. I think it depends on the computer's memory, the operating system, and the other projects being run.
Yes, FH tasks require a lot of RAM, and I think usage increases over the duration of the task, so longer tasks need more RAM by the end. This is why I have been recommending for *years* that people have at *least* 2 GB per thread on any cruncher. There are always a few projects that need lots of RAM, especially the medical projects, so don't skimp. I target 4 GB per thread, which comes in very handy when running VMs: I can run FH tasks in the VM as well as natively.
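As a quick back-of-the-envelope sketch (my own arithmetic, not a BOINC tool; the 32 GB box is just an example):

    # How many crunching threads a box can support at a given RAM target.
    def max_threads(total_ram_gb, gb_per_thread):
        return int(total_ram_gb // gb_per_thread)

    print(max_threads(32, 2))   # 16 threads at the 2 GB/thread minimum
    print(max_threads(32, 4))   # 8 threads at the 4 GB/thread target I aim for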