
Thread: Just started GPU's on Einstein


  1. #1

    Question Just started GPU's on Einstein

    Does anyone have any advice on properly setting up the NVIDIA video cards in my machine for Einstein@home?
    Just another little goldfish... steamrollin the competition one project at a time!
    Staff Hardware Reviewer - BayReviews.com
    Top Reviewer - Computer Hardware - Epinions.com

  2. #2
    Past Administrator
    Fire$torm's Avatar
    Join Date
    October 13th, 2010
    Location
    In the Big City
    Posts
    7,938

    Re: Just started GPU's on Einstein

    I have not run the Einstein CUDA app, but...

    I do not believe the CUDA app is multi-threaded, so you should see one WU crunching on each GPU. For overclocking your nVidia card, start at stock clocks and run a WU while checking the GPU temp. When the WU is finished, pause GPU work, raise the GPU clock 20~25 MHz, and crunch another WU, again while monitoring GPU temp. Keep doing this until you reach the highest temp you are comfortable with or until you hit that card's OC limit, whichever comes first. Then you are set to crunch to your heart's content.



    Edit: I would suggest leaving the nVidia mem clocks at stock. Upping the mem clock just adds a lot of heat without a significant increase in crunching performance.
    Last edited by Fire$torm; 07-11-11 at 06:44 PM.


    Future Maker? Teensy 3.6

  3. #3
    Administrator
    Bryan's Avatar
    Join Date
    October 27th, 2010
    Location
    CO summer, TX winter
    Posts
    6,457

    Re: Just started GPU's on Einstein

    Check the forum for an app_info. Several of the top computers show as "anonymous platform" and it appears they are running 3 WUs at a time. A GTX 570 appears to get about 32k per day. This is a pure guess on my part!


  4. #4
    Past Administrator
    Fire$torm's Avatar
    Join Date
    October 13th, 2010
    Location
    In the Big City
    Posts
    7,938

    Re: Just started GPU's on Einstein

    Quote Originally Posted by Bryan View Post
    Check the forum for an app_info. Several of the top computers show as "anonymous platform" and it appears they are running 3 WUs at a time. A GTX 570 appears to get about 32k per day. This is a pure guess on my part!
    Yeah, what he said....


    Future Maker? Teensy 3.6

  5. #5

    Re: Just started GPU's on Einstein

    I have yet to reach full utilization of my GPUs in this computer.

    Over at PrimeGrid, I was routinely hitting 192F on my 3 GPUs; now I am lucky to hit 160F at Einstein.

    Should I tweak my BOINC settings to dedicate more CPU resources to the management of these Einstein CUDA WUs?

    As always, THANK YOU for your support!
    Just another little goldfish... steamrollin the competition one project at a time!
    Staff Hardware Reviewer - BayReviews.com
    Top Reviewer - Computer Hardware - Epinions.com

  6. #6
    Administrator
    Bryan's Avatar
    Join Date
    October 27th, 2010
    Location
    CO summer, TX winter
    Posts
    6,457

    Re: Just started GPU's on Einstein

    Before you mess with your BOINC settings do this:

    1. Use GPU-Z to monitor your GPU loading.
    2. Go into BOINC Projects and suspend all CPU projects.
    3. If the loading jumps up, the problem is with your CPU usage; if it doesn't, the problem is with the Einstein program.

    I think this is why folks are running an app_info (anonymous platform) and crunching multiple WUs at a time per card.
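
    For anyone who wants to try that, a bare-bones app_info.xml for the anonymous platform looks roughly like the sketch below. Fair warning: the app name, executable name, and version number here are placeholders I made up, so grab the real values from the Einstein forum thread before using anything like this. The coproc count of 0.33 (each WU claims a third of a GPU) is what lets BOINC schedule 3 WUs per card.

    <app_info>
        <app>
            <name>einsteinbinary_cuda</name>           <!-- placeholder app name -->
        </app>
        <file_info>
            <name>einsteinbinary_cuda.exe</name>       <!-- placeholder executable -->
            <executable/>
        </file_info>
        <app_version>
            <app_name>einsteinbinary_cuda</app_name>
            <version_num>100</version_num>             <!-- placeholder version -->
            <avg_ncpus>0.2</avg_ncpus>
            <max_ncpus>1</max_ncpus>
            <coproc>
                <type>CUDA</type>
                <count>0.33</count>                    <!-- 1/3 of a GPU per WU = 3 WUs per card -->
            </coproc>
            <file_ref>
                <file_name>einsteinbinary_cuda.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>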


  7. #7
    Administrator
    Al's Avatar
    Join Date
    May 18th, 2011
    Location
    Chapel Hill, NC
    Posts
    6,669
    I've gotten it up into the mid-80s, but it still took twice as long, so I gave up on it.



  8. #8
    Gold Member
    Slicker's Avatar
    Join Date
    October 25th, 2010
    Location
    South of Cheeseland
    Posts
    1,253

    Re: Just started GPU's on Einstein

    One of two things is going on here. Either the Einstein developers are college kids who haven't learned how to code for performance yet, or the app really doesn't lend itself to running on a GPU. When programming for the GPU you have a choice between blocking and non-blocking calls. Either way, the CPU has no clue what the GPU is doing while it is doing it; it only knows when it finishes. So the question is whether the CPU does something else while it waits, or whether it blocks all other programs from running while waiting. Choose wrong and the CPU wastes cycles that could have been spent elsewhere. By calculating how long it takes to run a GPU kernel (e.g. run 10 of them and take the average time), one can tell the CPU to "sleep" for that length of time so that it uses virtually no CPU at all. This is even easier through the use of events in OpenCL programming: instead of blocking in clWaitForEvents(event), sleep(milliseconds) and then check whether the event has completed.
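
    To make that concrete, here is a minimal sketch of the sleep-then-poll idea in OpenCL. This is my own toy example, not anything from the Einstein source; the function name and the estimated kernel time are made up for illustration.

    #include <CL/cl.h>
    #include <unistd.h>   /* usleep() */

    /* Sketch: enqueue one kernel, sleep for roughly how long it usually takes
       (e.g. the average of the last 10 runs), then poll its event with short
       naps instead of blocking in clWaitForEvents(). CPU use stays near zero. */
    void run_kernel_low_cpu(cl_command_queue queue, cl_kernel kernel,
                            size_t global_size, long expected_ms)
    {
        cl_event done;
        cl_int status;

        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL,
                               0, NULL, &done);
        clFlush(queue);                      /* push the work to the GPU now */

        usleep(expected_ms * 1000);          /* sleep for the expected run time */

        do {                                 /* then poll until it really finishes */
            clGetEventInfo(done, CL_EVENT_COMMAND_EXECUTION_STATUS,
                           sizeof(status), &status, NULL);
            if (status != CL_COMPLETE)
                usleep(1000);                /* 1 ms naps, not a busy-wait */
        } while (status != CL_COMPLETE);

        clReleaseEvent(done);
    }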

    GPUs are great at parallel tasks. But every time one of the stream processors has to do something different from all the others (e.g. loop one extra time), all the others have to wait for that one stream processor to finish. Only when all of them are finished can the GPU move on to the next task. When 40% of the stream processors have to run extra instructions and the other 60% sit idle, you will see the app running at 40% GPU utilization. To fix that, one needs to break the kernels into smaller parts so that within each, all stream processors are 100% utilized. That, however, can't always be done.
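
    A toy kernel (again mine, not Einstein's) shows where that bites. If roughly 40% of the work-items take the branch below, the other 60% in the same warp/wavefront sit idle until the loop finishes, and that shows up as low GPU utilization:

    __kernel void divergent_example(__global const float *in,
                                    __global float *out)
    {
        int i = get_global_id(0);
        float v = in[i];

        if (v > 0.0f) {                      /* say ~40% of work-items take this */
            for (int k = 0; k < 100; k++)    /* extra work only they perform */
                v = v * 0.999f + 0.001f;
        }                                    /* the rest wait here, doing nothing */

        out[i] = v;
    }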

    One way to look at parallel processing is like this: You have 1000 cars driving from New York to California and all are supposed to arrive at exactly the same time. There is no such thing as a 1000 lane highway. So, it is impossible for all of them to arrive at exactly the same time since, assuming the road is two lanes wide, the cars would be stacked up 500 deep in line to cross the California border. You can attempt to use multiple roads so that there are fewer cars in line, but the logistics of getting all the cars on all the roads to cross at the same time is much more difficult than having to monitor a single road. So, in GPU programming, any time the application goes through a "Do command. OK, but...." that means it has to stop and wait because all streams run the same commands at the same time and none are allowed to jump ahead. If the math problem being solved doesn't lend itself to doing that, the GPU app won't run efficiently.

    As I've said more than once before, just because you can use a table knife as a screwdriver doesn't mean you should. Just because a GPU can run an app doesn't make it the best use of the GPU. With the heavy CPU utilization and the limited GPU utilization, Einstein is fitting a square peg into a round hole. With a big enough hammer, you can get it to work. The real question is, should they even try?
    Spring 2008 Race: (1st Place)

  9. #9
    Past Administrator
    Fire$torm's Avatar
    Join Date
    October 13th, 2010
    Location
    In the Big City
    Posts
    7,938

    Re: Just started GPU's on Einstein

    Thank you Slicker. GPU Programming 101: Introduction. Good stuff Professor Slicker.


    Future Maker? Teensy 3.6

  10. #10
    Gold Member

    Join Date
    June 1st, 2011
    Location
    Terra Incognito
    Posts
    1,012

    Re: Just started GPU's on Einstein

    Yes, thanks Slicker. I enjoyed reading that and feel more educated as well.

    Makes me want to learn a programming language.

    Any ideas what would be the ideal language these days to start on if you wanted to eventually create a BOINC project and code for both CPU and GPU?

