
Thread: Standardized Credits

  1. #31
    Zytozux
    Guest

    Re: Standardized Credits

    Quote Originally Posted by DrPop View Post
    Nice, and a worthy cause to be sure! But, can you also do folks like Slicker and me a favor and put your GPU on something high paying while your CPU is on WCG? Thank you if you can!!!
    Thanks. I really only have one. It's ATI, so I have it on MOO. I got it on Black Friday for $20. I am a starving college student, etc. I would stick it on Donate@home (most ATI credits right now), but when you snooze Donate@home it resets your workunit back to 0.

  2. #32
    spingadus
    Gold Member

    Join Date
    June 1st, 2011
    Location
    Terra Incognito
    Posts
    1,012

    Re: Standardized Credits

    Quote Originally Posted by Zytozux View Post
    Thanks. I really only have one. It's ATI, so I have it on MOO. I got it on Black Friday for $20. I am a starving college student, etc. I would stick it on Donate@home (most ATI credits right now), but when you snooze Donate@home it resets your workunit back to 0.
    Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.
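    (For anyone who'd rather set that outside the Manager: as far as I know the same option can go in a global_prefs_override.xml in the BOINC data directory, after which you tell the client to re-read its preferences. Something roughly like this, but double check against your client's docs:)

    <global_preferences>
        <leave_apps_in_memory>1</leave_apps_in_memory>
    </global_preferences>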

  3. #33
    Past Administrator
    DrPop's Avatar
    Join Date
    October 13th, 2010
    Location
    SoCal, USA
    Posts
    7,635

    Re: Standardized Credits

    Quote Originally Posted by spingadus View Post
    Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.
    Hmmm...yeah, it might. Good question!

    @Zytozux - awesome man! Thanks for the help. I agree with this: CPU power can be used for almost anything these days - pet projects, MMs, etc. - but GPU power is where we are really going to lift this team up "credit wise" and approach the Germans et al. again! GO TEAM GO!!!

  4. #34
    Gold Member
    Slicker's Avatar
    Join Date
    October 25th, 2010
    Location
    South of Cheeseland
    Posts
    1,253

    Re: Standardized Credits

    Quote Originally Posted by spingadus View Post
    Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.
    Some time back, "you know who" decided that snoozing or suspending a GPU app should kill it rather than suspend it. So, "leave app in memory" means suspend any CPU app but go ahead and kill the GPU app.
    Spring 2008 Race: (1st Place)

  5. #35
    Platinum Member
    Maxwell's Avatar
    Join Date
    October 25th, 2010
    Location
    Everett, WA
    Posts
    3,300

    Re: Standardized Credits

    Quote Originally Posted by Mumps View Post
    Sorry, any scale that has Maxwell at 40, and me not even in the Top 100 has obviously got to be flawed!
    And by flawed, you clearly mean "recognizes the beauty and shape of a god-like person."

  6. #36
    Past Administrator
    Fire$torm's Avatar
    Join Date
    October 13th, 2010
    Location
    In the Big City
    Posts
    7,938

    Re: Standardized Credits

    Quote Originally Posted by Slicker View Post
    Some time back, "you know who" decided that snoozing or suspending a GPU app should kill it rather than suspend it. So, "leave app in memory" means suspend any CPU app but go ahead and kill the GPU app.
    Just more evidence of _____________ (We all know what's on the blank line).


    Future Maker? Teensy 3.6

  7. #37
    Nuadormrac
    Guest

    Re: Standardized Credits

    Quote Originally Posted by spingadus View Post
    Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.
    Oh, that sort of thing... I found it bad enough when, on Yoyo, one had an evo WU that would go past 100% with no idea how long it would take. The only touchy issue was that being on a laptop, and actually needing the computer somewhere else, for instance at work (with the whole issue of being late), left one few choices:

    - Hibernate, if one's computer supports it (my new laptop does; on my old one the battery was so far gone it didn't quite work that way)

    - Shutdown, and lose 12 hours of crunch time

    - Be late (not a good option when things are work related)

    Well, hibernate works, though there's the itsy bitsy issue that the nVidia 304.79 beta driver set and IE 9 don't exactly get along all that well. The 64-bit browser crashes constantly on loading, the 32-bit one less so, but the driver compatibility problem does come up sometimes. Usually the browser just restarts itself, but every once in a while (especially if the OS has been running a bit and the browser has crashed a fair few times), it takes the user interface with it. Even Task Manager won't respond at that point to end-process the IE 9 process...

    Yeah, I browsed around, and the consensus on that one is that it's a strange Microsoft vs nVidia compatibility issue with their driver set. And given it's a Kepler based GPU, well there ya go....

    It's annoying to lose crunch time, period, though if a project would abort a task just because BOINC was scheduled to switch projects, that would be enough to swear the project off. I personally lost enough crunch time when, for instance, an evo task reset itself after the above-mentioned bug crashed things, and I had to ctrl + alt + del, force a logout, then cancel the logout once the hung IE 9 had been forced to shut down. Stupid IE + nVidia driver bug...

  8. #38
    Gold Member
    Slicker's Avatar
    Join Date
    October 25th, 2010
    Location
    South of Cheeseland
    Posts
    1,253

    Re: Standardized Credits

    Donate basically has a wrapper for BitCoin apps. Every time a BitCoin is discovered, it makes it harder to find the next one. That way, there isn't an unlimited supply of the electronic currency (unlike here in the USA where they just print money whenever they feel like it). Because of that, the WUs are time-sensitive. Snoozing for hours or hibernating for days and then picking up where you left off would be a waste, since the data would no longer be valid; the work done by others while your host was snoozing/hibernating would negate any results. So the app needs to talk to the BitCoin servers and report results regularly. I believe that's done via a proxy through Donate. So even though it looks like your app is starting over, it is really using new data.

    The fix would be for Donate to implement "trickle" credit so that you would get credit for the work done so far. That way, you would only lose the credit since the last "trickle" message. Climate Prediction does that with their WUs that can take months to finish, but I don't think too many other projects use it.
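    (For what it's worth, the app side of that is small in the BOINC API. A rough sketch of what a project app could do - the function name, the "credit" variety tag, and the XML payload are just placeholders here, not Donate's or CPDN's actual format:)

    #include <stdio.h>
    #include "boinc_api.h"    // standard BOINC runtime API

    // Called from the app's main work loop every so often
    void report_partial_progress(double fraction_done)
    {
        char variety[] = "credit";   // variety tag is project-defined; "credit" is just an example
        char buf[256];

        boinc_fraction_done(fraction_done);   // normal progress report to the client

        // Trickle-up message; a server-side handler would grant partial credit from it
        snprintf(buf, sizeof(buf), "<fraction_done>%f</fraction_done>", fraction_done);
        boinc_send_trickle_up(variety, buf);
    }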
    Spring 2008 Race: (1st Place)

  9. #39
    Nuadormrac
    Guest

    Re: Standardized Credits

    Quote Originally Posted by Slicker View Post
    [rant on]What if you have the same app and two different users running it. One downclocks his GPU by 50% and the other overclocks his GPU by 20%. The latter does more than double the work but gets the same credit per second because they are crunching with the same device.

    CreditNew assumes all projects have the same requirements. Bad assumption. Does the RAM or MB speed make a difference? RAM amount? Disk speed? Network speed? None of those are accounted for in CreditNew. What about 32 vs 64 bit? The latter can do double the work on some projects but still gets the same credit as the 32-bit app because the CPUs are the same.

    Old vs new CPU is the same argument. If one CPU doesn't support SSE4 but another does, and that allows it to complete a WU faster, shouldn't it get more credit per second when compared to some other project which doesn't have SSE4 apps and where both CPUs earn the same?
    [rant off]
    That's what I hate about this whole idea behind CreditNew and "cross platform parity": the proponents always seem to be for nerfing the credits in some mass credit deflation for the projects that pay more, but never seem to go after Rosetta, World Community Grid, or other poor payers to encourage them to pay more and come closer to the median...

    They make way too many assumptions, and try to level things off so far that a person crunching something on a 486 (if any projects could still crunch on a 486) would earn the same credits as an i7. There are just too many differences, starting with the CPU architectures themselves.

    Back in the day, when Digital was still relevant (before Intel acquired their fab, AMD acquired their R&D engineering team, and Intel got others of their engineers, prior to Compaq buying up what was left of DEC), the Alpha EV6, not to mention the EV67, could seriously outpace a Pentium III in crunching SETI tasks and the like. It wasn't all clock rate or what have you. The architecture had the edge, especially with 64-bit floating point, which for obvious reasons of greater precision gets used in scientific apps... They weren't even in the same class, as the EV6 was out-performing the Sun SPARC in completion times by more than a little bit...

    But even in the x86 space, what happened? AMD came out with the Athlon, and then the Athlon 64 platform, and struggled to counter the marketing perception Intel had left by leading the consumer to believe that clock is everything. The Pentium III 1 GHz was getting old, and the old P6 arch had already been extended about as far as it was going to go. The then-new Athlon arch still had headroom to grow...

    So Intel released the Pentium 4, arguably in rushed fashion, as the woefully under-performing Willamette core (which didn't even include all that was intended for the P4's release; Northwood was closer to what was actually planned) was going through benchmarks so poorly that a 1 GHz Pentium III was out-performing a 1.5 GHz Willamette core Pentium 4. The result: Tom's Hardware, Anandtech, and other sites got them, tested them, and were anything but kind in their analysis, while also questioning the wisdom behind the NetBurst architecture altogether... Intel did what AMD had struggled to do: convince the customer that there's more to performance than the clock frequency. What could they say, when their own new Pentium 4s were being bested on performance not just by their competition but by their own company's predecessor offering in CPUs, at only 2/3 the clock frequency? The genie was out of the bottle. And worse, Intel demonstrated what AMD had been trying to tell the consumer for years, through their own, then new, product offering.

    Well, there's more to it, but in the end Intel went much the same route, first with the Core 2 and then the Core i processor offerings. The clock frequency really hasn't gone much over 3 GHz between then and now, but the newer cores have become a lot more efficient; in essence, getting more work done in fewer clock cycles...

    But then we get into where this competition can really heat up for the CPU manufacturers, which is also at the heart of what can happen with optimized apps: getting the hardware to perform more work while using fewer resources, fewer CPU operations, or cheaper operations that take fewer clock cycles to complete. An example is something fundamental to mathematics: if you multiply or divide a number by the mathematical base, you're simply moving the decimal point. There's no need to multiply or divide the number out (both being relatively expensive operations to perform on a CPU). One can simply use a bitwise shift to move the point left or right; it gets the same result, but is a much cheaper operation to perform. In the case of binary, multiplying or dividing by 10 (binary), which is 2 in decimal, simply shifts the point one place. A studious programmer who cares about the efficiency of their code can avoid a multiply or divide in such cases. Now that might count as fewer math ops being performed, but only because one found a "short cut" to get the same result...
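    (A trivial illustration in plain C, with made-up numbers; any modern compiler does this strength reduction for you anyway, but it shows the idea:)

    unsigned int x = 40;

    unsigned int a = x * 2;    // multiply by the base (2 in binary)...
    unsigned int b = x << 1;   // ...or shift left one place: same result, cheaper operation

    unsigned int c = x / 4;    // divide by a power of the base...
    unsigned int d = x >> 2;   // ...or shift right two places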

    It isn't even just "math tricks", if you will, and we all know some of them - e.g. multiplying by 1: anything times 1 is itself, so why multiply? If that happens often enough, something like this:

    double some_func(double some_var, double some_num)
    {
        double result;

        // ....... some stuff

        if (some_num == 1)
            result = some_var;              // times 1 is itself, so skip the multiply
        else
            result = some_var * some_num;

        // ..... some stuff

        return result;
    }

    might save a little CPU time. Of course, an if statement introduces a test of its own, so whether checking it actually saves time, well, depends.... But in terms of what we know about math, it could save the multiplication; how much would depend on how often that condition is true.

    But on the CPUs, a fair time back, many of the arguments went away, for instance with respect to RISC vs CISC, where many of DEC's engineers basically said that in the end it doesn't really matter: what they're really after is a FISC (fast instruction set computer), and whatever will get that performance gain. If one CPU can perform the same work and get the same result in fewer instructions (for instance via SIMD type extensions), then it's beneficial to add them. Work done is simply not a matter of counting operations when differences in efficiency in the code, the CPU, or the GPU are taken into account... And they each try to out-perform the other, to win the performance crown if you will (so peeps will buy their product over a competitor's...).
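    (To make the SIMD point concrete: generic SSE intrinsics in C, nothing project-specific, and assuming an x86 CPU with SSE. Four single-precision adds collapse into one instruction:)

    #include <stdio.h>
    #include <xmmintrin.h>   // SSE intrinsics

    int main(void)
    {
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
        __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
        __m128 c = _mm_add_ps(a, b);     // one addps instead of four scalar adds

        float out[4];
        _mm_storeu_ps(out, c);           // {6, 8, 10, 12}
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }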

    Now add other differences, from hard drive space to RAM to others. My computer has 8 GB of RAM and a quad core i7 proc (which is seen as 8 cores). Given that the operating system takes memory, and other stuff is running, a task taking 1 GB or more of RAM (regardless of crunch time on the CPU) leaves that much less memory for other tasks to run. If too many tasks take 1 GB, not all cores can be populated anymore, because out-of-memory problems will result. It's reasonable that those projects which suck up people's RAM better than a Dyson should pay a bit more. They in effect create a situation where other projects might not be able to run, because of the total RAM they're taking. But CreditNew is blind to this, along with the other computer resource sinks that some projects might involve...

    And given that efficiencies, from the code to the different CPUs, can't even be the same, a simple flops counter wouldn't tell the whole picture. There is no way that a Pentium 4 Willamette core (based on the benchmarks the review sites came up with when it came to market) is anywhere near as efficient as an Athlon 64, AMD's latest offering, or a Core i processor when it comes to how much work it can complete in a given amount of time, or a given number of clock cycles (given the nature of the NetBurst architecture and its very deep pipeline).

  10. #40
    Past Administrator
    Fire$torm's Avatar
    Join Date
    October 13th, 2010
    Location
    In the Big City
    Posts
    7,938

    Re: Standardized Credits

    The solution to the IE9 issue is an easy one: use Firefox or one of its variants like Waterfox, which is what I am currently using.


    Future Maker? Teensy 3.6
