Standardized Credits



Zytozux
04-18-12, 07:05 PM
Do you think standardized credits will ever happen? For example, credits attached to a FLOP standard, or to the length of time it takes a Pentium 4 to crunch a WU? I would love to fold some proteins or search for ET, but the credits for helping Black Hats crack passwords are just too good.

Fire$torm
04-18-12, 08:45 PM
Do you think standardized credits will ever happen? For example, credits attached to a FLOP standard, or to the length of time it takes a Pentium 4 to crunch a WU? I would love to fold some proteins or search for ET, but the credits for helping Black Hats crack passwords are just too good.

That is what DA is trying to do with CreditNew. His method will severely diminish credits on GPU work to bring it in line with CPU credits, ignoring the fact that GPUs do substantially more work than CPUs. Obviously DA's method does not sit well with GPU crunchers.

Other than the above, it will never happen. As I have been told, most BOINC project admins are not even really aware of what the other projects are doing. Nor are they aware of the conflict between DA and the GPU crunchers. And of those that are aware, some just do not care.

Zytozux
04-18-12, 10:17 PM
Thanks for the response. I don't really have a problem with GPUs getting more credit, because like you said, they do more work. I am mostly concerned about the wide credit range between projects like World Community Grid and Distributed Rainbow Tables. Getting 10k at DistRTgen takes 1 day; getting 10k at WCG takes 2 weeks. This chart (http://wuprop.boinc-af.org/results/credit.py?plateforme=all&tri=4&sort=desc&cpuid=58) is what I mostly have an issue with.

Fire$torm
04-18-12, 10:29 PM
Thanks for the response. I don't really have a problem with GPUs getting more credit, because like you said, they do more work. I am mostly concerned about the wide credit range between projects like World Community Grid and Distributed Rainbow Tables. Getting 10k at DistRTgen takes 1 day; getting 10k at WCG takes 2 weeks. This chart (http://wuprop.boinc-af.org/results/credit.py?plateforme=all&tri=4&sort=desc&cpuid=58) is what I mostly have an issue with.

Yeah, I looked at that one awhile back. It's just that there is no BOINC-wide communication specifically between projects. Maxwell once told me that sort of thing is common in academia, which is where most projects originated. Basically they are snobs and the only work that matters is the stuff each professor is working on. The exception would be related research, and then only when they wish to share......

zombie67
04-18-12, 11:09 PM
Cross-project credit parity is impossible, for a number of well-documented reasons (http://www.boinc-wiki.info/User:Nicolas/Credit_scenarios). Adding together credits from different projects is like trying to sum toasters, miles per hour, and the color blue. The answer is meaningless.

Rather than adding credits from different projects together for a total (meaningless), I choose to measure myself with MegaMilestones. That method makes the credits that a particular project awards meaningless. To compete in the MM game means getting to a particular milestone (say 10k) in as many projects as possible. You want more MMs than the next guy. Sure, that means you will focus on the high-paying projects at first, because those will be the quickest to complete. But after you achieve the MM in those, you will eventually have to move on to the lower-paying projects too, in order to get more MMs. Just like your competition. So it doesn't matter if a particular project pays high or low. It will take each person at that project the same amount of work to hit the MM.

Once you wrap your head around this, you quickly stop caring about the amount of credits that any particular project pays. It just doesn't matter any more.

Edit: Oh yeah, and the MM measuring method applies to teams as easily as individuals. Who is #1? It really depends on how you measure it.

spingadus
04-19-12, 02:05 AM
I'm with zombie on this. Total credit doesn't make sense, as you cannot compare credits across different projects. This incongruity between the definition of 'being the best' and the false comparability of project credits has led to horrible attempts to normalize credit. An example is DA's CreditNew, which in my mind makes the problem worse, as it seems to penalize faster PCs over slower ones.

A better, more accurate measure would be to compare the overall average ranking of each person and team per project. You could easily give points based on percentile and then average them out over all active projects. This would give each project equal competitive value regardless of how many credits it grants. This could even be broken down into sub-categories based on the type of project, for instance math vs bio-med. Or even best in non-active projects.
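
To make the idea concrete, here's a minimal sketch in C. The struct, numbers, and names are mine, just for illustration: assume each project reports a user's rank and its total user count, rank gets turned into percentile points, and the points get averaged over all active projects.

#include <stdio.h>

/* Hypothetical per-project standing; the field names are made up. */
struct standing {
    int rank;         /* 1 = best */
    int total_users;  /* users ranked in this project */
};

/* Percentile points: rank 1 scores 100, last place scores 0,
   independent of how many credits the project grants. */
double percentile_points(struct standing s)
{
    if (s.total_users <= 1)
        return 100.0;   /* lone user: avoid dividing by zero */
    return 100.0 * (s.total_users - s.rank) / (s.total_users - 1.0);
}

int main(void)
{
    /* Example: active in three projects of very different sizes. */
    struct standing projects[] = { {5, 1000}, {42, 250}, {1, 50} };
    int n = (int)(sizeof projects / sizeof projects[0]);
    double sum = 0.0;

    for (int i = 0; i < n; i++)
        sum += percentile_points(projects[i]);

    /* Averaging gives every project equal competitive weight. */
    printf("average points: %.2f\n", sum / n);
    return 0;
}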

It seems to me that the stats sites have the power to redefine 'top team' or 'top user'. As long as total credit is the basis for competition instead of average ranking, we'll always have the rush to the high-paying projects. It forces teams who want to be competitive to constantly crunch the high-paying projects at the expense of the others.

I wonder what Bok would have to say about this.

STE\/E
04-19-12, 09:23 AM
Personally I don't think who has the most MMs or the most Credit makes any Individual or Team #1 at anything, since it's all worthless to begin with. If I put up 5 Million Credits at WCG, like I've been trying to reach for the last many months now, & someone else puts up 5 Million Credits divided among 20 different somewhat meaningless Projects, who really has done more? Sure, the guy that hit 20 Projects gets 20 more MMs, but if they're somewhat meaningless Projects does it really matter?

I gritted my Teeth when Bok came out with the MM Rankings & think they have done just as much to hurt the really Important Humanity Projects as DA has done with his constant fiddling with the Credits. I thought, Jesus, now I have to chase a bunch of Projects that have no real worth to keep up with the Joneses. I halfheartedly did try to keep pace but eventually decided it just wasn't worth it anymore. I'll settle for running something that actually benefits humanity & let the others chase every Mickey Mouse Project that pops up ...

denim
04-19-12, 10:24 AM
Cross-project credit parity is impossible, for a number of well-documented reasons (http://www.boinc-wiki.info/User:Nicolas/Credit_scenarios). Adding together credits from different projects is like trying to sum toasters, miles per hour, and the color blue. The answer is meaningless.

Rather than adding credits from different projects together for a total (meaningless), I choose to measure myself with MegaMilestones. That method makes the credits that a particular project awards meaningless. To compete in the MM game means getting to a particular milestone (say 10k) in as many projects as possible. You want more MMs than the next guy. Sure, that means you will focus on the high-paying projects at first, because those will be the quickest to complete. But after you achieve the MM in those, you will eventually have to move on to the lower-paying projects too, in order to get more MMs. Just like your competition. So it doesn't matter if a particular project pays high or low. It will take each person at that project the same amount of work to hit the MM.

Once you wrap your head around this, you quickly stop caring about the amount of credits that any particular project pays. It just doesn't matter any more.

Edit: Oh yeah, and the MM measuring method applies to teams as easily as individuals. Who is #1? It really depends on how you measure it.


This has got to be the funniest thing I have read in a month.

zombie67
04-19-12, 11:46 AM
A better, more accurate measure would be to compare the overall average ranking of each person and team per project. You could easily give points based on percentile and then average them out over all active projects. This would give each project equal competitive value regardless of how many credits it grants.

FWIW, this is similar to what FormulaBoinc (http://formula-boinc.org/index.py?lang=en) does at the team level. They just reset the competition every year. It could be adapted to the user level easily enough.

zombie67
04-19-12, 11:52 AM
Personally I don't think who has the most MMs or the most Credit makes any Individual or Team #1 at anything, since it's all worthless to begin with. If I put up 5 Million Credits at WCG, like I've been trying to reach for the last many months now, & someone else puts up 5 Million Credits divided among 20 different somewhat meaningless Projects, who really has done more? Sure, the guy that hit 20 Projects gets 20 more MMs, but if they're somewhat meaningless Projects does it really matter?

I gritted my Teeth when Bok came out with the MM Rankings & think they have done just as much to hurt the really Important Humanity Projects as DA has done with his constant fiddling with the Credits. I thought, Jesus, now I have to chase a bunch of Projects that have no real worth to keep up with the Joneses. I halfheartedly did try to keep pace but eventually decided it just wasn't worth it anymore. I'll settle for running something that actually benefits humanity & let the others chase every Mickey Mouse Project that pops up ...


My point is that MMs make cross-project credit parity moot and resolve the issue, while still maintaining the ability to measure yourself against others.

You seem to be arguing for removing credits altogether, and just crunching what you think has merit. That is a valid argument too. And you can do that today. Just stop looking at the stats. But that doesn't address the OP's concern with cross-project credit parity.

Bok
04-19-12, 12:20 PM
FWIW, this is similar to what FormulaBoinc (http://formula-boinc.org/index.py?lang=en) does at the team level. They just reset the competition every year. It could be adapted to the user level easily enough.

You mean something like this - http://stats.free-dc.org/stats.php?page=boincusersrank

I did this a long time ago, but there was not much interest in it at the time; I left it in the code though..

First column (rankpoints) is calculated per project as 10000 - ((rank - 1) * (10000 / total_num_users))

Second column is calculated as (user_score / total_project_score) * total_users_in_project

I don't recall all my decisions in coming up with this at the time as it was a few years ago.
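
If anyone wants to play with those numbers, here's a rough transcription of the two formulas as code. A sketch only: the function and variable names are mine, not taken from the actual Free-DC source.

#include <stdio.h>

/* Sketch of the two rank-point formulas quoted above. */
double rankpoints(int rank, int total_num_users)
{
    /* Rank 1 gets 10000; last place approaches 0. */
    return 10000.0 - (rank - 1) * (10000.0 / total_num_users);
}

double sharepoints(double user_score, double total_project_score,
                   int total_users_in_project)
{
    /* A user's share of the project's total, scaled by its size. */
    return (user_score / total_project_score) * total_users_in_project;
}

int main(void)
{
    /* e.g. rank 40 out of 5000 users */
    printf("rankpoints: %.1f\n", rankpoints(40, 5000));
    printf("sharepoints: %.1f\n", sharepoints(1.0e6, 2.5e8, 5000));
    return 0;
}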

zombie67
04-19-12, 12:37 PM
You mean something like this - http://stats.free-dc.org/stats.php?page=boincusersrank

Well, that is *obviously* the perfect way to score BOINC. :D

STE\/E
04-19-12, 05:45 PM
Well, that is *obviously* the perfect way to score BOINC. :D

Right, I Bow to the #1 BOINC'er ... ^:)^

Slicker
04-19-12, 05:56 PM
[rant on]
Everyone wants GPU apps, regardless of whether a project's app will work well on a GPU. One person writes an app that uses 50% of a GPU but 90% of a CPU. Someone else writes an app for a different project that uses 99% GPU and 0.1% CPU. According to CreditNew, both should get the same credits because they are running on the same hardware, even though one of them leaves the GPU idle half the time and runs on the CPU instead, which slows down crunching for other projects.

What if you have the same app and two different users running it? One downclocks his GPU by 50% and the other overclocks his GPU by 20%. The latter does more than double the work (1.2 / 0.5 = 2.4x the throughput) but gets the same credit per second because they are crunching with the same device.

CreditNew assumes all projects have the same requirements. Bad assumption. Does the RAM or MB speed make a difference? RAM amount? Disk speed? Network speed? None of those are accounted for in CreditNew. What about 32 vs 64 bit? The latter can do double the work on some projects but still gets the same credit as the 32-bit app because the CPUs are the same.

Old vs new CPU is the same argument. If one CPU doesn't support SSE4 but another does, and that allows it to complete a WU faster, shouldn't it get more credit per second when compared to some other project which doesn't have SSE4 apps, where both CPUs earn the same?
[rant off]

DrPop
04-20-12, 02:55 AM
Right, I Bow to the #1 BOINC'er ... ^:)^

:)) ROFL! Hahaha...oh man, don't get me started. This really could be simple. Slicker's got it figured out, and Collatz is done right. It's not that arbitrary, either. X credit based on X rig. Your rig either does more or less work in 24 hours than the baseline, and you wind up with the appropriate credit.

Rant ON
Credit New blows and there is literally no other description for it. It is an asinine concept . . . *censored* . . . created by those who want to give everyone the "same" BOINCing experience. Well, it doesn't take a rocket scientist to figure out that Kumbaya and credits don't exactly mix.
The Germans and SICI* aren't holding our hands around the campfire and strumming the guitar with us. They are kicking our arses and laughing at our impotence as we bounce around at a 4th-place RAC. Unfortunately for some, this does involve points, it's a game, and unlike what they try to teach my daughter here in California schools, by definition, there WILL be a winner and there will be loser(s) because it IS a game.

Alright, I was holding back there as well as I could...go on, ask me how I really feel about Credit New screwing any of us who invested money, energy and time into upgrading our rigs for the sole purpose of crunching. We ARE the hand that feeds them, and yes, you did just feel a bite. :p
Rant OFF

spingadus
04-20-12, 03:44 AM
Personally I don't think who has the most MMs or the most Credit makes any Individual or Team #1 at anything, since it's all worthless to begin with. If I put up 5 Million Credits at WCG, like I've been trying to reach for the last many months now, & someone else puts up 5 Million Credits divided among 20 different somewhat meaningless Projects, who really has done more? Sure, the guy that hit 20 Projects gets 20 more MMs, but if they're somewhat meaningless Projects does it really matter?

I'm not sure that all projects need to have some meaningfulness to everyone with respect to competition. I see the MMs, total credit and ranking systems as being the 'fun' part of the BOINC hobby. If project meaningfulness is personally more important to a cruncher, then that puts him into the 'I do it for the science and not the credits' group. In my case, I think it's about 50/50. I started crunching for the projects I like, but the personal and team competition keeps me going. I just want a competition system that is logically sound. As it is, the total credit method is logically unsound. At least the MM and average-project-ranking methods solve the problem of disparate credit granting among projects.

To play devil's advocate: as BOINC grows and more and more projects pop up, it will become difficult to keep up. Imagine if there were 500 projects running. I would say any idea of global competition might not be feasible.


I gritted my Teeth when Bok came out with the MM Rankings & think they have done just as much to hurt the really Important Humanity Projects as DA has done with his constant fiddling with the Credits. I thought, Jesus, now I have to chase a bunch of Projects that have no real worth to keep up with the Joneses. I halfheartedly did try to keep pace but eventually decided it just wasn't worth it anymore.

WCG is my favorite project, as I believe it is potentially the most beneficial to humanity. Even though I crunch other projects for my MMs, I always crunch WCG. MMs make the hobby more fun. If I lose the fun in BOINC, I will probably just stop crunching altogether. So, in some sense the MMs are keeping me in the BOINC world and WCG benefits.


I'll settle for running something that actually benefits humanity & let the others chase every Mickey Mouse Project that pops up ...

Does that mean you are no longer going to crunch the high paying projects like DirT or PG? Just curious.

spingadus
04-20-12, 03:51 AM
FWIW, this is similar to what FormulaBoinc (http://formula-boinc.org/index.py?lang=en) does at the team level. They just reset the competition every year. It could be adapted to the user level easily enough.

Nice. I like what they are doing, but I disagree with the points system. It should be more linear and include more than the top 10 ranks. It penalizes the teams that are more widely spread and gives more weight to those that focus on being in the top 10 in a limited number of projects. You could be ranked number 11 in 100 projects and get 0 points, while another team could be ranked 10 in only 10 projects and at least get 10 points.
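
To make the objection concrete, a quick sketch. The top-10 table here is just an F1-style example; I'm not claiming these are FormulaBoinc's actual values.

#include <stdio.h>

/* Example F1-style top-10 table (illustrative values only). */
int top10_points(int rank)
{
    static const int table[10] = {25, 18, 15, 12, 10, 8, 6, 4, 2, 1};
    return (rank >= 1 && rank <= 10) ? table[rank - 1] : 0;
}

/* A linear alternative: every rank up to max_rank scores something. */
int linear_points(int rank, int max_rank)
{
    return (rank >= 1 && rank <= max_rank) ? max_rank - rank + 1 : 0;
}

int main(void)
{
    /* Rank 11 in 100 projects: nothing under the top-10 scheme,
       but the spread still accumulates under the linear one. */
    printf("top-10 scheme: %d\n", 100 * top10_points(11));
    printf("linear scheme: %d\n", 100 * linear_points(11, 100));
    return 0;
}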

spingadus
04-20-12, 04:11 AM
You mean something like this - http://stats.free-dc.org/stats.php?page=boincusersrank

I did this a long time ago, but there was not much interest in it at the time; I left it in the code though..

First column (rankpoints) is calculated per project as 10000 - ((rank - 1) * (10000 / total_num_users))

Second column is calculated as (user_score / total_project_score) * total_users_in_project

I don't recall all my decisions in coming up with this at the time as it was a few years ago.

This is cool! It would be awesome to see more than 100 users and include teams as well.

Do you have any plans on continuing with it?

spingadus
04-20-12, 04:40 AM
Rant ON
Credit new blows and there is literally no other description for it. It is an asinine concept, completely ill founded by liberally brain washed academic nut jobs who have never worked in the real world and therefore don't understand the concept of competition
Rant OFF

Personally, I think they are just being lazy. I do agree with you DrPop that creditnew is asinine, although I don't agree with the generalization that it's a liberal/academic thing. I consider myself to be a moderate and my political views are both conservative and liberal depending on the issue. Meaning, I don't toe any party line other than my own. This means that I have had many friends on both sides of the aisle, from ultra conservative to ultra liberal, and in academia, the military and the private sector. I can't think of a single friend who would think it's OK to give a faster machine the same credit as a slower one. More work should equal more pay. It's just common sense. I think the BOINC devs can't figure out a way to accurately measure devices properly and so are just averaging things out. If they can't do it right, they shouldn't do it at all and let the projects deal with it internally.

Bok
04-20-12, 05:36 AM
This is cool! It would be awesome to see more than 100 users and include teams as well.

Do you have any plans on continuing with it?

It does all users; there just isn't code to allow that page to have Next/Prev buttons, but that's a small JavaScript change.

Would be trivial to add it to teams as well. Sure, I can do that. *Well, the non-trivial part is calculating for projects which no longer export XML, but I've done it before so it's not too bad*

DrPop
04-20-12, 12:03 PM
Personally, I think they are just being lazy. I do agree with you DrPop that creditnew is asinine, although I don't agree with the generalization that it's a liberal/academic thing...

Edited my post above for you, just to keep it "non-political", because I was not referring to "politics" as in conservative/liberal "GOP vs Dems". Not at all. I was talking about the philosophical bent of those in charge. There is a difference, but I suppose only those who have really gotten into a lot of academia or are into that sort of thing would understand where I was going. Really, though - I do not want to hijack this thread in that way, and/or offend anyone who is easily offended in that regard.;) There is room in the crunching hobby for all "opinions" on that sort of thing.

Speaking of Credit New alone:
I do, however, completely fail to understand how ANYONE in their right mind could conceive of a plan where those who have acquired newer, faster hardware that is able to get more work done are PENALIZED to "support" and come in line with those crunching on old, dated, slow hardware that gets less accomplished in the same amount of time. Are we trying to go backward or forward here?
I think, based on that analysis of Credit New, you can appreciate where I was coming from in my post - they are taking a nearly "market-driven" system and turning it into an entirely different regime when they do this. It is not good for those who wish to be competitive and boost their team's standings, etc. Penalizing genius, penalizing effort, penalizing ability, is not how you encourage growth. We're talking Economics 101 here. They need to re-take that class, LOL! :)
Thank you for understanding about Credit New. And, we can always have the other political / common sense chats in a different thread or by PM anytime. :D

spingadus
04-20-12, 02:27 PM
Edited my post above for you, just to keep it "non-political", because I was not referring to "politics" as in conservative/liberal "GOP vs Dems". Not at all. I was talking about the philosophical bent of those in charge. There is a difference, but I suppose only those who have really gotten into a lot of academia or are into that sort of thing would understand where I was going. Really, though - I do not want to hijack this thread in that way, and/or offend anyone who is easily offended in that regard.;) There is room in the crunching hobby for all "opinions" on that sort of thing.

Fair enough, although I wasn't offended. It's just a conversation. I guess it was directed at the academia crowd on the forums. But hey, I may be there one day, as I'm back in school again. I'll just keep an eye out for the DA types :)


Speaking of Credit New alone:
I do, however, completely fail to understand how ANYONE in their right mind could conceive of a plan where those who have acquired newer, faster hardware that is able to get more work done are PENALIZED to "support" and come in line with those crunching on old, dated, slow hardware that gets less accomplished in the same amount of time. Are we trying to go backward or forward here?
I think, based on that analysis of Credit New, you can appreciate where I was coming from in my post - they are taking a nearly "market-driven" system and turning it into an entirely different regime when they do this. It is not good for those who wish to be competitive and boost their team's standings, etc. Penalizing genius, penalizing effort, penalizing ability, is not how you encourage growth. We're talking Economics 101 here. They need to re-take that class, LOL! :)
Thank you for understanding about Credit New. And, we can always have the other political / common sense chats in a different thread or by PM anytime. :D

I concur.

DrPop
04-20-12, 03:31 PM
Fair enough... I'll just keep an eye out for the DA types :)...

Haha! ;) Yes, thanks for not being offended; this is a discussion that for some reason gets people up in arms. To me it seems pretty simple - these artificial tokens we get, "the credits", are the only leverage a project has to get more crunchers, besides the "goodness" of the project. So for example, WCG gets a lot of crunchers because of its humanitarian nature. Yay! Good for them. But their credits are loooooow...and that's simply because they can be. So they don't care. I guarantee if everyone stopped crunching WCG tomorrow, they would consider raising the credits. That's the simple wonderfulness of supply and demand: if "outside powers" don't interfere with it, it always works.
Consequently, who would *really* crunch DiRt instead of WCG if the credits were equal? I wouldn't. The guys that run DiRt understand this. :) Therefore, the giant credit bombs they offer per WU get us to crunch their project. :D

With DA's Credit New scheme, you take that leverage away from the projects...that's kind of what I meant by a "market-driven" economy in BOINC - I'm not sure how the academics running the thing see it - but they apparently either don't see reality, or they know something we don't? Because from our viewpoint as the crunchers...well, I'm just not getting it.:confused::p

Anyone else have any ideas as to why they are doing this?

Zytozux
04-20-12, 06:34 PM
Thank you all for your wonderful responses. I like zombie's and poorboy's ideas on crunching a lot. So, I have decided that stats like this (http://boincstats.com/stats/boinc_team_stats.php?pr=bo&st=0) are not a very good judge of who is contributing the most work.

We enjoy ~8 new members a day on Seti@home, and I think some of that comes from the visibility of being in 1st place. Therefore, as a member of the TopGun division, I am going to donate my resources towards improving our rank on a popular project where the team is in 66th place, WCG.

DrPop
04-20-12, 09:01 PM
Thank you all for your wonderful responses... I am going to donate my resources towards improving our rank on a popular project where the team is in 66th place, WCG.

Nice, and a worthy cause to be sure! :) But, can you also do folks like Slicker and I a favor and put your GPU on something high paying while your CPU is on WCG? :D Thank you if you can!!! :o:cool:

zombie67
04-20-12, 09:05 PM
WCG will have GPU apps soon. They won't pay like moo, distrrtgen, donate, etc. More like Einstein, seti, etc. Still, better than CPU credits!

kaptainkarl1
04-20-12, 09:41 PM
Where does this DA guy live? Is it near the water? He can always go for a long walk off a short pier...

Mumps
04-20-12, 11:32 PM
Well, that is *obviously* the perfect way to score BOINC. :D

Sorry, any scale that has Maxwell at 40, and me not even in the Top 100 has obviously got to be flawed! :)

Fire$torm
04-20-12, 11:37 PM
Sorry, any scale that has Maxwell at 40, and me not even in the Top 100 has obviously got to be flawed! :)

=)) =)) =))

Mumps
04-20-12, 11:37 PM
[rant on]
Old vs new CPU is the same arguement. If one CPU won't support SSE4 and but another does and that allows it to compete a WU faster, shouldn't it get more credit per second when compared to some other project which doesn't have SSE4 apps and both CPUs earn the same there?
[rant off]

Also, don't forget that CreditNew will actually *decrease* the credits granted when a project releases newer, more efficient code that completes the same work in less time. Look at Seti for example. You get less credit by running the Lunatics optimized apps than you should, because they're more efficient. So CreditNew even encourages *bad* programming.

Zytozux
04-21-12, 02:12 PM
Nice, and a worthy cause to be sure! :) But, can you also do folks like Slicker and I a favor and put your GPU on something high paying while your CPU is on WCG? :D Thank you if you can!!! :o:cool:

Thanks. I really only have one. It's ATI, so I have it on MOO. I got it on Black Friday for $20. I am a starving college student, etc. :D I would stick it on Donate@home (most ATI credits right now), but when you snooze Donate@home it resets your workunit back to 0 :(

spingadus
04-21-12, 02:54 PM
Thanks. I really only have one. It's ATI, so I have it on MOO. I got it on Black Friday for $20. I am a starving college student, etc. :D I would stick it on Donate@home (most ATI credits right now), but when you snooze Donate@home it resets your workunit back to 0 :(

Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.

DrPop
04-21-12, 06:39 PM
Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.

Hmmm...yeah, it might. Good question?:confused:

@Zytozux - awesome man! Thanks for the help. I agree with this, that CPU power can be used for almost anything these days - pet projects, MMs, etc. - but GPU power is where we are really going to lift this team up "credit-wise" and approach the Germans et al. again! [..] GO TEAM GO!!!**==

Slicker
04-22-12, 10:08 AM
Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.

Some time back, "you know who" decided that snoozing or suspending a GPU app should kill it rather than suspend it. So, "leave app in memory" means suspend any CPU app but go ahead and kill the GPU app.

Maxwell
04-22-12, 12:46 PM
Sorry, any scale that has Maxwell at 40, and me not even in the Top 100 has obviously got to be flawed! :)
And by flawed, you clearly mean "recognizes the beauty and shape of a god-like person."

Fire$torm
04-23-12, 02:42 PM
Some time back, "you know who" decided that snoozing or suspending a GPU app should kill it rather than suspend it. So, "leave app in memory" means suspend any CPU app but go ahead and kill the GPU app.

Just more evidence of _____________ (We all know whats on the blank line).

Nuadormrac
08-22-12, 02:28 PM
Have you tried setting the 'leave application in memory while suspended' option? I wonder if that would solve the issue at donate@home.

Oh, that sort of thing... I found it bad enough when, on Yoyo, one had an evo WU that would go past 100% with no idea how long it would take. The touchy issue was that, being on a laptop and actually needing the computer somewhere else, for instance at work (with the whole issue of being late), one was left with few choices:

- Hibernate, if one's computer supports it (my new laptop does; on my old one the battery was so far gone it didn't quite work that way)

- Shutdown, and lose 12 hours of crunch time

- Be late (not a good option when things are work related)

Well, hibernate works, though there's the itsy-bitsy issue that the nVidia beta driver set 304.79 and IE 9 don't exactly get along all that well. The 64-bit browser crashes constantly on loading, the 32-bit less so, but the driver compatibility issue does come up sometimes. Usually the browser just restarts itself, but every once in a while (especially if the OS has been running a bit and has had browser crashes a fair few times), it takes the user interface with it. Even Task Manager won't respond at that point to end the IE 9 process...

Yeah, I browsed around, and the common consensus on that one is that it's a strange Microsoft vs nVidia compatibility issue with their driver set. And given it's a Kepler-based GPU, well, there ya go....

It's annoying to lose crunch time, period, though if they would abort a task because BOINC was scheduled to switch projects, that would be enough to swear a project off. I personally lost enough crunch time when, for instance, an evo task reset itself after the above-mentioned bug crashed things and I had to Ctrl+Alt+Del, force a logout, then cancel the logout once the hung IE 9 was forced to shut down. Stupid IE + nVidia driver bug...

Slicker
08-22-12, 02:51 PM
Donate basically has a wrapper for BitCoin apps. Every time a BitCoin is discovered, it makes it harder to find the next one. That way, there isn't an unlimited supply of the electronic currency (unlike here in the USA, where they just print money whenever they feel like it). Because of that, the WUs are time-sensitive. Snoozing for hours or hibernating for days and then picking up where you left off would be a waste, since the data would no longer be valid: the work done by others while your host was snoozing/hibernating would negate any results. So the app needs to talk to the BitCoin servers and report results regularly. I believe that's done via a proxy through Donate. So even though it looks like your app is starting over, it is really using new data.

The fix would be for Donate to implement "trickle" credit so that you would get credit for the work done so far. That way, you would only lose the credit since the last "trickle" message. Climate Prediction does that with their WUs that can take months to finish, but I don't think too many other projects use it.
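
For what it's worth, the BOINC API does expose trickle-up messages from the app side. A minimal sketch of the idea; the "credit" variety and the message format here are made up for illustration, and the project would still need a server-side trickle handler to turn these into credit:

#include <stdio.h>
#include "boinc_api.h"

/* Sketch only: periodically report partial progress so the server
   can grant incremental credit. The message body and the "credit"
   variety are invented; the project's server-side trickle handler
   defines what it actually expects. */
static void report_partial_credit(double flops_so_far)
{
    char msg[256];
    snprintf(msg, sizeof msg,
             "<trickle>\n  <flops>%.0f</flops>\n</trickle>\n", flops_so_far);
    boinc_send_trickle_up("credit", msg);
}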

Nuadormrac
08-22-12, 03:08 PM
[rant on]
What if you have the same app and two different users running it? One downclocks his GPU by 50% and the other overclocks his GPU by 20%. The latter does more than double the work (1.2 / 0.5 = 2.4x the throughput) but gets the same credit per second because they are crunching with the same device.

CreditNew assumes all projects have the same requirements. Bad assumption. Does the RAM or MB speed make a difference? RAM amount? Disk speed? Network speed? None of those are accounted for in CreditNew. What about 32 vs 64 bit? The latter can do double the work on some projects but still gets the same credit as the 32-bit app because the CPUs are the same.

Old vs new CPU is the same argument. If one CPU doesn't support SSE4 but another does, and that allows it to complete a WU faster, shouldn't it get more credit per second when compared to some other project which doesn't have SSE4 apps, where both CPUs earn the same?
[rant off]

That's what I hate about this whole idea behind CreditNew and "cross-platform parity": the proponents always seem to be for nerfing the credits in some mass credit deflation for the projects that pay more, but never seem to go after Rosetta, World Community Grid, or other poor payers to encourage them to pay more and come closer to the median...

They make way too many assumptions, and try to level things off so far that a person crunching something on a 486 (if there are any projects that could still crunch on a 486) will earn the same credits as an i7. There are just too many differences, starting from the CPU architectures themselves.

Back in the day, when Digital was still relevant (before Intel acquired their fab, AMD acquired their R&D engineering team, and Intel got others of their engineers, prior to Compaq buying up what was left of DEC), the Alpha EV6, not to mention the EV67, could seriously outpace a Pentium III in crunching SETI tasks and the like. It wasn't all clock rate, or what have you. The architecture had the edge, especially with 64-bit floating point, which for obvious reasons of greater precision gets used in scientific apps... They weren't even in the same class, as the EV6 was outperforming the Sun SPARC in completion times by more than a little bit...

But even in the x86 space, what happened? AMD came out with the Athlon, and then the Athlon 64 platform, and struggled to counter the marketing perception Intel had created by leading the consumer to believe that clock is everything. The Pentium III 1 GHz was getting old, and the old P6 arch had already been extended about as far as it was going to go. The then-new Athlon arch still had headroom to grow...

So Intel released the Pentium 4, arguably in rushed fashion, as the woefully under-performing Willamette core (which didn't even include all that was intended for the P4's release; Northwood was closer to what was actually planned) was going through benchmarks so poorly that a 1 GHz Pentium III was outperforming a 1.5 GHz Willamette-core Pentium 4. The result: Tom's Hardware, Anandtech, and other sites got them, tested them, and were anything but kind in their analysis, while also questioning the wisdom behind the NetBurst architecture altogether... Intel did what AMD had struggled to do: convince the customer that there's more to performance than the clock frequency. What could they say, when their own new Pentium 4s were being bested on performance not just by their competition but by their own company's predecessor CPU, at only 2/3 the clock frequency? The genie was out of the bottle. And worse, Intel demonstrated what AMD had been trying to tell the consumer for years, through their own, then new, product offering.

Well, there's more to it, but in the end Intel went much the same route with the Core 2, and then the Core i processor offerings. The clock frequency really hasn't gone much over 3 GHz between then and now, but the newer cores have become a lot more efficient. In essence, getting more work done in fewer clock cycles...

But then we get into where this competition can really heat up for the CPU manufacturers, which is also at the heart of what can happen with optimized apps: getting the hardware to perform more work while using fewer resources, fewer CPU operations, or cheaper operations that take fewer clock cycles to complete. An example is a matter that's fundamental to mathematics: if you multiply or divide a number by the mathematical base, you're simply moving the decimal point. There's no need to multiply or divide the number out (both relatively expensive operations to perform on a CPU). One can simply use a bitwise shift to move the point left or right; it gets the same result, but is a much cheaper operation to perform. In binary, multiplying or dividing by 10 (binary), which is 2 decimal, simply shifts the point. A studious programmer who cares about the efficiency of their code could avoid a multiply or divide in such cases. Now, that might count as fewer math ops being performed, but only because one found a "shortcut" to get the same result...
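
To make that concrete, a trivial example (using unsigned integers, where the shift/multiply equivalence holds exactly):

#include <stdio.h>

int main(void)
{
    unsigned int x = 52;

    /* Multiplying or dividing by the base (2, in binary) is just a
       shift of the point: no multiply or divide unit needed. */
    printf("%u * 2 = %u\n", x, x << 1);   /* prints 52 * 2 = 104 */
    printf("%u / 2 = %u\n", x, x >> 1);   /* prints 52 / 2 = 26  */
    return 0;
}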

It isn't even just "math tricks", if you will; we all know some of them, e.g. multiplying by 1: anything times 1 is itself, so why multiply? If that happens often enough, something like this:

double some_func(double some_var, double some_num)
{
    double result;

    // ....... some stuff

    if (some_num == 1)
        result = some_var;              // skip the multiply entirely
    else
        result = some_var * some_num;

    // ..... some stuff
    return result;
}

might save a little CPU time. Of course, the if statement introduces a test of its own, so whether it actually saves time, well, depends... But in terms of what we know about math, it could save the multiplication; how often would depend on how often that condition is true.

But on the CPUs, a fair time back, many of the arguments went away, for instance with regard to RISC vs CISC, where many of DEC's engineers basically said that in the end it doesn't really matter. What they're really after is a FISC (fast instruction set computer), and whatever will get that performance gain. If one CPU can perform the same work and get the same result in fewer instructions (for instance with SIMD-type extensions), then it's beneficial to add them. Work done is simply not a matter of counting operations when differences in efficiency, in either the code or the CPU or GPU, are taken into account... And they each try to out-perform the other to win the performance crown, if you will (so people will buy their product over a competitor's...)

Now add other differences, from hard drive space to RAM to others. My computer has 8 GB of RAM and a quad-core i7 proc (which is seen as 8 cores). Given the operating system takes memory, and other stuff is running, a task taking 1 GB or more of RAM (regardless of crunch time on the CPU) leaves that much less memory for other tasks to run. If too many tasks take 1 GB, not all cores can be populated anymore, because out-of-memory problems will result. It's reasonable that those projects that suck up people's RAM better than a Dyson should pay a bit more. They are in effect creating a situation where other projects might not be able to run, because of the total RAM they're taking. But CreditNew is blind to this, along with the other computer resource sinks that some projects represent...

And given that efficiencies, from the code to the different CPUs, can't even be the same, a simple FLOPS counter wouldn't tell the whole picture either. There is no way that a Pentium 4 Willamette core (based on the benchmarks the review sites came up with when it came to market) is anywhere near as efficient as an Athlon 64, AMD's latest offering, or a Core i processor when it comes to how much work it can complete in a given amount of time, or a given number of clock cycles (given the nature of the NetBurst architecture and its very deep pipeline).

Fire$torm
08-22-12, 04:00 PM
The solution to the IE9 issue is an easy one: use Firefox (http://www.mozilla.org/en-US/firefox/new/) or one of its variants like Waterfox (http://waterfoxproject.org/), which is what I am currently using.

Nuadormrac
08-22-12, 04:31 PM
Yeah, obviously... The main thing for me, though, was wanting to use an x64 browser rather than an x86 one... True, it might not use a whole lot of RAM (at least a web page shouldn't), but I'm not sure if they ever did solve that slow memory leak problem with x86-32 software in Windows 7 or not... I read about that one a fair bit ago, so I did try to move apps over to try to avoid it....

Well, it can also make a slight perf difference, not that there's really much to viewing most web pages, unless one's watching video or something. In the WoW beta, for instance, I have noticed that the frame rates do seem a bit higher on the x64 variant. Of course, raiding is far more intense than web browsing, for obvious reasons. The memory leak, I never did hear if that ever got fixed or not....

Fire$torm
08-23-12, 08:58 PM
Actually every version of Windows to date has had poor to mediocre memory management. Part of the reason for that is the backwards compatibility M$ has maintained between successive versions of the OS. The two worst IIRC were WinXP and Millennium Edition.

Nuadormrac
08-23-12, 10:55 PM
It's funny to see XP on the same list as WinME... Often one sees Vista there... ME was of course so bad (I was an MSDN subscriber at the time) that many fellow MSDN subscribers swore off ever loading WinME onto their development machines, and had pretty much only bad things to say about it on the MSDN forums. tbh, I remember running betas that had fewer issues than ME, which, umm, yes, I did see in RC1.

But yeah, Windows does have some issues; security is one where they have been very much exposed (no pun intended), in part because of the security vulnerabilities that have tended to exist in different versions of Windows, and in part because, due to its popularity, there is more payoff for black hatters (and the unfortunate script kiddies as well) to focus some effort on exploiting it.

I'm not sure I could find that article again; it was long ago. But from it, it sounded like win64 software run under Win7 x64 wasn't seeing the same leak as x86 software under it... It's less of an issue if one reboots quite frequently, though with some projects (evo under Yoyo being one notable exception), the lack of checkpointing can mean that if one shuts down, rather than hibernates, one can end up losing a lot of work.... Evo, which doesn't always end at 100%, can also crunch for half a day or more, even on an i7...

hehe, other than looking at it on a dual boot, I skipped WinME (though I did keep XP rather than going to Vista, with the exception of the beta I did look at), having gone to 7. For me, I went from Win95 (pre-SE, yuck for many reasons), to 98, to dual-booting NT 4.0 (which became my main OS then) with 98 for the software that wouldn't run on NT, to Win2k... WinME arguably sucked big, fat, hairy, donkey balls :o