credit per unit dropped a second time

Anonymous
Topic 13416

Note that adjusting the credit level as a whole (i.e. of all Workunits) doesn't affect the competition within a project at all. It simply doesn't matter whether hosts running faster Apps get more credit (as they do e.g. in the Beta Test phase) or hosts that don't (or can't) run them get less credit. It's just a matter of getting used to new numbers, plus a bit of transition time with some turbulence.

The "overall credit level" is just important to avoid credit inflation between different projects.

We are still watching the "credit level" and will continue to make adjustments as needed.

BM

Bernd Machenschalk
Joined: 15 Oct 04
Posts: 2,684
Credit: 25,950,161
RAC: 34,820

Quote:
Quote:
Note that adjusting the credit level as a whole (i.e. of all Workunits) doesn't affect the competition within a project at all.

Ideally, yes. However, this assumes that all hosts within that project benefit equally from an optimized application.

Well, optimized applications indeed do affect the competition, but the credit level doesn't. Optimization means squeezing the last bit of performance out of a particular architecture(*), and thus it's only this architecture that benefits from it. But if the credit granted for the same work is levelled for all work (and thus all architectures), the competition between them isn't affected.

So you might complain about optimizing the Apps, but not about levelling the credit ;-)

Seriously: for competition within a project it doesn't matter whether hosts running faster Apps will get more credit than before or hosts running slower Apps will get less. For people who decide which project to join based on the credit they get, this points in the direction of every project attracting the machines its Application runs best on, i.e. (ideally) the project that fits them best (also see my post over here).

Edit: (*) That's true only for our present application. Structural and algorithmic optimizations had been incorporated into the analysis code even before the public launch of Einstein@Home. A single nice idea from Akos was general enough that it sped up all code by a factor of two, but that was about half a year ago. Since then (sloppily speaking), everything is about tweaking and twisting the (assembler) code so that it runs faster on more CPUs than it runs slower on.

BM

Bernd Machenschalk
Joined: 15 Oct 04
Posts: 2,684
Credit: 25,950,161
RAC: 34,820

@Scott:

Hm. (Here I'll write "CPU" as shorthand for "architecture".)

1. If the App is 33% faster for CPU A and the distribution of CPUs A:B in the project is 2:1, the credit correction would be 22%, not 33%. Note that we are trying to keep the average credit constant.

2. I still don't see the difference it makes within a project between RACs A/B of 122/100 (without credit correction) and 100/78 (after credit correction). Edit: the latter is not quite right; after credit correction it will rather be something like 105.5/84.5.

3. This will happen in all projects that successively optimize their Apps, each with its own version of which CPUs are "A" and "B". So for a given CPU there will always be a project that grants the most credit per "CPU hour", and it will most likely be a different project than for another CPU type. The crucial point is that (between all projects that grant the same average credit) this is already the case. Every App of every project runs faster on one CPU than on another. The starting point where you define A/B as 100/100 is completely arbitrary, and not at all the same between any two projects, neither now nor in the future.
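The arithmetic behind points 1 and 2 can be sketched as follows. This is an illustrative toy calculation, not the project's actual credit code; the function name and the round numbers are my own:

```python
# Toy sketch (assumed numbers, not Einstein@Home's actual credit code):
# a 33% speedup for CPU A with an A:B host distribution of 2:1 gives
# an average throughput increase of about 22%, which is the credit
# correction needed to keep the project-wide average credit constant.

def average_speedup(speedup_a, hosts_a, hosts_b, speedup_b=1.0):
    """Host-weighted average throughput increase across the population."""
    return (hosts_a * speedup_a + hosts_b * speedup_b) / (hosts_a + hosts_b)

avg = average_speedup(speedup_a=1.33, hosts_a=2, hosts_b=1)
print(f"credit correction factor: {avg:.2f}")  # ~1.22, i.e. a 22% reduction

# Credit per CPU-hour relative to an old level of 100 for both CPU types:
rac_a = 100 * 1.33 / avg  # hosts running the faster App
rac_b = 100 * 1.00 / avg  # hosts without it
print(f"A: {rac_a:.0f}, B: {rac_b:.0f}")  # roughly 109 / 82
```

Dividing both RACs by the same correction factor leaves their ratio untouched, which is why the within-project competition is unaffected.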

BM

Bernd Machenschalk
Joined: 15 Oct 04
Posts: 2,684
Credit: 25,950,161
RAC: 34,820

Quote:
Quote:
1. If the App is 33% faster for CPU A and the distribution of CPUs A:B in the project is 2:1, the credit correction would be 22%, not 33. Note that we are (trying) keeping the average credit constant.

1. Okay, if that is what you are doing, then it seems that you are already accounting for what I was concerned about regarding the cross-project issues.


Citing Bruce again:

Quote:

My intention is a simple one: ON THE AVERAGE a host machine running Einstein@Home should get the same number of credits/cpu-hour as a host machine running the other BOINC projects that grant credit.

Here ON THE AVERAGE means averaged across all the hosts that are attached to multiple projects, and averaged across all the projects (suitably weighed by the number of cross-project hosts).


That's precisely what we are doing.

Quote:
2. Perhaps this is better stated using the actual credit changes that have occurred. Here I will use long workunits and assume (however incorrect that assumption might be) that under 4.02 everything was uniformly equalized within the project. 4.02 long workunits typically received 175-180 granted credits across platforms. When 4.24 was introduced, granted credit was reduced twice, to a long-workunit level of around 120 granted credits across platforms. The effect of this on 4.24-capable systems was to correctly recalibrate credits for the optimizations (i.e., fewer FLOPs and less time to do a workunit from any given client). However, for Macs this results in considerably lower granted credit for a workload (i.e., FLOPs and time) that remained unchanged (or at least was not reduced nearly as much). Thus, I would argue that these changes (even though you are correct that they are a necessity for projects that optimize) do result in within-project credit differences. (I would also add that, even though RACs for Macs would be lowered, RAC is a poorly calculated figure in BOINC and probably should not be part of credit adjustments.)

I see that the situation is pitiful for the Macs (and Solaris SPARC...) on Einstein@Home at present, but believe me, it will get a bit better soon. We have Akos as a low-level expert for x86 only, and of course the priority of platforms follows their numbers of hosts on Einstein@Home - but I have not been inactive regarding the Macs.

BM
