Petition - Deadline Relief for Longest Results

Anonymous
Topic 13565

Just two things to mention: doubling the deadline would roughly double the size of our database, and it would mean that people potentially have to wait twice as long for their results to be validated and thus for credit to be granted.

A deadline that depends on the "size" (i.e. expected run-time, credit etc.) of the workunit would be an interesting idea. I'll discuss that with the team.

BM

Bernd Machenschalk
Joined: 15 Oct 04
Posts: 2,684
Credit: 25,950,161
RAC: 34,820

Petition - Deadline Relief for Longest Results

Quote:
I guess if the project should go for variable deadlines, one would make the decision based on RAC instead of benchmark results? That should do the trick.


The deadline (actually its "length") is a property of the workunit and thus inserted by the workunit generator at the time the workunit is created; nothing is known (or needs to be known) about the host it will later be assigned to. The workunit generator does, however, know the "size" of a workunit, which is reflected in the number of credits that will finally be granted for it. A variable deadline would be derived from this "size" of the workunit, not from any information about any host.
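As a rough illustration of the idea, a generator-side sketch (hypothetical Python, not actual Einstein@home or BOINC code; the function name and all constants are assumptions) could derive the deadline from the expected credit alone:

```python
# Hypothetical sketch: derive a workunit deadline from its expected credit
# ("size"). The function name and constants are illustrative assumptions,
# not Einstein@home or BOINC code.
def deadline_seconds(expected_credit, base_days=7.0,
                     days_per_credit=0.05, max_days=21.0):
    """Scale the deadline linearly with expected credit, capped at max_days."""
    days = min(base_days + days_per_credit * expected_credit, max_days)
    return int(days * 86400)

small = deadline_seconds(40)    # 9 days for a small workunit
large = deadline_seconds(400)   # capped at 21 days for a large one
```

Note that nothing host-specific appears here: the deadline depends only on the workunit, exactly as described above.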

Would this concept of a variable deadline be desirable?

BM

RE: RE: The other thing

Quote:
Quote:
The other thing that SETI does that would help here is the initial replication of 3 with a quorum of 2. With the new BOINC software (5.8.x and up), the server will send 3, wait for the first 2 results, and then cancel the 3rd WU if the host has not started it yet. We could eliminate the 45-60 day waits some people have gotten for credit that way.
Off topic: We could also send smaller, more reasonable WUs that don't scare people off.

But doesn't that take a backend update as well (which is needed here)?

I think so, but even more importantly, it also requires a newer "minimal" client version. We're still issuing work to all clients from version 4.19 on, and I don't intend to change that without need.

Another aspect is that this way the computation time spent on the canceled result is always wasted (does the participant get credit for it anyway?). My guess is that while this gives faster results for the project and faster credit for the fast participants, the computing power wasted this way is larger than what is lost by results arriving too late in the current scheme.
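A back-of-envelope way to frame that trade-off: with replication 3 and quorum 2, any work done on the canceled third copy before the cancel arrives is lost. The numbers below are purely illustrative assumptions, not project statistics.

```python
# Back-of-envelope estimate of the waste described above: with
# replication 3 / quorum 2, work done on the canceled third copy is lost.
# Both inputs are purely illustrative assumptions.
def wasted_fraction(p_started, avg_fraction_done):
    """Expected wasted compute per workunit, as a fraction of one result."""
    return p_started * avg_fraction_done

# If half the third copies have already started and are on average 40%
# done when canceled, about 0.2 of one result's work is wasted per workunit.
waste = wasted_fraction(0.5, 0.4)
```

Whether that beats the current scheme depends on how much computation is lost to late results, which only project-side statistics could tell.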

BM

RE: RE: This has only

Quote:
Quote:
This has only broken down recently in S5R2, and then only because the beta apps have been a lot slower than what we had before, ....

Akos made an interesting remark, stating that the new apps are, in fact, several orders of magnitude *faster* than the old ones, probably meaning that they can do the same "scientific work" many times faster. So if the pre-S5R2 apps were biplanes, the new ones seem to be jet fighters. The problem is that they get assigned much longer missions (just to stretch your paradigm a bit more :-) ) in the hierarchical all-sky search of S5R2.

I just wanted to clarify "slow" a bit so people don't get the impression that the apps "deteriorated" over time in some way.


Let me just emphasize what I wrote in the original "S5R2" posting:

Quote:

The "science run #5" of the LIGO instruments, or S5 for short, gives us not only the most sensitive data, but also the largest amount of data we ever had. [...]

However, with our present [i.e. S5R1] analysis tool, the computation time needed grows with the sixth power of the amount of data.


If an analysis of the S4 data took a year, analyzing twice as much data would have taken about 2^6 = 64 years with the old program (at the same computing power). The new program should basically be able to do this in about a year again (actually less, but we have more than twice the data). So it's about fair to say that the new program does the same work 64 times faster than the old one.
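The arithmetic behind that sixth-power claim can be checked in two lines (illustrative only, using a one-year S4 baseline as in the text):

```python
# Runtime scaling of the old (pre-S5R2) analysis: runtime ~ data^6,
# so doubling the data multiplies the runtime by 2**6 = 64.
def runtime_years(data_factor, exponent=6, base_years=1.0):
    """Runtime for `data_factor` times the baseline amount of data."""
    return base_years * data_factor ** exponent

old_program = runtime_years(2)  # 64 years for twice the data
```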

BM

We just started a new

We just started a new workunit generator with "dynamic deadlines". The deadlines of workunits generated from now on will vary between two and three weeks depending on the size of the workunit (i.e. the number of templates within it, which should be proportional to the credit granted).
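A minimal sketch of such a dynamic deadline (assumed code, not the actual generator; the template-count bounds are made-up examples) interpolating linearly between two and three weeks:

```python
# Hypothetical sketch: interpolate a deadline between 14 and 21 days from
# the workunit's template count. The bounds n_min/n_max are made-up examples.
def dynamic_deadline_days(n_templates, n_min=100, n_max=1000):
    """Map the template count linearly onto the 14..21 day range."""
    n = max(n_min, min(n_templates, n_max))
    frac = (n - n_min) / (n_max - n_min)
    return 14.0 + 7.0 * frac

shortest = dynamic_deadline_days(100)   # 14 days
longest = dynamic_deadline_days(1000)   # 21 days
```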

We'll watch it for a while, maybe we need to adjust the actual numbers.

BM

RE: Bernd Machenschalk I am

Quote:
Bernd Machenschalk
I am trying to contact an Einstein@home project Admin.


So the right addresses would probably be Bruce Allen and David Hammer.
I'll forward the request to them.

In the longer run, after upgrading our backend to a newer BOINC version, you should be able to solve this problem yourself, but I don't know when such an upgrade is scheduled.

BM

Unfortunately the change only

Unfortunately the change only affects newly generated workunits; older ones will keep the old deadline. Feel free to abort the task if you feel you can't meet the deadline.

(Gary, I'd like to contact you individually. I wrote two messages to the e-mail address you registered here. Did you get them?)

BM
