Hi!
In the near future we intend to add the (spare) computing power of ATLAS (5368 Core2 cores) to Einstein@home. This will probably generate a lot of "credits" which we have no real use for.
We are aware that, if the account joins a team, this will have effects on the team statistics that will not necessarily be desirable. To avoid this, we are thinking of two different alternatives:
1. ATLAS doesn't join a team at all.
2. Once every few days (~ a week to start with) ATLAS will join a random team present (and possibly active) on Einstein@home. This will result in some kind of lottery of additional credit for the teams at this project, as both the team and the amount of credit are really rather random.
AFAIK there is no "poll" option in the current BOINC web code, so I'd just ask people who have strong feelings about one or the other alternative to express them in this thread.
BM

ATLAS joining Einstein@home - RFC
That would be either "Einstein at work" or "Albert-Einstein-Institut Hannover (AEI)"
BM
From what it currently looks
From what it currently looks like, ATLAS will run as 1342 quad-core machines (and possibly a few more management machines) on a single user account that will join no team at all. I don't think that "user" will keep the machines hidden.
BM
RE: How about a third
Oh, I like the idea. But ATLAS was built and funded for data analysis at the AEI, which includes Einstein@home as an AEI project, but no other (BOINC) projects.
As I recently wrote in some other thread, the contribution of Bruce/"Nemo" to Einstein@home is negligible by now. I think he (i.e. his account) is still stuck with team "Ireland". He once wanted to write a script that does the team change automatically, but never got around to it. He asked us to do this for ATLAS, but I thought I'd better ask first what people want.
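For the curious, here is a minimal sketch of what such an automatic team-change "lottery" could look like. Everything in it is an assumption for illustration: get_active_teams() and join_team() are placeholders for whatever project web RPC or database access would actually be used, since BOINC offers no ready-made "join a random team" interface.

# Minimal sketch of an automatic, random team change for a single account.
# get_active_teams() and join_team() are placeholders, not real BOINC calls.
import random
import time

CHANGE_INTERVAL = 7 * 24 * 3600   # roughly one week between changes, to start with

def get_active_teams():
    # Placeholder: would query the project for teams that are present
    # (and possibly active); dummy data here for illustration only.
    return [(1, "Team A"), (2, "Team B"), (3, "Team C")]

def join_team(team_id):
    # Placeholder: would attach the ATLAS account to the given team,
    # e.g. via a project web RPC or a direct database update.
    print("account now joins team id", team_id)

def run_lottery():
    while True:
        team_id, team_name = random.choice(get_active_teams())   # the random pick
        join_team(team_id)
        print("this round's winner:", team_name)
        time.sleep(CHANGE_INTERVAL)

if __name__ == "__main__":
    run_lottery()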
BM
RE: So the total duration
Ok, I'd bet on ~10 months.
However, we recently found that we might be able to increase the sensitivity of the S5R4 search a little further with a new run, so we might not fully complete S5R4 if that turns out to be true (simulations are running right now to investigate this). This would only require a re-design of the workunits; no changes to the Apps would be needed.
Back to the original topic: ATLAS has been attached to Einstein@home; the user account #342084 will not join any team.
BM
RE: As of BoincSynergy
In comparison to Einstein@Home, Atlas is very general-purpose. It offers high I/O bandwidth, rapid access to more than 1 Petabyte of data, fast interprocessor communication, 'reliable' hardware, and other features that E@H lacks. Since there are many types of gravitational wave searches other than searches for Continuous Wave sources, our hope is that Atlas is primarily used for these, and that the Atlas cores are occupied running analysis that cannot be done on E@H.
For example, two of the significant activities on Atlas just now are the post-processing of the E@H S5R1 and S5R3 results. In the past week, Holger Pletsch has completed a first pass through the S5R1 results. This work requires a resource like Atlas to carry out.
Cheers,
Bruce Allen
RE: I wonder: Is maybe
Random distribution is how it's designed to be. But as S5R2 already covered the lower frequencies of S5R3 and we used the same data files for both, people's machines already had the lower-frequency data files at the start of S5R3. Together with the "locality scheduling" that tries to minimize additional downloads, this led to eating up the frequency band from bottom to top.
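To illustrate the mechanism, here is a toy sketch of the idea behind locality scheduling - not the actual BOINC scheduler code, and the file names and data structures are made up for the example:

# Toy model of locality scheduling: hand out work that matches data files the
# host already has; only when none is left, move on to the lowest unsent
# frequency - which eats the band from bottom to top.
def pick_work(host_files, pending):
    # host_files: set of data file names already on this host.
    # pending: list of (frequency, data_file) workunits, sorted by frequency.
    for wu in pending:
        if wu[1] in host_files:               # reuse a file the host already has
            return wu
    return pending[0] if pending else None    # otherwise: lowest frequency next

# Example with made-up file names: a host that kept its S5R2-era low-frequency file
pending = [(50.0, "h1_0050.0"), (50.5, "h1_0050.5"), (300.0, "h1_0300.0")]
print(pick_work({"h1_0050.0"}, pending))   # -> (50.0, 'h1_0050.0')
print(pick_work(set(), pending))           # -> (50.0, 'h1_0050.0'), bottom first again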
BM
RE: Thank you, Dr.Allen for
Though the question is meant for Bruce, I hope you don't mind that I'll try to answer it:
1. The post-processing is basically combining the information in the results that we got back from an Einstein@home analysis run, so it's not easily possible to split up this task - what could be split up has already been done when setting up the run.
2. The computational power might be large enough on Einstein@home, but the data transfer bandwidth is not. You'd spend more time down- and uploading data than on the computation. This is unacceptable both for you and for our servers. (A rough illustration with made-up numbers follows after this list.)
3. The software and parameters we use for post-processing are under continuous refinement based on previous results. Setting up a new Einstein@home analysis run typically takes four weeks at minimum, and getting a new application ready for BOINC takes months.
4. For comparison: Holger runs a complete S5R1 post-processing "run" (given a set of filter parameters) in a bit more than a day on ATLAS, with a data distribution scheme specifically developed for his pipeline (this means that the longest job runs that long; most jobs finish much earlier). With the new "Hierarchical Search" application we have been using since S5R2, a complete post-processing "run" (with a set of filter parameters) takes about two hours on ATLAS.
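To put some rough numbers on point 2 - these figures are purely illustrative assumptions, not measured values for any actual post-processing job:

# Back-of-envelope for point 2, with purely illustrative numbers: a job that
# needs a large chunk of the result set as input but only a couple of CPU
# hours would be dominated by the transfer time.
input_gb = 10.0          # assumed input data per job (illustrative)
downlink_mbit_s = 2.0    # assumed participant downstream bandwidth (illustrative)
cpu_hours = 2.0          # assumed compute time per job (illustrative)

transfer_hours = input_gb * 8 * 1000 / downlink_mbit_s / 3600
print("transfer: %.1f h vs. compute: %.1f h" % (transfer_hours, cpu_hours))
# -> transfer: 11.1 h vs. compute: 2.0 h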
Bottom line: doing this on our own clusters takes much less time than it would even take to set it up as a "run" on Einstein@home.
BM
RE: In last days ATLAS have
Actually we shut down the BOINC backfill on ATLAS for a while, as it seemed to be giving the project database a hard time. We are still investigating what could be done in the configuration to avoid this in the future.
BM
RE: There's been a recent
Yep. Increasing the work cache of the account (i.e. all clients) apparently did help.
However, at the very moment ATLAS doesn't have much spare time to contribute; only two nodes are running BOINC.
BTW: As our scheduler fills ordinary jobs from the "front", I'd expect nodes with higher numbers to get the highest RAC. But I'm afraid ordinary user accounts can't see the node names / IP addresses.
BM
She's running E@H jobs on
She's running E@H jobs on OSG.
BM