I'll update this as the generation and testing of our first S5 workunits proceeds.
The names of these workunits are of the form h1_XXXX.X_S5R1x_* and l1_XXXX.X_S5R1x_*.
The application running these is called einstein_S5R1. This application incorporates many of the speedups and other code changes suggested by Akos. Note: please do NOT replace these stock executables with custom versions of 'albert'. They are not compatible.
There are two types of workunits: short and long. The short workunits have XXXX.X less than or equal to 0400.0.
There are also two types of data files: short and long. The short data files (l1_XXXX.X) are from the LIGO Livingston Observatory, and are about 4.5MB in size. The long data files (h1_XXXX.X) are from LIGO Hanford and are about 16MB in size. Note: once your computer downloads one of these data files, it should be able to do many workunits for that same file.
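As a hedged sketch, the naming scheme described above could be parsed like this. The field names and the short/long threshold are assumptions drawn only from this post; the actual workunit generator may encode more information than this:

```python
import re

# Hypothetical pattern for names of the form h1_XXXX.X_S5R1x_* and
# l1_XXXX.X_S5R1x_* as described in the post above.
WU_NAME = re.compile(r"^(?P<detector>[hl]1)_(?P<band>\d{4}\.\d)_S5R1\w*")

def classify_workunit(name):
    """Return (detector, band, is_short) for an S5R1 workunit name.

    'h1' data files come from LIGO Hanford, 'l1' from LIGO Livingston.
    Per the post, short workunits have XXXX.X <= 0400.0.
    """
    m = WU_NAME.match(name)
    if not m:
        raise ValueError("not an S5R1 workunit name: %s" % name)
    band = float(m.group("band"))
    return m.group("detector"), band, band <= 400.0
```

For example, `classify_workunit("l1_0123.4_S5R1x_0")` would report a short Livingston workunit under these assumptions.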
We are switching to a new uniform system for awarding credits. All users on all platforms will claim the same credit for each workunit, with an amount of credit proportional to the length of the workunit: 'equal credit for equal work'.
To try to increase the total amount of computing power available to the project, we have changed the target number of results and the minimum quorum from 3 to 2. Additional work will be generated only if the first two results from different hosts/users do not agree.
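Sketched in code, the quorum rule above amounts to the following. The function name and the result representation are illustrative only, not actual BOINC server code:

```python
# Minimum quorum per the post above: 2 results from different hosts/users.
MIN_QUORUM = 2

def needs_more_results(results):
    """Decide whether another task must be generated for a workunit.

    More work is needed while fewer than MIN_QUORUM results exist, or
    when the first MIN_QUORUM results do not agree with each other.
    """
    if len(results) < MIN_QUORUM:
        return True
    # If the first MIN_QUORUM results agree, the quorum is satisfied.
    return len(set(results[:MIN_QUORUM])) != 1
```

Under this rule, two matching results validate the workunit immediately, which is where the extra computing power comes from compared with a quorum of 3.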
Please be patient with us if we have to sort out last minute problems or other issues. We have been testing this privately for some time, so we are fairly confident that there are no significant issues that remain. Nevertheless, several of us on 'the cutting edge' are rather short on sleep!
Bruce
Added 14/06/2006: please feel free to post questions here, but keep in mind that it may take some time before we have time to answer everything.
Added 15/06/2006: we have now generated about 1500 workunits. So far everything is looking good. There is a fairly large number of errors in downloading data files, but these appear to be mostly due to some problems we had yesterday in replicating data to our mirror sites. There are also interesting errors on *some* Mac OS X PPC systems. I'm sure we'll sort this out quickly.
Added 16/06/2006: Bernd Machenschalk has tracked down and fixed the error referred to above. It only occurred on Mac OS X PPC G3 systems. We will be distributing a new application later today. We are also now going to finish creating the last of the remaining S4 workunits! On the project status page, you will now see that S4 workunit generation is complete (but the number of unsent results will be quite high for a couple of days).
Added 16/06/2006: We have completed the GENERATION of the remaining S4 workunits. They are in the database and should complete crunching during the next 2 to 3 weeks. We have also begun automatic generation of the S5 workunits, and have updated the server status page to reflect this. I also added an item on this status page to track the progress of the remaining S4 workunits.
Added 20/06/2006: The transition to S5 workunits seems to be proceeding very smoothly. There are currently only about 13,000 remaining (unsent) S4 workunits in the database; the remainder are either in progress or finished. There are now in excess of 250,000 S5 workunits in progress and the S5 failure rate is well under 5%. So from the project perspective the overall picture is extremely positive: the S4 -> S5 transition is essentially complete and was successful. In a few more days when the S5 work in progress is dominant (with respect to S4 work) I will do some more modifications of the server_status page to estimate the S5 analysis completion date.

Information about the new S5 workunits
It might be worth adding:
- The analysis code in the new Apps is the same as in the current Beta Test Apps, so the S5R1 Apps should be exactly as fast. However, the current Beta Test Apps won't work as they are; you will have to remove the app_info.xml file to get the new official Apps and receive work for S5R1.
- To make up for the faster Apps, we have increased the size of the workunits. The "long" ones will be roughly five times as long as the "long" ones from S4, and the "short" ones roughly twice as long as their S4 counterparts.
BM
RE: Yes, thanks for keeping
For now, just wait. We are only distributing very few S5R1 WUs right now for some fine-tuning. When we start distributing them on a large scale, we will probably announce it here (and/or on the front page). Then both ways should work. Your second approach, i.e. setting "no new work" first, is, however, the cleanest way, so it is preferable.
Note that while S4 units are still around, you may also get them after you remove the app_info.xml, and you will download and run them with the old official App (4.40 in your case). We can't completely avoid this, but the way to make it rather unlikely would be to reset the project after your client has run out of S4 work, to remove the data files that still refer to S4.
BM
RE: I hope the credit
I'm not aware of what's going on at SETI, and all the project people, I think, have their hands full with E@H. Can you give me a short summary (without initiating a new discussion here) of what the trouble or differing opinions are over at SETI?
Credit will be granted entirely based on the size of the workunits as determined on the server side, regardless of the time it takes a specific host or App to process them. Measurements have been incorporated into the Apps so that they should also claim this credit, for transparency, but the credit actually granted this way no longer needs to have anything to do with the claimed credit.
BM
RE: Do I need Boinc-client
We've successfully tested 4.19 on two machines. You probably shouldn't go older than that, but anything newer should be OK.
There may be old or modified Clients around with which the claimed credit will differ somewhat from what we intend to grant, but that shouldn't have an impact on the results or the granted credit.
BM
RE: Bernd, for all
The exact crediting is one of the parameters we want to fine-tune with the current first S5R1 WUs.
Hm - a holiday? "Happy Kadaver"? This is Brandenburg, man, we have nothing to celebrate here :-)
[Edit] I could understand it if today were a holiday in Dortmund, but there's quite a lot going on in Berlin too. I'd rather not know how many people called in sick today...
BM
RE: broadly speaking, it
The intention is that the average Einstein@Home participant will get the same credit per hour of "work" as they get on other BOINC projects. However, with the large variety of Platforms, Apps and Clients we currently have, it requires quite some investigation and testing to find the average participant configuration, or the "standard credit rate" you mentioned.
BM
RE: 1. Will non-standard
I don't expect any issues, though I don't know all the Clients that may be around. We are testing with, and referring to, the official Clients. I don't know, and honestly don't want to know, what other people do with the BOINC code to build their own clients. We don't explicitly support any Clients other than the official BOINC ones. If an unofficial Client causes you trouble, I'd say don't use it.
I can't guarantee that there is no unofficial BOINC Client out there that messes with the information we pass from the Apps for claiming credit. There may even be some older official BOINC Clients that don't pass this information correctly. However, credit will be granted based exclusively on server-side information, so nothing from the Client (benchmark, timing, etc.) will have an effect on the granted credit.
The length of the Workunits (i.e. the number of parameter sets scanned for, that we call "Templates") varies, even between Workunits for the same frequency band (i.e. with the same "major number"). The server (here: the validator) knows about this and will take this into account when granting credit. Two Workunits will only get the same credit granted if they are of exactly the same length, i.e. if they have the same number of Templates and thus, roughly speaking, need the same number of operations to be processed.
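As an illustrative sketch of the crediting rule just described: credit is derived purely from the server-side workunit size (the number of "Templates"), so two workunits are granted equal credit exactly when their template counts are equal. The scale factor below is invented for illustration; the real credit-per-template value is a project tuning parameter:

```python
# Hypothetical scale factor; the actual value is set by the project.
CREDIT_PER_TEMPLATE = 0.001

def granted_credit(num_templates):
    """Credit granted by the server, proportional to workunit length."""
    return num_templates * CREDIT_PER_TEMPLATE

def same_credit(wu_a_templates, wu_b_templates):
    """Two workunits get the same credit iff their lengths match exactly."""
    return wu_a_templates == wu_b_templates
```

Note that nothing host-specific (benchmarks, run time) enters this calculation, which is the point of the "equal credit for equal work" scheme.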
BM
RE: 4. Is the daily quota
I'll let Bruce have the final word on it.
In principle, the longer workunits should presently (i.e. with the current speed of the Apps) keep machines busy even with 32 short WUs / day / CPU. The downside of a large quota is that it allows a host to trash a large number of Tasks at once if something goes wrong on that machine.
BM
RE: Should yet further
Precisely. That's what's intended. And working together with Akos I actually intend to further speed up the Apps a bit during the run.
BM
RE: RE: We're only
I'll put up the S5 work status box as soon as we are done making S4 workunits in a couple of more days. And yes, I'll add something to the page to track the number of S4 workunits which are still in progress (no canonical result found so far). Look for it over the weekend.