Hi!
The S5R3 Apps have become much faster and more reliable, and in recent months we even gained some more computing power from more and newer machines. The Workunit Generator, from which the overall run progress on the server status page is derived, doesn't know about that; its predictions still average over the whole run and are therefore slightly off. According to internal statistics, the last Workunits of S5R3 will be generated in the first days of August.
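To illustrate the point about the progress prediction (this is a sketch with invented numbers, not the actual Workunit Generator code): an estimate based on the average rate over the whole run still counts the slow early months, so it overestimates the time remaining once the Apps and the host pool have sped up.

```python
# Hypothetical sketch: ETA from the whole-run average rate vs. the recent rate.
# All numbers below are invented for illustration.

def eta_days(done, total, elapsed_days):
    """Estimate days remaining from the average rate over the whole run."""
    rate = done / elapsed_days           # average WUs per day since the run started
    return (total - done) / rate

total = 10_000_000                       # invented total number of workunits
done = 9_000_000                         # invented progress so far

# Whole-run average (includes the slow early months): longer estimate.
print(eta_days(done, total, elapsed_days=300))

# Recent rate only (say the last 30 days produced 1.5M WUs): shorter estimate.
recent_rate = 1_500_000 / 30
print((total - done) / recent_rate)
```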
Therefore the Einstein@home development team has begun to prepare the next run, "S5R4", which will include the remaining data from S5 that wasn't available when we started the previous S5 searches (that much we know so far - the data that will actually be used will be chosen from the whole of S5 in the next few weeks).
In particular this means that the development of the Einstein@home Applications will now focus on what is needed for S5R4; there will probably be no more new Apps for S5R3.
Some details for the techs: The "Fstat" engine will stay the same, with all optimizations achieved so far. In the "Hough" code we'll probably switch on the "weights", which might require some adaptation of the optimizations done to the "weightless" code. Some more columns with additional (statistical) information about the "candidates" will be added to the result file; this will probably slightly increase the size of the files sent back to the server, and of the checkpoint files, too. We're also discussing measures to even out the runtime differences visible in the current workunits by splitting the sky among them differently from the way we do it now.
BM

Preparing S5R4 - No new Apps for S5R3
We plan a smooth transition as before, i.e. the first new WUs should be delivered before we run out of old ones.
Good point. I'll take a look at the results & statistics of this App once I find the time.
The bad thing is that with the current, old server-side code even publishing "APIv6" Apps (and some other maintenance) has become really tricky, and a complete upgrade - which has been on my todo list for quite a while - will apparently be delayed even further by the work on S5R4.
BM
RE: I gather the sky grid
There have been two proposals in our group: mine was to distribute the sky-grid points such that every workunit covers the whole sky, just with a much coarser grid, and to let the remaining workunits cover the points in between, so that together they yield a grid with the same coverage. Given 4 WUs, the distribution of the gridpoints (based on the numbering used in R3) would be
WU#1: 1, 5, 9 ...
WU#2: 2, 6, 10 ...
WU#3: 3, 7, 11 ...
WU#4: 4, 8, 12 ...
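The scheme above is a plain round-robin over the gridpoint numbering. A minimal sketch (not the actual Workunit Generator code; the function name is invented): with n_wu workunits, workunit k takes points k, k+n_wu, k+2*n_wu, ..., so each workunit is a coarse all-sky grid.

```python
# Hypothetical sketch of the round-robin sky-grid distribution described above.

def interleave(gridpoints, n_wu):
    """Split gridpoints into n_wu workunits, each a coarse all-sky subset."""
    # Python's extended slice [k::n_wu] takes every n_wu-th point starting at k.
    return [gridpoints[k::n_wu] for k in range(n_wu)]

points = list(range(1, 13))              # gridpoint numbering as in R3
for k, wu in enumerate(interleave(points, 4), start=1):
    print(f"WU#{k}: {wu}")
# WU#1: [1, 5, 9]
# WU#2: [2, 6, 10]
# WU#3: [3, 7, 11]
# WU#4: [4, 8, 12]
```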
Bruce proposed to slice the grid in right ascension instead of declination; there a runtime variation is noticeable, too, but it stays below the usual error of 4-5%. We're currently discussing both ideas (and hopefully more) with the people more involved in post-processing the data - in the end they will have to live with the results.
BM
RE: Well the significant
Yes, there will still be a variation in calculation time between individual sky locations. But the idea is to distribute the sky positions over the workunits such that the sum of the variations is more or less constant across the workunits. Both proposals would achieve this with regular schemes, i.e. without much "intelligence" needed in the Workunit Generator, just by re-ordering the points in the skygrid files.
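A small numerical illustration of why re-ordering helps (the cost profile is invented - per-point runtime is assumed to vary smoothly with sky position): a contiguous split concentrates the expensive points in a few workunits, while an interleaved split spreads them nearly evenly.

```python
# Illustrative only: invented smooth per-point cost, two ways of splitting it.
import math

n_points, n_wu = 1000, 4
# invented cost profile over the sky (e.g. declination-dependent runtime)
cost = [1.0 + math.sin(math.pi * i / n_points) for i in range(n_points)]

# contiguous blocks: each WU gets one quarter of the sky in order
contiguous = [sum(cost[k * n_points // n_wu:(k + 1) * n_points // n_wu])
              for k in range(n_wu)]
# interleaved (round-robin): each WU gets every 4th point
interleaved = [sum(cost[k::n_wu]) for k in range(n_wu)]

print([round(c) for c in contiguous])    # noticeably unequal totals
print([round(c) for c in interleaved])   # nearly identical totals
```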
BM
RE: The all-sky-approach
Yes, that was my intention.
But the problem is that you actually change the grid by making it much coarser, which in combination with limiting the number of candidates sent back (the 'toplist') changes the statistics of the results. It's definitely the post-processing and the final analysis of the results that will drive the decision here.
BM
RE: RE: And one more
This will even get somewhat worse in S5R4: the size of the skygrid files will stay roughly the same, while the instrument data volume will definitely increase - not by much, I hope, but it will.
BM