We're not continuously feeding new data from the detectors into the project. Instead, when an analysis run on Einstein@Home nears its end, we set up a new run with the most sensitive data available at that time, design a search pattern for it, pre-process the data accordingly, write a new workunit generator, and adapt other parts of the system (Apps, validator, scheduler, etc.) as needed. Then we migrate the project to the new run.
The current "S5R1" analysis run analyzes the S5 data taken up to May 2006. When this run ends, we will set up a new one (probably named S5R2) that includes the data captured up to that point.
The Apps currently in Beta Test will be made public in the next few days; I haven't seen any signs of trouble in the results from the beta test.
Akos didn't find any clues for further speedup in the current Beta Apps. I will continue to experiment with compilers and flags, and I am also working on another idea (which involves assembler coding for CPUs that support at least SSE2), but right now that code is even slower than the code in the Beta Apps. I doubt that we can get more than 10% further speedup out of all that.
We are working on a different search algorithm we call "hierarchical search" that will give us a lot more resolution from the same computing power. We hope to have it ready for the next analysis run, because analyzing, say, three times as much data with the current method would take ages with our current computing power.
BM
