Ok, let me try to give you a short summary of the "evolution" of our search strategies used in successive runs. For a slightly more general overview of where we currently stand, there is a poster on E@H [presented at a recent conference on pulsar astronomy], which you might find interesting:
G070593-03.pdf
The key step starting with S5R2 was to move part of the "post-processing" from our server to the E@H hosts: previous searches performed one (or two) "F-statistic" searches on the host before sending back the results. These searches were performed over a number (between 17 and 60, depending on the run) of different time stretches ("stacks"), which we combined using a "coincidence scheme" in the post-processing stage on the server. The amount of data (i.e. the number of candidates) that each host can send back to the server is limited, and it turned out that this was the main factor holding back our achievable sensitivity.
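Just to illustrate the idea of such a coincidence scheme (this is a toy sketch with invented names and parameters, not the actual E@H post-processing code): candidates from each stack are mapped onto frequency bins, and only bins where enough stacks agree survive.

```python
# Toy sketch of a multi-stack coincidence scheme. The function name,
# bin_width and min_stacks are illustrative choices, not E@H internals.
from collections import Counter

def coincidence_candidates(per_stack_candidates, bin_width, min_stacks):
    """Count, per frequency bin, how many stacks returned a candidate there.

    per_stack_candidates: one list of candidate frequencies (Hz) per stack.
    bin_width: coincidence window in Hz.
    min_stacks: number of coinciding stacks required to keep a bin.
    """
    counts = Counter()
    for stack in per_stack_candidates:
        # Each stack contributes to a given bin at most once.
        counts.update({round(f / bin_width) for f in stack})
    return {b * bin_width: n for b, n in counts.items() if n >= min_stacks}

# Three stacks; only the candidate near 100.5 Hz appears in all of them:
stacks = [[100.50, 210.0], [100.51, 333.3], [100.49]]
surviving = coincidence_candidates(stacks, bin_width=0.1, min_stacks=3)
```

The isolated candidates at 210.0 Hz and 333.3 Hz are dropped because they appear in only one stack each.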
The new "Hierarchical" search scheme, used since S5R2, performs F-statistic searches over 84 different stacks, then combines the results with a sophisticated coincidence scheme (a "Hough transform") on the host, and only *then* sends the results back to the server. This avoids the data-return bottleneck of previous runs and substantially increases the expected sensitivity (by about a factor of 6!).
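In very simplified terms, the host-side combination amounts to a "number count": for each signal template, count in how many of the 84 stacks the F-statistic crosses a threshold, and only report templates with a high count. (This sketch leaves out the actual Hough mapping over sky position and spindown; the function names and threshold values are made up for illustration.)

```python
# Simplified "number count" idea behind the host-side Hough stage.
# fstat values, threshold and min_count below are invented examples.

def number_count(fstat_per_stack, threshold):
    """How many stacks have a 2F value above the threshold for one template."""
    return sum(1 for two_f in fstat_per_stack if two_f > threshold)

def select_candidates(templates, threshold, min_count):
    """templates: dict mapping a template id to its per-stack 2F values.
    Keep only templates whose number count reaches min_count."""
    return {tid: number_count(vals, threshold)
            for tid, vals in templates.items()
            if number_count(vals, threshold) >= min_count}

templates = {"tmpl_a": [6.0, 7.2, 2.1], "tmpl_b": [1.0, 1.5, 2.0]}
kept = select_candidates(templates, threshold=5.0, min_count=2)
```

Only the candidates surviving this per-template counting are sent back, which is why far less data crosses the network than in the older scheme.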
The first Hierarchical search [S5R2] suffered from certain limitations (too technical to go into here ...) in the workunit design, due to the new code and search scheme. These limitations were overcome in S5R3 by splitting the sky into several patches and having each workunit search only one patch at a time, instead of the whole sky at once.
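The sky-splitting itself is conceptually simple; a toy sketch (the grid sizes and the workunit structure here are made up, not the real E@H workunit generator):

```python
# Toy sketch: divide the sky into rectangular patches so that each
# workunit covers exactly one patch. Grid sizes are illustrative only.
import itertools

def sky_patches(n_ra, n_dec):
    """Yield n_ra x n_dec patches in right ascension (0..360 deg)
    and declination (-90..90 deg)."""
    ra_edges = [360.0 * i / n_ra for i in range(n_ra + 1)]
    dec_edges = [-90.0 + 180.0 * j / n_dec for j in range(n_dec + 1)]
    for i, j in itertools.product(range(n_ra), range(n_dec)):
        yield {"ra": (ra_edges[i], ra_edges[i + 1]),
               "dec": (dec_edges[j], dec_edges[j + 1])}

# One workunit per patch, instead of one workunit over the whole sky:
workunits = [{"id": k, "patch": p} for k, p in enumerate(sky_patches(4, 2))]
```

With a 4x2 grid this yields 8 workunits, each responsible for one patch of sky.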
The resulting current search is a substantial leap forward for E@H, and promises unprecedented sensitivity to gravitational waves from spinning neutron stars. However, we are already working on future improvements to this scheme, which should allow us to further increase our reach in distance to spinning neutron stars (namely by increasing the range of frequency spin-downs searched over).
Hope this helps clarify a bit of what is going on "behind the scenes".
Best,
Reinhard.

S5R3 search strategy?
I don't have time to look it up, but off the top of my head:
- S5R1 used roughly the same amount of data as we used in the S4 runs, picked from the S5 data available at the time, which covered about half a year. S5RI used the same data set, searching for sources with different "spindowns" than those we looked for in S5R1.
- For S5R2 we used more data (and thus a new data distribution scheme), from the first 13 or 14 months of S5 (S5 ultimately lasted 22 months). The parameter ranges we searched in S5R2 were limited by a number of (mostly technical) things (Reinhard mentioned this); in S5R3 we are searching over much larger ranges (of frequency and spindown) in the same data we used for S5R2.
- According to current plans, S5R4 will cover the whole S5 data set, which wasn't available until this month (and pre-processing will still take a while anyway).
Sorry, I don't understand that part.
It might help to keep in mind that it isn't (only) the sensitivity that varies between the searches, but also the properties of the GW sources we are looking for.
BM