I had asked this question already when S5R3 was at the finish line, but here it is again: why not finish S5R4 ASAP by crunching it in-house? There are only 27 workunits without a final result - about a week of work for a single computer. This would allow removing superfluous daemons like the S5R4 assimilator, the S5R4 validator, and maybe even the S5R4 file deleter (not sure - maybe that one is shared by all S5 runs). If it was a useful search, then it will be time to analyze the data; if not, throw it away ASAP. Are there any thoughts about this?
If the scientists here were eagerly awaiting the S5R4 results, we could help finish this run faster by raising the "initial replication" of the remaining workunits (i.e. sending out more tasks for them, so that two of these will hit fast computers). But actually they are still working on previous runs (finishing the S5R1 publication, analyzing the S5R3 results). If by the time they are done with that the (higher-sensitivity) S5R5 results for the same parameter space have been finished, they probably won't look at the corresponding S5R4 ones at all.
Like OS daemons, the S5R4 ones just sleep until there is something to do. They don't harm the system at all.
For the time being we're just keeping the S5R4 workunits in the system for participants to get credit, and to spare ourselves unnecessary extra work.
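For the curious, the "initial replication" bump mentioned above could be sketched as a direct database update on the BOINC server. This is only a rough illustration: `target_nresults`, `assimilate_state`, and `transition_time` are real columns in the standard BOINC `workunit` table, but the app name and the exact filtering here are hypothetical, not what the project actually runs:

```sql
-- Hypothetical sketch: raise replication for the remaining S5R4
-- workunits so extra task copies are generated and sent to hosts.
-- assimilate_state = 0 means the workunit has not been assimilated yet.
UPDATE workunit
SET target_nresults = target_nresults + 2,
    transition_time  = UNIX_TIMESTAMP()   -- wake the transitioner now
WHERE appid = (SELECT id FROM app WHERE name = 'einstein_S5R4')  -- assumed app name
  AND assimilate_state = 0;
```

The BOINC transitioner then notices that fewer results exist than `target_nresults` and creates the additional tasks; nothing else on the server needs to change.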
If by the time they are done with that the (higher-sensitivity) S5R5 results for the same parameter space have been finished, they probably won't look at the corresponding S5R4 ones at all.
So it's possible all that work and crunch time could have been for nothing?
At the time we started S5R4 it was the best search we could do. But then, learning from the analysis of the results we had so far, we found a way to improve the sensitivity without requiring more computing power, so S5R5 was started and S5R4 was cut short in its favor. I would call S5R4 wasted only if we had continued it to the end instead of superseding it with S5R5.
Dakota tribal wisdom says that when you discover you are riding a dead horse, the best strategy is to dismount.
BM