4. Is the daily quota staying at 32 or changing to something different?
I'll let Bruce have the final word on it.
In principle the longer Workunits should presently (i.e. with the current speed of the Apps) keep the machines busy even with 32 short WUs / day / CPU. The downside of a large quota is that we allow a host to trash a large number of Tasks at once if something goes wrong on that machine.
BM
I didn't plan to change the daily quota, but if you have machines running out of work, please shout and I will bump it up.
Since you guys will use some engines from Akos, the question is, did he have any time yet to work on the engines for Linux or Mac too?
Bernd will have the final word on this, but yes, the Linux and OS X apps have been significantly sped up. I think on my Linux Opteron boxes I should be getting RAC values almost twice as large as before (much closer to Win32 numbers). Mac OS X PPC should be faster, and I think Mac OS X Intel will be *much* faster than before.
The "Akos engine", i.e. his ideas and some code from him (and a bit from me, too), has gone into the code from which the Apps for all x86-based platforms are built, so the Windows, Linux and Intel Mac Apps should now all be roughly equally fast (i.e. do the same on the same hardware).
The PPC Mac App has seen some improvement, too, due to ideas from Akos and me, but not as much as the x86-based ones.
Anyway, the work on the Apps will continue during the new run. I believe there is still some room for improvement.
Got my first S5 unit today. Should I replace akosf's (thanks dude!) albert_4.37_windows_intelx86.exe with the original before the new S5 unit starts processing?
I think you better reset the project - the error messages from your S5R1 results look like the files have been corrupted and your client apparently didn't try to get them again.
One thing I'm curious about is checkpoint frequency. With the general increase in completion times, will results in progress be checkpointed often enough to avoid large losses when the application (and BOINC client) is stopped and restarted?
The checkpointing frequency is determined by the "write to disk at most every" setting in your general preferences. There is a limit in the App, of course, but I doubt that this is above the 60s default even on slow machines. The time between checkpoints is the maximum time that gets lost when the App is interrupted, plus the few seconds it takes to read and process the checkpointed state when resuming.
I think this may work if it is a fixed NN credits for long units and a fixed nn credits for short units. I can see problems if there is any significant variation in processing times; the variation would have to be less than +/- 5% to stop all complaints.
If the amount of work can be easily and accurately predicted in advance, then fixing the amount of credit per WU is a way to go. This works perfectly for CPDN and may work fine for EAH S5 ...
On Einstein@Home the "Templates" (see above) are all identical in the processing time they take on a particular machine, so the total amount of "work" needed for a specific WU can be easily predicted (there is a bit of unpredictable overhead when e.g. restarting the App and processing the checkpoint, but that should be below 0.1% of the total time unless you interrupt the App more than once a minute). The run times of the WUs vary on the order of +/-5% on average, but this is taken into account when granting the credit (again, see my earlier posts).
Please try to avoid continuing the SETI discussion here; let's keep this thread dedicated to the (preliminary) S5 work of Einstein@Home.
As has been said, his generic optimizations, and some extra optimizations, have been put into the new application. Specific optimizations like SSE, SSE2, SSE3, etc. cannot be pushed out, because the BOINC system currently does not report that information back to the projects, so they cannot ask for that deep an optimization.
The current (x86) Apps do incorporate specific optimizations for SSE: they detect the CPU they're running on and choose to execute code for this specific CPU. However, this distinction is currently only made between SSE and non-SSE CPUs. You will find a line in the stderr output of your results that reflects this: "Detected CPU type" - 1 means SSE, 0 means generic.
We (Akos and I) will continue to work on speeding up the Apps. This will probably include code for more CPU variants (3DNow!, SSE3). Some things that are advantageous for one or the other CPU type, e.g. instruction order and code alignment, may be hard to incorporate in an App that switches CPU types without the switching eating up all the speed gained, but I think there are quite a few things left to try.
IMHO, reporting a more detailed platform from the client to the project would make it possible to distribute faster Apps. Some discussion about that has started on the BOINC developers mailing list, but it's quite a way until that would be implemented and could be beneficial to the projects.
Akos isn't working exclusively for Einstein@Home, and I have also been busy preparing, implementing and testing the S5R1 setup, and will probably continue to be for the next week or two to iron out the last issues that arise. So the "science code" that determines the speed of the Apps is currently frozen.
It's true. Bruce has forced the old S4 WU generator to generate all the remaining S4 Tasks and put them into the database, so that it is not needed any longer and the WU generator for S5R1 could be properly set up.
RE: Got my first S5 unit
Not necessary - the S5R1 App will be a new file.
BM
Some hours ago we had some problems with at least one download mirror, which should, however, be solved by now. The problem may have been on our end.
BM