Note that while S4 units are still around, you may also get them after you remove app_info.xml, and you will download and run them with the old official app (4.40 in your case). We can't completely avoid this, but the way to make this rather unlikely is to reset the project after your client has run out of S4 work, which removes the data files that still refer to S4.
BM
So I received today 3 old WUs and 1 new. I noticed a message in my log saying
"Got server request to delete file z1_1416.5"
Does this mean I will get no more S4 WUs? Not sure if that's the data file for S4 data or not. Just wondering if I'll have to monitor my machine or do the reset project thing you propose to prevent crunching S4 data after removing my app_info.xml file.
M
This means that after finishing crunching WU for this file on your computer, BOINC will delete that data file from your computer because there is no more work remaining for it in the project.
One thing I'm curious about is checkpoint frequency. With the general increase in completion times, will results in progress be checkpointed often enough to avoid large losses when the application (and BOINC client) is stopped and restarted?
Oh, and personal opinion only - standardized credit = good; longer completion times = not so good. Completing 8 "work units" in 16 hours just "feels better" than completing 2 work units in 16 hours, credit issues aside. Of course, that's purely subjective and mostly irrelevant.
The main difference between these two scenarios is that on the server side, the 8 x 2 hours scenario requires having 8 results in the database, whereas the 1 x 16 hour scenario requires only a single result in the database. Since the database is our main project bottleneck, and we would like to be able to scale up to more users, the 1 x 16 hour is vastly better for the project.
Any guesstimates on how many WUs will come from each dataset download?
There are 5802 data files, total. With 16.45 million workunits, this implies about 2834 workunits per data file.
But this number is a bit misleading, because there are far fewer workunits per file at low frequencies and more workunits per file at high frequencies. Note that the frequency is the XXXX.X part of the file name.
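As a quick sanity check, the average works out like this (using the rounded totals quoted above; the exact per-file counts vary with frequency, as noted):

```python
# Rough average of workunits per data file, from the rounded figures
# quoted above. The post's figure of 2834 comes from the exact totals;
# with rounded totals the ratio lands a hair higher.
total_workunits = 16_450_000
data_files = 5_802

avg = total_workunits / data_files
print(f"about {avg:,.0f} workunits per data file on average")
```

Low-frequency files sit well below this average and high-frequency files well above it, so the mean is only a ballpark figure.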
Einstein@Home is currently analyzing only the data from S5 that has been captured so far (i.e. 'til May). The S5 science run is still going on, and the current Einstein@Home analysis run S5R1 will probably be followed by another one covering the whole data of S5.
Further into the future the crystal ball becomes cloudy, but I think that as long as the project lasts, it will be dedicated to analyzing data from ground-based gravitational wave detectors, and maybe LISA in the far future.
So I removed the file and rebooted to get it to download new work. This gave me Einstein S5 WUs and a single Albert 4.40 WU (don't know where it came from).
Until the S4 work that is 'in the pipeline' is finished, there will still be some S4 work in progress. If you look at the server status page you can keep track of how many S4 workunits are still 'in progress'. Roughly speaking this number will decrease by 60 or 70 percent each week.
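To illustrate how quickly that tail shrinks (the starting count here is hypothetical; the real figure is on the server status page):

```python
# Sketch: if S4 work in progress drops by ~65% each week,
# a hypothetical starting count of 100,000 results decays like this.
remaining = 100_000  # hypothetical starting count, not a real figure
for week in range(1, 6):
    remaining *= 0.35  # ~65% of the remaining work completes each week
    print(f"week {week}: ~{remaining:,.0f} S4 results still in progress")
```

After a month or so the in-progress count is down to well under one percent of where it started.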
But one question to the team:
In the report it is said that after the detector has run for one year at its design sensitivity, it will be upgraded/replaced by a detector ten times more sensitive. Is it planned that the data from that detector will also be analyzed with Einstein@Home, or are there other plans, or maybe no plans at all at this point in time?
It's fair to say that there are no detailed plans at this time. But Einstein@Home is the largest computing resource in the LIGO Scientific Collaboration. So I expect that we'll be making use of it when we have advanced LIGO data to analyze. But that won't be until fairly well into the next decade.
One of my computers reached a daily quota of 7 WUs per day. Others still have 32 WUs - why did you change the quota? The system had a couple of failures with the transition to S5. I am now waiting to download new WUs tomorrow.
The daily quota is reduced by one with every client error, and doubled (up to 32 max) with every successful result reported. If you have completed a result successfully, do a manual "update" to report it and raise your daily quota again.
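The rule just described can be sketched as follows (the function name and the floor of 1 are my assumptions for illustration, not the actual BOINC server code):

```python
MAX_QUOTA = 32  # per-host daily cap mentioned above

def update_quota(quota, success):
    # One successfully reported result doubles the quota, capped at 32;
    # one client error subtracts one (assumed floor of 1 so the host
    # can always report at least one result to recover).
    if success:
        return min(quota * 2, MAX_QUOTA)
    return max(quota - 1, 1)

q = 7
q = update_quota(q, True)   # 14
q = update_quota(q, True)   # 28
q = update_quota(q, True)   # 32 (capped)
```

This is why a single manual "update" after a good result recovers the quota so quickly: doubling climbs back to the cap in a handful of reports, while errors only chip away one at a time.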
l1: detector name. L1 is Livingston, LA; H1 and H2 are at Hanford, WA.
1391.0: the frequency that the data is being examined at
S5R1: science run 5, revision 1.
738: Subcomponent number for the data at the observatory/frequency. I assume this number is time dependent, adjacent groups of 8 units share the same data file.
S5R1A: Repeat of the science run number; not sure what the repeat is for, or what the A means.
0: I'm the first person assigned the WU.
S5R1A means: Data from Science run #5, analysis Run #1, issue A.
If we later find that for one reason or another we have to add some WUs (maybe to recalculate results we found to be bad), these will get an issue letter B and so forth. In some sense this string reflects the build of our WU generator.
l1, 1391.0 and S5R1 are the tags that form the name of the datafile "l1_1391.0_S5R1" (which you should find in the project folder on your machine).
The 738 has no obvious relation to the parameters (frequency etc.), it's just the 738th WU generated from WU generator S5R1A referring to this datafile.
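Putting the pieces together, a name like "l1_1391.0_S5R1__738_S5R1A_0" can be split mechanically. This is just an illustrative sketch, not project code; the field labels follow the explanation above:

```python
def parse_wu_name(name):
    # The double underscore separates the data-file part from the
    # generator-specific part of the workunit name.
    datafile, rest = name.split("__")
    detector, freq, run = datafile.split("_")
    seq, issue, copy = rest.split("_")
    return {
        "datafile": datafile,      # the file in your project folder
        "detector": detector,      # l1 = LIGO Livingston
        "frequency": float(freq),  # frequency band being examined
        "run": run,                # S5R1 = science run 5, analysis run 1
        "sequence": int(seq),      # 738th WU from this generator/file
        "generator": issue,        # S5R1A = analysis run 1, issue A
        "copy": int(copy),         # 0 = first result issued for this WU
    }

info = parse_wu_name("l1_1391.0_S5R1__738_S5R1A_0")
print(info["datafile"])   # l1_1391.0_S5R1
print(info["sequence"])   # 738
```

So everything left of the double underscore names the data file on disk, and everything right of it identifies which workunit (and which copy of it) you were assigned.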
RE: Thanks, but I'm down
No need to manually install this -- it is now the standard app.
Bruce