You can check your local cache and look at the task names. The first part of each task name is the name of the beam announced here. You can selectively abort those tasks (maybe BOINCTasks has a feature for this?), or, if you prefer, you can clear your complete cache. Since the feature to cancel tasks via our server has a big impact on the performance of the whole project, we can't cancel them that way.
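For example, with boinccmd (you may need a --passwd parameter), the tasks in a local cache that belong to a single beam can be listed with something like this; the beam name is just an example taken from later in this thread:
boinccmd --get_tasks | grep "p2030.20141227.G193.68-00.34.N.b5s0g0.00000"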
-----
If you have several machines, some of them controlled remotely, it's not easy to constantly babysit them and check all their tasks. Is there no chance of improving the server-side cancelling?
-----
It is a little hard to find the bad tasks when you have several hundred spread over several hosts.
This is a bit Linux-oriented, but the same basic idea works on all platforms - you'll just need to get grep from somewhere.
I keep a badtasks.txt file in which I put the names of tasks (or beams) I know are troublesome or want to abort in bulk.
Then I run (you might need a --passwd parameter):
boinccmd --host <name> --get_tasks | grep -f badtasks.txt | grep -v "WU name"
boinccmd --get_tasks lists all the tasks, and the first grep keeps the ones matching an entry in badtasks.txt. The final grep -v drops the "WU name" lines, which are not helpful here. The output is a list of matching task names.
A simple script can repeat this for each host if you have many.
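A minimal sketch of such a per-host loop (host names are placeholders; add --passwd if your clients require it):
for h in host1 host2 host3; do
    # placeholder host names - replace with your own machines
    echo "== $h =="
    boinccmd --host "$h" --get_tasks | grep -f badtasks.txt | grep -v "WU name"
done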
Then I cut and paste any matching task names into this line:
boinccmd --host <name> --task http://einstein.phys.uwm.edu/ <task name> abort
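If there are a lot of matches, the cut-and-paste step can be scripted as well. A rough, untested sketch, assuming the usual "name: ..." layout of the --get_tasks output:
boinccmd --host <name> --get_tasks | grep -f badtasks.txt | grep -v "WU name" \
  | sed 's/^ *name: //' \
  | while read -r task; do
      # abort each matching task on that host
      boinccmd --host <name> --task http://einstein.phys.uwm.edu/ "$task" abort
    done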
hth.
-----
There is a BOINC function that can cancel WUs already sent to the client, but I think it's been said that the server doesn't respond very well when running that option.
The above commands look like a decent alternative though. Manually looking through over 1k tasks is a pain.
-----
I just canceled the following beams:
p2030.20141129.G194.05+00.35.N.b1s0g0.00000
p2030.20141130.G192.79-01.95.S.b2s0g0.00000
p2030.20141130.G192.79-01.95.S.b3s0g0.00000
p2030.20141201.G193.19-01.71.C.b5s0g0.00000
p2030.20141226.G193.43-00.80.S.b1s0g0.00000
p2030.20141227.G193.68-00.34.N.b5s0g0.00000
I'll look into the performance problems again once the next gravitational-wave search is underway and no longer needs my full attention.
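If any of these are still in a local cache, one option is to reuse AgentB's recipe from above: append the six beam names to badtasks.txt and re-run the matching pipeline (untested sketch; adjust --host and --passwd as needed):
cat >> badtasks.txt <<'EOF'
p2030.20141129.G194.05+00.35.N.b1s0g0.00000
p2030.20141130.G192.79-01.95.S.b2s0g0.00000
p2030.20141130.G192.79-01.95.S.b3s0g0.00000
p2030.20141201.G193.19-01.71.C.b5s0g0.00000
p2030.20141226.G193.43-00.80.S.b1s0g0.00000
p2030.20141227.G193.68-00.34.N.b5s0g0.00000
EOF
boinccmd --get_tasks | grep -f badtasks.txt | grep -v "WU name"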
-----
Thanks for the tips AgentB, they work great.
Thanks for keeping us updated CB.
Cheers!
-----
I just canceled the following beams (a majority of their tasks had errors):
p2030.20150129.G191.26+00.43.C.b0s0g0.00000
p2030.20151219.G177.56-00.11.C.b6s0g0.00000
p2030.20141226.G193.43-00.80.S.b0s0g0.00000
p2030.20141226.G193.43-00.80.S.b2s0g0.00000
p2030.20141226.G193.43-00.80.S.b3s0g0.00000
p2030.20141226.G193.43-00.80.S.b5s0g0.00000
p2030.20141227.G193.30-01.03.N.b3s0g0.00000
p2030.20141227.G193.68-00.34.N.b1s0g0.00000
p2030.20141227.G193.68-00.34.N.b3s0g0.00000
I also canceled parts of the following beams (only workunits with errors):
p2030.20141119.G191.70-01.11.C.b3s0g0.00000
p2030.20141122.G192.59+00.50.S.b2s0g0.00000
p2030.20141124.G192.11-01.31.S.b5s0g0.00000
p2030.20141125.G193.51+00.77.S.b0s0g0.00000
p2030.20141125.G193.51+00.77.S.b2s0g0.00000
p2030.20141125.G193.51+00.77.S.b5s0g0.00000
p2030.20141128.G193.04-01.05.N.b6s0g0.00000
p2030.20141129.G194.05+00.35.N.b3s0g0.00000
p2030.20141129.G194.05+00.35.N.b4s0g0.00000
p2030.20141129.G194.05+00.35.N.b5s0g0.00000
p2030.20141130.G192.79-01.95.S.b4s0g0.00000
p2030.20141227.G193.68-00.34.N.b2s0g0.00000
p2030.20141227.G193.68-00.34.N.b4s0g0.00000
p2030.20141227.G193.68-00.34.N.b6s0g0.00000
-----
...perhaps a few more WUs than expected?
259790180 259790179 259790178 259790171 259790164 259790163 259790162
All of them finished today; they were sent out on 28-10-2016.
BR
DMmdL
Greetings from the North
-----
Currently the beams show a different behavior than usual. In the past one could see that a beam was bad because a large portion of its tasks got validation errors. Right now only a handful of tasks per beam show validation problems. So I'm switching to a different cancellation strategy and will only cancel the tasks that failed validation, not the complete beam, as the rest seems to be fine.
Here is the list of affected beams (total of 308 jobs from 20 beams):
p2030.20141119.G191.70-01.11.C.b0s0g0.00000
p2030.20141119.G191.70-01.11.C.b2s0g0.00000
p2030.20141125.G193.51+00.77.S.b5s0g0.00000
p2030.20141129.G194.05+00.35.N.b3s0g0.00000
p2030.20141129.G194.05+00.35.N.b5s0g0.00000
p2030.20141130.G192.79-01.95.S.b4s0g0.00000
p2030.20141227.G193.68-00.34.N.b2s0g0.00000
p2030.20141227.G193.68-00.34.N.b4s0g0.00000
p2030.20141227.G193.68-00.34.N.b6s0g0.00000
p2030.20150129.G191.26+00.43.C.b4s0g0.00000
p2030.20150129.G191.26+00.43.C.b5s0g0.00000
p2030.20150210.G193.98-01.67.N.b0s0g0.00000
p2030.20150210.G193.98-01.67.N.b1s0g0.00000
p2030.20150210.G193.98-01.67.N.b2s0g0.00000
p2030.20150210.G193.98-01.67.N.b3s0g0.00000
p2030.20150210.G193.98-01.67.N.b4s0g0.00000
p2030.20150210.G194.36-00.98.N.b2s0g0.00000
p2030.20150210.G194.36-00.98.N.b4s0g0.00000
p2030.20150210.G194.36-00.98.N.b5s0g0.00000
p2030.20150210.G194.36-00.98.N.b6s0g0.00000
-----
Thanks Christian, I had 7 tasks matching these jobs - and promptly beamed them up!