number of tasks downloaded

Gary
Joined: 29 Aug 06
Posts: 3
Credit: 70544
RAC: 0
Topic 196866

Is there any way to configure BOINC or Einstein so that when I download from Einstein I don't get 500 tasks?

What I want to happen is this:

If my main crunch project runs out of work or its server goes down, then once I have no tasks left from it to work on, I want my system to contact Einstein and download 1 WU per processor + 1 WU per GPU.

Once those are finished, I want BOINC to check with my main project and then, if it still has no work, go back to Einstein and ask for more.

If I can't find a way to do this unattended, I will mass-abort Einstein tasks every time it tries to dump more than 1 or 2 WUs per processor.

Henk Haneveld
Joined: 5 Feb 07
Posts: 18
Credit: 14258679
RAC: 1307

Set resource share to 0 (zero) in the Einstein preferences.

Einstein will then only get a limited amount of work, and only when 1 or more processors are idle.
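
If you want to confirm that the client has actually picked up the zero share after your next contact with the project, the boinccmd tool that ships with the client can show it. A quick check (assuming a default install with boinccmd on your path):

boinccmd --get_project_status

Look for the "resource share" line under the Einstein@Home entry - it should show 0 once the client has re-read the web preferences.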

Gary
Joined: 29 Aug 06
Posts: 3
Credit: 70544
RAC: 0

I did that, and it dumped a ton of WUs on me a couple of hours ago.

Henk Haneveld
Joined: 5 Feb 07
Posts: 18
Credit: 14258679
RAC: 1307

In that case I suggest you send a bug report to the BOINC developers, because that should not happen.

Patrick
Joined: 2 Aug 12
Posts: 70
Credit: 2358155
RAC: 0

I can see in your scheduler log that your minimum work buffer is set to this:

available disk 9.31 GB, work_buf_min 864

That is 864 seconds / 86,400 seconds per day = 0.01 days, or 14.4 minutes, so that cannot be your problem.
How is your "Max. additional buffer" set?
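
By the way, both buffers can also be pinned locally instead of through the website: a global_prefs_override.xml file in the BOINC data directory overrides the web preferences once the client re-reads it (or after a client restart). A minimal sketch - the values here are only illustrative, not a recommendation:

<global_preferences>
    <work_buf_min_days>0.01</work_buf_min_days>
    <work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>

work_buf_min_days is the minimum work buffer from your log above, and work_buf_additional_days is the "Max. additional buffer" I'm asking about.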

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

My cache is 0.25 days, and I never get more than two or three units from Einstein.
Tullio

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2946187448
RAC: 689482

OK, I think I can see what's happened. Here at Einstein, we can see what the servers did to you last time you contacted them.

http://einstein.phys.uwm.edu/host_sched_logs/6798/6798090

The log that's displayed there will change over time, but what I can see at the moment includes:

2013-03-20 06:59:32.5719 [PID=11298]    [send] CPU: req 0.00 sec, 0.00 instances; est delay 0.00
2013-03-20 06:59:32.5719 [PID=11298]    [send] CUDA: req 0.00 sec, 0.00 instances; est delay 0.00
2013-03-20 06:59:32.5719 [PID=11298]    [send] work_req_seconds: 0.00 secs


So, you didn't need any new work - fair enough, you were reporting the surplus work you'd aborted.

2013-03-20 06:59:32.6209 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0457.20_S6GC1__S6BucketLVEa_457.313682292Hz_1381_0
2013-03-20 06:59:32.6209 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0457.25_S6GC1__S6BucketLVEa_457.363682292Hz_1509_0
2013-03-20 06:59:32.6209 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0457.25_S6GC1__S6BucketLVEa_457.363682292Hz_1508_0
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0459.90_S6GC1__S6BucketLVEa_460.013682292Hz_1674_1
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0459.90_S6GC1__S6BucketLVEa_460.013682292Hz_1673_0
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0459.90_S6GC1__S6BucketLVEa_460.013682292Hz_1672_0
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0460.00_S6GC1__S6BucketLVEa_460.113682292Hz_1734_1
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0460.05_S6GC1__S6BucketLVEa_460.163682292Hz_1786_0
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0460.05_S6GC1__S6BucketLVEa_460.163682292Hz_1785_0
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0457.15_S6GC1__S6BucketLVEa_457.263682292Hz_1255_0
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0457.10_S6GC1__S6BucketLVEa_457.213682292Hz_1150_1
2013-03-20 06:59:32.6210 [PID=11298] [debug]   [HOST#6798090] MSG(high) Resent lost task h1_0460.00_S6GC1__S6BucketLVEa_460.113682292Hz_1733_0


But the server allocated 12 tasks anyway.

That's a bug, but for the moment, I think you're going to have to grin and bear it - feel free to abort the excess tasks.

It should clear itself up automatically. Those "lost tasks" are jobs which got misplaced during a communications glitch of some sort. Once they've all been relocated and re-processed (either aborted or computed), they should stop being sent, and from then on BOINC should only supply new work when you specifically request it.

Newer versions of the BOINC server code already behave that way - even lost tasks are only resent when you ask for new work - so I don't think the BOINC developers will want to get involved. But here at Einstein we're still using older code, with some specialist customisations - this problem may arise from that. I'll drop a note to the admins here - they may be able to do something about it.

Gary
Joined: 29 Aug 06
Posts: 3
Credit: 70544
RAC: 0

Argh!!!!! I got so ticked off that I wasn't fully watching what I was doing while mass-aborting Einstein tasks that I aborted about 100 finished work units.

Oh well. Never mind about reporting anything to the admins, as I fixed my problem by detaching from the project. I'll just process for one project, and if it's down or doesn't have any work, I guess my system will get a break too.

mikey
Joined: 22 Jan 05
Posts: 12644
Credit: 1839035786
RAC: 5045

Quote:

Argh!!!!! I got so ticked off that I wasn't fully watching what I was doing while mass-aborting Einstein tasks that I aborted about 100 finished work units.

Oh well. Never mind about reporting anything to the admins, as I fixed my problem by detaching from the project. I'll just process for one project, and if it's down or doesn't have any work, I guess my system will get a break too.

One thing I now do for projects that have trouble giving me enough work is to set a 2nd project at 0%. This means BOINC won't ask for ANY work from the 2nd project unless the first project isn't sending you any. Then it will only get a little bit, and it will ask the first project again before getting more. You set the 0% on the project's web page, under Your Account and then Preferences for this project. The default is 100%; just edit and change the number for your backup project and you should be good to go.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2946187448
RAC: 689482

Quote:
Quote:

Argh!!!!! I got so ticked off that I wasn't fully watching what I was doing while mass-aborting Einstein tasks that I aborted about 100 finished work units.

Oh well. Never mind about reporting anything to the admins, as I fixed my problem by detaching from the project. I'll just process for one project, and if it's down or doesn't have any work, I guess my system will get a break too.


One thing I now do for projects that have trouble giving me enough work is to set a 2nd project at 0%. This means BOINC won't ask for ANY work from the 2nd project unless the first project isn't sending you any. Then it will only get a little bit, and it will ask the first project again before getting more. You set the 0% on the project's web page, under Your Account and then Preferences for this project. The default is 100%; just edit and change the number for your backup project and you should be good to go.


He's already been advised to do that, and he followed the advice (posts #2 and #3 in this thread). Unfortunately, he bumped into some legacy server code we didn't know about, got his fingers burned, and didn't hang around to see what happened.

Bernd has replied to my report - unfortunately, he hasn't got time to keep going round patching these old holes: it makes more sense to test the newer code and make a wholesale migration when he's sure it's working properly for Einstein's rather special needs.

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

Test4Theory@home has just migrated to the new BOINC server code, and the problems seem minimal, considering that it is a more complex project that uses a Virtual Machine to let all users run CERN programs in a Scientific Linux environment.
Tullio
