Subject: [Bacula-users] Scheduling wierdness
From: Christian Gaul <christian.gaul AT otop DOT de>
To: bacula-users AT lists.sourceforge DOT net
Date: Thu, 14 May 2009 10:17:18 +0200
I just started a manual job to run a rather large-ish backup while I
still had six incrementals, scheduled via Schedule resources, being
worked on.

As soon as the running job finished, the new large-ish manual job was
started, even though all the jobs use the same priority. Shouldn't the
already scheduled jobs (which were in a "waiting for Storage" state) be
served first, before they time out, rather than a new job I just
started? I now need to cancel the manual job and wait for the original
jobs to finish (or run it with a different priority, which would block
different Storages).

Is that how it's intended to work?

While I'm on the subject of scheduler weirdness, I tried using the new
duplicate-jobs feature with:

#  Allow Duplicate Jobs = No
#  Allow Higher Duplicates = No
#  Cancel Queued Duplicates = Yes
#  Cancel Running Duplicates = No

which resulted in my being unable to start any kind of job, manually or
via schedule. New jobs were canceled right away, even if NO other jobs
were running, scheduled, or due to be scheduled for another hour.

I checked the sources, and it looks to me like allow_duplicate_job() (in
job.c) checks for a duplicate across all JCRs, but only by name, and the
job to be scheduled is already in that list when the duplicate check
runs, so it will always be canceled. Setting DuplicateJobProximity to
something slightly higher than 0 would probably suffice, but as far as I
can see there is no way to set that variable short of compiling my own
version. But I may have missed something in the C++ sources, since that
really isn't my language.

Am I missing something here? If that behavior were normal, nobody would
be able to schedule jobs any more, so I doubt it. Maybe it is the
combination of options I'm using to achieve what would be most useful
for my use case?


The way I understood the options, my configuration should mean: "do not
allow duplicate jobs", "not even if they have higher priority" (I only
use one priority anyway, except for DB backups), "cancel the new job
right away if it gets scheduled while the last one hasn't finished
running yet", and "leave the old, running job alone", so that a backup
finishes instead of always being replaced by new ones that only get 50%
done.

Did I misunderstand how the duplicate-job options are supposed to work
and pick a combination that is mutually exclusive and doesn't work?

No news yet on the other thread about concurrent jobs marking tapes in
error; I haven't had enough time to finish that "project".

Thanks for your time.



P.S.: In case it matters for the discussion: Bacula 3.0.0 running on
Gentoo, external MySQL DB, file storage, a Dell TL2000 autoloader, and
another LTO-2 drive.

