-----Original Message-----
> I suggest running smaller jobs. I don't mean to sound trite, but that
> really is the solution. Given that the alternative is non-trivial, the
> sensible choice is, I'm afraid, to cancel the job.
I'm already kicking off 20+ jobs for a single system. That approach stops working once you pass the 100TB (nearly 200TB) mark. And when these errors happen, it does not matter how many jobs you have: with concurrency, /all/ outstanding jobs fail (in this case, every job that was queued was cancelled, even ones that were not writing to the same tape).
> This sounds like a configuration issue. Queued jobs should not be cancelled when a previous job is cancelled.
Not queued, concurrent jobs (all are active at the same time, but only one writes at a time from its spool file). This was done to avoid the write|spool|write|spool loop of a serial job against a large system, cutting backup times in half.
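For reference, the setup described above can be sketched with Bacula's standard `Maximum Concurrent Jobs` and `Spool Data` directives. This is only an illustrative bacula-dir.conf fragment, not the poster's actual configuration; all resource names here are hypothetical:

```
# Hypothetical bacula-dir.conf fragment: many concurrent spooled jobs
# sharing one tape drive. Jobs spool to disk in parallel; only one
# despools to tape at a time.

Director {
  Name = backup-dir                # hypothetical name
  Maximum Concurrent Jobs = 25     # allow all per-system jobs to run at once
}

Storage {
  Name = LTO-Drive                 # hypothetical name
  Address = sd.example.org
  Device = Drive-1
  Media Type = LTO
  Maximum Concurrent Jobs = 25     # jobs stay active while others despool
}

Job {
  Name = "bigfs-part01"            # one of the 20+ jobs for a single system
  Type = Backup
  Client = bigfs-fd
  FileSet = "BigFS-Part01"
  Storage = LTO-Drive
  Pool = Default
  Spool Data = yes                 # spool to disk first, then write to tape
}
```

With this shape, cancelling one running job should not take the others down; if it does, that points back to the configuration-issue theory raised above.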
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users