Subject: Re: [Bacula-users] Catastrophic error. Cannot write overflow block to device "LTO4"
From: "Steve Costaras" <stevecs AT chaven DOT com>
To: "Dan Langille" <dan AT langille DOT org>
Date: Mon, 11 Jul 2011 03:05:41 +0000
 
-----Original Message-----


> I suggest running smaller jobs. I don't mean to sound trite, but that really is the solution. Given that the alternative is non-trivial, the sensible choice is, I'm afraid, to cancel the job.

I'm already kicking off 20+ jobs for a single system. That does not work when we're talking about the 100 TB to nearly 200 TB mark. And when these errors happen it does not matter how many jobs you have, as /all/ outstanding jobs fail when you have concurrency (in this case, even jobs that were queued and were not writing to the same tape were canceled).
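For context, the split is the usual one-job-per-FileSet pattern. A minimal sketch of a single slice follows; the names, paths, and pool values are invented for illustration, not our actual config:

  # bacula-dir.conf: one of the 20+ per-slice jobs for the same client
  # (all names below are hypothetical)
  Job {
    Name = "bigsys-part01"
    Type = Backup
    Client = bigsys-fd
    FileSet = "bigsys-part01-fs"
    Storage = LTO4
    Pool = Full
    Spool Data = yes                # spool to disk, then despool to tape
  }

  FileSet {
    Name = "bigsys-part01-fs"
    Include {
      Options { signature = MD5 }
      File = /export/part01        # each slice covers one subtree
    }
  }

Multiply that by 20+ slices per system and you can see why the approach stops scaling once you pass the 100 TB mark.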
> This sounds like a configuration issue.  Queued jobs should not be cancelled when a previous job cancels.

Not queued: concurrent jobs (all are active at the same time, but only one writes to tape at a time from its spool file). This was done to avoid the write|spool|write|spool loop of a serial job against a large system, cutting backup times in half.
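For reference, a minimal sketch of the directives behind that setup, assuming a single LTO4 drive; the device path and spool size are placeholders:

  # bacula-dir.conf, Director resource: let the slices run concurrently
  # (the matching Storage resource in bacula-dir.conf needs the same limit)
  Maximum Concurrent Jobs = 20

  # bacula-sd.conf, Device resource for the drive: each job spools to
  # disk, and despooling to tape happens one job at a time
  Device {
    Name = LTO4
    Media Type = LTO-4
    Archive Device = /dev/nst0          # placeholder device path
    Maximum Concurrent Jobs = 20
    Spool Directory = /var/bacula/spool
    Maximum Spool Size = 500G           # placeholder size
  }

With Spool Data = yes on each job, despooling to the drive is serialized even though the jobs themselves spool in parallel, which is what halves the wall-clock time versus running them back to back.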


