Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*\[Bacula\-users\]\s+seeking\s+advice\s+re\.\s+splitting\s+up\s+large\s+backups\s+\-\-\s+dynamic\s+filesets\s+to\s+prevent\s+duplicate\s+jobs\s+and\s+reduce\s+backup\s+tim/: 7 ]

Total 7 documents matching your query.

1. [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: mark.bergman AT uphs.upenn DOT edu
Date: Wed, 12 Oct 2011 17:53:41 -0400
In an effort to work around the fact that bacula kills long-running jobs, I'm about to partition my backups into smaller sets. For example, instead of backing up: /home I would like to backup the con
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00099.html (17,827 bytes)
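
The splitting this first message describes maps onto Bacula's FileSet resource in bacula-dir.conf: rather than one job over all of /home, define several smaller filesets and a job per fileset. A minimal sketch, assuming hypothetical subdirectory and resource names (the message's actual partitioning scheme is cut off in the snippet):

    # bacula-dir.conf -- two smaller filesets instead of one job over /home.
    # Directory, fileset, and job names here are illustrative only.
    FileSet {
      Name = "home-part1"
      Include {
        Options {
          signature = MD5
        }
        File = /home/alice
        File = /home/bob
      }
    }
    Job {
      Name = "backup-home-part1"
      JobDefs = "DefaultJob"      # assumes an existing JobDefs resource
      FileSet = "home-part1"
    }

Each additional partition gets its own FileSet/Job pair, so a failure or timeout in one partition no longer forces a restart of the whole tree.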

2. Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: "James Harper" <james.harper AT bendigoit.com DOT au>
Date: Thu, 13 Oct 2011 11:54:47 +1100
Does Bacula really kill long running jobs? Or are you seeing the effect of something at layer 3 or below (eg TCP connections timing
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00100.html (15,969 bytes)

3. Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: mark.bergman AT uphs.upenn DOT edu
Date: Wed, 12 Oct 2011 21:58:28 -0400
[SNIP!] Yes. Bacula kills long running jobs. See the recent thread entitled: Full backup fails after a few days with "Fatal error: Network error with FD during Backup: ERR=Interrupted system call or
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00101.html (16,494 bytes)
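
The "ERR=Interrupted system call" symptom here is the one later replies probe for layer-3 causes. One standard Bacula countermeasure for idle connections being dropped by firewalls or NAT during very long jobs (not necessarily what this thread settled on) is the Heartbeat Interval directive in the File daemon's configuration; a sketch with placeholder names and an arbitrary interval:

    # bacula-fd.conf -- send periodic keepalives on otherwise-idle
    # control connections so middleboxes don't drop them mid-job.
    FileDaemon {
      Name = bigserver-fd              # hypothetical daemon name
      FDport = 9102
      WorkingDirectory = /var/bacula/working
      Pid Directory = /var/run
      Heartbeat Interval = 60          # seconds; example value
    }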

4. Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: John Drescher <drescherjm AT gmail DOT com>
Date: Wed, 12 Oct 2011 22:55:10 -0400
I believe it automatically kills jobs that are longer than 5 days or something similar. At least that was discussed recently on the list. John
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00102.html (14,009 bytes)

5. Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: "Steve Costaras" <stevecs AT chaven DOT com>
Date: Thu, 13 Oct 2011 03:09:42 +0000
seems to be a common misconception or I'm /much/ luckier than I should be as I routinely run jobs that last over 15-20 days with zero problems (besides them taking 15-20 days. ;) ). I've been doing
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00103.html (14,791 bytes)

6. Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: Thomas Lohman <thomasl AT mtl.mit DOT edu>
Date: Thu, 13 Oct 2011 12:18:58 -0400
Since we may end up having jobs that run for more than 6 days, I was pretty curious to see where in the code (release 5.0.3) this insanity check was happening. Looking at your previous thread's erro
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00115.html (14,937 bytes)
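
For context, Bacula exposes run-time ceilings as per-Job directives, so if the 5.0.3 check Thomas traced is directive-driven rather than hardcoded, it can be raised explicitly. A hedged sketch with illustrative values (the snippet does not show what the check actually keys on):

    # bacula-dir.conf -- raise run-time limits for a known-slow job.
    # Job name and durations are examples, not recommendations.
    Job {
      Name = "backup-bigserver"        # hypothetical job name
      JobDefs = "DefaultJob"
      Max Run Time = 14 days           # hard ceiling before cancellation
      Max Wait Time = 2 days           # how long a blocked job may wait
    }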

7. Re: [Bacula-users] seeking advice re. splitting up large backups -- dynamic filesets to prevent duplicate jobs and reduce backup time (score: 1)
Author: Martin Simmons <martin AT lispworks DOT com>
Date: Thu, 13 Oct 2011 20:24:21 +0100
Assuming you mean ~20 separate client machines (File Daemons), then you can set Maximum Concurrent Jobs in the director's config for the large client. In fact, the default is 1, so it is surprising
/usr/local/webapp/mharc-adsm.org/html/Bacula-users/2011-10/msg00116.html (14,078 bytes)
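
The directive Martin refers to lives in the Client resource of the director's configuration. A minimal sketch, with placeholder names and an example value:

    # bacula-dir.conf -- let several of the split-up jobs run against
    # the same large client concurrently (the default is 1).
    Client {
      Name = bigserver-fd              # hypothetical client name
      Address = bigserver.example.org
      FDPort = 9102
      Catalog = MyCatalog
      Password = "xxxxx"               # placeholder
      Maximum Concurrent Jobs = 4      # example value
    }

Note that effective concurrency is the minimum of the Maximum Concurrent Jobs settings across the Director, Job, Client, and Storage resources, so the other resources must allow it too.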

