Bacula-users

Subject: Re: [Bacula-users] Big Backups: Incremental forever, off-site.
From: Shawn <shawn AT artemide DOT us>
To: Arno Lehmann <al AT its-lehmann DOT de>
Date: Tue, 11 Aug 2009 09:44:49 -0400
Thank you kindly, Arno - this helped me solve my problem.

    Our recycling scheme is working wonderfully well, and as it happens I came across a cross-post from John Drescher that furthered my education on Automatic Recycling, as well as on auto-labeling for the Director and Storage daemon, so Bacula no longer pesters about labeling if an error does occur:

http://www.bacula.org/en/rel-manual/Automatic_Volume_Recycling.html

    Our strategy is working quite well: we have the Full backup performed, and during quiet operations we took the Full backup settings for the big clients out of bacula-dir.conf and reloaded; with shortened volumes (a maximum of 500 MB each now) it's working out wonderfully!
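
For anyone curious, the relevant piece of our bacula-dir.conf now looks roughly like the sketch below - the pool name and retention value are only examples, not our exact settings:

    # bacula-dir.conf - pool for the off-site clients
    Pool {
      Name = Offsite-Pool           # example name
      Pool Type = Backup
      Recycle = yes                 # purged volumes go back into rotation
      AutoPrune = yes               # prune expired jobs when a volume is needed
      Volume Retention = 6 months   # keep at least long enough to cover the Full
      Maximum Volume Bytes = 500M   # the shortened volumes mentioned above
      Label Format = "Offsite-"     # lets the Director auto-label new volumes
    }

The Storage daemon's Device resource also needs "LabelMedia = yes" for the automatic labeling to work, and a "reload" from bconsole picks up the Director change without a restart.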
   

--

Shawn Qureshi
Artemide, Inc.
IT Specialist


On Tue, 2009-08-11 at 12:07 +0200, Arno Lehmann wrote:
Hi,

10.08.2009 22:37, Shawn wrote:
>   Hello again, folks,
> 
>     I'm experimenting with volume management, and could use a few 
> pointers on some of the recycling capabilities of Bacula when dealing 
> with massive backups.
> 
>     What we're trying to accomplish is an off-site backup solution.  As 
> such it has to be flexible, in case an Internet outage or something 
> similar occurs...
> 
>     We are keeping the transfers primarily small, up to perhaps a gigabyte 
> of any given user's Home folder data, but there are a few servers and 
> other systems we'd like to fully back up.
> 
>     The problem with this, however, is that over our upstream bandwidth a 
> Full Backup of a really big file system could take almost a whole month, 
> so we want to approach the off-site copy with a sort of "Incremental 
> forever" strategy.

The first challenge will be the required initial full backup.

>     Currently the files are being backed up to the in-house server for 
> further testing, and the server backs them up to disk.
> 
>     What I've noticed is that once in a while, for whatever reason (maybe 
> the user shut off their computer), a network timeout occurs and the 
> Volume/Media for that user's pool ends up in an Error state.  When this 
> happens, during their next scheduled backup, Bacula forces a Full 
> Backup, claiming it has no record of a previous Full Backup (since the 
> last Media it used was in an Error state).
> 
>     What I'd like to do is one of the following scenarios:
> 
>     A)  Ignore an error on Media during a backup, and continue writing
>     anyway, using the Last Good Incremental as its basis for the next
>     Incremental backup.
> 
>     B)  Use the "Maximum Volume Bytes =" pool option, and limit the
>     volumes to perhaps a few GB.  Add a pile of volumes to this pool,
>     and if one fails it only has to recover a few GB during an error,
>     instead of the whopping 200GB (or whatever it comes out to).
> 
>     C)  Cancel the backup job and purge the incremental job's files if an
>     error such as a network problem occurs during storage, leaving the
>     Media in Append mode so the next scheduled backup can run as
>     normal again.

The problems you observe are not related to volumes - Bacula "thinks" 
in terms of jobs. The volumes don't really matter (except that they 
might waste space for incomplete backups that you don't want to keep).

You need to make sure that Bacula always runs an incremental job and 
never elevates the level to full. Normally, that should be the case as 
long as there are previous valid jobs. For example, I back up notebooks 
and, when they are not turned on or not on the network, the backups 
fail. The next backups that run are still incrementals, though.
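
One quick way to verify that is to look at the catalog from bconsole, 
for example:

    # in bconsole: list the job records the catalog still knows about
    list jobs

As long as the client's last Full still shows up there with JobStatus T 
(terminated normally), the following jobs should stay at the 
incremental level.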

Checking your retention times and recycling settings might be a good 
next step to make sure full backups are never purged.
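
The directives involved live in bacula-dir.conf, roughly like this 
(only the retention-related lines are shown, and the names and values 
are just examples - pick retentions comfortably longer than your full 
backup interval):

    Client {
      Name = bigserver-fd          # placeholder name
      File Retention = 60 days     # how long file records stay in the catalog
      Job Retention = 6 months     # how long job records stay in the catalog
      AutoPrune = yes
    }

    Pool {
      Name = Offsite-Pool          # placeholder name
      Volume Retention = 6 months  # should not be shorter than the job retention
      AutoPrune = yes
      Recycle = yes
    }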

Also, I believe that there is a somewhat unexpected behaviour in recent 
Bacula versions - people seem to observe that incrementals require a 
full backup in the same pool they use (which I would consider a 
bug...). If that's what is happening, you may want to check the bug 
tracker to see whether it is already reported or even fixed, and/or 
consider an upgrade.
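
In particular, if your Job resources split the levels into different 
pools with something like the directives below, that is the combination 
people report trouble with (the pool and job names are only examples):

    Job {
      Name = "BigServer-Offsite"              # example; other directives omitted
      Full Backup Pool = Offsite-Full         # fulls land in one pool...
      Incremental Backup Pool = Offsite-Inc   # ...incrementals in another
    }

Keeping the full and the incrementals in a single pool should sidestep 
the problem until the behaviour is fixed.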

Cheers,

Arno

> 
>     Can any of this be accomplished?  Does someone have a better 
> alternative for an off site type of "Incremental forever" solution?
> 
> Thanks in advance,
> 
> -- 
> 
> Shawn Qureshi
> Artemide, Inc.
> IT Specialist
> 
> 
