Subject: Re: [Bacula-users] Resuming backups
From: Andrea Conti <ac AT alyf DOT net>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 19 Aug 2008 18:58:35 +0200

Hello,

> 4Gb for a full backup (which takes about 24 hours
> over a 512K link). However, as it is rare for the link to stay up for 
> that long, I regularly get 3.999999999Gb failed backups.

I know it's not what you're asking for, but for a similar setup (two
hosts, 6GB total, 512kbps DSL link) I find that rsync'ing [parts of]
the remote filesystem to a local copy and backing up that local copy
works much better.
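Something along these lines (host name and paths are placeholders, not
my actual setup):

  rsync -az --delete --partial --timeout=300 \
      remotehost:/data/ /srv/backup-staging/data/

--partial keeps partially transferred files around, so an interrupted
run can more or less pick up where it left off, and --timeout makes
rsync exit with an error instead of hanging when the link dies.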

First, depending on what you're backing up you will likely save a lot of
time and bandwidth -- you only transfer differences, even for Fulls, and
rsync can do deltas for changed files.
In my case the bulk of the data stays the same, so I rarely have to
transfer more than a couple of hundred MB.

Second, if you do the rsync in a RunBefore script, you can use the rsync
exit status to control the actual backup -- if the connection is dropped
you can choose whether to wait and retry the transfer or to generate an
error and exit, thus cleanly aborting the job.
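
Purely as an illustration -- the paths and numbers are made up, this is
not a copy of what I actually run -- such a RunBefore script could look
roughly like this (Bacula cancels the job when the script exits with a
non-zero status):

  #!/usr/bin/env python
  # Try the rsync a few times; exit non-zero if the transfer never
  # completes, so Bacula aborts the job instead of backing up a
  # half-synced copy.
  import subprocess
  import sys
  import time

  SRC = "remotehost:/data/"          # placeholder
  DST = "/srv/backup-staging/data/"  # placeholder
  RETRIES = 3
  WAIT = 600  # seconds between attempts

  for attempt in range(RETRIES):
      rc = subprocess.call(["rsync", "-az", "--delete", "--partial",
                            "--timeout=300", SRC, DST])
      if rc == 0:
          sys.exit(0)        # sync complete, let the backup run
      if attempt < RETRIES - 1:
          time.sleep(WAIT)   # link probably dropped; wait and retry

  sys.exit(1)  # give up; the non-zero status makes Bacula abort the job

You then point a "Run Before Job" (or "Client Run Before Job",
depending on where the staging copy lives) directive in the Job
resource at that script.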

In my experience there are only two downsides to this approach: you
need some space for the local copy of the data, and restores require
two steps -- restoring to the local filesystem, then manually copying
the files back to the remote host. Whether these are showstoppers or
just annoyances depends on your scenario.

> even though the full backup failed, the next series of incremental backups 
> runs as normal, ie 
> bacula doesn't seem to realise that there isn't a full backup in the 
> catalogue,

Make sure that RerunFailedLevels is set to yes for your job; from your
description it likely is not.
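That is, something like this in the Job resource (the job name is just
an example):

  Job {
    Name = "remote-backup"   # example name
    # ... rest of the Job definition ...
    Rerun Failed Levels = yes
  }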

Anyway, this is strange: from the documentation it seems that an
incremental will always be upgraded to a full if the catalog does not
contain a record of a successful instance of the same job with the
same fileset.

On the other hand, there is no mention of it needing to be the last
job, or even a _restorable_ job (that is, a full, or a diff/incr for
which the catalog still contains a complete chain of jobs ending with
a full).

In other words, is it possible that Bacula is basing the new
incrementals not on the failed full (which would be a bug as far as I
can tell), but on a previous successful incremental which is still in
the catalog but whose parent full has since been deleted, thus
resulting in a failed restore?
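
It might be worth checking what is actually left in the catalog, e.g.
from bconsole:

  *list jobs

and looking at the Level and JobStatus columns for that job (a
JobStatus of T means the job terminated normally). If the remaining
incrementals reference a full that is no longer in the catalog, that
would explain the failed restore.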

andrea

