Bacula-users

Re: [Bacula-users] Resuming backups

From: Ryan Novosielski <novosirj AT umdnj DOT edu>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 19 Aug 2008 09:57:35 -0400

Michael Winiberg wrote:
> Hi,
> 
> I realise this has probably been discussed before, but I am wondering 
> whether there is yet any way to cope with a dropped connection part 
> way through a backup. I am backing up some remote machines with 
> Bacula 2.2.8 (to disk, not tape). I've reduced the stuff to be backed 
> up to the absolute minimum, but it still comes to about 4 GB for a 
> full backup (which takes about 24 hours over a 512K link). However, 
> as it is rare for the link to stay up for that long, I regularly get 
> 3.999999999 GB failed backups. Some questions therefore arise:
> 
> 1. Is it possible to access the files that were backed up before the 
> link failed? I can't seem to find a way to do so, but the disk space 
> is certainly occupied! This is also annoying because, even though the 
> full backup failed, the next series of incremental backups runs as 
> normal, i.e. Bacula doesn't seem to realise that there isn't a full 
> backup in the catalogue - until you try to restore something, of 
> course! The catalogue reports the number of files and the space taken 
> by the failed job, but won't allow you to list or restore them.
> 
> 2. Is there any way to resume a backup from where it left off (or is 
> there likely to be?) From my point of view it would be quite acceptable 
> to have an option to suspend all backup activity until the link could be 
> re-established so that there was no problem with files getting 
> intermixed. Alternatively one could store the current state of the 
> backup at the client end, add all files successfully transferred so far 
> to the catalogue (to save wasting the effort expended so far) and then 
> get the client to restart with the file in which the link was lost once 
> the link is re-established. Doing it that way would allow recovery even 
> with concurrent backups running. I'm willing to provide some programming 
> resource towards doing this if you think it's practical in view of the 
> way bacula works internally.
> 
> Sorry if I'm reopening a debate or have missed something obvious in the 
> most recent release, but this seems to me to be a major failing in this 
> otherwise excellent system and I'm willing to help do something about it 
> if thought practical...

There presently is not a way to do this. What you could do, I suppose,
is transfer a tar archive (or something like that) via other,
resumable means and then back it up locally on that machine. There are
a lot of ways to skin that cat.

Note, though, that no backup system that I know of supports resuming
backups. I know HP's Storage Data Protector/OmniBack did not the last
time I used it (which was not that long ago).

--
 ---- _  _ _  _ ___  _  _  _
 |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer II
 |$&| |__| |  | |__/ | \| _| |novosirj AT umdnj DOT edu - 973/972.0922 (2-0922)
 \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630

