Subject: Re: [Bacula-users] How to handle long, time consuming backups
From: Massimo Schenone <mschenone AT sorint DOT it>
To: Brian Willis <Brian.Willis AT noaa DOT gov>
Date: Wed, 31 Dec 2008 16:51:35 +0100
Thanks! I just set up the heartbeat. However, I suspect a timeout is not
the reason the firewall is dropping the connection, because the throughput
is quite constant.
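
(For reference, the heartbeat here is the standard Heartbeat Interval
directive in bacula-fd.conf; the 60-second interval below is only an
example value:

FileDaemon {
  ...
  Heartbeat Interval = 60   # send a keepalive on idle connections every 60s
}

The same directive exists on the storage daemon side, which can help when a
stateful firewall sits between the SD and the FD.)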

On Wed, 2008-12-31 at 10:00 -0500, Brian Willis wrote:
> Massimo Schenone wrote:
> > Hi all,
> > I have the same question: how do I back up a remote Samba server with
> > 60 GB of file system space?
> > I get an average throughput of 220 Kb/s, but the job always fails with a
> > network error (connection reset). There is a firewall in the middle, and
> > I have asked the firewall admins to raise the TCP connection timeout.
> > Splitting the job into 5 sub-jobs doesn't help; after 2 or 3 hours the
> > connection is broken.
> 
> I managed to fix mine by adding a heartbeat statement to my fd client.
> 
> > After going through the docs and forums, I tried adding spooling and
> > concurrency, so that other jobs would not have to wait for the Samba one
> > to complete. No luck: I don't see any concurrency at all.
> > Here is the config:
> > 
> > Director {                           
> >   ... 
> >   Maximum Concurrent Jobs = 4
> > }
> > 
> > JobDefs {
> >   ...
> >   SpoolData = yes
> > }
> > ---
> > Storage {
> >   ...
> >   Maximum Concurrent Jobs = 20
> > }
> > 
> > Device {
> >   ...
> >   Spool Directory = /data/spool;
> >   Maximum Spool Size = 40 GB;
> >   Maximum Job Spool Size = 1 GB;
> >   ...
> > }
> > 
> > I get spooling, but the jobs don't run concurrently. Maybe because the
> > jobs use different pools? I also ran 3 jobs interactively, all associated
> > with the same pool.
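> > 
> > (A sketch of one likely cause, assuming a stock configuration: Bacula
> > serializes wherever Maximum Concurrent Jobs is left at its default of 1,
> > and the directive must be raised in every resource the jobs pass
> > through, e.g. also in the Director's Job/JobDefs and Client resources:
> > 
> > Job {
> >   ...
> >   Maximum Concurrent Jobs = 4
> > }
> > Client {
> >   ...
> >   Maximum Concurrent Jobs = 4
> > }
> > )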
> > 
> > Spooling itself is a serial process: mount the tape, write data to the
> > spool until it reaches the maximum size, destage to tape, then resume
> > writing to the spool.
> > I started with a 10 GB maximum spool size but always got a broken-pipe
> > error; with 1 GB it now works (22 seconds to despool at 45 MB/s).
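> > 
> > (The arithmetic is consistent: despooling 1 GB at 45 MB/s takes about
> > 1024 MB / 45 MB/s ≈ 23 seconds.)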
> > 
> > I want to solve the problem the Bacula way, not by rsyncing data between
> > the client and the server. Do you think setting up a file-based storage
> > daemon on the client and then migrating the volumes to the main storage
> > is reasonable?
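> > 
> > (A rough sketch of that idea, assuming Bacula's Migration job type and
> > hypothetical pool names: the client-side SD writes file volumes into a
> > disk pool, and a Migrate job later moves them to the main storage:
> > 
> > Pool {
> >   Name = RemoteDiskPool      # file volumes written near the client
> >   Next Pool = MainPool       # where migrated data should end up
> >   ...
> > }
> > Job {
> >   Name = MigrateRemote
> >   Type = Migrate
> >   Pool = RemoteDiskPool
> >   Selection Type = Volume
> >   Selection Pattern = ".*"   # consider every volume in the pool
> >   ...
> > }
> > )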
> > Thanks a lot!
> > Massimo
> > 
> > In the middle there is a Check Point firewall.
> > 
> > On Tue, 2008-12-30 at 19:38 -0500, John Drescher wrote:
> >>> A large, mostly static data set and a low-bandwidth connection: this is
> >>> rsync's specialty!  If you are not familiar with it, look here:
> >>>
> >>> http://samba.anu.edu.au/rsync/
> >>>
> >>> I suggest you use rsync to sync your Windows client to a copy at your main
> >>> site (costs you 20 GB of disk space somewhere), then do your full Bacula
> >>> backup from there.
> >>>
> >>> There will no doubt be some issues with timing/scheduling, and possibly
> >>> permissions, to work out.  And restores will be two-stage.  But I think it
> >>> would be worth it.
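> >>>
> >>> (A minimal sketch of the first stage, assuming SSH access and
> >>> hypothetical paths; only changed data then crosses the slow link:
> >>>
> >>>   rsync -az --delete backup@winclient:/cygdrive/c/data/ /srv/staging/winclient/
> >>>
> >>> Bacula would then back up /srv/staging/winclient/ from the local copy.)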
> >>>
> >> I do that for a remote backup of a few GB and it works great.
> >>
> >> John
> >>
> 


------------------------------------------------------------------------------
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users