Subject: Re: [Bacula-users] How it optimize the transfer speed.
From: Gavin McCullagh <gavin.mccullagh AT gcd DOT ie>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 26 Jan 2010 10:19:46 +0000

Hi,

As a relative Bacula newbie myself, I have a couple of suggestions.

On Tue, 26 Jan 2010, Cyril Lavier wrote:

> Now that my exclude rule works perfectly (thank you guys), I've just
> run into a problem.
> 
> Backups are made on a LAN 100Mbit.
> 
> But the actual speed of Bacula's backup is about 22GB/hour, which is 
> about 50Mbit/second, so about half the actual capacity of the network.

The thing you need to establish here, I think, is where exactly your
bottleneck lies.  A few tests to narrow things down:

 - try an iperf test between client and server first, to check whether the
   TCP stacks and the network can sustain a TCP connection at higher than
   this speed.  Depending on your network, an upgrade to GigE may be quite
   affordable ... or not.

 - identify how quickly the server can receive and write data to disk/tape.
   Perhaps a local Bacula backup from the SD to itself of some large
   sequential data might give you a clue (ideally, don't read from and
   write to the same disk).

 - identify how quickly the client can send data.  You could try a backup
   to an SD nearer to the client (even on the client itself -- though
   writing to a different disk).

The above may not give absolute certainty, but it should be enough to give
you a strong indication.
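
For the first two tests, something along these lines should do (hostnames
and paths here are placeholders, substitute your own):

    # On the server:
    iperf -s

    # On the client -- reports the TCP throughput the path can sustain:
    iperf -c backup-server.example.com -t 30

    # Rough sequential write speed of the SD's storage disk (GNU dd;
    # conv=fdatasync flushes the data to disk before reporting a rate):
    dd if=/dev/zero of=/srv/bacula/testfile bs=1M count=4096 conv=fdatasync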

I've managed 10.67 Mbytes/second (about 85Mbit/sec) on 230GB with the client
on a 100Mb/sec link -- but I had to swap out a firewall to do that, as the
old firewall couldn't deal with the load.  With the same server and no
firewall in the way, I've also seen 1Mbyte/sec -- due to the client being
too slow (slow disk, fragmented filesystem and slow CPU).

Compression (if you're not already using it) might be useful, but it loads
the client CPU quite heavily, so if the client is the bottleneck it is
unlikely to help much.  It would help if the network or the storage daemon
is the bottleneck, though.  If you are already using compression, CPU might
be your bottleneck.
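
To turn it on, you set it per-FileSet -- a minimal sketch, where the
resource name and path are invented but the Options directives are
standard:

    FileSet {
      Name = "BigFileSet"
      Include {
        Options {
          signature = MD5
          compression = GZIP   # done by the client (FD) before sending
        }
        File = /data
      }
    }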

> Now the problem is real, because I have a fileset which is 1.5TB big, 
> and I need to make a full backup once a month (the other backups are 
> incremental ones), and with the transfer speed I have now, the full 
> backup would last for about 3 days.

One workaround for this is to use VirtualFull backups (which require the
Accurate backup option).  With this method Bacula uses an old full backup
and the subsequent differential/incremental backups to construct a more
recent full backup -- without bothering the client at all.  In principle,
this can allow you to avoid re-running your 1.5TB full backup over the
network.
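Roughly, that looks like this -- the job name is made up, and I've left
out the other Job directives you'd already have:

    Job {
      Name = "BigServer-Backup"   # made-up name
      Accurate = yes              # required for VirtualFull
      # ... plus your usual Type, Client, FileSet, Storage, Pool, etc.
    }

Then from bconsole, consolidate the existing full + incrementals into a
new full on the SD, without reading anything from the client:

    run job=BigServer-Backup level=VirtualFull
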

Gavin

