Bacula-users

Re: [Bacula-users] Bacula slow transfer / Compression latency / feature request

From: Hugo Letemplier <hugo.let.35 AT gmail DOT com>
To: Sean Clark <smclark AT tamu DOT edu>
Date: Wed, 1 Jun 2011 09:35:57 +0200
2011/5/31 Sean Clark <smclark AT tamu DOT edu>:
> On 05/30/2011 02:11 PM, reiserfs wrote:
>> Hello, I'm new to the Bacula scene. I previously used HP Dataprotector
>> with an HP library over Fibre Channel.
>>
>> With Dataprotector I had 1 Gbps interfaces and switches, and all jobs
>> finished very fast, with transfer rates around 50-80 MB/s.
>>
>> Now I'm using Bacula with a DELL TL2000 over iSCSI, and in my first tests
>> I got only 6 MB/s, on the same 1 Gbps interfaces and switch.
>>
>> So what am I missing?
>>
>> Setup used to test:
>> Bacula Director running on Slackware64 13.1
>> Bacula Client: Windows 2003 Server
> Turning on software gzip compression on the client is definitely a major
> performance killer, unfortunately, so that would be my first guess as
> well.  This looks like a good place to mention some testing I've done.
>
> I've been doing some testing lately, having also been somewhat
> aggravated by the apparently slow transfer rates I get during Bacula
> backups, but it's starting to look like it's not really Bacula's fault
> most of the time.  Usually the problem is just how fast the client can
> read files off of the disk and send them.  The network (at least on Gb)
> is not usually the problem, nor even database activity on the director
> (attribute spooling will help if you DO have any problems with that;
> see the sketch below).
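>
> (A minimal sketch of enabling attribute spooling, from my reading of the
> Bacula documentation - every resource name here is hypothetical:)
>
> Job {
>   Name     = "NightlyBackup"
>   Type     = Backup
>   Client   = example-fd
>   FileSet  = "Example Set"
>   Schedule = "WeeklyCycle"
>   Storage  = File
>   Pool     = Default
>   Messages = Standard
>   Spool Attributes = yes   # batch the attribute inserts into the catalog
>                            # at the end of the job instead of row-by-row
>                            # during the backup
> }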
>
> Encryption and gzip compression by the client introduce major latency
> that unavoidably slows down the transfer, and this isn't specifically a
> bacula client issue.  Other things I have seen that cause major
> slowdowns are antivirus software on Windows (particularly "on-access
> scanning") and active use of the computer while the backup is running.
>
> Regarding compression specifically, though - testing on my laptop
> here, I read files from /usr and /home with "tar", piped them through
> "pv" to get the transfer rate, and then dumped them directly to
> /dev/null.  I then repeated the tests with some different compression
> schemes inserted.  For example:
>
> tar -cf - /usr | pv -b -r -a > /dev/null ("No Compression")
> tar -cf - /usr | gzip -c | pv -b -r -a > /dev/null ("GZIP")
> tar -cf - /usr | gzip -1 -c | pv -b -r -a > /dev/null ("GZIP1")
> tar -cf - /usr | lzop -c | pv -b -r -a > /dev/null ("LZO")
>
> (and repeated for /home)
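>
> (One caveat for anyone repeating this on Linux: after the first pass the
> files may come from the page cache rather than disk, which inflates the
> later numbers.  I'd drop the caches between runs to keep the comparison
> fair - assumes root/sudo:)
>
> sync                                        # flush dirty pages to disk first
> echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop page cache, dentries, inodes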
>
> Here are my results:
>
> /usr
> No Compression: 5.58GB Total Data, Avg 13.1MB/s (436s to finish)
> GZIP: 2.11GB Total Data, Avg 2.97MB/s (727s to finish)
> GZIP1: 2.36GB Total Data, Avg 4.13MB/s (585s to finish)
> LZO: 2.82GB Total Data, Avg 6.48MB/s (445s to finish)
>
> /home (includes a lot of e.g. media files that are not very compressible)
> No Compression: 91.56GB Total Data, Avg 34.5MB/s (~2700s to finish)
> GZIP: 77.1GB Total Data, Avg 9.78MB/s (8072s to finish)
> GZIP1: 77.6GB Total Data, Avg 11.7MB/s (~6790s to finish)
> LZO: 80.6GB Total Data, Avg 28.3MB/s (~2900s to finish)
>
> So, yes, if you have gzip compression turned on, you'll almost certainly
> see a huge increase in speed if you turn it off (I believe most tape
> drives can or will do compression in hardware, so you don't need to
> pre-compress at the client).
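>
> (If you want to verify that the drive really is compressing in hardware -
> a sketch, assuming a Linux host with the mtx and mt-st tools installed and
> the usual /dev/sg0 and /dev/nst0 device names:)
>
> tapeinfo -f /dev/sg0           # look for DataCompEnabled in the output
> mt -f /dev/nst0 compression 1  # enable drive-level compression if it's off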
>
> If you are backing up to disk as I am (or for some reason aren't doing
> hardware compression on the tape drive), you can also get a small speed
> increase by dropping the gzip compression down to the minimum level
> ("compression=GZIP1" in the FileSet Options; see the sketch below),
> which seems to compress almost as well overall but induces less latency.
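>
> (For reference, a minimal FileSet sketch as I understand the syntax from
> the Bacula manual - the name and path are just for illustration:)
>
> FileSet {
>   Name = "Example Set"
>   Include {
>     Options {
>       signature = MD5
>       compression = GZIP1   # minimum gzip level; omit this directive
>                             # entirely to disable software compression
>     }
>     File = /usr
>   }
> }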
>
> FEATURE REQUEST:
> However, assuming my tests so far are representative, it looks like LZO
> compression can get backup jobs transferred in almost the same amount of
> time as no compression at all, while still substantially reducing the
> amount of data transferred and stored (not as much as GZIP does, but
> still a noteworthy amount).  Is it possible we could get a
> "Compress=LZOP" capability added to bacula-fd?
>
> tl;dr: Turn off compression until and unless an LZO compression option
> is implemented.  If you are desperate for space on your backup media,
> though, you'll just have to cope with the slow backups.
>


+1, LZO seems much faster; that would be a good feature.

I want to add that I don't think the whole directory goes into one
"tar" stream: I believe each file is compressed and encrypted
separately.  Can someone tell us what algorithm Bacula uses to
compress and encrypt (per block, per file, …)?

