BackupPC-users

Subject: Re: [BackupPC-users] overlapping BackupPC_nightlys
From: Jonathan Dill <jonathan AT nerds DOT net>
To: Tony Schreiner <schreian AT bc DOT edu>
Date: Fri, 18 Apr 2008 15:31:59 -0400
On Apr 18, 2008, at 2:00 PM, Tony Schreiner wrote:
> dedicated backup server, 64-bit CentOS on 4 GB RAM. dstat doesn't  
> show any paging. the clients tend to have much more RAM

Very good, that sounds adequate. More cores and more RAM on the server  
could still help a bit, as could an additional server or two (but again,  
budget), but that doesn't sound like the main cause of the problem.

> The BackupPC host summary web page shows speeds between 9 MB/s and  
> 35 MB/s depending on the client, I think that's in line with what  
> others are seeing on Gbit networks. The storage is a 3Ware 9550 with  
> 10 disks ( I admit that the controller BBU has failed and needs  
> replacing, and that is slowing down my write speed).

Sounds about right.

> When I watch the progress of backups on the server, what really  
> seems to be the slowest part is when large files ( > several Gb )  
> are being compared against the pool. I don't know enough about the  
> internals of the software to know quite what is happening there.

This is good info and could get you answers better suited to your  
particular setup.

First, a word of caution: it's possible that changing compression or  
checksum caching will cause BackupPC to immediately eat up twice as much  
disk space until old files expire from the pool. Someone here should be  
better qualified to explain whether that could be a problem for you.  I do  
know for sure that it happens if you change certain parameters that affect  
the hashing function.

You should enable "Rsync checksum caching" if you haven't already; that  
may help quite a bit. There is a pretty good description in the FAQ under  
the section of the same title, and it takes advantage of rsync's built-in  
checksum functions.  There is some risk that if a file gets corrupted in  
the pool for some reason, you won't learn about it until you try to  
restore, so keep that in mind, especially since your RAID controller has  
a dead BBU.

http://backuppc.sourceforge.net/faq/BackupPC.html
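
For reference, here's a rough sketch of what enabling it looks like in  
config.pl. The exact flags depend on your BackupPC version, so double-check  
against the "Rsync checksum caching" FAQ section before copying anything;  
the verify probability below is just an example value, not a recommendation.

  # Sketch only -- verify flag names against the FAQ for your version.
  # Adding --checksum-seed=32761 to the rsync argument lists is what
  # turns on checksum caching.
  $Conf{RsyncArgs} = [
      # ... keep your existing RsyncArgs entries here ...
      '--checksum-seed=32761',
  ];
  $Conf{RsyncRestoreArgs} = [
      # ... keep your existing RsyncRestoreArgs entries here ...
      '--checksum-seed=32761',
  ];
  # Fraction of cached checksums that get re-verified against the pool
  # on each backup; raising it from the default trades some speed for
  # earlier detection of pool corruption (worth considering with a dead
  # BBU).
  $Conf{RsyncCsumCacheVerifyProb} = 0.05;

Also note the cache only gets populated as pool files are re-read after  
the change, so don't expect the very first backup after enabling it to be  
any faster.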

If these huge files are not more or less "static" and you can manage  
enough disk space, you may want to turn off compression, or perhaps  
exclude those files from BackupPC and use a different method to back  
them up; at the very least, try a smaller value for $Conf{CompressLevel}.  
See also the FAQ topic "Compressed file format".  If the files themselves  
are already compressed on the source, then BackupPC should "detect" that  
compressing them further doesn't help and switch to flush(), in which  
case turning off compression probably won't help.
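
As a rough illustration of both approaches (the share name and exclude  
patterns below are made up, not anything from your setup):

  # Sketch only -- share name and exclude patterns are hypothetical.
  # Level 0 disables compression entirely; level 1 costs much less CPU
  # than the higher zlib levels and usually compresses almost as well.
  $Conf{CompressLevel} = 1;

  # Or keep the multi-GB files out of BackupPC and back them up some
  # other way:
  $Conf{BackupFilesExclude} = {
      '/export/data' => [
          '/big_dumps',
          '*.iso',
      ],
  };

Keep in mind the caution above about possible pool growth if you change  
the compression settings.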

I haven't dug around in the guts of BackupPC and all of its possible  
configuration options for some time. There may well be an option to skip  
compression for files over some size limit, or other tweaks you could  
make; hopefully someone here can give you better answers on that.

Jonathan
