Subject: Re: [BackupPC-users] BackupPC speed and some basic questions
From: Les Mikesell <les AT futuresource DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Mon, 02 Mar 2009 17:45:21 -0600
Reinhold Schoeb wrote:
> >
>>> That takes me to my question: The full backup size of my MacPro is about
>>> 1.6 TB. It took 2250 minutes or 1.5 days at 9 MB/s to make a full backup,
>>> and I had to leave the MacPro switched on during that time. An incremental
>>> backup took about 100 minutes - that was ok. But after 7 incremental
>>> backups BackupPC is now trying to make another full backup. But since the
>>> MacPro is only running during daytime, it does not come to an end.
>> You didn't say which backup method you used when you timed those
>> backups. Generally speaking, rsync would be the best/quickest method in
>> my opinion, since you will transfer the least amount of data....
> 

> The client has 8 GB RAM, my BackupPC server has 1 GB RAM. Neither machine 
> uses swap during backup.

More RAM on the server would help by providing more filesystem buffer. 
Hard to tell how much difference it would make, though.


> 
>> CPU utilisation on client and server
> 
> That's what sar tells me about my server load :
> 
> 16:25:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
> 19:25:02        all     55,69      0,00     40,78      3,54      0,00      0,00

Maybe this is wrapped so I'm misinterpreting it, but this looks like you 
have no idle time and a lot of iowait.

> e1l52@mebsuta:~$ scp -c blowfish test.zip alphacentauri:/tmp/
> test.zip   100%   78MB  19.5MB/s   00:04
> 
> Which is double the speed I get with BackupPC.

Don't forget that BackupPC is running in Perl and uncompressing the 
local copy on the fly for the rsync comparisons.
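
If CPU turns out to be the limit, the pool compression level is one knob 
to try.  A minimal config.pl sketch (the value here is just an example; 
changing it only affects newly stored files, the existing pool keeps the 
compression it was written with):

    # config.pl (or a per-PC config file)
    # 0 = no compression (least CPU, most disk); higher values cost more
    # CPU every time a file is compressed or uncompressed for comparison.
    $Conf{CompressLevel} = 1;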

>> In any case, it is important to do regular full backups, this can be
>> more/less regular depending on your requirements. I increased the
>> frequency to every 3 days to improve performance (using backuppc
>> 2.1.2pl1 and doing remote backups).
>>
>> Hope that helps, for more assistance review the wiki, and then come back
>> and provide some more information and more measurements...
> 
> Yes, of course. What is your opinion? I think you agree that it is 
> necessary to get the full backup time below the typical client uptime, 
> which in my case is about 12 hours per day. Otherwise a full backup will 
> never come to an end and restarts every morning.
> 
> My guess is that my server CPU power is not good enough for BackupPC. 
> What do you think about that? Is an AMD Sempron 2200 too slow for BackupPC?

That's a relative question.  You might as well ask if 1.6 TB is too big 
- or if there are enough hours in a day.  Adding RAM might help.  Rsync 
checksum caching should help if you haven't already enabled it.  If the 
subdirectory layout of that TB+ filesystem makes it feasible, a 
workaround would be to split the runs into smaller chunks.  If you set 
it up so each run looks like a different host (with 
$Conf{ClientNameAlias} pointing them back to the real target), you can 
skew the days that the full runs happen.
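
If you want to try those, here is a rough config.pl sketch (the 
split-host names and paths are made up for illustration, and the 
checksum caching part assumes the rsync/rsyncd transfer method):

    # In config.pl, appended after the existing RsyncArgs/RsyncRestoreArgs
    # definitions: the fixed checksum seed lets BackupPC cache block and
    # file checksums so later fulls don't have to re-read and uncompress
    # every pool copy.
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';

    # Splitting the MacPro into smaller pseudo-hosts: list e.g. macpro-users
    # and macpro-media in the hosts file, then give each a per-PC config that
    # points back at the real machine and limits what it backs up.
    # (Hypothetical paths - adjust to your layout.)

    # macpro-users.pl
    $Conf{ClientNameAlias} = 'macpro';
    $Conf{BackupFilesOnly} = { '/' => ['/Users'] };
    $Conf{FullPeriod}      = 6.97;   # start the pseudo-hosts on different
                                     # days so their fulls don't coincide

    # macpro-media.pl
    $Conf{ClientNameAlias} = 'macpro';
    $Conf{BackupFilesOnly} = { '/' => ['/Volumes/Media'] };
    $Conf{FullPeriod}      = 6.97;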

-- 
   Les Mikesell
    lesmikesell AT gmail DOT com


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/