BackupPC-users

Re: [BackupPC-users] Yet another filesystem thread

From: "C. Ronoz" <chronoz AT eproxy DOT nl>
To: backuppc-users AT lists.sourceforge DOT net
Date: Thu, 30 Jun 2011 12:54:44 +0200
>> What filesystem should I use? It seems ext4 and reiserfs are the only viable 
>> options. I just hate the slowness of ext3 for rm -rf hardlink jobs, while 
>> xfs and btrfs seem to be very unstable.
>>
>> - How stable is XFS?
>> - Is reiserfs (much) better at hard-link removal?
>> - Is reiserfs (much) less stable compared to ext4?
>>
>> BackupPC seems to recommend reiserfs although many sites say it's still an 
>> unstable file system that does not have much lifespan left.
>>
>> My first back-up has been taking 12 hours for a small server and it's still 
>> processing... there's only a few gigabytes of data on the Linux machine. 
>> There should be more than enough power as rsnapshot back-ups always were 
>> done in quick fashion. Even Bacula was able to do back-ups in less than 10 
>> minutes.
>
>If you are backing up a few gigabytes and it is taking 12 hours, then
>ext3 is not your problem.  It may be slower than some of the other
>options, but it is not THAT much slower.  My largest backup is 300GB and
>a full backup takes 15 hours.  Both the client and server are running ext3.
>
>How much memory do you have on the backup server?  What backup method
>are you using?
The server has 1GB of memory, but a pretty powerful processor. The load looks 
pretty disastrous too, though: http://images.codepad.eu/v-ISmSn6.png

I found out that BackupPC is ignoring my excludes, though, while I have a 15GB 
/pub partition. That could explain why the run takes longer, but it should 
still finish within an hour. Rsnapshot runs were always lightning fast, and 
the network is 1Gbit. 

$Conf{BackupFilesOnly} = {};
$Conf{BackupFilesExclude} = {'/proc', '/blaat', '/pub', '/tmp'};
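
A possible reason the excludes are ignored: per the BackupPC documentation, 
$Conf{BackupFilesExclude} takes either a plain array ref of paths, or a hash 
mapping a share name (or '*' for all shares) to an array ref. A bare hash of 
paths like the one above is parsed as key/value pairs, so the entries don't 
match anything. A sketch of the documented forms:

```perl
# Either a plain array ref, applied to every share:
$Conf{BackupFilesExclude} = ['/proc', '/blaat', '/pub', '/tmp'];

# ...or a hash mapping a share name ('*' = all shares) to an array ref:
$Conf{BackupFilesExclude} = {
    '*' => ['/proc', '/blaat', '/pub', '/tmp'],
};
```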

>You can just delete the directory and remove the test host from your
>hosts file.
That will only remove the hardlinks, not the original files in the pool?
Running du -h --max-depth=2 on /var/lib/backuppc (cpool and pc) does not 
complete within 20 minutes, so I can't show a listing.

>The space should be released when BackupPC_Nightly runs.  If you want to
>start over quickly, I'd make a new filesystem on your archive partition
>(assuming you did mount a separate partition there, which is always a
>good idea...) and re-install the program.

I ran /usr/share/backuppc/bin/BackupPC_nightly 0 255 after removing all but 
one small host, but there are still lots of files left.
root@backuppc:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             19909500   1424848  17473300   8% /
tmpfs                   513604         0    513604   0% /lib/init/rw
udev                    508852       108    508744   1% /dev
tmpfs                   513604         0    513604   0% /dev/shm
/dev/sdb1            206422036  24155916 171780500  13% /var/lib/backuppc
root@backuppc:~#
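
For what it's worth, running BackupPC_nightly by hand while the daemon is up 
can race with the running server. The usual way (assuming a Debian-style 
install path; adjust to your layout) is to ask the daemon to schedule it:

```shell
# Ask the running BackupPC daemon to queue the nightly pool cleanup
# rather than invoking BackupPC_nightly directly:
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run
```

Also note that pool files are only removed once their link count drops to 1, 
so space is freed by the nightly run after the per-host links are gone.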

-- 

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/