Subject: Re: [BackupPC-users] BackupPC_Link takes ages
From: Jonas Meurer <jonas AT freesources DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Fri, 13 Jul 2012 23:50:04 +0200
Hey again,

On 05.07.2012 16:10, Adam Goryachev wrote:
>> first, I forgot to mention an important detail: we changed from
>> ext3 to xfs on the 5TB RAID10 array which holds all BackupPC data
>> during the re-setup.
>>
>> On 05.07.2012 15:04, Adam Goryachev wrote: no, this doesn't seem
>> to be the problem. I didn't find any unexpected log entries. The
>> pc/$HOST directory shrinks considerably during the link process, so
>> I take this as evidence that pooling actually works as expected. I
>> did notice, though, that the pool directory is empty on all BackupPC
>> servers. All pooled files seem to be stored in cpool instead. Is
>> this expected?
> Compressed files are stored in cpool, so this suggests you have
> compression enabled (for file storage, not in transit). This is normal.
>> 2) Filesystem or hard drive layout/configuration (i.e. RAID level,
>> layout, chunk size, ext3 compared to jfs, etc.) As written above: we
>> moved from ext3 to xfs as we thought this might improve performance.
>
> Obviously this is the big change, so you should probably start here. I
> would suggest reading up on how to optimise xfs for backuppc, or more
> generally, for a workload with a large number of small reads and writes.

I have now set up a new server with comparable hardware resources and
migrated all backup clients to this new host. I added the clients in
three big steps, and so far the new server is keeping up pretty well.
The new server uses ext4 as its filesystem.

But I recently found another reason why the BackupPC_Link processes took
so long on the old server: the option to fill up incremental backups was
enabled for some reason. I have turned it off now. I also lowered the
maximum number of pending link processes. I will keep the old server
running for a few more weeks to see whether these changes finally make
the difference.

> I recently subscribed to the linux-raid mailing list for a backuppc
> related issue in fact, and saw a few unrelated (to me) posts regarding
> default RAID stripe size, and (I think it was) the xfs file system being
> problematic. I think there is some relationship with the FS storing some
> type of metadata every x bytes per disk, and if this data all ends up on
> the same physical disk (or in your case, pair of disks) then you end up
> with a disproportionate amount of load on a small subset of your disk
> array. It seems you are using hardware RAID which presents the disk as
> "sdk", however, the basic idea would still apply (stripe size of the
> RAID and block size of the FS).
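The alignment idea above boils down to telling mkfs.xfs the RAID geometry via its su (stripe unit) and sw (stripe width) options, so metadata isn't concentrated on one spindle pair. A sketch with assumed numbers (a 12-disk RAID10 with 256 KiB chunks, so 6 mirror pairs carry unique data; check your controller for the real values):

```shell
# Assumed geometry -- replace with your controller's actual values:
chunk_kb=256      # RAID chunk size per disk
data_disks=6      # RAID10: half the spindles hold unique data
stripe_kb=$((chunk_kb * data_disks))
echo "full stripe: ${stripe_kb} KiB"
# Print the mkfs invocation rather than running it (it would destroy data):
echo "mkfs.xfs -d su=${chunk_kb}k,sw=${data_disks} /dev/sdk"
```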

Unfortunately I haven't found the time to dig into xfs tuning yet, but
soon I'll know whether it makes a big difference, or whether the 'fill
incremental backups' option was the sole reason for my problems.

Regards,
 jonas


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/