Subject: Re: [BackupPC-users] Poor BackupPC Performance
From: Frédéric Massot <frederic AT juliana-multimedia DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Tue, 02 Aug 2011 10:19:11 +0200
On 26/07/2011 10:41, Pedro M. S. Oliveira wrote:
> Hello,
> I've hosts where I reach 20 MB/s over Gb links and about 10 MB/s over
> Fast Ethernet.
> You may want to look at the BackupPC fs and also at the compression
> settings.
> Usually I have better performance with BackupPC than with Bacula. I
> also have BackupPC running on enterprise servers with 8 GB RAM, hw
> RAID, and SMP. The host OS is generally SLES, CentOS, Red Hat. The
> fs is ext3 or ext4, but with some tweaks on the fs. Don't remember all and
> can't see it as I'm on my mobile.

Hi,

Following these questions about performance and file system tuning, I 
will try to write a summary.

Comments and additions are welcome.

I will take a fairly common setup as an example: a server with several 
partitioned disks, the partitions assembled with software RAID (mdadm), 
the RAID array used as an LVM physical volume in a volume group, and 
logical volumes carrying the file systems.


- Disks

On the disks, the read-ahead can be configured; it controls how much 
data the system reads in advance.

To read the value:
sudo blockdev --getra /dev/sda
cat /sys/block/sda/queue/read_ahead_kb

To change the value:
sudo blockdev --setra 2048 /dev/sda              # value in 512-byte sectors (2048 = 1024 KB)
echo 1024 > /sys/block/sda/queue/read_ahead_kb   # value in KB

To make the setting persistent across reboots, you can also try 
/etc/sysfs.conf from the sysfsutils package (Debian); a sketch follows 
below.

I don't know of a rule of thumb for choosing this value.
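As a sketch (the value is only an example, not a recommendation), a 
persistent setting in /etc/sysfs.conf could look like this:

# /etc/sysfs.conf (sysfsutils) -- attribute paths below /sys, applied at boot
block/sda/queue/read_ahead_kb = 1024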

For SATA drives, NCQ can be enabled or disabled; according to various 
documents, this may or may not improve performance.

To read the value:
cat /sys/block/sda/device/queue_depth

To disable NCQ:
echo 1 > /sys/block/sda/device/queue_depth

The maximum value depends on the disk and the chipset, as configured by 
the BIOS. It can be seen in the kernel boot log (/var/log/dmesg).
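For example (a sketch; the exact log messages depend on the kernel and 
the driver), the negotiated NCQ depth can be searched for in the boot log:

grep -i ncq /var/log/dmesg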


- Partitions

It is preferable to create the partitions taking into account the data 
block size used by the system, usually 4 KB. This prevents the system 
from having to split and reassemble blocks to align them with the 
partition geometry.

Old versions of fdisk did not take the physical characteristics of the 
disks into account: the first partition started at sector 63 and the 
partition sizes ignored the data block size. Since version 2.17, fdisk 
uses the libblkid library to discover the disk topology: the first 
partition begins at sector 2048 and partition sizes are multiples of 
the data block size. This also handles the newer disks with 4 KB sectors.

The alignment may affect performance.
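To check the alignment, a sketch assuming GNU parted is installed 
(1 is the partition number):

sudo parted /dev/sda unit s print          # start sectors should fall on the 2048-sector boundary
sudo parted /dev/sda align-check optimal 1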

Drives can be re-partitioned by replacing them one by one with slightly 
bigger disks carrying partitions of the same size or slightly larger. 
The size of the RAID array is then adjusted to the new partitions, 
followed by the physical volume, the logical volumes and finally the 
file systems (see the sketch below).
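A minimal sketch of that chain, reusing the example names /dev/md0 and 
/dev/vgX/lvY used in this message and assuming an ext3/ext4 file system:

sudo mdadm --grow /dev/md0 --size=max      # grow the array onto the larger partitions
sudo pvresize /dev/md0                     # grow the LVM physical volume
sudo lvextend -l +100%FREE /dev/vgX/lvY    # grow the logical volume
sudo resize2fs /dev/vgX/lvY                # grow the ext3/ext4 file system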


- RAID array

As with the disks, the read-ahead of the RAID array can be configured; 
it controls how much data the system reads in advance.

To read the value:
sudo blockdev --getra /dev/md0
cat /sys/block/md0/queue/read_ahead_kb

To change the value:
sudo blockdev --setra 2048 /dev/md0
echo 1024 > /sys/block/md0/queue/read_ahead_kb

As above, /etc/sysfs.conf from the sysfsutils package (Debian) can make 
this persistent.

I don't know of a rule of thumb for choosing this value.


When creating the RAID array, the choice of chunk size matters. From 
what I have read, 64 KB is used for RAID 5 and 6, and 256 or 512 KB for 
RAID 0 and 10.

The current value can be seen with:
cat /proc/mdstat
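For example, a sketch of setting the chunk size when creating the array 
(the level, the number of devices and the device names are only examples):

sudo mdadm --create /dev/md0 --level=10 --chunk=512 --raid-devices=4 /dev/sd[abcd]1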


The size of the stripe cache is also important for RAID 5 and 6.

To read the size:
cat /sys/block/md0/md/stripe_cache_size

To see the value used:
watch cat /sys/block/md0/md/stripe_cache_active

To change the value:
echo 1024 > /sys/block/md0/md/stripe_cache_size

As above, /etc/sysfs.conf from the sysfsutils package (Debian) can make 
this persistent.

I don't know of a rule of thumb for choosing this value.


- LVM

For LVM, the read-ahead can also be configured; it controls how much 
data the system reads in advance.

To see all the values for all logical volumes:
sudo lvdisplay

Normally the "Read ahead sectors" line shows "auto", meaning that the 
kernel sets the value automatically. However, it is not really dynamic; 
the value seems to be set at boot or when the logical volume is created.
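To set it explicitly on a logical volume (the setting is then stored in 
the LVM metadata), lvchange can be used; a sketch reusing the 
placeholder names from this message:

sudo lvchange --readahead 2048 /dev/vgX/lvY    # in sectors, or "auto"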

To read the value:
sudo blockdev --getra /dev/dm-0        # or /dev/vgX/lvY
cat /sys/block/dm-0/queue/read_ahead_kb

To change the value:
sudo blockdev --setra 2048 /dev/dm-0   # or /dev/vgX/lvY
echo 1024 > /sys/block/dm-0/queue/read_ahead_kb

As above, /etc/sysfs.conf from the sysfsutils package (Debian) can make 
this persistent.

I don't know of a rule of thumb for choosing this value.


- File System

For the file system, we must look at the data block size and the mount 
options. For RAID 4, 5, 6 and 10, we should also look at the RAID 
stride and the RAID stripe width.

According to various documents, it is better to have a data block size 
of 4 KB.

The active options can be viewed with:
cat /proc/mounts

Ext4 is automatically mounted with extended attributes and ACLs 
enabled; these options can be disabled by adding nouser_xattr and noacl 
to the mount options in /etc/fstab.

noatime or relatime should also be used.
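A sketch of an /etc/fstab line combining these options (the logical 
volume and the mount point, here the BackupPC pool directory, are only 
examples):

/dev/vgX/lvY  /var/lib/backuppc  ext4  noatime,nouser_xattr,noacl  0  2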

The RAID stride and RAID stripe width values are set automatically when 
creating XFS file systems. For the ext2/ext3/ext4 family, these values 
are correctly initialized since version 1.41.10 of e2fsprogs; they are 
computed from the disk topology using the blkid library.

Values can be read with:
sudo tune2fs -l /dev/dm-0

The rule of thumb to calculate these values is:

stride = chunk size / block size
For RAID 5: stripe width = stride * (number of disks - 1)
For RAID 10: ???

These values can be changed after creating the filesystem with the 
tune2fs command:

sudo tune2fs -E stride=n,stripe-width=m /dev/dm-0
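A worked example of the rule above, assuming a 4-disk RAID 5 with a 
64 KB chunk size and a 4 KB block size: stride = 64 / 4 = 16 and 
stripe width = 16 * (4 - 1) = 48. At creation time this would be, as a 
sketch:

sudo mkfs.ext4 -b 4096 -E stride=16,stripe-width=48 /dev/vgX/lvY

The same values can be applied afterwards with the tune2fs command above.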


I hope I have not made too many mistakes; if you have any additions or 
rules of thumb, please post them.


-- 
==============================================
|              FRÉDÉRIC MASSOT               |
|     http://www.juliana-multimedia.com      |
|   mailto:frederic AT juliana-multimedia DOT com   |
===========================Debian=GNU/Linux===
