Subject: Re: [BackupPC-users] NAS performance over NFS
From: dan <dandenson AT gmail DOT com>
To: "Holger Parplies" <wbppc AT parplies DOT de>
Date: Tue, 2 Sep 2008 20:55:28 -0600
I have run some thorough tests with various NAS/SAN setups and network filesystems like NFS and SMB/CIFS, so here is a bit of what I know.

local filesystem > iSCSI > AoE > NFS > SMB/CIFS

A local filesystem will basically always be faster until you move up to a many-drive SAN with bonded Gigabit NICs. One Gigabit NIC is worth about 125MB/s raw (1,000 megabits per second divided by 8 bits per byte = 125 megabytes per second); the heavier the protocol, the more overhead. You will struggle to get 90MB/s on NFS or SMB but can get 116MB/s or so on iSCSI or AoE.

Many factors other than raw bandwidth affect performance. The most important is I/O, which is slowed by the round-trip time of your network: ping the NAS, and that time in ms is added to every single I/O.
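For example, a rough way to gauge that penalty (the hostname here is hypothetical):

ping -c 10 nas.example.com

If the average round trip comes back as 0.2ms, every synchronous operation pays at least that on top of the disk's own latency, which caps you at roughly 1/0.0002 = 5,000 operations per second no matter how fast the array is.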

NFS has HUGE differences between configurations. Make sure you have the async option set; this is huge.
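For example (server name, network and paths are hypothetical), async on the server side goes in /etc/exports:

/export/backuppc  192.168.1.0/24(rw,async,no_subtree_check)

and on the client it can be passed as a mount option:

mount -t nfs -o rw,async nas.example.com:/export/backuppc /mnt/backuppc

Be aware that async trades safety for speed: writes are acknowledged before they hit disk, so a server crash can lose in-flight data.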

Another HUGE factor for NFS is that it must write through a filesystem, which means every transaction is handled first by the NFS server and then by the filesystem, essentially doubling the system resources needed per transaction. NFS is also fairly old, so it has no advanced mechanisms to hide these issues.

I run some XenServers and have both NFS and iSCSI data stores. The iSCSI stores are monstrously faster; I don't mean 50% faster, more like 2-3x as fast. In raw transfer speed, I can get about 210MB/s on iSCSI over a bonded pair of Gigabit NICs (Intel) and only 160MB/s on NFS. The difference is that random I/O on iSCSI is no issue and performance drops about like a local disk's would, but random I/O on NFS will drop performance down to 20MB/s.
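For reference, a minimal Linux bonding sketch (interface names, address and mode are assumptions; your distribution's network scripts usually wrap this):

modprobe bonding mode=802.3ad miimon=100
ip addr add 192.168.10.2/24 dev bond0
ip link set dev bond0 up
ifenslave bond0 eth0 eth1

802.3ad (LACP) needs a switch that supports it; balance-alb works on a dumb switch but balances traffic less evenly.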

I also happen to know that BackupPC is I/O heavy. I have only run BackupPC on an NFS share when testing something, since my XenServers use iSCSI for production and NFS for testing to keep production data isolated. Because of this I can't tell you how fast it was for me, but in other situations NFS was just way, way too slow for heavy I/O.


Also, I run home directories on NFS for my cluster. They hold email in Maildir format as well as basic configuration for some other user-specific services. NFS works great for this, BUT the server that holds the NFS shares often has system loads of 2 or 3 just during heavy email times. It is a 2GHz P3 with a Linux RAID1 mirror on recent SATA drives, serving about 250 email users, some of them heavy.


So, to wrap this up, here are some suggestions:
1) Get good Gigabit NICs. Intel PRO/1000s are pretty nice without breaking the bank. Don't use any Realtek junk, and 80% of Broadcoms are slow.
2) async, not sync! (see the export and mount examples above)
3) Adjust both the NFS server's and the client's MTU to match your networking hardware (see the sketch after this list).
4) Adjust rsize and wsize, the read and write block sizes (also shown after the list).
5) Make sure you use a good Gigabit switch, Cisco or HP. Dells are generally slower; Netgear, Linksys and the like are junk.
6) Choose a fast filesystem on the NFS server. Use XFS or JFS if you can; they are generally faster for NFS. ext3 is a good choice for stability but is not terribly fast. reiserfs is not a good choice for NFS because all of its strengths are erased by NFS's I/O weaknesses.
7) See if you can do iSCSI or AoE; they are much faster. AoE is dead simple to set up and iSCSI is not too bad either. iSCSI can be routed over TCP/IP while AoE cannot, BUT AoE does not need TCP/IP or an IP address, and it is faster in raw bandwidth though slower in seeks and I/O than iSCSI.
8) Put the NFS server on an isolated network if possible. If you have good switch hardware you can VLAN off a few ports, or even VLAN by MAC on Cisco, so your NFS transfers don't compete with other traffic.
9) Increase the kernel socket receive-buffer limits that nfsd uses. Add this to the beginning of the NFS init script:

echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max

and, in case other services depend on these defaults, restore them at the end of the script:

echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max

10) Consider using autofs on the BackupPC machine to mount the NFS share. This helps avoid stale NFS handles and client lockups if the NFS server stalls; a sketch follows below.
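To illustrate points 3 and 4, a minimal sketch; the interface name, server name and export path are hypothetical, and the right rsize/wsize depends on your kernel and network, so treat the numbers as starting points:

ip link set dev eth1 mtu 9000    # on both server and client; the switch must pass jumbo frames

mount -t nfs -o async,rsize=32768,wsize=32768 nas.example.com:/export/backuppc /mnt/backuppc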

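For point 10, a minimal autofs sketch (paths and server are again hypothetical). In /etc/auto.master:

/mnt  /etc/auto.nfs  --timeout=60

and in /etc/auto.nfs:

backuppc  -fstype=nfs,rw,async  nas.example.com:/export/backuppc

The share then appears at /mnt/backuppc on first access and is unmounted after 60 idle seconds, so a stalled server only bites while something is actually using the mount.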
good luck



On Tue, Sep 2, 2008 at 6:00 PM, Holger Parplies <wbppc AT parplies DOT de> wrote:
Hi,

Stephen Vaughan wrote on 2008-09-03 09:26:57 +1000 [Re: [BackupPC-users] NAS performance over NFS]:
> That's cool, I'll work on it. The NAS is fairly big, so I'm thinking if I
> turn compression completely off it should improve backup times. At the
> moment it's set to 3, but I don't really need the compression so I might
> just turn it off...

as your bottleneck seems to be data transfer between BackupPC server and NAS
device, I would expect increasing the amount of data to be transferred to
actually slow things down rather than speed them up. Remember that compression/
uncompression is done by the server, not the NAS device. Compression is
probably rather expensive in terms of server CPU usage, but BackupPC tries to
avoid compression in favour of uncompression where possible.
I'm not sure if compressed files need to be uncompressed in any case in order
to determine their length - that would definitely be slower than a stat on an
uncompressed file. A quick check shows that the file size is in the attrib
file, so it's probably not the case.
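(For completeness, the knob in question lives in BackupPC's config.pl; setting

$Conf{CompressLevel} = 0;

disables compression for new backups, while anything already in the compressed pool stays as it is.)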

I'm surprised that no one seems to find using a dedicated network segment
between BackupPC server and NAS device worthwhile. While it may not solve the
problems caused by bad NFS parameters, and the effect depends on your current
network topology, you would in any case limit access to the NAS device, which
is a good thing (if you want that, that is :), and you can use the highest
link speed the NAS device supports (presuming you have/add an appropriate NIC
in the server, of course) without being limited by the rest of your network.

Your NAS device does not, by any chance, support iSCSI or ATA over Ethernet? :)

Regards,
Holger


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/