Subject: Re: [Networker] RHEL6/XFS/NW 7.6.2
From: Eugene Vilensky <evilensky AT GMAIL DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 1 Nov 2011 20:40:39 -0500
On Tue, Nov 1, 2011 at 8:59 AM, Francis Swasey <Frank.Swasey AT uvm DOT edu> wrote:

>
> Before I ask EMC and RedHat to point fingers at each other, I thought I'd
> ping you illustrious
> folk to see if any of you know of anything that would indicate I have
> either royally screwed up
> by choosing XFS or if perhaps you have more experience with XFS and can
> suggest something in
> the way of tuning to "make it stop" (right now, I'm using mostly default
> options, with the
> single exception of adding the inode64 option to allow inodes to be placed
> beyond the first 4TB
> of the 17TB disk).
>
> Thanks for any pointers!


Frank, is there any opportunity for you to take the storage node offline,
remove NetWorker from the picture, and put some sequential and then random
I/O stress on the subsystem?

I have a 24TB XFS AFTD volume right now, but few clients to stress it
(aside maybe from staging operations...); however, it stayed rock solid
during benchmarking (dd, bonnie++, iozone).
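
For example, something along these lines (a rough sketch; /dev/sdxx is a
placeholder, /aftd stands in for the AFTD mount point, and the file sizes
should comfortably exceed the controller cache):

# sequential write and read, bypassing the page cache
dd if=/dev/zero of=/aftd/ddtest bs=1M count=65536 oflag=direct
dd if=/aftd/ddtest of=/dev/null bs=1M iflag=direct

# random read/write pass with iozone (-i 0 = write/rewrite, -i 2 = random)
iozone -i 0 -i 2 -r 128k -s 16g -f /aftd/iozone.tmp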

file system mount options:
rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync,inode64
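
which as an /etc/fstab entry would look roughly like this (device and
mount point are placeholders):

/dev/sdxx  /aftd  xfs  rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync,inode64  0 0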

create options:
mkfs.xfs -l version=2 -d su=128k,sw=11

to match the RAID stripe unit size and the number of data-bearing stripe
members in the device (22 disks, RAID1+0)
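
Spelled out against a placeholder device, that comes to:

mkfs.xfs -l version=2 -d su=128k,sw=11 /dev/sdxx

i.e. 22 drives in RAID1+0 leave 11 data-bearing stripe members, so the
full stripe XFS aligns allocations to is 11 x 128 KiB = 1408 KiB.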

At boot:
elevator=noop
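
That gets appended to the kernel line in /boot/grub/grub.conf on RHEL6, or
it can be switched per device at runtime (device name is a placeholder):

echo noop > /sys/block/sdxx/queue/scheduler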

and a sizeable read-ahead for sequential reads (with read-ahead completely
disabled on the RAID controller itself):
/sbin/blockdev --setra 16384 /dev/sdxx
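
16384 is in 512-byte sectors, i.e. an 8 MiB read-ahead. It doesn't survive
a reboot, so one simple place to reapply it is /etc/rc.local:

# /etc/rc.local
/sbin/blockdev --setra 16384 /dev/sdxx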

