Subject: Re: [Networker] RHEL6/XFS/NW 7.6.2
From: Francis Swasey <Frank.Swasey AT UVM DOT EDU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 2 Nov 2011 07:55:46 -0400
Sent from a medium size mobile device

On Nov 1, 2011, at 21:40, Eugene Vilensky <evilensky AT gmail DOT com> wrote:

> On Tue, Nov 1, 2011 at 8:59 AM, Francis Swasey <Frank.Swasey AT uvm DOT edu> 
> wrote:
> 
> Before I ask EMC and Red Hat to point fingers at each other, I thought I'd 
> ping you illustrious folk: do any of you know of anything indicating I 
> royally screwed up by choosing XFS, or can those of you with more XFS 
> experience suggest some tuning to "make it stop"? (Right now I'm using 
> mostly default options, the single exception being the inode64 mount 
> option, which allows inodes to be placed beyond the first 4TB of the 17TB 
> disk.)
> 
> Thanks for any pointers!
> 
> Frank, is there any opportunity for you to take the storage node offline, 
> remove networker from the picture, and put some sequential and then random IO 
> stress on the subsystem?
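
[A minimal sketch of such a standalone stress run, using dd since that is one of the tools mentioned later in this thread. The path and sizes here are hypothetical placeholders; on the real array you would point TESTFILE at the XFS mount and use a size well beyond the controller cache.]

```shell
# Sequential write/read stress sketch (hypothetical path and small size).
TESTFILE="${TMPDIR:-/tmp}/xfs_stress.dat"
COUNT=16                                  # 16 x 1 MiB for this sketch only

# Sequential write; try O_DIRECT to bypass the page cache, fall back if
# the filesystem or kernel refuses it.
dd if=/dev/zero of="$TESTFILE" bs=1M count=$COUNT oflag=direct 2>/dev/null ||
dd if=/dev/zero of="$TESTFILE" bs=1M count=$COUNT 2>/dev/null

# Sequential read back.
dd if="$TESTFILE" of=/dev/null bs=1M 2>/dev/null

stat -c '%s' "$TESTFILE"                  # 16 MiB = 16777216 bytes
rm -f "$TESTFILE"
```

For random I/O and mixed workloads, a purpose-built tool such as fio, bonnie++, or iozone gives more control than dd.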


There is, as long as I empty these 34TB of data off first.  I will get started 
on that.  I did performance testing of XFS vs. ext4 in a different environment 
before I deployed here.  Perhaps there is something unique about the QLogic 
HBAs or the NexSan itself.

> 
> I have a 24TB XFS AFTD volume right now, but few clients to stress it (aside 
> maybe from staging operations...); however, it stayed rock solid during 
> benchmarking (dd, bonnie++, iozone).

I am a firm believer in teaching rather than giving (as in fish...).  So, if 
you don't mind, would you care to explain why you chose those mount/mkfs 
options?  Perhaps there is some documentation that I have missed in my various 
searches.

> 
> file system mount options:
> rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync,inode64
> 
> create options: 
> mkfs.xfs -l version=2 -d su=128k,sw=11
> 
> to match the RAID stripe size and the number of RAID bands in the device (22 
> disks, RAID 1+0)
> 
> At boot:
> elevator=noop
> 
> and a sizeable read-ahead for sequential reads (with read-ahead completely 
> disabled on the RAID controller itself):
> /sbin/blockdev --setra 16384 /dev/sdxx
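
[For what it's worth, the arithmetic behind those numbers can be checked in a few lines of shell. The values below are taken from the quoted mail; nothing else is assumed.]

```shell
# Stripe geometry: 22 disks in RAID 1+0 leave 11 data-bearing stripe units,
# hence sw=11; su=128k is the per-disk stripe unit.
SU_KB=128
SW=11
echo "full stripe = $((SU_KB * SW)) KiB"                  # 1408 KiB

# Read-ahead: blockdev --setra counts 512-byte sectors, so 16384 sectors
# is an 8 MiB read-ahead window.
RA_SECTORS=16384
echo "read-ahead  = $((RA_SECTORS * 512 / 1048576)) MiB"  # 8 MiB
```

Aligning su/sw with the controller's stripe layout lets XFS issue full-stripe writes, and the large host-side read-ahead substitutes for the read-ahead disabled on the RAID controller.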

And above all, thank you for being willing to share your knowledge.

Frank
To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
