Networker

Re: [Networker] Query in Staging from adv_file, X4500 experiences

2008-08-21 15:43:02
Subject: Re: [Networker] Query in Staging from adv_file, X4500 experiences
From: "Ronquillo, Merill C CIV NFELC, IT41" <merill.ronquillo AT NAVY DOT MIL>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 21 Aug 2008 12:39:53 -0700
> -----Original Message-----
> From: EMC NetWorker discussion 
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Attila Mester
> Sent: Thursday, August 21, 2008 8:30
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Query in Staging from adv_file, 
> X4500 experiences
> 
> Let me share with you some of my experiences with X4500 
> performance as a storage node.
> 
> We recently ran a benchmark with the following configuration:
> - 8 NW clients with Sol.10, each connected with 1 GbE to a 
> Cisco network switch
> - X4500 with Sol.10 and 48 x 1TB disks as the storage node, with 6 RAIDZ 
> pools (2 disks for OS, 4 spares, and 6 x (6+1) RAIDZ pools)
> - each RAIDZ pool configured as an AFTD, total net usable 36TB out of 48
> - Sun 10 GbE network card connected to the Cisco switch
> - 2 dual-channel 4Gb FC HBAs connected to 4 LTO4 drives in a library
> 

Just curious about your config, since we also have X4500s set up as
storage nodes. Are you using any of ZFS's other features, such as
compression, reservations, or quotas? Our config is (a rough command
sketch follows the list):

4 x X4500 (1 backup server, 3 storage nodes)
-1 zpool made up of 4 x (9+2) RAIDZ2 vdevs and 2 hot spares
-4 x 1Gbps Ethernet trunked (using dladm)
-5 ZFS filesystems (one for each of our NetWorker media pools)
  -no quotas
  -no reservations
  -compression on
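
Roughly, each node is built along these lines; the pool, filesystem, and
disk names below are illustrative (and the two boot-disk slots are
assumed), so treat it as a sketch rather than our exact build:

  # Trunk the four on-board GbE ports into one aggregation (Solaris 10
  # dladm syntax), then plumb aggr1 with ifconfig as usual
  dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1

  # One zpool: four 9+2 RAIDZ2 vdevs plus two hot spares; the two boot
  # disks (assumed here to be c5t0d0 and c5t4d0) stay out of the pool
  zpool create nsrpool \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
    raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
    raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t1d0 c5t2d0 c5t3d0 c5t5d0 \
    spare c5t6d0 c5t7d0

  # One ZFS filesystem per NetWorker media pool (names are made up here),
  # compression on, no quotas or reservations
  for fs in daily weekly monthly clone archive; do
    zfs create nsrpool/$fs
    zfs set compression=on nsrpool/$fs
  done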

We're writing manual staging scripts so we can control when the read-only
(RO) devices are made available for restores, while still keeping staging
policies as a catch-all. With ZFS compression (we get 2-3x per
filesystem!), it would be interesting to see how NW deals with staging
watermarks.
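
In case it's useful, the manual staging we're scripting boils down to
something like the sketch below; the pool names are examples, and you'd
want to check the mminfo query and nsrstage options against your NetWorker
release:

  #!/bin/sh
  # Drain complete save sets from an AFTD pool to tape, on our schedule.
  SRC_POOL="DiskDefault"      # media pool living on the AFTDs (example name)
  DEST_POOL="TapeDefault"     # tape pool to stage to (example name)

  # List complete save sets in the source pool; sed strips the mminfo
  # header line. An age constraint (e.g. on savetime) can be added to -q.
  SSIDS=`mminfo -a -q "pool=$SRC_POOL,!incomplete" -r ssid 2>/dev/null | sed '1d' | sort -u`
  [ -z "$SSIDS" ] && exit 0

  for ssid in $SSIDS
  do
    # -b destination pool, -m migrate (remove the AFTD copy once the
    # tape clone is written), -S take the argument as a save set ID
    nsrstage -b "$DEST_POOL" -m -S $ssid
  done

The idea is to run this ourselves after the backup window and leave the
regular staging policy's high/low water marks in place as the safety net.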

Also, have you had any stability issues with the X4500s? They seem to hang
at random during really heavy I/O (200+ MB/s), and the only fix is a
reboot. I've heard this may be related to their Marvell disk controllers,
which Sun is still working to fix.

-merill

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER