Subject: Re: [Networker] Networker 7.2.2 on Sun X4500 using AFTD and zfs
From: Ian G Batten <ian.batten AT UK.FUJITSU DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 14 Apr 2008 16:55:19 +0100
On 14 Apr 08, at 11:40, tkimball wrote:

> Ian, in regard to your question: our own concern is with ZFS in general. Though we're happy with its performance in a Development environment, we're leery of implementing it in production until we have it more widely deployed on the more 'stressful' (i.e. everyone runs something all the time) test servers and see what happens.

There's a performance hit at >95% occupancy under high load; otherwise it has been solid for us for some years.



> We also don't like the idea that a corruption of the 'master indexing file' held on the boot disk will effectively wipe our entire configuration without recovery (we use our adv_file disks as a sort of 'data warehouse' for certain types of backup, so this is a showstopper).

I don't follow. Are you referring to /etc/zfs/zpool.cache? That file only changes when you change the configuration of a zpool (i.e. adding or removing disk devices), is only a few kilobytes in size, and is trivially easy to back up.

# pwd
/etc/zfs
# ls -ltr
total 16
-rw-r--r--   1 root     root        8024 Feb 29 07:56 zpool.cache
# zfs create pool1/throwaway
# ls -ltr
total 16
-rw-r--r--   1 root     root        8024 Feb 29 07:56 zpool.cache
# zfs destroy pool1/throwaway
#
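For what it's worth, keeping a copy by hand is a one-liner; something along these lines (the destination directory is just an example, not anything ZFS mandates):

# cp -p /etc/zfs/zpool.cache /var/backup/zpool.cache.`date +%Y%m%d`
#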

Once you have the pools, the filesystems within them are recorded in the pools themselves, not in zpool.cache.

Were you to lose this file, `zpool import' searches all the devices in /dev/dsk and attempts to stitch back together the pools that it finds there. You can even import pools that have been destroyed. Read, for example:

http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s06.html
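The commands themselves are along these lines (from memory, so do check the guide above; `pool1' is just an example name here):

# zpool import            (scans /dev/dsk and lists any importable pools)
# zpool import pool1      (imports a pool by name)
# zpool import -D         (lists pools that have been destroyed)
# zpool import -D pool1   (recovers a destroyed pool)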



> Lastly, from our own testing here, RAID-5 on hardware is usually more efficient on our SPARC servers than any software-based implementation (the sole exception, paradoxically, being Veritas VxVM on a FC JBOD).

You can (indeed, we do) use ZFS on hardware RAID-5. We just create LUNs and then concatenate them. pool1 and pool2 span 48-disk DotHill boxes, each sub-device being a RAID 5 group (a quick creation sketch follows the status output below):

# zpool status
  pool: onboard
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        onboard       ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s4  ONLINE       0     0     0
            c1t0d0s4  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t1d0s4  ONLINE       0     0     0
            c1t1d0s4  ONLINE       0     0     0

errors: No known data errors

  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        pool1                                      ONLINE       0     0     0
          c7t600C0FF0000000000992DD1E11EA7D00d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DD2D599A5300d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DD02D3066400d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DD39CBE66800d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DD152E5BE700d0    ONLINE       0     0     0

errors: No known data errors

  pool: pool2
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        pool2                                      ONLINE       0     0     0
          c7t600C0FF0000000000992DB2E5C0D7200d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DB4F24F7AE00d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DB6B49A6CB00d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DB037012C900d0    ONLINE       0     0     0
          c7t600C0FF0000000000992DB149204DE00d0    ONLINE       0     0     0

errors: No known data errors
#
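For reference, building a pool like that is just a matter of handing zpool the LUNs with no mirror/raidz keyword, so ZFS stripes the data across them; growing it later is a `zpool add'. The device names below are placeholders, not our real LUNs:

# zpool create pool1 c7tLUN0d0 c7tLUN1d0 c7tLUN2d0
# zpool add pool1 c7tLUN3d0
#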




ian

