Subject: Re: [Networker] /nsr iops
From: Yaron Zabary <yaron AT ARISTO.TAU.AC DOT IL>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Sat, 1 Oct 2011 17:52:06 +0300

On 09/30/2011 11:47 PM, Eugene Vilensky wrote:
> On Fri, Sep 30, 2011 at 12:28 PM, Yaron Zabary <yaron AT aristo.tau.ac DOT il> wrote:

>> I am not sure why you think you need to assign a device to a pool (you
>> could, but I don't see why). Our setup has four LTO-4 drives and each drive
>> can stage to any pool; this has never been a problem. Of course, it might
>> happen that two or even three staging operations run concurrently, so you
>> really need to make sure that you have enough resources to feed the drives.

> Please see the two images from NMC here: http://imgur.com/a/WtY9O
>
> First I create a staging policy for the /mnt/adv_file device, which is
> in "3 Month Pool", to stage to the same "3 Month Pool".
>
> Then I try to create another stage for the same device into the "2 Week Pool".
>
> I will re-read the guide; I am probably not understanding this
> properly. I want to associate the /mnt/adv_file/ device with every
> pool in my datazone and have all backups prefer adv_file. Then I want
> to stage to different tape pools (which are segregated by retention
> and browse policies in our environment) based on the group....

  I have ten AFTD devices (see below), which all sit in the same 30TB file system. They belong to different (disk) pools based on their retention policy. Each AFTD has its own staging policy (actually, some policies cover more than a single device) which writes to a tape pool, again chosen by retention policy. I think your mistake is not creating multiple devices: you are trying to use a single device with multiple staging policies. A rough sketch of the per-pool setup follows the listing below.

# ls -l /pool/DBO/
total 72
drwxr-xr-x 104 root     root         106 Sep 21 03:25 DBO1m
drwxr-xr-x 104 root     root         106 Sep 20 22:00 DBODefault
drwxr-xr-x 104 root     root         106 Oct  1 16:00 DBOERPM
drwxr-xr-x 104 root     root         106 Sep 20 20:00 DBOExchange
drwxr-xr-x 104 root     root         106 Sep 26 00:43 DBOLarge
drwxr-xr-x 104 root     root         106 Sep 21 00:43 DBOLarge2
drwxr-xr-x 104 root     root         106 Sep 22 00:50 DBOMail
drwxr-xr-x 104 root     root         106 Sep 20 20:20 DBOWinNovTel
drwxr-xr-x 104 root     root         106 Sep 21 00:32 Mail2
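
  Something along these lines, one AFTD and one stage resource per retention
tier (the nsradmin attribute names are from memory of the 7.x resource
schema, and the pool and device names below are made up; check them on your
server before pasting). Put the resources in a file and feed it to
nsradmin -i:

create type: NSR device; name: "/pool/DBO/DBOMail"; media type: adv_file
create type: NSR stage; name: "Stage DBOMail"; enabled: Yes; devices: "/pool/DBO/DBOMail"; destination pool: "Mail Tape"

# nsradmin -i stage-setup.txt

  Then label the AFTD's volume into its disk pool so save sets land on it:

# nsrmm -l -b "Mail Disk" -f /pool/DBO/DBOMail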



>> Our setup has a single ZFS pool over three RAIDZ1-2 vdevs (48 x 1TB SATA
>> drives) on a Sun x4500, with a single 30TB file system. There are 10 AFTDs
>> on this file system. I have never seen any issues with this setup.

> I envy how easy it must be to create file systems on ZFS :).
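
  It really is just a couple of commands (the disk names are made up and I
am assuming RAIDZ2 vdevs here; adjust for the real layout):

# zpool create pool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
               raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
               raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zfs create pool/DBO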

>> The tape drives are connected to a Sun T1000 (6-core @ 1GHz). The
>> bottleneck we have is the 1Gb link that connects the x4500 to the T1000
>> (there is no point in teaming multiple 1Gb NICs because the switch that
>> connects them cannot do L4 load balancing).
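
  For what it is worth, the host side of such a team on Solaris is a single
command (interface names made up):

# dladm create-aggr -P L4 -d e1000g0 -d e1000g1 1

but the switch must hash on L4 as well, and a single stream still cannot
exceed one link, which is why we never bothered.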


> About our setup:
> We are on RHEL 6.1, using XFS.

> mkfs.xfs options were: -l version=2 -d su=128k,sw=11 to match the 11
> spans of the RAID10 and its 128KB stripe unit (which also appears to match
> the 128KB block size of adv_file devices according to the Tape
> Configuration Guide; that can't hurt, right?).
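>
> In full, the command was of the form (the device name here is a
> placeholder):
>
> # mkfs.xfs -l version=2 -d su=128k,sw=11 /dev/mapper/mpatha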

> mount options are:
> rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync
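>
> i.e. an fstab entry of the form (the device path is a placeholder):
>
> /dev/mapper/mpatha /mnt/adv_file xfs rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync 0 0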

> I'd have to reboot with mem=256m again to verify, but the last
> sequential read was 480MB/s and sequential O_DIRECT writes were a little
> over 550MB/s. Hopefully we'll scale this to another 12 or 24
> spindles in the not-too-distant future...
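>
> (If anyone wants to repeat the measurement, plain dd with O_DIRECT is
> enough; the file name and size here are made up:
>
> # dd if=/dev/zero of=/mnt/adv_file/ddtest bs=1M count=32768 oflag=direct
> # dd if=/mnt/adv_file/ddtest of=/dev/null bs=1M iflag=direct
>
> and booting with mem=256m keeps the page cache from flattering the read
> number.)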

>> That is very good. What processors do you have? How many of them?

> Two 6-core Intel Westmere CPUs with 24GB of triple-channel 1333MHz RAM,
> connected via 6Gbps SAS to two Dell MD1220 shelves, multipathed. It is
> replacing a Sun Fire v40z with dual 1.8GHz single-core Opterons and
> 4GB of RAM.

  That should be enough to drive your four LTO-5 drives.
