Subject: Re: [Networker] /nsr iops
From: Yaron Zabary <yaron AT ARISTO.TAU.AC DOT IL>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 30 Sep 2011 20:28:58 +0300
On 09/29/2011 11:46 PM, Eugene Vilensky wrote:
On Wed, Sep 21, 2011 at 12:14 AM, Yaron Zabary <yaron AT aristo.tau.ac DOT il> wrote:
  As a side note, I don't really see how you will be able to feed 4 LTO5
drives. You didn't say which hardware and OS you will be using, but keep in
mind that LTO5 is 140 MB/s native (the 2009 draft suggested 180 MB/s), which
means you will be trying to push ~500 MB/s from your array and server. This
will require you to optimize your setup, or even replace or add hardware.
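
(Back-of-the-envelope, not from the original message: 4 drives x 140 MB/s native is roughly 560 MB/s sustained before any compression, so ~500 MB/s is, if anything, a slightly conservative target.)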

We currently have 4x LTO3 fed by a 1 Gb metro link, so they are either
shoe-shining like crazy or we multiplex so many save sets onto a
single tape that restores become a very drawn-out process.  If the
array can keep one drive fed with sequentially staged save sets, then
we've made a great improvement :)

I was hoping to keep things simple and use just a single device, but I
can only associate a single staging policy with a single adv_file
device, and hence only a single destination staging pool?  Is there a
recommendation for how many adv_file devices, and of what size, I'd
need (aside from 1 per staging destination pool)? :)

I am not sure why you think you need to assign a device to a pool (you could, but I don't see why). Our setup has four LTO-4 drives, each drive can stage to any pool, and this has never been a problem. Of course, it might happen that two or even three staging operations run concurrently, but you really need to make sure that you have enough resources to feed the drives.
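
As a rough illustration (not from the original thread; the server name is a placeholder and attribute layouts vary by NetWorker release), you can list the existing staging resources and see which devices and destination pool each one covers from an interactive nsradmin session:

  # query the backup server's staging resources (server name is hypothetical)
  nsradmin -s nsrserver
  nsradmin> . type: NSR stage
  nsradmin> print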

Our setup has a single ZFS pool over three RAIDz1-2 vdevs (48 x 1TB SATA drives) on a Sun x4500 with a single 30TB file system. There are 10 AFTDs on this file system. I never saw any issues with this setup. The tape drives are connected to a Sun T1000 (6 cores @ 1GHz). The bottleneck we have is the 1Gb link that connects the x4500 to the T1000 (there is no point in teaming multiple 1Gb NICs, because the switch that connects them cannot do L4 load balancing).


About the setup:
We are on RHEL 6.1, using XFS.

mkfs.xfs options were: -l version=2 -d su=128k,sw=11 to match the 11
spans of RAID10 and the 128KB RAID stripe (this appears to match the
128KB block size of adv_file devices according to the Tape Configuration
Guide, so that can't hurt, right?)
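
For reference, the full command would be a sketch along these lines (the device path is hypothetical; su x sw should add up to one full stripe across the 11 RAID10 spans):

  # device path is a placeholder; su=128k is the per-span stripe unit,
  # sw=11 matches the 11 RAID10 spans (full stripe = 11 x 128KB)
  mkfs.xfs -l version=2 -d su=128k,sw=11 /dev/mapper/aftd_lun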

mount options are:
rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync
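
As a sketch, the matching /etc/fstab entry would look like the line below (device and mount point are placeholders; note that osyncisdsync was still accepted on RHEL 6-era XFS but has since been removed upstream):

  # hypothetical device and mount point for the AFTD file system
  /dev/mapper/aftd_lun  /nsr/aftd  xfs  rw,noatime,nodiratime,logbufs=8,logbsize=256k,nobarrier,osyncisdsync  0 0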

I'd have to reboot with mem=256m again to verify, but the last
sequential read was about 480 MB/s and sequential O_DIRECT writes were
a little over 550 MB/s.  Hopefully we'll scale this to another 12 or 24
spindles in the not-too-distant future...
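
A rough way to reproduce that kind of number (path and size are placeholders, not the original test; booting with mem=256m just keeps the page cache from flattering the read figure):

  # sequential O_DIRECT write, then sequential O_DIRECT read, against the AFTD file system
  dd if=/dev/zero of=/nsr/aftd/ddtest bs=1M count=32768 oflag=direct
  dd if=/nsr/aftd/ddtest of=/dev/null bs=1M iflag=direct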

  That is very good. What processors do you have? How many of them?


Thank you,
Eugene

To sign off this list, send email to listserv AT listserv.temple DOT edu and type 
"signoff networker" in the body of the email. Please write to networker-request 
AT listserv.temple DOT edu if you have any problems with this list. You can access the 
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

