Networker

Subject: Re: [Networker] /nsr iops
From: Eugene Vilensky <evilensky AT GMAIL DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 29 Sep 2011 15:46:16 -0500
On Wed, Sep 21, 2011 at 12:14 AM, Yaron Zabary <yaron AT aristo.tau.ac DOT il> 
wrote:
>  As a side note, I don't really see how you will be able to feed 4 LTO5
> drives. You didn't say which hardware and OS you will be using, but keep in
> mind that LTO5 is 140MBps native (the 2009 draft suggested 180MBps) which
> means you will be trying to push ~500MBps from your array and server. This
> will require you to optimize your setup or even replace or add hardware.
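The ~500 MB/s figure above is straightforward arithmetic; a quick sketch, assuming all four drives stream at the quoted 140 MB/s native rate:

```shell
# Aggregate rate needed to keep 4 LTO-5 drives streaming
drives=4
native_mbps=140   # MB/s native per LTO-5 drive, per the figure above
echo "$((drives * native_mbps)) MB/s"   # prints "560 MB/s"
```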

We currently have 4x LTO3 drives fed by a 1 Gb metro link, so they are
either shoe-shining like crazy or we multiplex so many save sets onto a
single tape that restores become a drawn-out process.  If the
array can keep one drive fed with sequentially staged save sets, then
we've made a great improvement :)

I was hoping to keep things simple and use just a single device, but I
can only associate a single staging policy with a single adv_file
device, and hence only a single destination staging pool?  Is there a
recommendation for how many adv_file devices of what size I'd need
(aside from one per staging destination pool)? :)

About setup:
We are RHEL6.1, using XFS.

mkfs.xfs options were: -l version=2 -d su=128k,sw=11 to match the 11
spans of the RAID10 and the 128 KB RAID stripe (this appears to match
the 128 KB block size of adv_file devices according to the Tape
Configuration Guide, so it can't hurt, right?)

mount options are:
rw,noatime,nodiratime,logbufs=8,logbsize=256K,nobarrier,osyncisdsync
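Put together, the setup above would look roughly like this (the device path and mount point are placeholders for whatever the actual RAID10 LUN and staging filesystem are; nobarrier and osyncisdsync are RHEL 6-era XFS options that later kernels dropped):

```shell
# Hypothetical device path; substitute the real RAID10 LUN
DEV=/dev/sdb1

# Stripe unit 128k x stripe width 11 to match the array geometry above
mkfs.xfs -l version=2 -d su=128k,sw=11 "$DEV"

# Mount with the options listed above (valid on RHEL 6.1;
# nobarrier and osyncisdsync were removed from later kernels)
mount -o rw,noatime,nodiratime,logbufs=8,logbsize=256k,nobarrier,osyncisdsync \
    "$DEV" /nsr
```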

I'd have to reboot with mem=256m again to verify, but the last
sequential read was about 480 MB/s and sequential O_DIRECT writes were
a little over 550 MB/s.  Hopefully we'll scale this to another 12 or 24
spindles in the not-too-distant future...
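For reference, a minimal way to repeat numbers like those with dd (file path and size are illustrative; oflag=direct/iflag=direct exercise the O_DIRECT path, and booting with mem=256m, or dropping caches as below, keeps the page cache from inflating the read figure):

```shell
# Sequential O_DIRECT write test; path and size are illustrative
dd if=/dev/zero of=/nsr/ddtest bs=1M count=10240 oflag=direct

# Drop the page cache so the read test hits the disks, not RAM
echo 3 > /proc/sys/vm/drop_caches

# Sequential O_DIRECT read test of the same file
dd if=/nsr/ddtest of=/dev/null bs=1M iflag=direct
```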

Thank you,
Eugene

