Networker

Subject: Re: [Networker] e4500 Max I/O capacity issue as Legato Storage Node
From: "Reed, Ted G II [CC]" <ted.reed AT MAIL.SPRINT DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 4 Apr 2003 15:49:51 -0600
Hmmm.
Matched pair of e4500s (as described below)
standard e450 for firewall servers (doesn't really matter in design)
v880 Master instance (little/no client data movement - ONLY the brain, with
fast CPUs (1+GHz))
Current Load:  ~500 clients, 8TB/night (staggered full/incr pattern for load
balance)
Anticipated Year End Load:  ~700-750 clients, 12+TB/night

We are seeking a <8hr backup window given current load, so when it grows 50%
by EOY, we can maintain our 'true' 12hr window standard.  Basic thought is
(Total Environment >= 1TB/hr average).
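As a quick sanity check on those figures (my own sketch, not from the original mail), both the current target (8TB in under 8 hours) and the year-end case (12+TB in the 12-hour window) work out to the same ~1TB/hr average:

```python
# Sketch: verify the ~1 TB/hr average implied by the backup windows above.
# (My arithmetic, not part of the original thread.)

GB_PER_TB = 1024  # binary convention; decimal (1000) changes little here

def required_gb_per_hr(total_tb, window_hr):
    """Average throughput (GB/hr) needed to move total_tb within window_hr."""
    return total_tb * GB_PER_TB / window_hr

current = required_gb_per_hr(8, 8)      # current load, <8 hr target
year_end = required_gb_per_hr(12, 12)   # anticipated EOY load, 12 hr window
print(current, year_end)  # both 1024.0 GB/hr, i.e. ~1 TB/hr
```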

STK 9940B is fibre ready and accepts 2Gb fibre connects native.  I will zone
through a 2Gb switch (McData is the current lead contender) and am
considering dynamic drive sharing (which would allow expansion allocation to
3 drives per 2Gb HBA, using underutilized master/firewall support drives in
the STK silo).
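A quick check of that 3-drives-per-HBA allocation (my own arithmetic sketch, using the 9940B and 2Gb-fibre rates quoted later in this thread): three drives fit at the compressed rate, though at the quoted burst rate they would slightly oversubscribe one HBA.

```python
# Sketch: do three 9940B drives fit on one 2Gb HBA?
# Rates (MB/sec) are the estimates from the footnotes in this thread.
HBA_MB_S = 250         # ~2Gb fibre, little protocol overhead
DRIVE_COMPRESSED = 60  # per 9940B drive, compressed
DRIVE_BURST = 90       # per 9940B drive, burst

drives = 3
print(drives * DRIVE_COMPRESSED <= HBA_MB_S)  # True: 180 <= 250, fits
print(drives * DRIVE_BURST <= HBA_MB_S)       # False: 270 > 250, burst oversubscribes
```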

--Ted



-----Original Message-----
From: Bob Schuknecht [mailto:Bob_Schuknecht AT hilton DOT com]
Sent: Friday, April 04, 2003 3:35 PM
To: 'Legato NetWorker discussion'; Reed, Ted G II [CC]
Subject: RE: [Networker] e4500 Max I/O capacity issue as Legato Storage Node


I'm curious, how much data do you think will be directed toward this
storage node and how many different clients?

This is one hoss of a storage node!

Are the 9940Bs Fibre ready or is there a router in there? What type of
Fibre switch are you using?

-Bob

-----Original Message-----
From: Reed, Ted G II [CC] [mailto:ted.reed AT MAIL.SPRINT DOT COM]
Sent: Friday, April 04, 2003 2:43 PM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: [Networker] e4500 Max I/O capacity issue as Legato Storage Node


I am trying to determine the max I/O capacity of a Sun E4500 for usage as a
Legato storage node.  Here are the current stats and 'designed' I/O plans:

Sun E4500, 8x 400MHz CPUs, 4-8GB RAM
        4 System Boards, 3 I/O boards
4x Gb Ethernet (Trunked as single IP)
3x 2Gb HBA fibre to Tape devices
6x STK 9940B in STK 9310 silo***

Since you lose Ethernet bandwidth to TCP/IP overhead, you can anticipate
~80MB/sec* per NIC (total ~320MB/sec).  The HBA fibre can handle
~250MB/sec** (total ~750MB/sec), and the 9940B drives run compressed at
60MB/sec*** per drive (total ~360MB/sec).  This should mean I will be
passing 320MB/sec over the e4500 I/O boards.  I am still determining if this
needs to be done GigE+HBA per I/O board or an all GigE board, all HBA board,
and a 'rollover' I/O board.
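Tallying those per-component numbers shows where the ~320MB/sec figure comes from (a sketch of the arithmetic, not part of the original mail; per-unit rates are the hedged estimates from the footnotes):

```python
# Sketch: aggregate each component class and find the bottleneck.
# Per-unit MB/sec rates are the estimates given in the footnotes.
components = {
    "GigE NICs":    (4, 80),   # ~80 MB/sec each after TCP/IP overhead
    "2Gb HBAs":     (3, 250),  # ~250 MB/sec each, minimal protocol overhead
    "9940B drives": (6, 60),   # ~60 MB/sec each, compressed
}

totals = {name: count * rate for name, (count, rate) in components.items()}
bottleneck = min(totals, key=totals.get)
print(totals)      # {'GigE NICs': 320, '2Gb HBAs': 750, '9940B drives': 360}
print(bottleneck)  # GigE NICs -> the ~320 MB/sec the mail cites
```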
So I guess I am really wondering if anyone is pushing anything close to this
level of bulk data over a single storage node, regardless of the hardware
types being used.  Thank you all in advance for any information you may be
able to provide.
--Ted Reed, Engineering Storage Services


* 10BaseT ~0.8MB/sec, 100BaseT ~8MB/sec, 1000BaseT ~80MB/sec due to ~20-25%
TCP/IP overhead
** Little to no overhead on fibre for protocols.  1Gb=125MB, 2Gb=250MB
*** 30MB/sec native, 60MB/sec compressed, 90MB/sec burst.  200G native
capacity, 300-600G compressed

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

