SAN throughput sizing

taro

Hello,

I'm trying to size my SAN throughput to a particular host, and I'm wondering whether this method is logical. Any advice will be deeply appreciated. Here's how I size it up:

Objective
To estimate the minimum aggregate sustained transfer rate (reads only; writes ignored) from LUN to host.

Background
4+1 LUN (146GB 15K disks) direct-connected to the host via a 4Gb FC HBA
Seagate Cheetah 15K.5 disks are used (ST3146855FC) - url: http://www.seagate.com/www/en-us/products/servers/cheetah/cheetah_15k.5/

Assumptions

  • Data path for this sizing is LUN -> disk controller -> FC interface -> FC HBA -> PCI-e bus -> CPU (one-way)
  • Dedicated PCI-e bus for FC HBA
  • RAID 5 with parity distributed across all 5 disks
  • Sizing will be based on the minimum sustained transfer rate of the disks
  • Minimum sustained transfer rate of each disk is taken in raw mode
  • Usable capacity per disk is 130GB
  • Disk array uses switched loop connectivity to the drives
  • Each FC interface on the disk array has a dedicated bandwidth of 350MB/s
  • Factors not taken into account:

  • OS
  • Filesystem
  • Disk fragmentation
  • File size
  • FC HBA driver
Sizing Parameters

  • Minimum sustained transfer rate of disks - 73MB/s (~262.8GB/hr)
  • Max throughput of FC interface (out) of disk array - 350MB/s (~1.26TB/hr)
  • Max throughput of FC HBA - 350MB/s (~1.26TB/hr)
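
A quick Python check of the conversions above (decimal units, 1GB = 1,000MB):

```python
# Convert sustained MB/s figures into per-hour volumes (decimal units).
SECONDS_PER_HOUR = 3600

def mb_per_s_to_gb_per_hr(mb_per_s):
    """E.g. 73 MB/s -> 262.8 GB/hr, 350 MB/s -> 1260 GB/hr (~1.26 TB/hr)."""
    return mb_per_s * SECONDS_PER_HOUR / 1000.0

print(mb_per_s_to_gb_per_hr(73))   # 262.8 GB/hr per disk
print(mb_per_s_to_gb_per_hr(350))  # 1260.0 GB/hr, i.e. ~1.26 TB/hr
```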

Method
Since each disk can transfer at a minimum of 73MB/s in raw mode, all 5 disks together can go at 5 x 73 = 365MB/s. However, as my array FC interface can only take 350MB/s at max load (bottleneck at the array FC interface), data can be transported at 350MB/s to the FC HBA (host end). Since I have a dedicated PCI-e bus to the CPU, the host can receive the data at 350MB/s. That means it will take roughly 25 minutes (520GB / 350MB/s ≈ 1,486s) to dump all the data (520GB) from the LUN to the host.
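
If it helps to see it laid out, here's a minimal Python sketch of the same calculation, using only the figures assumed above:

```python
# Sketch of the read path sized above: sustained throughput is capped by the
# slowest stage in LUN -> array FC interface -> host HBA.
disk_count = 5
per_disk_mb_s = 73.0      # minimum sustained rate per spindle (raw mode)
array_fc_mb_s = 350.0     # array FC interface ceiling
hba_mb_s = 350.0          # 4Gb FC HBA ceiling
usable_gb = 520.0         # 4 data disks x 130GB usable each

aggregate_disks = disk_count * per_disk_mb_s                # 365 MB/s
bottleneck = min(aggregate_disks, array_fc_mb_s, hba_mb_s)  # 350 MB/s

seconds = usable_gb * 1000.0 / bottleneck
print(f"bottleneck: {bottleneck:.0f} MB/s")
print(f"full read of {usable_gb:.0f}GB: {seconds:.0f}s (~{seconds/60:.0f} min)")
# -> bottleneck: 350 MB/s; ~1486s, roughly 25 minutes
```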

Blind spot
How much load can the disk controller sustain versus the aggregate throughput of the LUN? In most cases this figure is not released to the general public.
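
If the controller figure ever does surface, it would simply be one more stage in the same bottleneck calculation. A sketch (the 330MB/s controller ceiling below is a made-up placeholder, not a real spec):

```python
# The disk controller is one more stage on the read path; once its ceiling is
# known it simply joins the min(). 330 MB/s here is purely hypothetical.
stages_mb_s = {
    "aggregate disks": 5 * 73.0,
    "disk controller": 330.0,   # hypothetical - vendors rarely publish this
    "array FC interface": 350.0,
    "host HBA": 350.0,
}
bottleneck_stage = min(stages_mb_s, key=stages_mb_s.get)
print(f"bottleneck: {bottleneck_stage} at {stages_mb_s[bottleneck_stage]} MB/s")
```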

Cheers,

Taro
 
It is very good that you are considering these things.

Your assumption that 4Gb HBAs and interfaces can do 350MB/sec is fine; they can often get to almost 400MB/sec.

Re your array FC interface being capable of 350MB/sec: some arrays may be slower than that for writes (don't forget RAID overhead can be higher for writes, e.g. RAID 1/10/5).

Regarding each disk, that 73MB/sec figure is probably a maximum under ideal circumstances (i.e. large sequential IOs with read-aheads keeping the queue full). You may or may not reach that figure depending on what sort of IO you are doing. So if you really want to be sure, I would use more physical disks - more physical disks are always a good thing.
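
To put a rough number on "more physical disks": a sketch that derates each spindle from its 73MB/sec datasheet figure and asks how many disks it takes to keep the 350MB/sec interface busy (the derate factors are illustrative assumptions, not measurements):

```python
import math

# How many spindles keep a 350 MB/s FC interface saturated if each disk only
# delivers some fraction of its 73 MB/s datasheet rate? Derate factors are
# illustrative assumptions, not measurements.
datasheet_mb_s = 73.0
target_mb_s = 350.0

for derate in (1.0, 0.7, 0.5):
    effective = datasheet_mb_s * derate
    disks = math.ceil(target_mb_s / effective)
    print(f"{effective:.1f} MB/s per disk -> {disks} disks for {target_mb_s:.0f} MB/s")
# 73.0 -> 5 disks, 51.1 -> 7 disks, 36.5 -> 10 disks
```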
 