Re: TSM best Practices for tape drives using FC
The short answer is that you can't get there from anywhere. The slightly
longer answer is that this becomes an exercise in moving the bottlenecks
and choke points around.
The IBM documentation on the 6228 fiber card (200 MByte/s, i.e. 2 Gbit) in
the Subsystem Device Driver manual indicates that one card can saturate
a PCI bus - so make sure that each 6228 card is on its own dedicated
bus. This is implied in other placement guides, but the SDD manual is
explicit about it.
Now -- in my case I have LTO-2 drives (30 MB/second transfer rate,
"higher if the data is compressible"). I'm getting 5.2-to-1 compression
on Oracle backups. Um . . . 156 MB/second, anyone? One drive per fiber,
one fiber per bus - 10 drives - 10 PCI buses; three buses per I/O drawer
in the pSeries (prior to the 550 and above).
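The arithmetic above in a quick sketch (back-of-envelope only; the 200
MB/second figure is the nominal 2 Gbit fiber rate, before any protocol
overhead):

```python
native_rate = 30    # LTO-2 native transfer rate, MB/s
compression = 5.2   # compression ratio observed on Oracle backups
fiber_rate = 200    # nominal 2 Gbit FC rate, MB/s

effective = native_rate * compression        # ~156 MB/s per drive
drives_per_fiber = int(fiber_rate // effective)

# One fully-fed, well-compressing drive saturates a 2 Gbit fiber.
print(effective, drives_per_fiber)
```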
Starting to look a bit pricey. Now add in Gigabit Ethernet interfaces
to feed the drives (at no more than two to a bus). But the Gigabit
Ethernet will limit my maximum throughput to roughly 100 MB/second by
definition -- so I can now do two drives per fiber . . . And if I'm not
running 10 Ethernet interfaces (bottleneck!) I can hang even more tape
drives per fiber . . .
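The same bottleneck logic as a sketch (assuming ~100 MB/second of
usable Gigabit Ethernet throughput, per the above): a network-fed drive
can only stream as fast as the slower of the NIC and the drive, and the
fiber holds however many of those streams fit.

```python
gige_rate = 100     # practical Gigabit Ethernet throughput, MB/s
drive_rate = 156    # effective LTO-2 rate at 5.2:1 compression, MB/s
fiber_rate = 200    # nominal 2 Gbit FC rate, MB/s

# Each network-fed stream runs at min(NIC, drive) speed.
per_stream = min(gige_rate, drive_rate)

# -> 2 drives per fiber once the network is the limit
print(int(fiber_rate // per_stream))
```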
To add to the fun, again on IBM pSeries (AIX or Linux, your choice), the
RIO cable that connects the CPU drawer to the I/O drawer(s) is rated at
1 GByte per second. Supposedly, the 2 GB/second RIO interface will be
coming out next year.
So -- it's a bit of a crap shoot. Until the hardware supports direct I/O
from card to card without CPU/system memory involvement (S/390 channel
program, anyone?) you won't come close to theoretical (marketing)
throughput.
Put the number of drives that seems 'reasonable' or 'works' on one fiber
-- I've got five per fiber at the host right now (host PCI bus
limitation) and will be dropping to three per fiber next year with the
hardware swaps I've got coming. No more than one fiber adapter per PCI
bus, or two per PCI-X bus, preferably with no other adapters on the bus.
And check for bottlenecks. Mine is currently the network overhead on my
primary Oracle DB server -- I can max out all 4 CPUs during the 2.5 hour
backup. I'm looking at better network design and SAN backup for next
year.
If you're still awake -- I hope this helped put things into perspective.
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Sent: Thursday, December 09, 2004 10:21 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: TSM best Practices for tape drives using FC
I read in the Tivoli guide (A Brief Introduction to IBM Tivoli Storage
Manager), under Tape Drives (best practices), where it says: "Carefully
consider card and bus throughput when attaching drives to systems; most
protocol/tape combinations can accommodate 2-3 drives per card." We
would like to use more than that - say 10, 12 or more - but not cause
issues.
We are using 3590-H1As (soon to be upgraded to 3592s), which could
change the number of tape drives we use.
We are using FC connections (the HBAs attached to the tape drives are
1 Gbit, but are piped in at 2 Gbit).
3590s: assumed speed 39 GB/hr
3592s: assumed speed 112 GB/hr
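Those assumed speeds work out roughly as follows (a sketch only, using
1000-based units and a nominal ~100 MB/second for a 1 Gbit HBA - real
throughput will be lower):

```python
hba_rate = 100      # nominal 1 Gbit FC HBA throughput, MB/s

for name, gb_per_hr in (("3590-H1A", 39), ("3592", 112)):
    mb_per_s = gb_per_hr * 1000 / 3600   # GB/hr -> MB/s
    # Theoretical streams per HBA before the link is full.
    drives = int(hba_rate // mb_per_s)
    print(name, round(mb_per_s, 1), "MB/s ->", drives, "drives per HBA")
```

By this arithmetic a 1 Gbit HBA could in theory feed about nine 3590s
but only about three 3592s, which is why the drive upgrade changes how
many drives per card make sense.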
Thanks for any replies!
All thoughts are welcome!