Question on Disk

chad_small

ADSM.ORG Moderator
OK, I have a question I am putting out to everyone and anyone. What type(s) of disk arrays are you using for TSM? What have you found works best with inline dedupe, and does it also perform well with TSM/SP DB2 processing? I need to find the best solution that, if possible, satisfies the TSM/SP DB2 needs but also performs well with inline dedupe and heavy I/O while still being cost-conscious (i.e., the bosses are trying to keep costs down). We will be using either AIX or Linux as our OS of choice (probably both, depending on the customer). If the disk also supported VTL, that would be an added bonus.

Again, I have to be cost-conscious since everyone thinks our group costs way more than it should... don't get me started.
 
The choice is hard if you are cost-conscious.

- for DB2 volumes and logs, nothing beats a fast SSD or flash SAN array; I would suggest a dedicated one. IBM, EMC, NetApp, etc. sell these JBOD-type arrays with some intelligence
- for storage (devclass=file) that can also handle VTL, look at ExaGrid or Data Domain (the latter is expensive but gives the best dedupe ratio for your buck)
 
Have you ever done LAN-Free to disk? I notice IBM doesn't talk about it anymore, but with large disk backups that leaves VTL, since my network never seems to perform at the true 10Gb it claims to be.
 
My current setup is a first-generation IBM V5k with SSDs for the DB in the controller shelves and NL drives in the expansions (blueprint version 1.0, basically). Sadly, the controller is hit so hard that I'm not getting very good performance at all with RAID 6 + hot spare. I hope to alleviate that a bit by attaching a second controller just for the database (we have one sitting around collecting dust), removing the RAID 6, and changing over to DRAID with the 7.8.x code (the last/latest release for that storage unit). I hope that will ease some of my troubles... but I know it won't work miracles.

Going forward into an upgrade, I like the idea of running the database volumes from the frame's internal drives. Use hdisk0,1,2 for your rootvg and the rest of the frame's slots for the database + active log, then direct-attached storage for your storage pools. I do not have any experience with disk systems other than IBM, so I cannot comment on costs/performance there.

Guess my point is, be careful of NL drives.
 
Have you ever done LAN-Free to disk? I notice IBM doesn't talk about it anymore, but with large disk backups that leaves VTL, since my network never seems to perform at the true 10Gb it claims to be.

I had set up LAN-Free to disk once for a small environment, but that was it. It proved to be fast, especially for offloading a mix of small and large contiguous data.

I always trust FC since I know that much of the route is essentially direct. Ethernet (TCP/IP) links are fast and good IF the route does not pass through any gateways or routers; in other words, source and target are on one subnet.
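As a quick sanity check on that point, Python's standard ipaddress module can tell you whether a backup source and target sit in the same subnet (and thus avoid a router hop). The addresses and the /24 prefix below are made-up examples, not from anyone's real environment:

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix):
    """Return True if both hosts fall inside the same network for the given prefix length."""
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

# Source and target on one /24: traffic stays on the subnet, no gateway hop.
print(same_subnet("10.10.5.20", "10.10.5.200", 24))   # True
# Target on a different /24: traffic has to cross a router.
print(same_subnet("10.10.5.20", "10.10.8.200", 24))   # False
```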
 
My problem with Ethernet is that I cannot get direct connections, and everything is slowed down by the hops. I have one customer who divided everything up over 30+ VLANs, so I have to manage all those VLAN IPs on the TSM/SP server. Even though my network team claims it's all over a 40Gb LACP connection, my throughput is horrible.
 
My problem with Ethernet is that I cannot get direct connections, and everything is slowed down by the hops. I have one customer who divided everything up over 30+ VLANs, so I have to manage all those VLAN IPs on the TSM/SP server. Even though my network team claims it's all over a 40Gb LACP connection, my throughput is horrible.

This is why I always preach that networks should be segregated into PROD, DEV/TEST, Backup, and Management.

A flat network is not good from a technical standpoint or for security reasons. Whoever introduced the VLAN concept, I am sure, did not anticipate that others would use it across a company's entire network.
 
Even though my network team claims it's all over a 40Gb LACP connection, my throughput is horrible.
How many channels are being used in the LACP connection? Is the LACP going to your TSM server, or are they talking about core interconnects?
I found, thanks to much troubleshooting, that the ideal number of connections is 1, 2, 4, or 8.
If you have 3, you get the speed of one. If you have 5, 6, or 7, you get the speed of 2 or 4 (I don't recall; I never had that many links).
On the network side, I had 3 LACP 1Gb lines coming into my TSM server. After going back and forth with the networking team, I was able to prove that if we reduced down to 2, my throughput doubled. As soon as that 3rd link was brought up, we were back to standard 1Gb speed.
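The power-of-two effect can be illustrated with a small simulation. This is only a sketch of one plausible distribution scheme (reducing the flow hash into a power-of-two number of buckets and folding the overflow bucket onto the first link); real switch ASICs vary, and the exact behavior you see will depend on the hardware. The function `pick_link` and the flow counts are illustrative assumptions, not any vendor's actual algorithm:

```python
import random
from collections import Counter

def pick_link(flow_hash, n_links):
    """Model a switch that distributes flows using a power-of-two mask:
    the flow hash is reduced modulo the next power of two >= n_links, and
    any bucket past the last real link is folded onto link 0. With 2, 4,
    or 8 links the spread is even; with 3 links, link 0 carries a double
    share and becomes the bottleneck."""
    buckets = 1
    while buckets < n_links:
        buckets *= 2
    bucket = flow_hash % buckets
    return bucket if bucket < n_links else 0

random.seed(1)
flows = [random.getrandbits(32) for _ in range(100_000)]
for n in (2, 3, 4):
    counts = Counter(pick_link(h, n) for h in flows)
    hottest = max(counts.values()) / len(flows)
    print(f"{n} links -> per-link flow counts {dict(sorted(counts.items()))}, "
          f"hottest link carries {hottest:.0%} of flows")
```

With 3 links in this model, roughly half of all flows land on one link, so that link saturates long before the other two fill up and the aggregate never gets close to 3x line rate.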
 