ADSM-L

Re: TSM and SATA Disk Pools

2004-11-17 13:54:33
Subject: Re: TSM and SATA Disk Pools
From: Ben Bullock <bbullock AT MICRON DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 17 Nov 2004 11:53:54 -0700
        Ya,
        I'm pretty sure that it is not an "exclusive lock" on AIX
systems either. I have one disk storagepool that only has 1 volume (SSA
disk) in it and it has multiple TSM clients writing to it at the same
time with no issues. Sure there might be some disk contention, but no
failures or long waits on any of those sessions.

        Perhaps it's a "round-robin" or a "rotating" lock that migrates
between TSM clients so they all get a share of the I/O on the disk...

Ben


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
TSM_User
Sent: Wednesday, November 17, 2004 11:49 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: TSM and SATA Disk Pools


Mark,
Where did you get the information that "each backup session (remember, a
client could have multiple sessions) opens a volume for its exclusive
use"?

I can't find any documentation anywhere to support this.  I have heard
other AIX admins make the same claim.  I have quite a few Windows TSM
servers that I am sure have more client sessions sending data (consumer
threads) than I have disk pool volumes. I can't say I've ever run a q
sess to check whether the number of node sessions in a SEND state is
greater than the number of volumes I have, though.
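
For what it's worth, a rough way to check would be to look at a busy
point in the backup window with something like the following (a sketch
only; "BACKUPPOOL" is just a placeholder for your disk pool name, and
the SELECT syntax is from memory, so verify it against your server
level):

    q sess
    select count(*) from sessions where session_type='Node'
    select count(*) from volumes where stgpool_name='BACKUPPOOL'

If the first count is regularly higher than the second, the
exclusive-lock theory would predict sessions waiting on volumes.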

Anyway, if this were true I would have expected to find a performance
tuning doc discussing the issue. That is, unless it is specific to AIX.

Not to call out Richard on this one, but I have to ask: Richard, have
you heard of this?

"Mark D. Rodriguez" <mark AT MDRCONSULT DOT COM> wrote:
OK, so there seems to be some interest in how to lay out disk pools on
an AIX system using JFS2 instead of raw LVs. I will try to keep this as
general as possible, so please remember you must make some choices based
on your particular environment.

* In general I would rather have many small disks than a few large
ones, as you will see below. However, this would not apply if the larger
disks were 15K rpm and the smaller disks only 10K rpm.
* Creating your hdisks - there are several possibilities here depending
on your environment.
  o Small environments with only a few disks should use JBOD. Obviously
you give up some safety over running RAID 1, 5 or 10, but small
environments usually can't afford that anyway.
  o Mid-size environments and above should use whichever of the
following configs fits their environment best. If you will use RAID 5,
create several small arrays; 4 or 5 disks per array is good, and if you
have lots of disks you can go as high as 8 per array. If you have a very
large number of disks you can use either RAID 0 or RAID 10; obviously
RAID 10 gives you some disk-failure protection, but at the cost of 2x
actual space vs. usable space. Again, 4 or 5 disk arrays (8 or 10 if
RAID 10) work well, and as before you can go larger if you have a very
large number of disks to work with.
  o The idea of using small arrays is that you wind up with as many
hdisks as possible. I like to have at least 4 or 5, but I have also
worked in environments with over 50 hdisks, each of which was a RAID
array.
  o NOTE: This section assumes you are not using any disk
virtualization. In a virtualized environment you could have logically
created 4 disk arrays that physically all sit on the same set of disks,
and that situation could cause performance issues. Disk virtualization
is well outside the scope of this note.
* Create a VG from all the hdisks above, nothing tricky here.
* Create a large-file-enabled JFS2 file system on each disk. Make sure
each file system consumes the entire hdisk and does not span multiple
disks. Any reasonably skilled AIX admin can do this for you. As for the
log for these file systems: for absolute maximum performance you could
dedicate a separate disk to the logs, but in most cases simply selecting
an inline log will do fine. (A rough command sketch for this and the
next item appears after this list.)
* NOTE: This is very important: make sure that you add the mount option
RBRW to each of these file systems. It also helps to add this mount
option to the file systems that contain your ITSM DB and LOG. This
option increases I/O performance and reduces the load on the system. You
will also see a radical reduction in non-computational memory usage,
which means you can use more memory for DB and LOG pages as well as for
network performance. For a more in-depth discussion of this option,
please refer to the AIX Performance Management Guide.
* Now create the storage pool volumes. The size of these volumes is
somewhat up to you, but I like to have at least as many volumes as I
might have backup sessions writing to this disk pool at any given time.
That is because each backup session (remember, a client could have
multiple sessions) opens a volume for its exclusive use; with enough
volumes they can all run at once. NOTE: Again, this is very important:
when you create the volumes for the storage pool, use a round-robin
approach across the hdisks, i.e. if you have 10 hdisks then create the
first volume on hdisk1, the second on hdisk2, the third on hdisk3, and
so on, so that the 11th is back on hdisk1. And you must create them in
sequential order! The reason is that ITSM appears (I have never seen the
code, nor have I had any developers confirm this, although they all
agree it appears to work this way) to use the volumes in the order they
were created. Therefore I am sure that once a backup starts I will get
all of my hdisks in the game, and the same thing applies on migration.
(See the volume-creation sketch after this list.)
* Some simple tunable system parameters. Please note that when you
begin to do performance tuning you should know what you are doing; if
you don't, get someone who does, because you can cripple a system if you
are not careful. Having said that, you should definitely adjust the
min/max read-ahead values (j2_minPageReadAhead and j2_maxPageReadAhead)
with the ioo command; a good starting point is 16/128. If you use RBRW
on the file systems you won't need to change minfree and maxfree,
despite what some of the literature says you must do when you increase
the read-ahead values. The minperm and maxperm parameters have been
talked about a lot on this list, but again, if you are using the RBRW
mount option these values will have only a marginal effect, since most
of your non-computational memory will be released immediately (without
going through the LRU). However, it won't hurt to lower maxperm to 60%
with the vmo command, so that you are sure to have plenty of memory for
computational pages, i.e. ITSM DB and LOG pages as well as network
memory usage. (These settings are sketched after this list.)
* One area of tuning that I can't cover here is tuning the path to your
disk and tape drives. There are just too many possible combinations
(SSA, SCSI, FC, iSCSI, etc.) to give specific input, but it is important
that you address the performance of these various communication paths. I
will mention a couple of common problems. Make sure that you don't
overload your particular bus technology, i.e. you can't put 6 LTO1
drives on the same SCSI bus; know the bandwidth of your bus and don't
overload it! Another common mistake is in FC environments: don't run
disk I/O and tape I/O over the same HBA. This causes horrible
performance, and no amount of tuning can fix it! Use separate HBAs and
zone your switch so that the traffic stays separate. SSA loops should
have at least 4 initiators, i.e. use at least 2 SSA cards on each loop,
and make sure the SSA cards are connected as far apart in the loop as
possible.
* Some ITSM tunables. For the ITSM DB and LOG, make sure you have set
BufPoolSize and LogPoolSize large enough that you are getting at least a
99% Cache Hit Pct. on the DB and that your Log Pool Pct. Wait is 0. Your
MoveBatchSize and MoveSizeThresh should be set to their maximum values;
this will help things like migration and storage pool backups. (A sketch
of these options follows the list.)
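
To make the file system items above concrete, here is roughly what the
AIX side looks like for one disk (a sketch only, not a recipe: the names
tsmvg, tsmlv1 and /tsmdisk1 are made-up examples, the PP count depends
on your disk size, and you should check the exact crfs/chfs syntax
against your AIX level):

    # one volume group over all of the pool hdisks
    mkvg -y tsmvg hdisk2 hdisk3 hdisk4 hdisk5

    # one LV per hdisk, sized to use all the free PPs on that disk
    # (lsvg -p tsmvg shows the PP count per hdisk; 542 is an example)
    mklv -y tsmlv1 -t jfs2 tsmvg 542 hdisk2

    # JFS2 file system with an inline log on that LV, mounted at boot
    crfs -v jfs2 -d tsmlv1 -m /tsmdisk1 -a logname=INLINE -A yes

    # release-behind for reads and writes (the RBRW option), then mount
    chfs -a options=rbrw /tsmdisk1
    mount /tsmdisk1

Repeat the mklv/crfs/chfs steps for each hdisk so that every file
system sits on exactly one disk.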
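
For the round-robin volume creation, the TSM side might look roughly
like this from an administrative client (again a sketch; BACKUPPOOL, the
mount points and the 50GB size are made-up examples, and FORMATSIZE
assumes a 5.2-level server that can format the volume for you):

    define volume BACKUPPOOL /tsmdisk1/vol01.dsm formatsize=51200
    define volume BACKUPPOOL /tsmdisk2/vol02.dsm formatsize=51200
    define volume BACKUPPOOL /tsmdisk3/vol03.dsm formatsize=51200
    define volume BACKUPPOOL /tsmdisk4/vol04.dsm formatsize=51200
    define volume BACKUPPOOL /tsmdisk1/vol05.dsm formatsize=51200

and so on, wrapping back to the first file system, in strict sequence,
until every file system is populated.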
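
The read-ahead and maxperm settings mentioned above are set with ioo and
vmo on AIX 5.2 and later; -p makes them persist across reboots. As
always, test on a non-production box first:

    ioo -p -o j2_minPageReadAhead=16 -o j2_maxPageReadAhead=128
    vmo -p -o maxperm%=60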
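
And the server options from the last item live in dsmserv.opt. The
numbers below are only examples (BUFPOOLSIZE and LOGPOOLSIZE are in KB
and must fit in your server's memory, and the MOVEBATCHSIZE and
MOVESIZETHRESH maximums shown are what I recall for a 5.2 server, so
check the Administrator's Reference for your level):

    BUFPOOLSIZE     262144
    LOGPOOLSIZE     2048
    MOVEBATCHSIZE   1000
    MOVESIZETHRESH  2048

Then watch "q db f=d" (Cache Hit Pct. should stay at 99% or better) and
"q log f=d" (Log Pool Pct. Wait should stay at 0) and adjust from there.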

This is a very general list of things you can do, but if you take these
guidelines and apply some common sense about your particular
environment, I am sure you can get very good performance out of your
disk/tape subsystems.

If you have any questions or comments on this, then post them and let's
keep this discussion going.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===============================================================================
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===============================================================================



Wells, William wrote:

>I would be interested in your post.
>
>-----Original Message-----
>From: Mark D. Rodriguez [mailto:mark AT MDRCONSULT DOT COM]
>Sent: Sunday, November 14, 2004 5:49 PM
>To: ADSM-L AT VM.MARIST DOT EDU
>Subject: Re: TSM and SATA Disk Pools
>
>
>Charles,
>
>I may be missing something here, but even your numbers out of the
>Symmetrix seem pretty bad. Are you sure you didn't drop a "0" somewhere?
>
>I have one customer that I set up using SSA drives with JFS2
>filesystems and LTO1 drives and we average between 35 and 40MB/sec. and
>some days as high as 45MB/sec (compression of data plays a large
>factor). Your Symmetrix at 40GB/hr is only 11.11MB/sec! BTW, this is
>with no unusual tuning to the system, since this was more than enough 
>performance for their needs. With a little more tuning I could easily 
>increase that by 50% and possibly double it if I really tried and that 
>is ancient SSA technology. FC technology should be much faster.
>
>I know there are many people who prefer raw LVs for their disk pools,
>but on an AIX system I don't believe it is worth it. I have never had
>anyone show me raw LV numbers on AIX that I could not match (with far
>less hassle) with a good JFS2 configuration. If raw is the way you want
>to go then I wish you luck. However, if you are interested in switching
>to using a JFS2 approach I would be glad to post to the list some
>simple guidelines for configuring your environment to get much better 
>performance than you are reporting in your post.
>
>--
>Regards,
>Mark D. Rodriguez
>President MDR Consulting, Inc.
>
>===============================================================================
>MDR Consulting
>The very best in Technical Training and Consulting.
>IBM Advanced Business Partner
>SAIR Linux and GNU Authorized Center for Education
>IBM Certified Advanced Technical Expert, CATE
>AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
>Red Hat Certified Engineer, RHCE
>===============================================================================
>
>
>Hart, Charles wrote:
>
>
>
>>Thanks you for the link.. Good info!
>>
>>
>>-----Original Message-----
>>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU]On Behalf 
>>Of William F. Colwell
>>Sent: Friday, November 12, 2004 10:52 AM
>>To: ADSM-L AT VM.MARIST DOT EDU
>>Subject: Re: TSM and SATA Disk Pools
>>
>>
>>Charles,
>>
>>See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open
>>
>>This was in a recent IBM redbooks newsletter. It discusses SATA
>>performance, and to me it says that the TSM backup diskpool is not a
>>good use for SATA. Sequential volumes on SATA may be OK.
>>
>>Hope this helps,
>>
>>Bill
>>At 10:21 AM 11/12/2004, you wrote:
>>
>>
>>
>>
>>>Been asking lots of questions lately. ;-)
>>>
>>>
>>>We recently have put our TSM disk backup pools on CLARiiON SATA. The
>>>TSM server is being presented 600GB SATA chunks, and our AIX admin has
>>>put a raw logical volume over two 600GB chunks to create a 1.2TB raw
>>>logical volume.
>>>
>>>Right now we are seeing tape migrations at about 4GB in 6hrs, where
>>>before, on EMC Symmetrix disk, we saw 29-40GB per hour. If anyone would
>>>like to share their TSM SATA disk pool layout and/or tips, we would
>>>much appreciate it!!!
>>>
>>>TSM Env
>>>AIX 5.2
>>>TSM 5.2.4 (64bit)
>>>p630 4x4
>>>8x3592 FC Drives
>>>
>>>Regards,
>>>
>>>Charles
>>>
>>>
>>>
>>>
>>----------
>>Bill Colwell
>>C. S. Draper Lab
>>Cambridge Ma.
>>
>>
>>
>>
>>
>
>
>




