ADSM-L

Re: TSM and SATA Disk Pools

Subject: Re: TSM and SATA Disk Pools
From: Stefan Holzwarth <stefan.holzwarth AT ADAC DOT DE>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 16 Nov 2004 08:23:21 +0100
Thank you Charles, very good overview.
I have one more thing to mention (only my opinion):
you should try to set up a storage pool hierarchy within the SATA/FC disk pools.

As an idea:

FC disk pool for daily backup --->
(S)ATA disk pool that keeps the files for as long as the policy allows new
versions (migdelay ~ versions + 1) --->
(S)ATA sequential pool for long-term storage (small volumes and collocation
per node to avoid a lot of reclamation after deleting nodes) --->
not necessarily a tape pool as the last link in the chain
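
A rough sketch of this chain in TSM administrative commands (the pool and
device class names, directory, sizes and the migration delay are only
illustrative placeholders, not tested values):

   define stgpool fc_disk disk description="daily backup pool"
   define stgpool sata_disk disk description="mid-term pool"
   update stgpool fc_disk nextstgpool=sata_disk
   /* hold files on SATA disk about versions + 1 days before migration */
   update stgpool sata_disk migdelay=8
   /* sequential SATA pool: small FILE volumes, collocated per node */
   define devclass sata_file devtype=file maxcapacity=2g directory=/tsm/seq
   define stgpool sata_seq sata_file maxscratch=500 collocate=yes
   update stgpool sata_disk nextstgpool=sata_seq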

Kind Regards 
Stefan Holzwarth


-----Original Message-----
From: Mark D. Rodriguez [mailto:mark AT MDRCONSULT DOT COM]
Sent: Monday, November 15, 2004 23:50
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: TSM and SATA Disk Pools


Charles,

There are some other limiting factors you must consider.  Although you
have 300+ clients, how many do you schedule to back up at the same time?
Even if it is all of them, what are your Maxsessions and MaxSchedsessions
values?  (A quick way to check them is sketched below.)  If I remember
right you are running on a p630; that box can probably handle (depending
on the amount of memory, number of NICs and number of processors) up to
200-300 concurrent sessions, but you probably have it set much lower.
As with all things in life there are practical limitations; 1200 volumes
might seem like a lot, but I have worked in large environments with
several hundred volumes because that is what the environment needed!
Another question: are all of these clients going to one disk pool,
and/or are some going straight to tape?
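
A minimal way to check those two server options from an administrative
client might look like this (a sketch only; the admin ID and password
are placeholders):

   dsmadmc -id=admin -password=secret "query option maxsessions"
   dsmadmc -id=admin -password=secret "query option maxschedsessions"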

On another note that I did not address in my previous post: the original
topic was about SATA drives and their viability in an ITSM disk pool.  A
couple of things to consider here:

   1. They are cheap, so you can afford to have very large disk pools -
      that's a good thing!
   2. SATA drives are typically large capacity (250GB and above) when
      used by IBM, EMC, LSI etc. - This is not so good; see my previous
      post, more drives is better.
   3. SATA drives are usually slower drives, 7200 or 10K rpm, while FC
      drives can be 15K rpm - Another performance hit.
   4. The reliability, i.e. failure rate, is not as good, but this might
      not be as important in an ITSM server as it might be in a
      production DB server.
   5. In order to get good performance out of SATA you need to work a
      little harder, and you probably want to go with RAID 10 or 50 to
      get the best performance/reliability.
   6. If you have to move huge amounts of data on a daily basis in a
      minimal amount of time, i.e. you need the best possible
      performance, then SATA is not your answer!
   7. But if you need large disk pools with reasonable performance at a
      great price, then you're going to love SATA.

Good luck, and let us know how it turns out.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===============================================================================
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===============================================================================



Hart, Charles wrote:

>Fantastic read!!!!   Thank you very much for the info!  Just one of our TSM
>servers has 300+ clients; with collocation and a client Resourceutilization
>setting of 4, we could potentially have to create 1200 volumes on disk?
>
>Regards,
>
>Charles
>
>
>
>-----Original Message-----
>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
>Mark D. Rodriguez
>Sent: Monday, November 15, 2004 1:58 PM
>To: ADSM-L AT VM.MARIST DOT EDU
>Subject: Re: TSM and SATA Disk Pools
>
>
>OK, so there seems to be some interest in how to lay out disk pools on an
>AIX system using JFS2 instead of raw LVs.  I will try to keep this as
>general as possible, so please remember you must make some choices based
>on your particular environment.
>
>    * In general I would rather have more small disks than a few large
>      ones, as you will see.  However, this would not apply if the
>      larger disks were 15K rpm vs. smaller disks of 10K rpm.
>    * Creating your hdisks - there are several possibilities here
>      depending on your environment.
>          o Small environments with only a few disks should use JBOD.
>            Obviously you give up some safety over running RAID 1, 5 or
>            10, but small environments can't afford this anyway.
>          o Mid size and above should use whichever of the following
>            configs fits their environment best.  If you will use RAID
>            5, then create several small arrays; 4 or 5 disks per array
>            is good, and if you have lots of disks you can go as high as
>            8 per array.  If you have a very large number of disks, then
>            you can use either RAID 0 or 10; obviously RAID 10 will give
>            you some disk failure protection, but at the cost of 2 x
>            actual space vs. usable space.  Again, 4 or 5 disk arrays (8
>            or 10 if RAID 10) will work well, and as before you can go
>            larger if you have a very large number of disks to work with.
>          o The idea of using small arrays is to wind up with as many
>            hdisks as possible.  I like to have at least 4 or 5, but I
>            have also worked in environments with over 50 hdisks, each
>            of which was a RAID array.
>          o NOTE: This section assumes you are not using any disk
>            virtualization.  In virtualized environments you could have
>            logically created 4 disk arrays, but physically they might
>            all be on the same set of disks.  That situation could cause
>            some performance issues.  Disk virtualization is way outside
>            the scope of this note.
>    * Create a VG from all the hdisks above, nothing tricky here.
>    * Create a JFS2 large-file-enabled file system on each disk (a
>      sketch of the AIX commands follows this item).  Make sure each
>      file system consumes the entire hdisk and that it does not span
>      multiple disks.  Any reasonably skilled AIX admin can do this for
>      you.  As for the log for these file systems: for absolute maximum
>      performance you could dedicate a separate disk to handle the logs,
>      but in most cases simply selecting an in-line log will do fine.
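>
>      A minimal sketch of this and the previous step in AIX commands
>      (the VG, LV and mount point names and the LP count are
>      placeholders for illustration):
>
>        mkvg -y tsmvg hdisk2 hdisk3 hdisk4 hdisk5
>        # one LV per hdisk so the file system cannot span disks
>        mklv -y tsmstg01lv -t jfs2 tsmvg 500 hdisk2
>        crfs -v jfs2 -d tsmstg01lv -m /tsmstg01 -a logname=INLINE -A yes
>        # repeat for hdisk3, hdisk4 and hdisk5 (tsmstg02lv, /tsmstg02, ...)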
>    * NOTE: This is very important: make sure that you add the mount
>      option RBRW to each of these file systems (see the sketch after
>      this item).  Also, it would help to add this mount option to the
>      file systems that contain your ITSM DB and LOG.  This option
>      increases I/O performance and reduces the load on the system.  You
>      will also see a radical reduction in non-computational memory
>      usage, which means you can use more memory for DB and LOG pages as
>      well as for network performance.  For a more in-depth discussion
>      of this option please refer to the AIX Performance Management
>      Guide.
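>
>      Adding the option could look like this (a sketch; /tsmstg01 is the
>      placeholder mount point from above):
>
>        chfs -a options=rbrw /tsmstg01
>        umount /tsmstg01 && mount /tsmstg01   # remount to activate it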
>    * Now create the storage pool volumes (a scripted sketch follows
>      this item).  The size of these volumes is somewhat up to you, but
>      I like to make sure that I have at least as many volumes as I
>      might have backup sessions writing to this disk pool at any given
>      time.  That is because each backup session (remember, a client
>      could have multiple sessions) opens a volume for its exclusive
>      use.  Therefore, if I have enough volumes they can all run at
>      once.  NOTE: Again, this is very important: when you create your
>      volumes for the storage pool, make sure you use a round-robin
>      approach across the hdisks, i.e. if you have 10 hdisks then create
>      the first on hdisk1, the second on hdisk2, the third on hdisk3,
>      and so on, so that the 11th is back on hdisk1.  And you must
>      create them in sequential order!  The reason for this is that ITSM
>      appears (I have never seen the code, nor have I had any developers
>      confirm this, although they all agree it appears to do this) to
>      use the volumes in the order they were created.  Therefore, I am
>      sure that once a backup starts I will get all of my hdisks in the
>      game, and the same thing will apply on migration.
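>
>      A scripted sketch of the round-robin creation (the pool name, file
>      system names, volume size and count are all placeholders; dsmfmt
>      sizes here are in MB):
>
>        # 12 volumes of 2048 MB spread over /tsmstg01../tsmstg04,
>        # created strictly in sequence
>        for i in 1 2 3 4 5 6 7 8 9 10 11 12
>        do
>          fs=$(( (i - 1) % 4 + 1 ))
>          vol=$(printf "/tsmstg0%d/vol%02d.dsm" $fs $i)
>          dsmfmt -m -data $vol 2048
>          dsmadmc -id=admin -password=secret "define volume backuppool $vol"
>        done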
>    * Some simple tunable system parameters (example commands after
>      this item).  Please note that when you begin to do performance
>      tuning you should know what you are doing; if you don't, then get
>      someone who does, because you can cripple a system if you are not
>      careful.  Having said that, you should definitely adjust the
>      min/max read-ahead values (j2_minPageReadAhead and
>      j2_maxPageReadAhead) with the ioo command; a good starting point
>      is 16/128.  If you use RBRW on the file systems you won't need to
>      change minfree and maxfree, despite what some of the literature
>      says you must do when you increase the read-ahead values.  The
>      minperm and maxperm parameters have been talked about a lot on
>      this list, but again, if you are using the RBRW mount option these
>      values will have a marginal effect, since most of your
>      non-computational memory will be released immediately (without the
>      use of the LRU).  However, it won't hurt any if you lower maxperm
>      to 60% with the vmo command, so that you make sure you have plenty
>      of memory for computational pages, i.e. ITSM DB and LOG pages as
>      well as network memory usage.
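>
>      The corresponding commands could look like this (-p makes the
>      change persistent across reboots):
>
>        ioo -p -o j2_minPageReadAhead=16 -o j2_maxPageReadAhead=128
>        vmo -p -o maxperm%=60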
>    * One area of tuning that I can't cover here is tuning the path to
>      your disk and tape drives.  There are just too many combinations
>      possible (SSA, SCSI, FC, iSCSI, etc.) to give any specific input.
>      However, it is important that you address the performance of these
>      various communication paths.  I will mention a couple of common
>      problems.  Make sure that you don't overload your particular bus
>      technology, i.e. you can't put 6 LTO1 drives on the same SCSI bus.
>      So make sure you know the bandwidth of your bus and don't overload
>      it!  Another common mistake is in FC environments: don't have disk
>      I/O and tape I/O running over the same HBA; this just causes
>      horrible performance, and there is no amount of tuning that can
>      fix it!  You must use separate HBAs and zone your switch to make
>      sure that traffic stays separate.  SSA loops should have at least
>      4 initiators, i.e. use at least 2 SSA cards on each loop, and make
>      sure that the SSA cards are connected as far away from each other
>      in the loop as possible.
>    * Some ITSM tunables (a sketch of the server options follows this
>      item).  For the ITSM DB and LOG pages, make sure you have set
>      BufPoolSize and LogPoolSize large enough that you are getting at
>      least a 99% Cache Hit Pct. on the DB and that your Log Pool Pct.
>      Wait is 0.  Your MoveBatchSize and MoveSizeThresh should be set to
>      their maximum values; this will help things like migration and
>      storage pool backups.
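>
>      Possible dsmserv.opt entries, plus the query that shows the DB
>      cache hit percentage (the two pool sizes are placeholders to fit
>      your memory; the two move values are the 5.x maximums as I recall
>      them, so verify against your server's documentation):
>
>        BUFPoolsize    262144
>        LOGPoolsize    8192
>        MOVEBatchsize  1000
>        MOVESizethresh 2048
>
>        dsmadmc -id=admin -password=secret "query db format=detailed"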
>
>This is a very general list of things you can do, but if you take these
>guidelines and apply some common sense about your particular environment,
>I am sure that you can get very good performance out of your disk/tape
>subsystems.
>
>If you have any questions or comments on this, then post them and let's
>keep this discussion going.
>
>--
>Regards,
>Mark D. Rodriguez
>President MDR Consulting, Inc.
>
>===============================================================================
>MDR Consulting
>The very best in Technical Training and Consulting.
>IBM Advanced Business Partner
>SAIR Linux and GNU Authorized Center for Education
>IBM Certified Advanced Technical Expert, CATE
>AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
>Red Hat Certified Engineer, RHCE
>===============================================================================
>
>
>
>Wells, William wrote:
>
>
>
>>I would be interested in your post.
>>
>>-----Original Message-----
>>From: Mark D. Rodriguez [mailto:mark AT MDRCONSULT DOT COM]
>>Sent: Sunday, November 14, 2004 5:49 PM
>>To: ADSM-L AT VM.MARIST DOT EDU
>>Subject: Re: TSM and SATA Disk Pools
>>
>>
>>Charles,
>>
>>I may be missing something here, but even your numbers out of the
>>Symmetrix seem pretty bad.  Are you sure you didn't drop a "0"
>>somewhere?  I have one customer that I set up using SSA drives with JFS2
>>filesystems and LTO1 drives, and we average between 35 and 40MB/sec.,
>>some days as high as 45MB/sec (compression of data plays a large
>>factor).  Your Symmetrix at 40GB/hr is only 11.11MB/sec!  BTW, this is
>>with no unusual tuning to the system, since this was more than enough
>>performance for their needs.  With a little more tuning I could easily
>>increase that by 50%, and possibly double it if I really tried, and that
>>is ancient SSA technology.  FC technology should be much faster.
>>
>>I know there are many people who prefer raw LVs for their disk pools,
>>but on an AIX system I don't believe it is worth it.  I have never had
>>anyone show me raw LV numbers on AIX that I could not match (with far
>>less hassle) with a good JFS2 configuration.  If raw is the way you
>>want to go, then I wish you luck.  However, if you are interested in
>>switching to a JFS2 approach, I would be glad to post to the list some
>>simple guidelines for configuring your environment to get much better
>>performance than you are reporting in your post.
>>
>>--
>>Regards,
>>Mark D. Rodriguez
>>President MDR Consulting, Inc.
>>
>>===============================================================================
>>MDR Consulting
>>The very best in Technical Training and Consulting.
>>IBM Advanced Business Partner
>>SAIR Linux and GNU Authorized Center for Education
>>IBM Certified Advanced Technical Expert, CATE
>>AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
>>Red Hat Certified Engineer, RHCE
>>===============================================================================
>>
>>
>>Hart, Charles wrote:
>>
>>>Thank you for the link... Good info!
>>>
>>>
>>>-----Original Message-----
>>>From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
>>>William F. Colwell
>>>Sent: Friday, November 12, 2004 10:52 AM
>>>To: ADSM-L AT VM.MARIST DOT EDU
>>>Subject: Re: TSM and SATA Disk Pools
>>>
>>>
>>>Charles,
>>>
>>>See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open
>>>
>>>This was in a recent IBM Redbooks newsletter.  It discusses SATA
>>>performance, and to me it says that the TSM backup diskpool is not a
>>>good use for SATA.  Sequential volumes on SATA may be OK.
>>>
>>>Hope this helps,
>>>
>>>Bill
>>>At 10:21 AM 11/12/2004, you wrote:
>>>
>>>>Been asking lots of questions lately.  ;-)
>>>>
>>>>
>>>>We recently have put our TSM disk backup pools on CLARiiON SATA.
>>>>     The TSM server is being presented as 600GB SATA chunks.
>>>>     Our AIX admin has put a raw logical volume over two 600GB chunks
>>>>     to create a 1.2TB raw logical volume.
>>>>
>>>>Right now we are seeing tape migrations at about 4GB in 6hrs, where
>>>>before on EMC Symmetrix disk we saw 29-40GB per hour.  If anyone would
>>>>like to share their TSM SATA diskpool layout and/or tips we would much
>>>>appreciate it!!!
>>>>
>>>>TSM Env
>>>>AIX 5.2
>>>>TSM 5.2.4 (64bit)
>>>>p630 4x4
>>>>8x3592 FC Drives
>>>>
>>>>Regards,
>>>>
>>>>Charles
>>>>
>>>----------
>>>Bill Colwell
>>>C. S. Draper Lab
>>>Cambridge Ma.
