ADSM-L

Subject: Re: Dasd for MVS server?
From: "Andrew M. Raibeck" <araibeck AT VNET.IBM DOT COM>
Date: Wed, 19 Jun 1996 09:17:19 PDT
Betty Simonis asks:

>I am new to this list (and ADSM) and will be installing version 2 on

Welcome aboard!

>an MVS server. I'm trying to get a handle on the minimum amount of dasd
>that would be required to run ADSM to backup our aix and os/2 clients.
>We have 3390 mod 2's, and not a whole lot of it to allocate to ADSM. So,
>what I'm wondering is if I can define  storage pools for backup and
>archives that would just go directly to tape. I have about 200gb to backup
>from the clients, and only about 6gb of available disk space on the
>server side. Most of the clients would not require immediate access to
>backed up data. Any helpful hints or suggestions are most appreciated.

My first words of advice to anyone new to ADSM:

   1) If you haven't already done so, take the ADSM Administrator's Guide home
      with you and read it from *cover to cover*. This is the best schooling
      in the product that you can give yourself. In particular, you'll want to
      read Chapter 14 on Recovering Data. Even if you plan on attending the
      ADSM education classes, read the book first! You'll be that much ahead
      of the game when you finally do take the class.

   2) Remember: The command line is your friend! Use the GUIs for what they
      are good for, but don't forsake the command line. For example, I find
      it much easier to use the command line to register new nodes, but the
      GUI is easier for defining new schedules.

Without knowing your environment completely, here are some ideas (this is not
an exhaustive list):

You can back up directly to tape. This is governed by the DESTINATION setting
  of your backup copygroup, which points to your tape storage pool.
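
  For example, assuming your tape storage pool is named TAPEPOOL and you are
  using the default STANDARD policy domain, policy set, and management class
  (substitute your own names), the admin commands would look something like:

     UPDATE COPYGROUP STANDARD STANDARD STANDARD TYPE=BACKUP DESTINATION=TAPEPOOL
     VALIDATE POLICYSET STANDARD STANDARD
     ACTIVATE POLICYSET STANDARD STANDARD

  The change takes effect when the policy set is activated.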

When backing up directly to tape, the number of concurrent clients that can
  back up is limited to the number of drives available. Check your MOUNTLIMIT
  setting in the DEVCLASS for your tape storage pool. You should be sure to set
  it no higher than the number of physical drives you have available. For
  example, if you have 8 drives available for ADSM, set MOUNTLIMIT to 8. You'll
  also want to adjust your MAXSESSIONS (in the ADSM server options file) and
  MAXSCHEDSESSIONS (there is an admin SET MAXSCHEDSESSIONS command) accordingly
  (i.e. don't allow more than 8 scheduled backups).
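
  For example (TAPECLASS is just a sample device class name; substitute your
  own), something like:

     UPDATE DEVCLASS TAPECLASS MOUNTLIMIT=8

  and in the server options file:

     MAXSESSIONS 8

  Note that SET MAXSCHEDSESSIONS is expressed as a percentage of MAXSESSIONS
  rather than an absolute count, so with MAXSESSIONS 8,

     SET MAXSCHEDSESSIONS 100

  would make all 8 sessions available for scheduled backups. Double-check the
  exact syntax against the Administrator's Reference for your level.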

Depending on the number of clients you wish to back up and the number of
  available drives, you should consider investing in more DASD space. For
  example, if you have 100 clients to back up, it might be difficult to do with
  only 8 tape drives.

Each client's first backup will be a full backup, but subsequent backups will
  send only changed or new files (assuming you do incremental backups only). So
  you need to know two numbers for each client: how much data it will send to
  the server initially, and how much it will send each day on an incremental
  basis. For example, workstations often have very little changed data, so a
  500 MB workstation might only back up 5 MB per night after the initial full
  backup. So besides the total capacity of each client, you'll want to know how
  much changes daily.
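
  To make that concrete (purely illustrative numbers): 50 clients averaging
  1 GB each means roughly 50 GB of initial full backups to get through, but if
  only about 1% of that data changes daily, the nightly incrementals add up to
  only about 500 MB across all 50 clients. That second number is the one that
  drives your day-to-day disk and tape drive requirements.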

You can observe how much data is sent to your server on a daily basis. I
  recommend issuing the SET ACCOUNTING ON command to the server. This will
  cause SMF records to be created each time a client session ends. The SMF
  records are type 42, subtype 14. The layout of these records is described in
  the ADSM Administrator's Guide. Information includes the client (node) name,
  and number of KB backed up to the server (among a bunch of other stuff). If
  you go to our anonymous FTP server (index.storsys.ibm.com) and look in
  /adsm/nosuppt, you'll find a file called amrtools.exe. It contains several
  little utilities, one of which is a SAS program that does some rudimentary
  reports on these SMF records. One of the reports summarizes the total amount
  of data backed up for each day in the input sample. I recommend collecting
  these SMF records on a daily basis for reporting purposes. Suggestion:
  establish a GDG, where each GDS contains a month's worth of records (a
  sketch of this follows below). The point of having this information is that
  you will be able to see how much data you send to the server on a daily
  basis, and note the trends.

  By the way, you should note that if you choose to pull down amrtools.exe
  (it's a self-extracting executable) you will probably have to do some degree
  of customization.
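
  To get the accounting records flowing and set up the GDG mentioned above,
  the pieces might look something like this (the data set name is just an
  example; use your own naming conventions):

     From an ADSM admin session:
        SET ACCOUNTING ON

     IDCAMS statements to define the GDG:
        DEFINE GENERATIONDATAGROUP -
          (NAME(YOURHLQ.ADSM.SMFRECS) -
           LIMIT(12) NOEMPTY SCRATCH)

  Then, each month, dump the type 42 subtype 14 records into a new generation
  (+1) using your usual SMF dump procedure (e.g. IFASMFDP), and point the SAS
  report program at whichever generations you want to summarize.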

You'll probably see that the amount of data backed up to the server on a
  daily basis is some relatively small percentage of the total capacity of
  your clients (unless a lot changes daily, or all of the files are HUGE).
  You could then conceivably size a disk storage pool based on the expected
  daily backup. For example, I used to manage well over 100 GB of client
  capacity, but the maximum backed up on any given day was around 12 GB. So I
  had a 16 GB disk storage pool (a little extra for "just in case")
  that the clients backed up to. During the day, I would then reduce the disk
  pool thresholds to force the data to migrate to tape. The advantage of this
  is that I was able to run more concurrent client backups, and the performance
  was better backing up to disk than tape.
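
  A minimal sketch of that setup, assuming a disk pool named BACKUPPOOL and a
  tape pool named TAPEPOOL (substitute your own pool names):

     UPDATE STGPOOL BACKUPPOOL NEXTSTGPOOL=TAPEPOOL HIGHMIG=90 LOWMIG=70
        (clients back up to BACKUPPOOL overnight)

     UPDATE STGPOOL BACKUPPOOL HIGHMIG=0 LOWMIG=0
        (issued during the day to force the data to migrate to TAPEPOOL)

     UPDATE STGPOOL BACKUPPOOL HIGHMIG=90 LOWMIG=70
        (issued before the backup window to restore the normal thresholds)

  The threshold changes can be issued by hand or from administrative command
  scheduling if your level supports it.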

Some performance hints (a sample options file reflecting these suggestions
follows the list):
     * On a regular basis (at least once a week) do a Q DB F=D. Observe
       the cache hit percent. If it is less than 98%, you should consider
       increasing your BUFPOOLSIZE setting in the server options file. Note
       that this value is expressed in KB. I suggest an initial value of 2048,
       and increasing it in increments of 1024 until the cache hit percent is
       98% or better.
          Also look at the cache wait percent. This value should be 0.0.
       A non-zero value is also an indicator that BUFPOOLSIZE needs to be
       increased.
     * Likewise, do a Q LOG F=D and observe the percent log wait value. If it
       is anything other than 0.0, your LOGPOOLSIZE setting needs to be
       increased. I suggest an initial value of 1024.
     * If you back up to tape, you should set TXNGROUPMAX (in the server
       options file) to 256. If you choose to use disk, try an initial value
       of 40.
     * This goes hand-in-hand with the TXNGROUPMAX suggestion: if you back up
       to tape, try setting TXNBYTELIMIT in the *client* options file to
       25600. If you use DASD, try 2048. These two settings can have a dramatic
       impact on performance, especially if you have a lot of small files and/
       or you back up to tape.
     * Keep in mind that large TXNBYTELIMIT values (these are expressed in KB)
       will require more space in the server recovery log. So do Q LOG on a
       regular basis and monitor the maximum utilization. Add recovery log
       space as necessary.
     * Review the ANR2110 SAMPLIB member that is installed with ADSM. It
       contains important information that didn't make it into the printed doc.
       In particular, look for info on MOVEBATCHSIZE and MOVESIZETHRESH
       settings. These affect performance of server operations such as
       migration and reclamation. Increasing these values can help improve
       performance, but again, monitor the recovery log size.
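
  To tie the hints above together, here is what the relevant lines might look
  like for a server that backs up straight to tape (these are the starting
  values suggested above, not magic numbers):

     In the server options file:
        BUFPOOLSIZE  2048
        LOGPOOLSIZE  1024
        TXNGROUPMAX  256
        (plus MOVEBATCHSIZE and MOVESIZETHRESH per the ANR2110 member)

     In the client options file:
        TXNBYTELIMIT 25600

  Then keep an eye on the Q DB F=D and Q LOG F=D output and adjust BUFPOOLSIZE,
  LOGPOOLSIZE, and the recovery log size as described above.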

 - Code the DEVCONFIG and VOLUMEHISTORY options in the server options file.
   When you specify the data sets, try putting them in quotes, as ADSM may
   have problems allocating them if you don't use quotes. You will need
   these files in the event that you ever have to restore your ADSM
   database.
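
   For example (the data set names here are just placeholders; use your own
   naming conventions):

      DEVCONFIG     'ADSM.DEVCNFG'
      VOLUMEHISTORY 'ADSM.VOLHIST'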

 - If you are using TCP/IP as a connectivity method, I *strongly* recommend
   the "server prompted" scheduling mode. The client options file will need
   SCHEDMODE PROMPTED coded in it; the default scheduling mode is "client
   polling".
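
   For example, in the client options file (dsm.sys on the AIX client, dsm.opt
   on the OS/2 client; check the client manual for your platform), something
   like:

      COMMMETHOD  TCPIP
      SCHEDMODE   PROMPTED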

I hope this stuff helps,

Andy Raibeck
ADSM Level 2 Support
408-256-0130