ADSM-L

Subject: NOTE 01/16/96 08:01:00
From: INTERNET.OWNERAD at SNADGATE
To: Jerry Lawson at TISDMAIL
Date: 1/16/96 9:17AM
Let me attempt to answer a couple of your questions.  The original text is
included below for anyone following along....

2.  Your scenario is correct, especially the part about brain retreading.  I
occasionally get this issue brought up by customers; I think it is mainly a
training issue - they are so used to having a "point in time" full backup that
they don't realize the advantages of the ADSM method.

The only exception that I have seen to this is with the Lotus Notes client,
where you actually back up only the changed notes.  Recovery of an old copy of
a Notes database (.NSF file), followed by a restore of every changed note,
could be prohibitively slow.  The recommended approach seems to be to back up
the .NSF files on a periodic basis (weekly or monthly), and then do
incrementals of the documents.  They can then be restored back to the point
you need.  I haven't looked into it, but a database backup using the API
(Oracle/Sybase/DB2-6000) is probably handled the same way.
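
For the Notes case, the weekly pass over the .NSF files from the OS/2
command-line client would be something like the line below (the path is only
an example; the document-level incrementals come from the Notes backup agent
itself):

   dsmc selective d:\notes\data\*.nsf -subdir=yes
      (substitute your real Notes data path)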

I wouldn't recommend Archive as a way to do what you are thinking - it really
isn't meant for that.  One problem is that Archive does not back up and
restore empty directory structures, so you can't really do a full restore
using it.

3.   The best rule of thumb I have heard of is that the DASD pool should be
large enough to hold one day's worth of backups.  That way, you can control
the migration process.  I have seen no suggested ratio of disk to tape - it
doesn't really make sense to me either.  As for the migration order, largest
client first, as opposed to oldest first, has the advantage of keeping a
client's files grouped together on tape, thus making restores easier and
quicker.  In your environment, with only 8 servers to back up, I wouldn't
expect that there would be much difference in actual use.
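
To put rough numbers on your setup (a sketch only - the 5% daily change rate
is a guess, and the pool and device class names are made up): 5% of 18GB is
roughly 1GB of changed files a day, so a disk pool in the 1-2GB range should
hold a day's backups before migration has to kick in.  The pool chain itself
would be defined something like:

   define stgpool tapepool 3490class maxscratch=50
   define stgpool diskpool disk nextstgpool=tapepool cache=yes
      (3490class stands in for whatever tape device class you define)

You then define enough disk volumes to DISKPOOL to add up to that 1-2GB.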

One variable here is the amount you migrate each day.  If your thresholds are
set at, let's say, 10% for low and 80% for high, then just about everything
will be migrated every time migration runs.  If you have the thresholds set at
50% and 90%, then only the biggest clients will be migrated.  In either case,
if caching is enabled, copies are kept in the disk pool until the space is
needed.
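
In ADSM admin terms (DISKPOOL is just a placeholder name), that is the
difference between:

   update stgpool diskpool highmig=80 lowmig=10
      (drains just about everything each time migration runs)
   update stgpool diskpool highmig=90 lowmig=50
      (migrates only the biggest clients)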

4.  In-use datasets are really application dependent, I believe.  Some
datasets are opened in a "Deny r/w" mode that will preclude anyone (including
ADSM) from touching them.  Obviously, those applications must be taken down or
must end before access can be gained.

Other datasets may just be open - with no lock on them.  This is where ADSM's
fuzzy backups come in.  If you have Dynamic or Shared Dynamic serialization
specified and the dataset changes while it is being backed up, you will still
get a backup.  (Shared Dynamic, of course, will cause a retry first.)  The
real question here is whether the application can handle a restored file that
might be less than perfect.  We back up cc:Mail's database on a Dynamic basis,
on the theory that if we have to restore it, a mail item lost because the
database might be incomplete is the least of our problems; there is most
likely a lot of other mail missing since the last backup was taken anyway.
You may not always be able to make this assumption.
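
For reference, serialization is set in the backup copy group - something like
the two commands below, where the domain, policy set, and management class
names are only placeholders:

   update copygroup officedom standard ccmail type=backup serialization=dynamic
   activate policyset officedom standard

The client then binds the cc:Mail files to that management class with an
INCLUDE statement in its options.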

A short answer here, I guess, is "It depends on the application".

Hopefully someone else can handle questions 1 and 5.

Jerry Lawson
ITT Hartford Insurance
jlawson AT itthartford DOT com


________________________Forward Header________________________
Author: INTERNET.OWNERAD
Subject: NOTE 01/16/96 08:01:00
01-16-96 09:17 AM

FROM: ROLF VALTERS  - ADVANCED TECHNICAL SUPPORT ANALYST
***********************************************
  TEL: (2162), LOC: (O1A-092), ID:(CA003095 VMCDN)
Hello ADSM collective....

I hate being a newbie, but one has to start somewhere......

We have just installed the following (in evaluation mode)

ADSM SERVER V1.R1.L08/1.8 on MVS
ADSM Client for OS/2 Warp on 1 client (8 more to come)
  these clients are actually Server 500's with 5-8GB each of network
  data. (Approx 18GB total.)  This will grow as we add servers.

Other software....
Communication protocol - SNA LU6.2, using Communications Manager/2 to
  connect.
Network OS - IBM LAN Server 3.0 (Advanced)

It is our intent not to roll client access out to all network users,
but to back up their home directories directly from the servers.  The
thought is that this will save us training 600+ users (ease of use of
ADSM notwithstanding), save money, and give us controlled backup/recovery
standards.
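
Roughly speaking, each server's client options file would just point ADSM
at the data drives and the users' home directories - something like the
lines below, where the drive letters and paths are placeholders until we
work out the real include/exclude list:

   domain  d: e:
   exclude d:\...\*.tmp
   include d:\home\...\*
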
Naturally there are some questions that arise from such a scenario....

First off, my apologies if any points rehash old news - I did my best to
review the last 3 months (whew) of entries from this group.  Many
questions have already been addressed by several $.02 contributions
from Andy Raibeck - many thanks.

1.  Now that it is all installed, I tried backing up the single Server
    500 (32MB RAM - 4GB of files) and got about a 27KB per second transfer
    rate.  At that rate, the 4GB of data would be backed up in about 45
    hours!  Obviously there is/are tuning parm(s) to change.  I already
    changed the VTAM entry for MAXDATA from 2048 to 4096 with no
    difference.  We will not change from SNA to TCP/IP.  Our parent
    group in St. Paul is getting between 250 and 490KB/sec, but is
    running with TCP/IP and so can offer no aid.  From what I understand,
    SNA should be able to reach comparable transfer rates.
    Is anybody else running SNA LU6.2, and where should I look to pump
    up the throughput?

2.  It seems that some brain retreading is needed regarding the rules
    surrounding backups/recoveries.  With other products, one would do
    a full backup, followed by a period of daily incrementals.  This is
    repeated in cycles of weeks or months, depending on the policies of the
    company.  To recover, one would restore from the latest FULL backup
    tape, followed by all the incrementals done up to the disaster date.
    ADSM seems to only need incrementals by design.  The first
    incremental backup is of course HUGE.  The following incrementals
    are considerably smaller, but you NEVER have to do a 'point in time'
    full backup.  All the information surrounding a client's files is
    kept in the ADSM DB.  Yet I read that some of you are still talking
    about the FULL/INCREMENTAL scenario.  This would give you multiple copies
    of the same file for no apparent reason (to my way of thinking).
    If a client needed multiple sets of the same file for some reason,
    would not the ARCHIVE facility be enough?  It seems a great waste
    to have multiple versions of a static file.  Some other thoughts
    would be appreciated.
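
    (From my skim of the admin guide, the number of versions kept looks
    like it is already a policy setting - something like the line below,
    where the values are only examples - which is why extra full backups
    seem redundant to me.)

       update copygroup standard standard standard type=backup verexists=3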

3.  Is there a rule of thumb regarding the ratio of DASD backup pool to
    TAPE backup pool?  We would like to see the latest files on DISK
    and the older static files on tape.  Can ADSM handle this?
    From what I've read, the migration from a DASD pool to tape pool
    is controlled by client size.  When the DASD pool's HIGH threshold is
    reached, the largest client's files are migrated to tape, followed
    by the next largest, etc., until the LOW threshold is reached.  Could
    this be set to have the OLDEST unchanged files go to tape, and leave
    the more active files local?

4.  The clients will of course be active during the backups, so we
    can never back up the 'in-use' files used by the OS.  I would
    like to hear how some of you handle the backup/recovery of these
    files.

5.  Since we are backing up entire servers, this should serve us well
    in the event of a disaster.  For instance, if, for some horrible
    reason, we lose an entire RAID array, we should be able to
    recover the entire server (once the hardware config is repaired).
    There is a question regarding LAN Server though.  Each network
    file has an ACL attached to it.  These ACL's are the security access
    rules for each file.  With our current method of backup (SYTOS
    Premium backing up to DAT tape), we have to do 2 restores.... one
    for the data, then another for the ACL's.  In our opinion this
    sucks (technical term ;-) ) because it doubles our restore time.
    Can someone from IBM tell me how this is handled by ADSM?  I can
    find no reference in the manuals.

Well I guess I've chewed enough bandwidth for now.  My thanks in advance
to any/all who reply.

REGARDS, ROLF