ADSM-L

Subject: Re: Shortening a long backup
From: Daniel Thompson <thompsod AT USAA DOT COM>
Date: Tue, 16 Sep 1997 15:56:17 -0500
Note to all:  This is a response to the message Julie Phinney sent asking
about alternatives for shortening long NT backups on Notes servers.

JP,

  We had the same problem with two sets of servers.  The first set was our NT
home directory servers, which have humongous numbers of files, many of which
are touched daily.  The second set was our Lotus Notes servers.  I may be
behind the curve, but is there an NT Lotus Notes agent yet?  I thought it was
still pending.

We use two solutions to get our backups completed within our window.  The
first solution is for servers that have multiple hard drives.  On these
servers we run command files at the beginning of our window to do
incremental-by-date backups on subsets of the hard drives.  For example,
for a server with c: d: f: g: and h: drives we run two command files that
issue DSMC INCR C: D: F: -INCRBYDATE and DSMC INCR G: H: -INCRBYDATE.
Later in the backup window we run a normal incremental via the ADSM
scheduler.  This third backup handles any rebinding, expiration, etc. that
the incremental-by-date backups do not do.  This seems to work very well
for us; a rough sketch of the command files is below.
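In case it helps to picture it, here is a minimal sketch of those two command
files.  The file names, log paths, and comments are placeholders, not our
actual names, but the DSMC INCR ... -INCRBYDATE commands are the same ones we
run:

    rem incr_cdf.cmd -- run early in the backup window against the first
    rem subset of drives (log file names are just placeholders)
    dsmc incr c: d: f: -incrbydate >> c:\adsm\logs\incr_cdf.log

    rem incr_gh.cmd -- run at the same time against the remaining drives
    dsmc incr g: h: -incrbydate >> c:\adsm\logs\incr_gh.log

    rem later in the window the ADSM scheduler runs a normal incremental,
    rem which picks up the rebinding, expiration, etc. that -incrbydate skips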

However, if there is a single large hard drive, that solution does not
work.  In that case we have had to define two nodes and run two schedulers
at the same time.  The ADSM client on that server is still the 16-bit code,
so we run the second scheduler from a bat file that points the DSM_CONFIG
environment variable at a second copy of the opt file with the second
nodename defined in it.  I truly do not like this solution, as it causes
confusion about where the data is stored.  Like you, we break it down by
directory name.  A sketch of the bat file and second opt file is below.
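Roughly, the setup looks like the following.  The node name, file names, and
paths here are placeholders for our actual ones, and the opt file is trimmed
to the lines that matter for this trick:

    rem sched2.bat -- starts the second scheduler under the second node name
    rem (paths and names are placeholders)
    set DSM_CONFIG=c:\adsm\dsm_node2.opt
    dsmc schedule

    * dsm_node2.opt -- second copy of the options file
    NODENAME   ntserver1_b
    * plus the usual server and communication options, and the
    * include/exclude statements that split the work by directory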

I intend to do some testing to find alternatives.  One question I have is
simply whether running two backups at the same time for the same node with
the same opt file would run significantly faster than just one.  If you
have a data area made up almost entirely of large files, as with Notes, is
ADSM smart enough to know that a file is currently being backed up and just
move on to the next file?  For large numbers of small files that overhead
would be prohibitive, but for large files it should not be all that
inefficient.  I will let you know the outcome; please let us know the
outcome of any of your testing.

If you want to see more specific examples of what we do, write me directly.

Good luck,
  Dan T.

PS.  I hereby officially offer my regrets to IM Tech IBM for having
potentially increased your experience with NT, a system not endorsed by IM
Tech IBM.