ADSM-L

Subject: Re: Lotus Notes backups
From: Andy Raibeck <raibeck AT CONNIX DOT COM>
Date: Wed, 6 Dec 1995 16:56:03 -0500
Jerry Lawson relates:

> A couple of Lotus Notes backup problems I am having.......
>
> I am trying to back up an OS/2 machine that functions as a Lotus Notes
> server.  Because of the limit of the number of .NSF files the Notes client
> can back up, we are still just using the OS/2 client.  One of the servers,
> which functions in conjunction with a news search service, requires over
> 12 hours to do the backup.  Transfer rates this morning indicated that in
> excess of 2.5G of compressed data had been transferred.  This was an
> incremental, not an initial backup.
>
> 1.  It was suggested here that we set the backup to "Shared Static".  It
> would seem that this might be a big source of the slow backups.  However,
> it would appear that I may not have a good choice here - setting it to
> "Static" could mean that I don't get any backups.  What is my exposure if
> I set it to straight "Dynamic"?
>
> 2.  Another possible problem is that my DASD pool is relatively small -
> 1G, with thresholds at 90% and 50%.  Obviously this needs to be enlarged.
> Are there any guidelines anyone uses (outside of migrate no more than once
> a day?) to size the pool?  Is there any way to measure the amount of
> overhead having too small of a pool adds to the backup?
>
> 3.  We want to get to the Notes client, and exclude the .NSF files from
> the OS/2 backup.  We have also heard that the Notes backup client is not
> as fast as the OS/2 client.  Is this true?

1) Dynamic means you might get a fuzzy backup. So the question becomes: are
   fuzzy backups of Notes databases usable? Also, in the event you had to
   restore, you probably wouldn't even know which *.NSF files were fuzzy, and
   which ones weren't. Thus you don't even know which databases might be
   out of sync internally.

   I'm pretty sure IBM posted not too long ago about the 50-database
   limit with the Notes agent: the next OS/2 client will ship with the
   latest API, which eliminates this limitation. Hopefully this will be
   available in time to put under the tree.    (8->
   Then you can go the Notes agent route.

2) The approach I took with sizing my DASD pool was to keep the pool large
   enough to house one night's worth of backups (plus a little more "just
   in case"). If you're trying to stuff 2.5+ GB into a 1 GB pool every
   night, ADSM has to migrate the data to tape almost as fast as you're
   putting it on disk, so your backups are paying the cost of migration
   as they run. At minimum, size the pool to hold one night's backups.
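   To make that sizing rule concrete, here is a minimal Python sketch
   using the figures from this thread. The 25% headroom factor is my own
   assumption, not an ADSM rule:

```python
# Disk pool sizing sketch: the pool should hold one night's worth of
# backups plus some slack. Figures come from this thread; the 25%
# headroom factor is an assumed safety margin, not an ADSM rule.

def min_pool_size_gb(nightly_backup_gb, headroom=0.25):
    """Smallest disk pool (GB) that holds a night's backups plus slack."""
    return nightly_backup_gb * (1 + headroom)

nightly_gb = 2.5        # compressed data moved in one incremental run
current_pool_gb = 1.0   # the pool described in the original question

needed_gb = min_pool_size_gb(nightly_gb)
print(f"Need at least {needed_gb:.2f} GB; pool is {current_pool_gb:.2f} GB")
if current_pool_gb < needed_gb:
    print("Pool is undersized; expect migration during the backup window.")
```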

   I have a 17 GB disk pool, and on a busy night, I back up as much as 13
   or 14 GB. I have my thresholds set to 10% and 80%. During the latter part
   of the afternoon, I have a scheduled administrative command that sets the
   high threshold to 11%, causing migration to kick off. The pool is set to
   run up to 6 migration processes. It takes around two to three hours to
   migrate the data to a collocated tape pool. (I have another scheduled
   command that resets the high threshold back to 80%.) This basically
   guarantees that I'll have enough room for that night's backups without having
   to incur the cost of migration during the backup. Migration happens later
   on, at *my* convenience. This scheme has served us well.
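   For reference, that threshold toggle might be set up as a pair of
   administrative schedules along these lines. The pool and schedule
   names here are invented, and the exact syntax depends on your server
   level, so check the Administrator's Reference:

```
define schedule lowermig type=administrative active=yes starttime=16:00 -
   cmd="update stgpool backuppool highmig=11"

define schedule raisemig type=administrative active=yes starttime=19:30 -
   cmd="update stgpool backuppool highmig=80"
```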

   To monitor the amount of data backed up to the server on a daily basis,
   I've written a simple SAS program that reads the SMF records generated
   by ADSM, and spits out several reports about ADSM activity. One of these
   reports shows me how much data was backed up for every day (24-hour
   period) of the month, so this is what I use to size my disk pool.

   I save a year's worth of SMF records in a GDG, one month per generation
   data set.

   If anyone is interested in my SAS program, let me know. It's nothing
   fancy, but it does the job.
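   For anyone without SAS, the core of that daily-volume report can be
   sketched in a few lines of Python. The log records below are invented
   for illustration; the real figures would come from the SMF records
   ADSM writes:

```python
from collections import defaultdict
from datetime import datetime

# Sketch of the daily-volume report idea (not the actual SAS program).
# Each record is (timestamp, bytes backed up); this sample data is
# made up, standing in for what the ADSM SMF records would supply.
records = [
    ("1995-12-01 22:10", 6_000_000_000),
    ("1995-12-01 23:45", 7_500_000_000),
    ("1995-12-02 22:05", 9_000_000_000),
]

# Sum the bytes backed up in each 24-hour period.
daily = defaultdict(int)
for stamp, nbytes in records:
    day = datetime.strptime(stamp, "%Y-%m-%d %H:%M").date()
    daily[day] += nbytes

for day in sorted(daily):
    print(day, f"{daily[day] / 1e9:.1f} GB")

# Size the disk pool off the busiest day of the month.
peak_gb = max(daily.values()) / 1e9
print(f"Peak day: {peak_gb:.1f} GB")
```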

3) IBM has said that the Notes agent isn't a high-speed performer. From what
   I've been able to gather, this is due in large part to the overhead
   incurred in accessing the Notes database. You might want to consider
   doing occasional (e.g. weekly or monthly) backups of the *.NSF files
   with the regular OS/2 Backup-Archive client, along with your daily
   backups with the Notes agent (especially for larger databases). This
   way, instead of potentially having to use the Notes agent to incrementally
   restore thousands of documents, you can do a "full" restore of the
   database with the OS/2 Backup-Archive client, then just restore the changed
   documents on top of it with the Notes agent.

Andy Raibeck
Connecticut Mutual
203-987-3521