ADSM-L

Subject: Re: adsm3.1 to tsm 3.7 gotcha
From: Reinhard Mersch <mersch AT UNI-MUENSTER DOT DE>
Date: Wed, 22 Mar 2000 13:51:30 +0100

Rik,

Your procedure looks fine if your DB is small, but for big DBs (ours is
28 GB and still growing) it is quite expensive. But perhaps you are
right; adding some disks may be cheaper than buying DRM ...

Regards,

Reinhard

Rik Foote writes:
 > We don't have any DRM or server-to-server licenses but are managing to
 > exchange DB backups between 2 ADSM servers manually.
 >
 > Our environment is
 > OS = SUN Solaris v2.6
 > ADSM = v3.1.2.1
 > 2 x 3590 (B11s) with a 10-slot ACL SCSI library each
 >
 > We set up the following manual DB exchange between our servers to save
 > having to use one scratch cart per day in our 10-slot stackers for DB
 > backups, as we use almost all of the 10 carts daily for archive/backup.
 > If we used one cart per day and kept each for one week, we would
 > practically have had to devote one whole stacker to DB backups, unless
 > we checked out the cartridge each day and replaced it.
 >
 > We were able to do it by
 >
 > o    writing the DB backups to disk (COMMAND = backup db type=full
 > devclass=dbbackup)
 > o    then using a UNIX script to transmit them to the second server
 > o    once there, having them archived (retention period = one week) and
 > deleted by the second server (COMMAND = -archm=MC1WEEKA -delete -filesonly)
 >
 > We do this both ways between the two servers, so we retain copies of
 > our DB at both sites, doing away with the need to copy the DB backup
 > carts and send them offsite.
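
For anyone who wants to script something similar: below is a minimal
sketch of the three steps Rik describes. The host name, directories, and
dsmadmc credentials are placeholders, not his actual setup, and I have
written the client options with their documented spellings (-archmc,
-deletefiles); adjust to taste.

    #!/bin/sh
    # Sketch of the manual DB exchange; all paths and hosts are assumptions.
    DBDIR=/adsm/dbbackup      # directory behind the "dbbackup" device class
    REMOTE=adsmsrv2           # the second ADSM server (placeholder name)

    # 1. Back up the ADSM database to the disk device class.
    dsmadmc -id=admin -password=secret "backup db type=full devclass=dbbackup"

    # 2. Ship the backup files to a staging directory on the second server.
    rcp $DBDIR/* $REMOTE:/adsm/dbstage/

    # 3. On the second server, archive them for one week (management class
    #    MC1WEEKA) and delete the staged copies once archived.
    rsh $REMOTE "dsmc archive '/adsm/dbstage/*' -archmc=MC1WEEKA -deletefiles"

Run it in both directions from cron and each site keeps a rolling
one-week set of the other's DB backups without tying up library slots.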

--
Reinhard Mersch                        Westfaelische Wilhelms-Universitaet
Zentrum fuer Informationsverarbeitung - ehemals Universitaetsrechenzentrum
Roentgenstrasse 9-13, D-48149 Muenster, Germany      Tel: +49(251)83-31583
E-Mail: mersch AT uni-muenster DOT de                Fax: +49(251)83-31653