ADSM-L

Subject: Re: Sharing tapedrives between several adsm servers ???
From: Andreas Buser <andreas.buser AT BASLER DOT CH>
Date: Wed, 1 Mar 2000 13:19:22 +0100

That's no problem at all on OS/390!

ADSM behaves like any other STC or batch job on OS/390 (except that it uses a few more tapes...).

From a technical point of view it is implemented the same way as DFSMShsm is.

Two things are the key points:

DEVCLASS

Communication with the tape management system (RMM, CA-1, etc.). For this, take a look at the DELETIONEXIT option in the server options file. A short sketch of both follows below.
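
To make those two points concrete, here is a minimal sketch. The class name, unit name, mount limit and exit module are only illustrative assumptions, not values taken from this thread; check the ADSM Administrator's Reference for MVS and your tape management documentation for the exact parameters.

   In the server options file, DELETIONEXIT names the module that hands
   expired volumes back to the tape management system (the module shown
   is the DFSMSrmm one; CA-1 and the others supply their own):

       DELETIONEXIT EDGTVEXT

   On each of the four servers, a device class pointing at the Magstar
   drives; MVS allocation, not ADSM, picks the physical drive in the 3494:

       DEFINE DEVCLASS MAGSTAR DEVTYPE=3590 UNIT=3590-1 MOUNTLIMIT=2

Because every server is just another address space, normal MVS allocation arbitrates the eight drives between the four servers, and MOUNTLIMIT keeps any one server from monopolizing them. No server-to-server connection is needed just for drive sharing.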


_________________________________________________

Kind Regards
Andreas Buser

Tel: ++41 61 285 73 21  Fax: ++41 61 285 70 98

Email: Andreas.Buser AT Basler DOT ch

Address:
Basler Versicherungen
Andreas Buser
Abt. Informatik
Aeschengraben 21
4002 Basel
Switzerland



Andreas Rensch <RenschA@ALTE-LEIPZIGER.DE>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
29.02.00 16:46
Please respond to "ADSM: Dist Stor Manager"

To:      ADSM-L AT VM.MARIST DOT EDU
cc:
Subject: Sharing tapedrives between several adsm servers ???

Hi,

is it possible to share (eight) Magstar tape drives in one 3494 tape library
between 4 ADSM servers (on OS/390)? Do I have to establish a server-to-server
connection, or does anybody know of a relevant manual? Thanks for your help.

mfg / regards

andreas rensch / rz-qs
tel : +49(0)6171 66 3692 / fax : +49(0)6171 66 7500 3692 /
mailto:renscha AT alte-leipziger DOT de
Alte Leipziger Lebensversicherung aG - Alte Leipziger Platz 1 - D 61440
Oberursel - http://www.alte-leipziger.de

It's a little known fact that the Dark Ages were caused by unresolved Y1K
issues.