ADSM-L

Re: error

1997-08-20 04:55:19
Subject: Re: error
From: Christo Heuer <christoh AT ABSA.CO DOT ZA>
Date: Wed, 20 Aug 1997 10:55:19 +0200
Hi Michael,

I know others on the list have already responded to your
mail, but there is something else you should look at.
If you have eliminated your disk pools completely and are
writing directly to tape, check the mount limit in the
device class for the tapes. If I remember correctly, you
need a tape drive for each client that backs up, so if
20 clients kick in at the same time you will need
20 tape drives. The above might not be exactly your problem,
but it is something to keep in mind.
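For example, from the ADSM administrative client you could check the
current setting and then raise it to match the number of physical drives
you have. This is only a rough sketch - the device class name 3590CLASS
and the limit of 4 are made up, so substitute your own (QUERY DEVCLASS
will show you what is defined on your server):

    query devclass format=detailed
    update devclass 3590CLASS mountlimit=4

Keep in mind the mount limit can never usefully be higher than the
number of physical drives in the library.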
I think Timothy Pittson has already given you the answer to your problem
regarding the copy group still pointing to your disk pool.
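If that is the case, something along these lines should point the backup
copy group straight at the tape pool. Again, just a sketch - the domain,
policy set, management class and tape pool names (STANDARD / TAPEPOOL)
are made up, so use your own:

    update copygroup STANDARD STANDARD STANDARD type=backup destination=TAPEPOOL
    validate policyset STANDARD STANDARD
    activate policyset STANDARD STANDARD

The change only takes effect once the policy set is activated again.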

Shout if you need more info regarding this.

Regards

Christo Heuer
Johannesburg
South Africa
Christoh AT absa.co DOT za


>         Hello all... I have moved all my disk storage pools off to the tape
> storage pools. We are terminating the contract on some 3380's we were using
> for ADSM disk storage. I am getting failures on my nightly backups. Here is
> the message I receive:
>
> 08/18/1997 04:59:33  ANS4329S Server out of data storage space
> 08/18/1997 04:59:33  ANS4847E Scheduled event 'UTEPDNS' failed.  Return
> code = 4.
>
> I added some more tapes to both storage pools, but they still failed. Has
> anyone encountered this problem? When I did the migration from disk to tape
> I used the move data command. What should I do?
>
> Thanks!!!
>
>
>
> Michael A. Castillo
> Software Systems Specialist
> Ph:747-5256
> Fax:747-5067
> Beeper :546-3756
> E-Mail: mcastill AT utep DOT edu
>