Subject: Re: ADSM Tape Usage.
From: "Lambelet,Rene,VEVEY,FC-SIL/INF." <Rene.Lambelet AT NESTLE DOT COM>
Date: Tue, 26 May 1998 15:57:24 +0200
Hello,

In the admin client (GUI mode), you can display the storage used by a
node on the server by:

opening NODES,

selecting one or more nodes, then clicking on File.../Show storage
usage...
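
If you prefer the command line, a rough equivalent from the dsmadmc
administrative client (a sketch only - check the exact syntax with HELP
on your server level) is:

   adsm> audit license
   adsm> query auditoccupancy

QUERY AUDITOCCUPANCY reports the total backup and archive storage each
node occupies as of the last license audit; you can append one or more
node names to limit the report.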

Regards,

René Lambelet
Nestec SA - 55, Av. Nestlé - CH-1800 Vevey
Tel: ++41/21/924'35'43 / Fax: ++41/21/924'45'89
E-Mail: rene.lambelet AT nestle DOT com



>-----Original Message-----
>From:  Hilton Tina [SMTP:HiltonT AT TCE DOT COM]
>Sent:  Tuesday, May 26, 1998 3:03 PM
>To:    ADSM-L AT VM.MARIST DOT EDU
>Subject:       Re: ADSM Tape Usage.
>
>Try the "q occ * *" (QUERY OCCUPANCY) admin command.  That will show the
>number of files and the amount of space used per file-system for each
>node, in each storage pool.
>
>Tina
>
>> -----Original Message-----
>> From: Cohn, Grant [SMTP:Grant.Cohn AT SAPREF DOT COM]
>> Sent: Tuesday, May 26, 1998 7:13 AM
>> To:   ADSM-L AT VM.MARIST DOT EDU
>> Subject:      ADSM Tape Usage.
>>
>> Hello All
>>
>> Can anyone tell me if there is a query that I can give to find out how
>> much space is being used in ADSM for each node that we are backing up?
>> We seem to be running out of tapes, which I find hard to believe, and
>> I want to know where all our tape space is being used.
>>
>> If possible it would also be good to know right down to the
>> file-system level how much we are backing up.
>> The 'FILE SPACES' part of the GUI shows me ALL file-systems on the
>> machine, even if we are not backing them up - so that is no good to
>> look at.
>>
>> We are running ADSM server 2.1.5.15
>>                        ADSM client  2.1.10.7
>> on AIX 4.1.5
>>
>> Nodes are AIX 4.1.5 and NT 4
>>
>> Thanks in advance!
>>
>>
>> Grant Cohn
>>
>> Shell & BP South African Petroleum Refineries
>> Durban, South Africa.
>> Tel     :   (+27)  (0)31 - 480 1610
>> e-mail :   grant.cohn AT sapref DOT com
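
(For reference, a sketch of the occupancy query Tina suggests above, run
from the dsmadmc administrative client. Parameter support may vary by
server level, and NODE1 and TAPEPOOL are just placeholder names:

   adsm> query occupancy * *
   adsm> query occupancy NODE1 *
   adsm> query occupancy * * stgpool=TAPEPOOL

The first form lists every node; the second limits the report to a
single node, and the third to a single storage pool. Summing the
space-occupied column per node shows where the tape space is going.)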