Subject: [ADSM-L] Volume query
From: "Cheung, Richard" <Richard.Cheung AT SANTOS DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 16 Dec 2008 17:20:13 +1030
Hello.. 

 

I have a query about some volumes in my library.. 

 

One of them is listed below: 

 

Volume Name: S01553L3
Storage Pool Name: COPYPOOL
Device Class Name: LTO
Estimated Capacity: 667.0 G
Scaled Capacity Applied:
Pct Util: 0.0
Volume Status: Full
Access: Offsite
Pct. Reclaimable Space: 100.0
Scratch Volume?: Yes
In Error State?: No
Number of Writable Sides: 1
Number of Times Mounted: 4
Write Pass Number: 1
Approx. Date Last Written: 01/16/2008 06:58:57
Approx. Date Last Read: 01/12/2008 12:37:27
Date Became Pending:
Number of Write Errors: 0
Number of Read Errors: 0
Volume Location: VAULT
Volume is MVS Lanfree Capable : No
Last Update by (administrator): admin
Last Update Date/Time: 01/21/2008 20:28:37
Begin Reclaim Period:
End Reclaim Period:
Drive Encryption Key Manager:

 

In other words, it is showing 100% reclaimable space and that it is a
scratch volume... but for some reason the system hasn't asked me to
bring it back from offsite... 
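
For what it's worth, a few admin-console checks that can narrow this down (a sketch only, assuming a TSM server with DRM configured; substitute your own pool and volume names):

```
/* Offsite copy-pool volumes are only emptied by reclamation, so check  */
/* the pool's reclamation threshold and REUSEDELAY setting.             */
query stgpool copypool f=d

/* DRM only asks for a volume back once it reaches the VAULTRETRIEVE    */
/* state; see where this volume sits in the DRM lifecycle.              */
query drmedia S01553L3 f=d

/* List everything DRM currently considers ready to come back onsite.   */
query drmedia * wherestate=vaultretrieve
```

A Full volume with 0% utilisation still has to be processed by reclamation before it goes empty, sits out any REUSEDELAY as pending, and only then moves toward VAULTRETRIEVE; temporarily lowering the threshold (e.g. `update stgpool copypool reclaim=nn`) is one way to trigger that, though whether it applies here depends on the server's settings.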

 

Why? 

 

Richard

 


