Re: Help needed to free slots in a 3583 library

From: "Prather, Wanda" <Wanda.Prather AT JHUAPL DOT EDU>
Date: Wed, 7 Apr 2004 12:48:18 -0400
If these tapes are marked FULL, then the tapes ARE being filled efficiently.
It's just that, as you said, your data is expiring at different rates.

Because TSM doesn't expire files based on date, this is a natural side
effect; tapes don't all free up at the same time.
This is normal, and I've seen it with every TSM server I've worked on.

If you set your reclamation threshold to 60, you can, over time, end up with
an apparently infinite number of tapes sitting around with 59% free space.
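One workaround is to drop the threshold temporarily so those half-empty tapes qualify for reclamation, then put it back afterwards (assuming your primary tape pool is BACKUP_TAPE as in your output; the 40 is just an illustrative value):

```
update stgpool BACKUP_TAPE reclaim=40
update stgpool BACKUP_TAPE reclaim=60
```

Just watch drive contention while it runs; reclamation ties up two drives per process.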

Your choices are

(1) occasionally bite the bullet and do a lot of forced reclaims or MOVE
DATAs (a waste of your time, which is a lot more expensive than a $45 LTO1
cartridge).

(2) upgrade your robot capacity (as you said, in your case the best way to
do that is to upgrade the drives & cartridges)
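If you do bite the bullet and go the manual route in (1), the sequence for one volume looks like this (volume name taken from your list as an example):

```
update volume 000411L1 access=readonly
move data 000411L1
```

MOVE DATA with no STGPOOL parameter consolidates the data onto other volumes in the same pool, so you skip the extra trip through the disk pool.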


Deleting unused filespaces WILL immediately free up space on your other
tapes.  It may free some entirely, or send the %reclaimable up very high so
that you can reclaim them quickly.
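For reference, the commands are below; NODENAME and the filespace name are placeholders, so check what you actually have first:

```
query filespace NODENAME
delete filespace NODENAME \\nodename\d$
```

The space on tape is not usable until expiration and reclamation have run against those volumes.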

The other thing you can do is review your management class retention
periods.  Are you keeping ALL data for the same number of versions/length of
time?  Can you change the management class of some directories (like the
winnt directory) to keep fewer versions?
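That rebinding is done with an INCLUDE statement in the client option file. Here SHORTRETAIN is a hypothetical management class you would define with fewer versions:

```
include c:\winnt\...\* SHORTRETAIN
```

Remember the include/exclude list is matched from the bottom up, so put the more specific rules lower in the file.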

It may help to issue "q occ" and look at the LOGICAL SPACE OCCUPIED for
each filespace.  If it is noticeably larger than the client's physical disk,
or if one filespace is way out of whack with the others, you may be keeping
more copies of the data from that drive than you need.
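For example, to see the occupancy for a single node (NODENAME is a placeholder):

```
query occupancy NODENAME
```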

Also, make sure you have EXCLUDED the TSM server storage pool and database
volumes (normally with a .dsm suffix) from the backups.
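In the client option file that would look something like this (the paths are only examples; substitute wherever your server actually keeps its volumes):

```
exclude.dir c:\tsmdata
exclude     *:\...\*.dsm
```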

If you don't find anything else you can get rid of, your management has a
choice:  Change the backup coverage requirements, or fork over the money for
new drives and media.

Hope that helps...

Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)

-----Original Message-----
From: Copperfield Adams [mailto:Copperfield.Adams AT WWAVRC.CO DOT UK]
Sent: Wednesday, April 07, 2004 11:22 AM
Subject: Help needed to free slots in a 3583 library

Hi all,

I have a query regarding tape usage:

Analysis: We currently have TSM 5.2 installed on a Win2K Server platform,
backing up approx. 50 nodes to 2 disk pools and then migrating to LTO tapes in
a 3583 library, one copy for the offsite vault and one copy to remain onsite
within the tape library. We have 60 available slots for LTO volumes, as we
leave the I/O drawer free to take volumes in and out of the library. We are
using 100GB native LTO volumes. We have 3 nodes that are configured to back
up using collocation.

Problem: We are constantly running at full capacity within the library
(always 59 private volumes), leaving only one slot for scratch. Because we
have 3 collocated nodes and produce a dbbackup every day to go offsite, we
need a minimum of 5 free slots (assuming all other backed-up data for other
nodes does not exceed 100GB). When I issue 'q eve * * begind=-1' to check
the previous evening's backups, I am usually shown that most nodes have
completed, but often have only 2 tapes to go offsite (inc. the dbbackup),
suggesting that collocation is not happening. Aside from this, when I issue
a script to check what space I am able to reclaim from the onsite tapes (in
order to free up some slots for scratch volumes) I am presented with the
following (I have omitted the offsite copy LTOs):


STGPOOL_NAME           VOLUME_NAME            PCT_RECLAIM     ACCESS
------------------     ------------------     -----------     ---------
BACKUP_TAPE            000411L1                      43.8     READWRITE
BACKUP_TAPE            000466L1                      43.5     READWRITE
BACKUP_TAPE            000362L1                      43.0     READWRITE
BACKUP_TAPE            000382L1                      42.6     READWRITE
BACKUP_TAPE            000380L1                      39.6     READWRITE
BACKUP_TAPE            000529L1                      39.1     READWRITE
BACKUP_TAPE            000632L1                      38.7     READWRITE
BACKUP_TAPE            000563L1                      38.4     READWRITE
BACKUP_TAPE            000418L1                      35.4     READWRITE
BACKUP_TAPE            000403L1                      34.6     READWRITE
BACKUP_TAPE            000572L1                      33.7     READWRITE
BACKUP_TAPE            000364L1                      31.4     READWRITE
BACKUP_TAPE            000564L1                      28.8     READWRITE
BACKUP_TAPE            000402L1                      28.5     READWRITE
BACKUP_TAPE            000495L1                      25.8     READWRITE
BACKUP_TAPE            000391L1                      25.6     READWRITE
BACKUP_TAPE            000412L1                      22.9     READWRITE
BACKUP_TAPE            000453L1                      22.8     READWRITE
BACKUP_TAPE            000501L1                      21.3     READWRITE
BACKUP_TAPE            000413L1                      16.6     READWRITE
BACKUP_TAPE            000612L1                      16.1     READWRITE
BACKUP_TAPE            000470L1                      14.8     READWRITE
BACKUP_TAPE            000618L1                      11.4     READWRITE
BACKUP_TAPE            000395L1                      11.1     READWRITE
BACKUP_TAPE            000607L1                       6.3     READWRITE
BACKUP_TAPE            000388L1                       3.2     READWRITE
BACKUP_TAPE            000422L1                       2.4     READWRITE
BACKUP_TAPE            000570L1                       2.2     READWRITE
BACKUP_TAPE            000516L1                       0.0     READWRITE
BACKUP_TAPE_COL        000463L1                      44.7     READWRITE
BACKUP_TAPE_COL        000600L1                      42.9     READWRITE
BACKUP_TAPE_COL        000575L1                      41.4     READWRITE
BACKUP_TAPE_COL        000530L1                      40.3     READWRITE
BACKUP_TAPE_COL        000540L1                      37.7     READWRITE
BACKUP_TAPE_COL        000605L1                      35.7     READWRITE
BACKUP_TAPE_COL        000417L1                      34.7     READWRITE
BACKUP_TAPE_COL        000457L1                      33.0     READWRITE
BACKUP_TAPE_COL        000437L1                      32.4     READWRITE
BACKUP_TAPE_COL        000644L1                      29.5     READWRITE
BACKUP_TAPE_COL        000415L1                      28.4     READWRITE
BACKUP_TAPE_COL        000581L1                      28.3     READWRITE
BACKUP_TAPE_COL        000583L1                      28.3     READWRITE
BACKUP_TAPE_COL        000601L1                      27.7     READWRITE
BACKUP_TAPE_COL        000452L1                      26.3     READWRITE
BACKUP_TAPE_COL        000414L1                      24.9     READWRITE
BACKUP_TAPE_COL        000464L1                      23.1     READWRITE
BACKUP_TAPE_COL        000477L1                      22.4     READWRITE
BACKUP_TAPE_COL        000416L1                      22.2     READWRITE
BACKUP_TAPE_COL        000502L1                      20.6     READWRITE
BACKUP_TAPE_COL        000545L1                      19.2     READWRITE
BACKUP_TAPE_COL        000360L1                      15.0     READWRITE
BACKUP_TAPE_COL        000379L1                       7.7     READWRITE
BACKUP_TAPE_COL        000515L1                       4.6     READWRITE
BACKUP_TAPE_COL        000596L1                       4.5     READWRITE
BACKUP_TAPE_COL        000559L1                       1.9     READWRITE
BACKUP_TAPE_COL        000442L1                       0.6     READWRITE

By looking at the PCT_RECLAIM column I can see that 000516L1 has 0%
reclaimable space, going up to 43.8% for the least-used volume (000411L1).
What I have tried to do to get more usage out of my LTO volumes is issue a
'MOVE DATA <VOL_NAME> STGPOOL=<DISK_POOL>' to move all the data from the
least-used volumes back to the disk pool to be migrated onto other onsite
volumes (making each volume READONLY before starting the process so TSM
does not reuse it when it is returned to scratch status once empty). In
effect I am running manual reclamations, but as you can see from the output
above it does not look like tape space is being managed as efficiently as
it could be. I'm not sure whether this is to do with the way TSM has been
configured or whether it is natural to see volumes used to differing
percentages because of expirations, etc.
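For example, for the fullest volume the sequence I run is (DISK_POOL stands for one of our two disk pools):

```
update volume 000516L1 access=readonly
move data 000516L1 stgpool=DISK_POOL
```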
Does anyone know how to get onsite LTO volumes to use closer to 100% of
their capacity, or are we expecting too much of our setup? Do we need to
start using 200GB tapes (this will presumably mean we will need to replace
our 4 existing drives)? Also, there are 27 onsite volumes for collocated
nodes but only 3 collocated nodes; is there something I can check to
ensure this is correct?
Also, I am planning to delete some unused filespaces, which will remove
approx. 1.5TB from our backed-up data. Will deleting these filespaces cause
TSM to automatically purge the associated data that resides on the
onsite/offsite volumes, and will this mean that we free up some slots
within the library for scratch media?

Hope someone can offer some suggestions.

Regards, C. Adams.

C. Adams
IT Support Analyst
WRC Holdings Limited
