Re: Help needed to free slots in a 3583 library

From: Mike Bantz <mbantz AT RSINC DOT COM>
Date: Wed, 7 Apr 2004 09:56:56 -0600
I was in the same boat. It sucks.

We have since turned collocation off, since we're more of a backup shop and not
into tons of restores. Also, we now leave the DRM tapes in the library to
get filled more, rather than sending them offsite every day (we do not have
daily tape delivery. Yet.)

We currently have our reclamation threshold set to 30% to bump up that
utilization, but we run that via two admin schedules.

Here's an example:

At 12:00, we have an admin schedule that issues "update stg copypool ..."
At 16:00, we have an admin schedule that issues "update stg copypool ..."
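If anyone wants to set up the same thing, the pair of schedules can be defined
with "define schedule type=administrative". The schedule names, times and
reclaim thresholds below are my own illustrative guesses (the exact commands
were cut off above), and the admin credentials are placeholders:

```shell
# Hypothetical sketch: two admin schedules that open and close a daily
# reclamation window by changing the copy pool's reclamation threshold.
# Schedule names, times, thresholds and credentials are examples only.
dsmadmc -id=admin -password=secret \
  'define schedule start_reclaim type=administrative active=yes starttime=12:00 period=1 perunits=days cmd="update stgpool copypool reclaim=30"'
dsmadmc -id=admin -password=secret \
  'define schedule stop_reclaim type=administrative active=yes starttime=16:00 period=1 perunits=days cmd="update stgpool copypool reclaim=100"'
```

Setting the threshold back to 100 at 16:00 effectively stops reclamation so it
doesn't fight the evening backup window for drives.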

This is what our "q vol" shows (keep in mind that a whole ton of these are
scratch tapes right now). As you can see, the tape utilization is a lot
higher:

Volume Name               Storage      Device      Estimated    Pct  Volume
                          Pool Name    Class Name  Capacity    Util  Status
                                                   (MB)
------------------------  -----------  ----------  ---------  -----  -------
00001-L1                  COPYPOOL     LTOCLASS    205,859.4   97.3    Full
00002-L1                  COPYPOOL     LTOCLASS    106,113.6   86.0  Filling
00003-L1                  TAPEPOOL     LTOCLASS    201,242.6   92.2    Full
00006-L1                  TAPEPOOL     LTOCLASS    216,692.6   99.9    Full
00007-L1                  COPYPOOL     LTOCLASS    213,599.4  100.0    Full
00008-L1                  COPYPOOL     LTOCLASS    201,831.4   83.1    Full
00010-L1                  COPYPOOL     LTOCLASS          0.0    0.0   Empty
00014-L1                  COPYPOOL     LTOCLASS    178,820.2   87.3  Filling
00015-L1                  COPYPOOL     LTOCLASS    102,400.0   76.1  Filling
00016-L1                  TAPEPOOL     LTOCLASS    102,400.0   24.9  Filling
00022-L1                  TAPEPOOL     LTOCLASS    205,063.7   84.3    Full
00025-L1                  TAPEPOOL     LTOCLASS    201,604.4   71.9    Full
00027-L1                  COPYPOOL     LTOCLASS    239,662.7   82.1    Full
00029-L1                  COPYPOOL     LTOCLASS    214,230.8   77.2    Full
00030-L1                  TAPEPOOL     LTOCLASS    215,340.7   76.3    Full
00031-L1                  TAPEPOOL     LTOCLASS    102,400.0   43.3  Filling
00033-L1                  COPYPOOL     LTOCLASS    111,865.1   73.7  Filling
00038-L1                  TAPEPOOL     LTOCLASS    208,595.1   71.6    Full
00047-L1                  COPYPOOL     LTOCLASS    173,673.4   92.8  Filling
00050-L1                  TAPEPOOL     LTOCLASS    211,247.5  100.0    Full
00057-L1                  TAPEPOOL     LTOCLASS    241,950.5   85.2    Full
00066-L1                  COPYPOOL     LTOCLASS    214,408.9   77.8    Full
00067-L1                  COPYPOOL     LTOCLASS    205,609.1   95.8    Full
00070-L1                  TAPEPOOL     LTOCLASS    217,336.3   93.2    Full
00095-L1                  TAPEPOOL     LTOCLASS    159,527.3   83.4    Full
00097-L1                  TAPEPOOL     LTOCLASS    163,106.7   92.1    Full
00100-L1                  TAPEPOOL     LTOCLASS    211,595.7   85.7    Full
00101-L1                  COPYPOOL     LTOCLASS    178,355.2   81.8    Full
00102-L1                  COPYPOOL     LTOCLASS    161,291.2   93.3    Full
00103-L1                  COPYPOOL     LTOCLASS    211,708.7   91.3  Filling
00106-L1                  TAPEPOOL     LTOCLASS    233,198.6   79.2    Full
00107-L1                  TAPEPOOL     LTOCLASS    224,637.1   80.9    Full
00108-L1                  COPYPOOL     LTOCLASS          0.0    0.0   Empty
00110-L1                  TAPEPOOL     LTOCLASS    221,950.0   84.1    Full
00111-L1                  COPYPOOL     LTOCLASS    210,298.6   99.9    Full
00113-L1                  COPYPOOL     LTOCLASS    242,915.3   89.4    Full
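If you want to eyeball utilization the same way, a rough awk one-liner can
average the Pct Util column of "q vol"-style output per storage pool. The two
sample lines and the per-pool averaging below are my own illustration, not
part of the original output; in a real run you would pipe the server output
in instead of the printf:

```shell
# Average the Pct Util column (field 5) of "q vol"-style output per pool.
# The two sample lines are taken from the listing above; a real run would
# pipe dsmadmc's "q vol" output in instead.
printf '%s\n' \
  '00001-L1  COPYPOOL  LTOCLASS  205,859.4   97.3  Full' \
  '00003-L1  TAPEPOOL  LTOCLASS  201,242.6   92.2  Full' |
awk 'NF >= 6 && $5 + 0 == $5 { sum[$2] += $5; n[$2]++ }   # keep only data rows
     END { for (p in sum) printf "%s %.1f\n", p, sum[p] / n[p] }' |
sort
```

The `$5 + 0 == $5` guard skips header and separator lines, so the whole
listing can be fed through unfiltered.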

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Copperfield Adams
Sent: Wednesday, April 07, 2004 9:22 AM
Subject: Help needed to free slots in a 3583 library

Hi all,

I have a query regarding tape usage:

Analysis: We currently have TSM 5.2 installed on a Win2k Server platform,
backing up approx. 50 nodes to 2 disk pools and then migrating to LTO tapes in
a 3583 library: one copy for the offsite vault and one copy to remain onsite
within the tape library. We have 60 available slots for LTO volumes, as we
leave the I/O drawer free to take volumes in and out of the library. We are
using 100 GB native LTO volumes. We have 3 nodes that are configured to
back up using collocation.

Problem: We are constantly running at full capacity within the library
(always 59 private volumes), leaving only one slot for scratch. Because we
have 3 collocated nodes and produce a dbbackup every day to go offsite, we
need a minimum of 5 free slots (assuming the backed-up data for all other
nodes does not exceed 100 GB). When I issue 'q eve * * begind=-1' to check
the previous evening's backups, I usually see that most nodes have
completed, but there are often only 2 tapes to go offsite (inc. the dbbackup),
suggesting that collocation is not happening. Aside from this, when I run
a script to check what space I am able to reclaim from the onsite tapes (in
order to free up some slots for scratch volumes) I am presented with the
following (I have omitted the offsite copy LTOs):


STGPOOL_NAME           VOLUME_NAME            PCT_RECLAIM    ACCESS
------------------     ------------------     -----------    ---------
BACKUP_TAPE            000411L1                      43.8     READWRITE
BACKUP_TAPE            000466L1                      43.5     READWRITE
BACKUP_TAPE            000362L1                      43.0     READWRITE
BACKUP_TAPE            000382L1                      42.6     READWRITE
BACKUP_TAPE            000380L1                      39.6     READWRITE
BACKUP_TAPE            000529L1                      39.1     READWRITE
BACKUP_TAPE            000632L1                      38.7     READWRITE
BACKUP_TAPE            000563L1                      38.4     READWRITE
BACKUP_TAPE            000418L1                      35.4     READWRITE
BACKUP_TAPE            000403L1                      34.6     READWRITE
BACKUP_TAPE            000572L1                      33.7     READWRITE
BACKUP_TAPE            000364L1                      31.4     READWRITE
BACKUP_TAPE            000564L1                      28.8     READWRITE
BACKUP_TAPE            000402L1                      28.5     READWRITE
BACKUP_TAPE            000495L1                      25.8     READWRITE
BACKUP_TAPE            000391L1                      25.6     READWRITE
BACKUP_TAPE            000412L1                      22.9     READWRITE
BACKUP_TAPE            000453L1                      22.8     READWRITE
BACKUP_TAPE            000501L1                      21.3     READWRITE
BACKUP_TAPE            000413L1                      16.6     READWRITE
BACKUP_TAPE            000612L1                      16.1     READWRITE
BACKUP_TAPE            000470L1                      14.8     READWRITE
BACKUP_TAPE            000618L1                      11.4     READWRITE
BACKUP_TAPE            000395L1                      11.1     READWRITE
BACKUP_TAPE            000607L1                       6.3     READWRITE
BACKUP_TAPE            000388L1                       3.2     READWRITE
BACKUP_TAPE            000422L1                       2.4     READWRITE
BACKUP_TAPE            000570L1                       2.2     READWRITE
BACKUP_TAPE            000516L1                       0.0     READWRITE
BACKUP_TAPE_COL        000463L1                      44.7     READWRITE
BACKUP_TAPE_COL        000600L1                      42.9     READWRITE
BACKUP_TAPE_COL        000575L1                      41.4     READWRITE
BACKUP_TAPE_COL        000530L1                      40.3     READWRITE
BACKUP_TAPE_COL        000540L1                      37.7     READWRITE
BACKUP_TAPE_COL        000605L1                      35.7     READWRITE
BACKUP_TAPE_COL        000417L1                      34.7     READWRITE
BACKUP_TAPE_COL        000457L1                      33.0     READWRITE
BACKUP_TAPE_COL        000437L1                      32.4     READWRITE
BACKUP_TAPE_COL        000644L1                      29.5     READWRITE
BACKUP_TAPE_COL        000415L1                      28.4     READWRITE
BACKUP_TAPE_COL        000581L1                      28.3     READWRITE
BACKUP_TAPE_COL        000583L1                      28.3     READWRITE
BACKUP_TAPE_COL        000601L1                      27.7     READWRITE
BACKUP_TAPE_COL        000452L1                      26.3     READWRITE
BACKUP_TAPE_COL        000414L1                      24.9     READWRITE
BACKUP_TAPE_COL        000464L1                      23.1     READWRITE
BACKUP_TAPE_COL        000477L1                      22.4     READWRITE
BACKUP_TAPE_COL        000416L1                      22.2     READWRITE
BACKUP_TAPE_COL        000502L1                      20.6     READWRITE
BACKUP_TAPE_COL        000545L1                      19.2     READWRITE
BACKUP_TAPE_COL        000360L1                      15.0     READWRITE
BACKUP_TAPE_COL        000379L1                       7.7     READWRITE
BACKUP_TAPE_COL        000515L1                       4.6     READWRITE
BACKUP_TAPE_COL        000596L1                       4.5     READWRITE
BACKUP_TAPE_COL        000559L1                       1.9     READWRITE
BACKUP_TAPE_COL        000442L1                       0.6     READWRITE

Looking at the 'PCT_RECLAIM' column, I can see that 000516L1 has 0%
reclaimable space, rising to 43.8% for the least-used volume (000411L1). To
get more usage out of my LTO volumes, I have tried issuing a
'MOVE DATA <VOL_NAME> STGP=<DISK_POOL>' to move all the data from the
least-used volumes back to the disk pool, to be migrated onto other onsite
volumes (making each volume READONLY before starting the process so that TSM
does not reuse it when it is returned to 'Scratch' status once empty). In
effect I am running manual reclamations, but as you can see from the output
above it does not look like tape space is being managed as efficiently as it
could be. I'm not sure whether this is down to the way TSM has been
configured, or whether it is natural to see volumes used to differing
percentages because of expirations, etc.
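As a concrete sketch, the manual-reclamation sequence described above looks
like this (the admin credentials are placeholders, the volume name is one
from my listing, and 'DISKPOOL' stands in for whichever disk pool you migrate
back through):

```shell
# Manual reclamation of one onsite volume, as described in the text:
# mark it read-only first so TSM cannot refill it while it drains, then
# move its data back to the disk pool; the tape returns to scratch once empty.
dsmadmc -id=admin -password=secret 'update volume 000516L1 access=readonly'
dsmadmc -id=admin -password=secret 'move data 000516L1 stgpool=DISKPOOL'
```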
Does anyone know how to get onsite LTO volumes to use closer to 100% of
their capacity, or are we expecting too much of our setup - do we need to
start using 200 GB tapes (which will presumably mean replacing our 4 existing
drives)? Also, there are 27 onsite volumes for the collocated nodes but only
3 collocated nodes - is there something I can check to ensure this is
correct?
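One way I know of to check the collocated volumes is to ask TSM which nodes
actually occupy each one, via a SELECT against the VOLUMEUSAGE table (the
volume name below is one of mine from the listing; credentials are
placeholders):

```shell
# List the nodes with data on a given collocated volume. One node per
# volume means collocation is working; many nodes per volume usually means
# the data landed before collocation was switched on for that pool.
dsmadmc -id=admin -password=secret \
  "select distinct node_name from volumeusage where volume_name='000463L1'"
```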
Also, I am planning to delete some unused filespaces, which will remove
approx. 1.5 TB from our backed-up data. Will deleting these filespaces cause
TSM to automatically purge the associated data residing on the onsite/offsite
volumes, and will this free up some slots within the library for scratch
media?
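For what it's worth, in my experience the deletion itself is a one-liner per
filespace (the node and filespace names below are placeholders, not real
ones): the deleted data stops being referenced on both the onsite and offsite
volumes, but the tapes only return to scratch once reclamation has moved off
whatever valid data remains, so the freed slots appear gradually rather than
immediately.

```shell
# Hypothetical example: remove one obsolete filespace for one node.
# NODE1 and OLD_FILESPACE are placeholders; credentials are too.
dsmadmc -id=admin -password=secret 'delete filespace NODE1 OLD_FILESPACE'
```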

Hope someone can offer some suggestions.

Regards, C. Adams.

C. Adams
IT Support Analyst
WRC Holdings Limited
