ADSM-L

Subject: Re: / /OREF:CPT444C5 TSM - Too many tapes being used
From: Zlatko Krastev <acit AT ATTGLOBAL DOT NET>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sat, 18 Oct 2003 11:43:07 +0300
There is more than one statement in your mail that I cannot understand/accept:
--> I am not using collocation
If you are doing backups to *one* storage pool hierarchy (one disk pool
migrating to one tape pool) and the tape pool is not collocated, those
300 GB ought to fill no more than 3 LTO-1 tapes or 2 LTO-2 tapes.
Even if we assume you have three nodes with 100.1 GB each, doing backups
in parallel, you may end up with 3 tapes filled with 100 GB and 3 tapes
with 0.1 GB. That is a total of 6 tapes, and with no collocation the
remaining nodes should append to the latter 3 tapes instead of a 7th
scratch.
Another possibility might be COMPRESSAlways=No combined with some large
non-compressible files on the node(s). The file is written to tape
"compressed" and then resent uncompressed "to save space". As a result,
the first write is discarded and there is wasted space on the tape.
Is the DB backup tape counted in those 7?
Output of "q stg f=d" might shed some light on the problem. Output of "q
v" for those 7 volumes might also reveal something. A SQL select might
also help: "select distinct node_name,stgpool_name,volume_name from
volumeusage"
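Those checks can be run from any administrative client; a sketch, assuming a dsmadmc session with suitable credentials (the admin ID and password are placeholders):

```
dsmadmc -id=admin -password=secret "q stg f=d"
dsmadmc -id=admin -password=secret "q vol"
dsmadmc -id=admin -password=secret "select distinct node_name,stgpool_name,volume_name from volumeusage"
```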

Are those nodes starting backups simultaneously? I can imagine some odd
sequence:
- all seven nodes start within a short period
- each requests, via its Storage Agent, that a tape be mounted
- the server defines 7 scratch volumes to the pool and provides the names
to each storage agent
- the first three mount requests are satisfied by the server and those
backups are done
- the remaining four storage agents wait for their already-designated
tapes to be mounted, as the requests have already been made
- when one of the first three nodes finishes its backup, a tape drive is
freed
- the mount request of another storage agent is satisfied, and its
designated tape is mounted instead of the partially full tape from the
previous node
Digging through the actlog from when the backups start might prove or
disprove the guess. If it is true, it means each node goes to its own
tape. Therefore the tape with 0.7% utilization should contain only 700 MB
from one node. That node is a very good candidate for LAN backup vs.
LAN-free. After the server mounts the volume, the storage agent still
needs to open the device. During that time a LAN backup would already
have finished. A disk pool would further improve backup time.
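To dig through the actlog for the relevant window, a query along these lines should show the mount sequence (the dates and times are placeholders for your backup window):

```
q actlog begindate=today-1 begintime=20:00 enddate=today endtime=08:00 search=mount
```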


--> I am not using ... a copy pool.
Aren't you afraid of media failure? Even if LTO is very reliable, its
reliability can never be 100%!


--> SANergy is not supported by IBM.
Definitely incorrect. The product is still sold by IBM and, as with any
other IBM Software product, is delivered with 1 year of support.
A limited-use SANergy license is also part of the ITSM for SAN license
and allows backups to SAN-shared *file* pools. SAN-sharing of a
random-access disk pool is not possible, but it is supported for
sequential FILE pools.
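As a sketch of such a sequential FILE pool (all names, sizes, and the directory below are hypothetical, not from the original mail):

```
define devclass sanfileclass devtype=file maxcapacity=2G mountlimit=4 directory=/sanfs/tsmpool
define stgpool sanfilepool sanfileclass maxscratch=50 nextstgpool=ltopool
```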

Zlatko Krastev
IT Consultant






Cecily Hewlett <chewlett AT ZA.SAFMARINE DOT COM>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
17.10.2003 18:29
Please respond to "ADSM: Dist Stor Manager"


        To:     ADSM-L AT VM.MARIST DOT EDU
        cc:
        Subject:        / /OREF:CPT444C5 TSM - Too many tapes being used


Please help me.

I am backing up 7 nodes, running TSM 5.1 on AIX 5.1 and AIX 4.3.3.
Total data backed up = +- 300 GB.
I am using an LTO3583 library, with 3 x 3580 drives, across a SAN.

Backup times are great, but TSM is being very wasteful with tapes,
using up to 7 tapes every night; some of them have only 0.7%
utilization.

I am not using collocation, or a copy pool.
I tried to use a diskpool on my shark and then move the data to tape,
but SANergy is not supported by IBM.

Does anyone have any suggestions?

Cecily Hewlett