ADSM-L

Re: What ever happened to Group Collocation ?

2004-05-04 19:46:51
Subject: Re: What ever happened to Group Collocation ?
From: Joerg Pohlmann <jpohlman AT CA.IBM DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 4 May 2004 16:44:08 -0700
The most likely reason for multiple "filling" tapes for the same node is
that a filling tape is in use by reclamation or storage pool backup and
migration is starting to run. Migration then cannot add to the filling
tape for the node and instead gets itself a new scratch tape. You now have
two filling tapes for the same node, and if the node was small to begin
with, two filling tapes with low percent used. You can list all tapes in
use with the following select statement (update to exclude your copy
storage pool(s)):

select distinct node_name, volume_name, stgpool_name from volumeusage
where stgpool_name not like '%COPY%'

When you have the listing, check the client nodes that have two or more
tapes listed; then do a MOVE DATA on the least-filled tape.
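To identify the least-filled candidates, you can query the VOLUMES table
for filling volumes sorted by percent utilized, then issue MOVE DATA on
the emptiest one. The pool name TAPEPOOL and volume name VOL001 below are
placeholders for your environment:

select volume_name, stgpool_name, pct_utilized from volumes where status='FILLING' and stgpool_name='TAPEPOOL' order by pct_utilized

move data VOL001

MOVE DATA consolidates the volume's contents onto other volumes in the
pool, after which the emptied tape can return to scratch.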

Joerg Pohlmann
604-535-0452

Zoltan Forray/AC/VCU <zforray AT VCU DOT EDU>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
2004-05-04 13:13
Please respond to "ADSM: Dist Stor Manager"


        To:     ADSM-L AT VM.MARIST DOT EDU
        cc:
        Subject:        Re: What ever happened to Group Collocation ?



I have already thought of this idea. I was hoping for GROUP COLLOCATION.

The problem with this idea/design is that I have to essentially duplicate
*EVERYTHING*: admin processes, operator training, etc.  Also, this
means I have to set aside disk storage (which is very limited on my z/OS
system) to dedicate to each pool, that can't be shared, and hope I don't
guess wrong on how much each pool/group needs.

Thanks for the suggestions.  Unfortunately, this is probably the way I
will have to go.

One confusion is why I have sooooo many partially filled tapes. I
don't have that many nodes!


"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU> wrote on 05/04/2004
01:10:17 PM:

> Zoltan, why not set up a SMALL_SERVERS policy domain which has MC
> definitions the same as your normal server PD, with one difference: send
> the data to a different hierarchy where the tape storage pool is not
> collocated. The client nodes that are candidates for this PD are nodes
> that have small quantities of data in TSM's storage. You can do a q
> auditoccupancy to zero in on the nodes that are small-server candidates
> that currently reside in the PD where the data is collocated. Small
> quantities are, in my opinion, nodes with less than 20-40GB reported by q
> auditocc. The q auditocc data has to be interpreted as "1/2 onsite, 1/2
> offsite" if you have a normal DR environment where the data has been
> backed up to a copy pool.
>
> Then copy the appropriate client schedules to the new PD, update the
> nodes to the new PD, then associate the nodes with the new schedules. If
> your server is pre-5.2.x.x and you are running schedmode prompted, you
> will need to restart the client acceptor or scheduler
> service/daemon/nlm, as servers prior to 5.2.x.x do not remember the
> client's IP address. Then do a move nodedata for all the nodes in
> question.
>
> I have a customer with >175 nodes where 50 nodes are SMALL_SERVERS
> nodes residing on 2, sometimes 3, tapes. The large nodes reside on
> collocated tapes. This approach gives you the best of both worlds: a
> fast restore capability for a small node, as only 2, sometimes 3, tapes
> need to be mounted at most, and good tape usage, as the slot occupancy
> in the library is minimized (optimized).
>
> I also segregate AGENTS (TSM for Databases, Mail) into a different PD
> where the data flows into a non-collocated tape pool, such that
> reclamation activity is minimized and tape occupancy is maximized, that
> is, the number of tapes in use is minimized. Agents typically send
> large blobs (the database) where expiration often causes entire tapes
> to become empty. Workstations and laptops in my implementations live in
> the STANDARD PD where the data flows into a non-collocated tape pool.
>
> Joerg Pohlmann
> 604-535-0452
>
>
>
>
>
> Zoltan Forray/AC/VCU <zforray AT VCU DOT EDU>
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
> 2004-05-04 07:54
> Please respond to "ADSM: Dist Stor Manager"
>
>
>         To:     ADSM-L AT VM.MARIST DOT EDU
>         cc:
>         Subject:        What ever happened to Group Collocation ?
>
>
>
> Anyone have any idea what the target is for this much-needed feature?
> I thought it was originally targeted for 5.2.2 (unless I missed it
> somewhere!).
>
> I have 125 tapes with less than 10% used, due to collocation and a bunch
> of small nodes, and I really need to reduce that number!
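As a summary, the workflow in the quoted reply (find small nodes, move
them to a non-collocated domain, consolidate their data) can be sketched
with standard TSM administrative commands. The domain, schedule, node,
and pool names below are placeholders, and 40960 MB corresponds to the
40GB upper end of the suggested small-node range:

select node_name, total_mb from auditocc where total_mb < 40960 order by total_mb

update node SMALLNODE1 domain=SMALL_SERVERS
define association SMALL_SERVERS DAILY_INCR SMALLNODE1
move nodedata SMALLNODE1 fromstgpool=TAPE_COLLOC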