Re your line "None of them has data from more than one client":
Has collocation been turned on by accident?
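If it helps, the pool's current setting shows up in the detailed storage pool
query (the pool name below is just a placeholder):

  query stgpool OFFSITE_POOL format=detailed

The "Collocate?" field in that output says whether the pool is set to no
collocation, or collocation by group, node, or filespace.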
Thanks & Regards
Paul
>
> So, I've got an offsite machine which exists to accept remote virtual
> volumes. For years now, the filling volumes have behaved in a way I
> thought I understood.
>
> The tapes are collocated by node. There are about 20 server nodes which
> write to it.
>
> My number of filling volumes has rattled around 50-60 for years; I
> interpret this as basic node collocation, plus the occasional additional
> tape allocated when more streams are writing at once than there are
> filling tapes to take them. So some of the servers have just one
> filling tape, some have two, and the busiest of them might have as
> many as 6 (my drive count).
>
> Add a small margin of error for occasionally reclaiming a still-filling
> volume, and that gives me a very clear sense of what's going on; I can
> just monitor the scratch count.
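>
> (For reference, counts along these lines can be pulled from the VOLUMES
> and LIBVOLUMES tables; the pool and library names are placeholders, and
> the case of the status literals may vary by server version:
>
>   select count(*) from volumes
>    where stgpool_name='OFFSITE_POOL' and upper(status)='FILLING'
>
>   select count(*) from libvolumes
>    where library_name='OFFSITE_LIB' and upper(status)='SCRATCH'
> )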
>
> Right now, I have 190 filling volumes.
>
> None of them has data from more than one client.
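>
> (A per-node breakdown to verify that, joining VOLUMES and VOLUMEUSAGE
> with a placeholder pool name, would look roughly like:
>
>   select vu.node_name, count(distinct vu.volume_name)
>     from volumes v, volumeusage vu
>    where v.volume_name=vu.volume_name
>      and v.stgpool_name='OFFSITE_POOL'
>      and upper(v.status)='FILLING'
>    group by vu.node_name
> )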
>
> I have some volumes that are both RO and filling, and I'm looking into
> that, but it's only 20 of them, not enough to account for this backlog.
> Those are also the only volumes in an error state.
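>
> (Those are easy enough to list, with the pool name substituted:
>
>   query volume stgpool=OFFSITE_POOL access=readonly status=filling
> )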
>
> I've been rooting through my actlogs looking for warnings or errors, but
> I've never had occasion to dig into how TSM picks which tape to call
> for when it's about to write. It's always Just Worked.
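>
> (For anyone retracing this, a narrowed actlog search looks something
> like the following; the date window and search string are only examples:
>
>   query actlog begindate=today-7 search=reclaim
> )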
>
>
> Does this ring any bells for anyone? Any dumb questions I've
> forgotten to ask? I don't hold much hope for getting a good experience
> out of IBM support on this.
>
>
> - Allen S.Rout