Subject: Re: migration processes with collocation groups
From: Roger Deschner <rogerd AT UIC DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sun, 23 Apr 2006 13:52:50 -0500
I have this problem, and it's not a happy one. I have one client that is
so huge that migration takes all day and all night, with one process and
one tape drive. It is already getting very dicey to schedule things such
that migration does not happen during client backup, but that involves
some ugly compromises like running DB backup while this client backs up.
This is with a collocation group of size 1 (i.e., old-fashioned node collocation).

The only solution is policy. (Quick! Grab the aspirin!) Put it in its
own copy group and management class, or even its own server instance,
and then turn off collocation for it completely, so that it can migrate
on more than one drive at once while still maintaining the desired
collocation effect. I
have discovered (in a real disaster) that, despite collocation, such a
client is perfectly happy restoring from more than one tape drive at
once, which is a good thing. Made for a very fast restore of an entire
huge filesystem, much faster than an image restore of the same
filesystem. The problem is migration.
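
Roughly, in admin-client terms, that setup looks something like the
macro below. (All the node, pool, domain, and device-class names here
are made up, and the syntax is from memory; check it against your
server level before running anything.)

```
/* Hypothetical names throughout: node BIGNODE, tape device class LTO_CLASS */
/* Dedicated tape pool with collocation OFF, so migration can use many drives */
DEFINE STGPOOL BIGNODE_TAPE LTO_CLASS COLLOCATE=NO MAXSCRATCH=50
/* Dedicated disk pool that migrates to it with multiple processes */
DEFINE STGPOOL BIGNODE_DISK DISK NEXTSTGPOOL=BIGNODE_TAPE MIGPROCESS=4
/* A policy domain of its own, pointing backups at the dedicated disk pool */
DEFINE DOMAIN BIGNODE_DOM
DEFINE POLICYSET BIGNODE_DOM BIGNODE_PS
DEFINE MGMTCLASS BIGNODE_DOM BIGNODE_PS STANDARD
DEFINE COPYGROUP BIGNODE_DOM BIGNODE_PS STANDARD STANDARD TYPE=BACKUP DESTINATION=BIGNODE_DISK
ASSIGN DEFMGMTCLASS BIGNODE_DOM BIGNODE_PS STANDARD
ACTIVATE POLICYSET BIGNODE_DOM BIGNODE_PS
/* Move the monster node into its new domain */
UPDATE NODE BIGNODE DOMAIN=BIGNODE_DOM
```

Since everything landing in BIGNODE_TAPE belongs to that one node
anyway, turning collocation off there costs nothing, and migration is
free to use as many drives as MIGPROCESS allows.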

We laughed a couple of years ago when a guy from California Edison gave
a talk at SHARE in New York about using TSM policy to achieve what he
really wanted from collocation, but now I'm starting to think he was
prescient, and I'm revisiting what he did. His policy-based solution is
much more flexible than collocation groups. Perhaps he wasn't quite as
crazy as we thought at the time.

Properly deployed, TSM policy should let you do what you really want -
define a collocation group that can have multiple migration processes.
Just beware - it won't be easy!
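
The simpler middle ground, which Allen gets at below, is just smaller
collocation groups: migration runs one process per collocation group in
the target pool, so splitting each group of 10 nodes in half doubles
the processes (and drives) migration can use. A rough sketch, with
made-up group names and syntax from memory (5.3-level commands; verify
on your server before use):

```
/* Split group A in half so migration can run two processes instead of one */
DEFINE COLLOCGROUP GROUP_A1 DESCRIPTION="First half of the A nodes"
DEFINE COLLOCGROUP GROUP_A2 DESCRIPTION="Second half of the A nodes"
DEFINE COLLOCMEMBER GROUP_A1 A0,A1,A2,A3,A4
DEFINE COLLOCMEMBER GROUP_A2 A5,A6,A7,A8,A9
/* The tape pool must be collocated by group for this to take effect */
UPDATE STGPOOL TAPEPOOL COLLOCATE=GROUP
```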

Sometimes, the more complicated solution actually is the right solution.
If only TSM policy weren't such a headache! Every time I deal with it I
feel like I'm having a hangover without the fun of drinking first.
Perhaps what we really need here is a few more brave people like that
guy at Cal Ed to show us all how and begin to establish a body of common
knowledge about how to fully exploit Policy, so that it isn't quite so
frightening. Once again, perhaps the original designers of WDSF had the
right idea, even though many of us have cursed the complexity of Policy
ever since.

Roger Deschner      University of Illinois at Chicago     rogerd AT uic DOT edu
"Have you ever, like, tried to put together a bicycle in public? Or a
grill?" Astronauts David Wolf and Piers Sellers, explaining the
difficulties encountered in attaching equipment to the Space Station


On Sat, 22 Apr 2006, Allen S. Rout wrote:

>>> On Fri, 21 Apr 2006 12:47:12 -0700, "Gill, Geoffrey L." <GEOFFREY.L.GILL AT 
>>> SAIC DOT COM> said:
>
>> I'm a little confused as to how data is migrated to tape with
>> collocation groups configured. Let's say I have 2 collocation groups
>> in a single domain each with 10 computers, and I have configured the
>> disk pool to migrate using 8 processes, how many processes should I
>> see migrating data?
>
>You'd see 2; one for each group.
>
>
>> What I seem to see is just 2 processes so I'm wondering what is the
>> best way to configure the system to migrate quickly yet keep groups
>> of computers together. I don't want to collocate each node on its
>> own tape but I do want to take advantage of migrating data with as
>> many drives as possible.
>
>
>You're going to have to find where your sweet spot is between
>utilization and collocation.   But consider:
>
>Say your nodes are A0-A9 and B0-B9 (2 groups, 10 nodes each)
>
>If you want the 10 A nodes to use multiple drives, you are also
>saying, in a way, that you don't want them collocated (on the minimum
>number of tapes).
>
>Perhaps what this means is that you want 4 groups, or 5.
>
>
>
>
>I'm working on a considered theory of collocation group membership,
>but the best I've got so far is trying to make the groups' total
>occupancy tend towards about 2 volumes' size.
>
>
>
>- Allen S. Rout
>
