Subject: Re: [ADSM-L] FILE devclass tactics?
From: Matthew Glanville <matthew.glanville AT KODAK DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 7 Dec 2007 14:47:03 -0500
I have the exact same complaints about the 'FILE' devclass...

If only they would implement a different concept for its high/low
migration settings and migrate things more intelligently.
I wish it would use one volume per 'connection/node/filespace/group',
depending on the collocation setting, honoring the volume size
(maxcapacity) you specify.
But then use the true free/used space on the underlying disk to drive
the high/low migration thresholds, instead of the number of volumes.
And of course migrate all volumes for a given node or filespace at once,
to avoid that tape-swapping issue.
It sounds too easy to do.
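
As it stands, the only knobs you get are on the devclass and the
storage pool, and the high/low thresholds are figured against the
volume count (relative to maxscratch), not the actual free space left
in the filesystem underneath.  Something like this, if I have the
syntax right (names and numbers are just placeholders):

   update devclass fileclass maxcapacity=20G
   update stgpool filepool highmig=90 lowmig=70 collocate=node

Nothing in there says "migrate when the filesystem is 90% full" or
"move everything belonging to one node in a single pass".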

Matt G.

"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU> wrote on 12/07/2007
12:26:45 PM:

> Anybody who's using FILE devclasses to accept backups willing to
> discuss their config, how they got there, and what they like and
> dislike?
>
> I'm working out how to use FILE devclasses effectively, and am a
> little exasperated.  I started off with a fairly small maxcap (2G) and
> a large maxscratch.  My migration was hammered because the migration
> process switched source volumes without particularly considering which
> other FILE volumes might be emptied onto the tape it had
> mounted.  Reading the docs, this seems to be a core architectural
> choice, and I can understand how they got there from the serial media
> way of treating volumes.  But it hurts a lot, in this environment.
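>
> For concreteness, that first cut was essentially the following (the
> devclass, pool, and directory names are made up, and the maxscratch
> is just meant to be "large"):
>
>    define devclass fileclass devtype=file maxcap=2g dir=/tsmfile/vols
>    define stgpool filepool fileclass maxscratch=5000 nextstgpool=tapepool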
>
> My biggest bottleneck is tape drive hours: every tape change looks to
> me like some 11G of data not moved (roughly 3 min per mount at 60
> MB/sec).  Even if only 500 of my 1100-mumble nodes are affected in a
> day's work, and even if it's just one additional mount per node, that's
> 500 x 3 min = 25 tape-drive-hours eaten up in unnecessary mounts.  And
> all of those estimates are conservative.
>
> Ick. Ick ickety ick.
>
>
> At this moment I've made fairly large volumes (50G), sized to exceed
> two standard deviations above a single node's nightly backup, and set
> collocation by node on the FILE stgpool, plus a maxscratch
> significantly larger than the node count.  I was anticipating this
> would leave me with homogeneous volumes.  D'oh, forgot about
> resourceutilization: with multiple sessions per node, each node can be
> filling several volumes at once.  Next I intend to set maxscratch
> higher than nodecount*5.
>
> So, I'm going to define a stgpool with maxscratch=750 and 50G volumes:
> nominally 37.5 TB of capacity on paper.  Hah.
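>
> In command terms that's roughly (same made-up names as before):
>
>    update devclass fileclass maxcapacity=50G
>    update stgpool filepool collocate=node maxscratch=750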
>
> Using this strategy makes a hash of any kind of quota or cap on a
> given TSM server's use of a shared disk resource.
>
> Of course, it's still possible that if I have 4 FILE volumes holding
> only FOONODE data, migration will still process them out of order and
> mount FOONODE's tape volume four times.  In that case, I'm not winning
> anything by going from nodecount to nodecount*5.  It appears, from the
> docs, that this is what will happen.  But I'm going to test and make
> sure.
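>
> (To see where a given node's data actually lands, I'll probably just
> query the VOLUMEUSAGE table, something like
>
>    select volume_name from volumeusage where node_name='FOONODE'
>
> and then watch the mount pattern in the activity log while migration
> runs.)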
>
> It seems that this would be a non-issue if the migration process were
> sensitive to the fact that remounting FILE volumes is extremely cheap.
> So, what am I doing wrong?  Is there a big red button marked
> "Don't Be An Idiot" which I'm failing to push?
>
> If I can't fix these issues, I will have to ditch the FILE devclass
> notion; I can't justify spending an additional $20-30K on drives...
>
>
>
> Mmmm.  Maybe I should have a _smaller_ random-access disk stgpool
> migrating into the FILE pool... Eeek.  Double the I/Os in a night?  I
> like to over-engineer, but that's a little much even for me.
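>
> If I went that route, the shape of it would be something like this,
> again with made-up names and placeholder thresholds:
>
>    define stgpool diskpool disk highmig=70 lowmig=30 nextstgpool=filepool
>
> i.e. a small random-access pool in front, draining into the big FILE
> pool every night.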
>
>
> - Allen S. Rout
