>> On Mon, 2 Apr 2007 12:34:55 -0600, Kelly Lipp <lipp AT STORSERVER DOT COM> wrote:
> OK, I did volunteer. I'll take a crack at these issues...
> 1) reusedelay. In circumstances where I'm thinking of these FILE vols
> as replacing DISK vols, I don't want to have to hold on to that
> space for a few days before it becomes available again. Just
> reusedelay=0, and view the data as ephemeral?
> Make sure that if you do restore a database, you audit fix=yes
> all of the volumes.
With my current DISK pools, if they die, then it's "Too bad, so sad".
So I'd probably plan for the FILE devclasses to be treated similarly:
I'd mark all the volumes destroyed and attempt to restore what I could
from copy pools.
Cool, good advice.
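For the archives, the admin commands for that cleanup would look
roughly like this (pool and volume names are placeholders, not my
actual config):

```
/* Don't hold emptied FILE volumes before reuse: */
update stgpool FILEPOOL reusedelay=0

/* After a database restore, re-verify volume contents: */
audit volume /tsm/file/vol001.dsm fix=yes

/* If the FILE pool's disk dies, write it off and recover from copies: */
update volume * access=destroyed wherestgpool=FILEPOOL
restore stgpool FILEPOOL
```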
> 2) Performance management. [...]
> Clearly this is of great concern. [ .. lucid analysis ]
OK, I like that; the disk-to-FILE approach is a really good idea.
> Back to my earlier point: I don't like scratch volumes. But one
> could try it. However, why have more than one pool anyway? But if
> you do, the scratch and trigger mechanisms available for file device
> class might work well, but I have not experimented with it much.
> When I did play with it, I struggled with understanding the
> principles involved and opted to pre-define.
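To make the pre-define approach concrete, a sketch of the definitions
involved (devclass, pool, and path names are hypothetical; FORMATSIZE
is in MB, so 5120 matches the 5G MAXCAPACITY):

```
/* FILE device class carving space into 5G volumes: */
define devclass BIGFILE devtype=file maxcapacity=5G directory=/tsm/file mountlimit=20

/* MAXSCRATCH=0 forbids scratch; the pool uses only predefined volumes: */
define stgpool FILEPOOL BIGFILE maxscratch=0

/* Preallocate a volume at full size up front: */
define volume FILEPOOL /tsm/file/vol001.dsm formatsize=5120
```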
I'm failing to communicate; I wasn't suggesting scratch FILE volumes.
Let me try again.
I've got a library server, with defined tape volumes. These are
served to library clients, of which I've got 10 and counting running
on the same hardware.
In imitation of that, I'm considering a library of LIBTYPE=FILE, with
predefined FILE volumes on paths accessible to all of the client
instances. The library manager would then hand out individual volumes,
which are (promptly, I hope) backed up, migrated off to tape, and then
returned to the library manager.
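If I've read the docs right, the shared-FILE setup would be something
like the following; I haven't tried it yet, so treat the syntax as a
sketch to check against the Administrator's Reference (server and
devclass names are placeholders):

```
/* On the library manager: SHARED=YES on a FILE devclass
   auto-defines the FILE library and drives. */
define devclass SHFILE devtype=file shared=yes maxcapacity=5G directory=/tsm/file mountlimit=20

/* On each library client instance, point at the manager: */
define library SHFILE libtype=shared primarylibmanager=LIBMGR
```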
Right now, I've got a big DISK pool which is pegged once a week and
nearly empty the rest of the time. It's inconvenient for me to
redistribute that space; if I could let the library manager do so in
(say) 5G aliquots, that would be superfine. :)
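For instance, carving 200G of that DISK space into 5G aliquots means
predefining forty FILE volumes; a trivial script can emit the
definitions rather than typing them by hand (pool name and paths are
placeholders):

```shell
# Emit DEFINE VOLUME commands for forty 5G FILE volumes
# (vol01.dsm .. vol40.dsm); pipe the output into dsmadmc or a macro.
for i in $(seq -w 1 40); do
  echo "define volume FILEPOOL /tsm/file/vol${i}.dsm formatsize=5120"
done
```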
- Allen S. Rout