Subject: Re: [Networker] Confused about parallelism
From: Stephanie Finnegan <sfinnega AT AIP DOT ORG>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 13 Nov 2009 11:15:41 -0500
My setup might help you...
I'm one of the fools with multiple pools and drive assignments.  I have five
pools - I do this because I have a large amount of data that gets backed up to a
VTL and then cloned off to physical tape. I reduce the amount of time spent
"waiting for resources" by spreading out the larger groups across the multiple
pools. I also assign specific drives to each pool, but I overlap these drive
assignments to reduce the likelihood of having idle drives. In other words,
hypothetically Pool One gets drives 1 - 5, Pool Two gets drives 3 - 7, then 5 - 10,
then 7 - 12, all the way through Pool Five, which gets drives 9, 10, 1 and 2.
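
If you go this route, the device lists don't have to be clicked through in the
GUI each time; they can also be scripted with nsradmin (this assumes the usual
devices attribute on the NSR pool resource). A rough sketch of the overlap
idea - the pool names and rd= device paths below are made-up placeholders, so
substitute your own:

nsradmin> . type: NSR pool; name: Pool One
nsradmin> update devices: rd=sn1:/dev/nst0, rd=sn1:/dev/nst1, rd=sn1:/dev/nst2
nsradmin> . type: NSR pool; name: Pool Two
nsradmin> update devices: rd=sn1:/dev/nst2, rd=sn1:/dev/nst3, rd=sn1:/dev/nst4

and so on for the remaining pools, shifting the window of drives each time so
every drive appears in at least two pools.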

>>> On Friday, November 13, 2009 at 10:10 AM, MIchael Leone
<Michael.Leone AT PHA.PHILA DOT GOV> wrote:
Davina Treiber <Davina.Treiber AT PeeVRo.co DOT uk> wrote on 11/13/2009 03:25:48 AM:

> MIchael Leone wrote:
> > Assuming 2 pools, and 8 tape drives - I want to limit the number of
> > concurrent drives that any 1 pool can use to 5. I thought setting the
> > library max parallelism to 5 would accomplish that for me. But that
> > turns out not to be the case.
> 
> True. Library parallelism is about how many jukebox operations can run
> simultaneously, such as mounts, unmounts, labelling operations etc.
> 
> > I've re-read about client, server and savegrp
> > parallelism, but they all seem to refer to numbers of savesets running
> > at the same time. But I want to limit the max number of devices that a
> > pool can write to at any one time; I'm not so worried about the number
> > of savesets running at the same time.
> 
> The closest you will get to this is savegrp parallelism. This is usually
>  enough to control the sessions in use.
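
(For what it's worth, both of those settings can be checked or scripted from
nsradmin. A quick sketch - the group name here is just a placeholder, and the
"parallelism" attribute name on the group resource is an assumption for a 7.x
server, so verify it against your own resources before relying on it:

nsradmin> show name; max parallelism
nsradmin> print type: NSR jukebox
nsradmin> . type: NSR group; name: Nightly Fulls
nsradmin> update parallelism: 20

The jukebox "max parallelism" is the library setting described above; the
group "parallelism" is the savegrp-level cap.)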

Hrmmm ... So I have to change the parallelism of each group, in each pool? 


> > 
> > I could, of course, assign specific tape drives to specific pools, but I
> > thought for sure there was a way to do this without hard-coding specific
> > drives to specific pools.
> > 
> > Pointer, anyone?
> > 
> 
> I always try to avoid assigning drives to pools. It is very easy to find
> yourself in a situation where drives are being under-utilised, and once
> you start down this road of assigning drives to pools you can't really
> stop. You end up with an environment that is difficult to manage. This
> applies to both small and very large environments.

It seems I need to assign drives to pools. 

Ideally, I wanted a way to tell the server "use at most 6 drives for any
pool", thereby always leaving at least 2 drives available for the other
pool. So that when NOT CLONE starts early in the night, it can use up to 6
drives, thereby leaving at least 2 drives available for CLONE (which starts
later in the night). I don't particularly care which 2, as long as there
are at least 2. And when the NOT CLONE jobs finish, those tapes are
unloaded, and the CLONE tapes loaded, and then CLONE can continue to use
up to 6 drives at once.

Is this doable? If so, how? (without resorting to assigning specific 
drives to a specific pool).

> 
> I also avoid creating lots of pools. 

I have 3 pools - one for BOOTSTRAP (savegrp -O), one for NOT CLONE, and 
one for CLONE (from AFTD devices, in my case).

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type
"signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
