Subject: Re: [Networker] Confused about parallelism
From: "Goslin, Paul" <pgoslin AT CINCOM DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 13 Nov 2009 11:00:09 -0500
You can also set target sessions per tape drive (under drive
properties)...
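
If you'd rather do it from the command line than the GUI, the same
attribute can be read and changed through nsradmin; something like the
following should work (the server and device names here are only
placeholders, substitute your own):

  # nsradmin -s backupserver
  nsradmin> show name; target sessions
  nsradmin> print type: NSR device
  nsradmin> . type: NSR device; name: rd=sn1:/dev/rmt/0cbn
  nsradmin> update target sessions: 7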

We have 2 pools, each for a different retention/browse period, with 5
LTO-2 tape drives. Each pool has 3 target drives selected (one drive is
shared between both pools). We have two primary groups with over 15
clients each, and we limit group parallelism on those to 12. We
normally don't change the default client parallelism of 4 sessions.
Between group parallelism and target sessions per drive (7), we manage
to run multiple groups concurrently with some overlap and get everything
done overnight. We average about a terabyte each day (more on the
weekends, since we do more full saves then)... We have approximately 100
clients.
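
Rough arithmetic, in case it helps: with 3 drives per pool at 7 target
sessions each, a pool's drives will absorb roughly 21 sessions before
the server starts packing more onto each drive, while group parallelism
of 12 and the default client parallelism of 4 keep any single group or
client from flooding that. Setting those caps from the command line
looks roughly like this in nsradmin (the group and client names are
just examples):

  nsradmin> . type: NSR group; name: Primary Group 1
  nsradmin> update parallelism: 12
  nsradmin> . type: NSR client; name: client01.example.com
  nsradmin> update parallelism: 4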

There are quite a few possible combinations and permutations of all
these settings, and I would think they could accommodate most
situations...
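
A quick way to see where all of these knobs currently sit is to list
them from nsradmin; "show" limits which attributes get printed, so
something like this should give a one-screen overview (attributes that
don't exist on a given resource type simply aren't shown):

  nsradmin> show name; parallelism; target sessions; max parallelism
  nsradmin> print type: NSR client
  nsradmin> print type: NSR group
  nsradmin> print type: NSR device
  nsradmin> print type: NSR jukebox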

> -----Original Message-----
> From: EMC NetWorker discussion 
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Michael Leone
> Sent: Friday, November 13, 2009 10:11 AM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Confused about parallelism
> 
> Davina Treiber <Davina.Treiber AT PeeVRo.co DOT uk> wrote on 11/13/2009 03:25:48 AM:
> 
> > Michael Leone wrote:
> > > Assuming 2 pools, and 8 tape drives - I want to limit the number of
> > > concurrent drives that any 1 pool can use to 5. I thought setting
> > > the library max parallelism to 5 would accomplish that for me. But
> > > that turns out not to be the case.
> > 
> > True. Library parallelism is about how many jukebox operations can run
> > simultaneously, such as mounts, unmounts, labelling operations, etc.
> > 
> > > I've re-read about client, server and savegrp parallelism, but they
> > > all seem to refer to numbers of savesets running at the same time.
> > > But I want to limit the max number of devices that a pool can write
> > > to at any one time; I'm not so worried about the number of savesets
> > > running at the same time.
> > 
> > The closest you will get to this is savegrp parallelism. This is
> > usually enough to control the sessions in use.
> 
> Hrmmm ... So I have to change the parallelism of each group, 
> in each pool? 
> 
> 
> > > 
> > > I could, of course, assign specific tape drives to specific pools,
> > > but I thought for sure there was a way to do this without hard-coding
> > > specific drives to specific pools.
> > > 
> > > Pointer, anyone?
> > > 
> > 
> > I always try to avoid assigning drives to pools. It is very easy to 
> > find yourself in a situation where drives are being under-utilised, 
> > and once you start down this road of assigning drives to pools you 
> > can't really stop. You end up with an environment that is difficult
> > to manage. This applies to both small and very large environments.
> 
> It seems I need to assign drives to pools. 
> 
> Ideally, I wanted a way to tell the server "use at most 6 drives for
> any pool", thereby always leaving at least 2 drives available for the
> other pool. So that when NOT CLONE starts early in the night, it can
> use up to 6 drives, thereby leaving at least 2 drives available for
> CLONE (which starts later in the night). I don't particularly care
> which 2, as long as there are at least 2. And when the NOT CLONE jobs
> finish, those tapes are unloaded, and the CLONE tapes loaded, and then
> CLONE can continue to use up to 6 drives at once.
> 
> Is this doable? If so, how? (without resorting to assigning 
> specific drives to a specific pool).
> 
> > 
> > I also avoid creating lots of pools. 
> 
> I have 3 pools - one for BOOTSTRAP (savegrp -O), one for NOT 
> CLONE, and one for CLONE (from AFTD devices, in my case).
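
On the "at most 6 drives for a pool" question above: as far as I know
there isn't a per-pool max-drives attribute, so the knobs really are the
ones discussed here (savegrp parallelism, target sessions), or else
restricting the pool's devices list, which is exactly the drive-to-pool
assignment you're trying to avoid. For completeness, that restriction
would look roughly like this in nsradmin (device names are placeholders;
you'd list whichever six of the eight drives you pick):

  nsradmin> . type: NSR pool; name: NOT CLONE
  nsradmin> update devices: rd=sn1:/dev/nst0, rd=sn1:/dev/nst1, rd=sn1:/dev/nst2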

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER