Subject: Re: [Networker] Balancing load between storage nodes
From: "Faidherbe, Thierry" <Thierry.Faidherbe AT HP DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 9 May 2006 15:40:57 +0200
Yes and no: it depends on how you set the Save Mount Timeout
and Save Lockout attributes in the device resource:

 Save Mount Timeout - Describes the timeout value of an initial
 save mount request for the storage node on which this device resides.
 If the request is not satisfied within the number of minutes
 specified in this attribute, the storage node is locked from
 receiving save assignments for the number of minutes assigned to
 Save Lockout. The function provided by this attribute only applies to
 the initial save volume on a remote device. You can use this
 attribute for local devices as well, but you cannot change the
 default value of zero in the Save Lockout attribute in this case.
 This means that local devices cannot be locked out from receiving
 save requests.

 Save Lockout - Describes how long (in minutes) a storage node is 
 locked from receiving save assignments, after the storage node times 
 out from a save mount request. A value of zero means that the node 
 will not be locked if the value of Save Mount Timeout is reached. 
 If the device is a local device, you cannot change the value of Save 
 Lockout from the default value of zero.
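
As a quick illustration, both attributes live in the NSR device resource
and can be inspected or changed with nsradmin, roughly along these lines
(server name backupsrv and device rd=sn1:/dev/nst0 are only placeholders,
and 30/10 minutes are example values - adapt to your setup):

 nsradmin -s backupsrv
 nsradmin> . type: NSR device; name: rd=sn1:/dev/nst0
 nsradmin> show save mount timeout; save lockout
 nsradmin> print
 nsradmin> update save mount timeout: 30; save lockout: 10

Remember that Save Lockout only takes effect on remote (storage node)
devices; for local devices it stays at zero.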

Cheers,

Th



Kind regards - Bien cordialement - Vriendelijke groeten,

Thierry FAIDHERBE

HP Services - Storage Division
Tru64 Unix and Legato Enterprise Backup Solutions Consultant
                                   
 *********       *********   HEWLETT - PACKARD
 *******    h      *******   1 Rue de l'aeronef/Luchtschipstraat
 ******    h        ******   1140 Bruxelles/Brussel/Brussels
 *****    hhhh  pppp *****   
 *****   h  h  p  p  *****   100/102 Blv de la Woluwe/Woluwedal
 *****  h  h  pppp   *****   1200 Bruxelles/Brussel/Brussels
 ******      p      ******   BELGIUM
 *******    p      *******                              
 *********       *********   Phone :    +32 (0)2  / 729.85.42   
                             Mobile :   +32 (0)498/  94.60.85 
                             Fax :      +32 (0)2  / 729.88.30   
     I  N  V  E  N  T        Email/MSN : thierry.faidherbe(at)hp.com
                             Internet  : http://www.hp.com/ 
-----Original Message-----
From: Gatti [mailto:xy.0815 AT GMX DOT NET] 
Sent: Tuesday, May 09, 2006 3:35 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU; Faidherbe, Thierry
Subject: Re: Balancing load between storage nodes

Hi Thierry and all,

because I missed the RTFM (M = Mailinglist),
I opened a quite similar thread below.

I'd like to discuss the following a little bit more in depth:
> Storage node affinity is based on a top-down preference,
> Legato selecting next entry from the list
> if mount request cannot be satisfied.

IMHO Legato only selects the next entry in the storage node affinity
list if the storage node is not reachable, etc.
From my POV, that is not the case when the "mount request cannot be
satisfied" (i.e. all drives on the storage node are busy).
If all drives on the 1st storage node are busy, the mount request
will be queued for a drive on THIS storage node and not rerouted
to the second storage node, etc.
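
(Side note: the affinity list discussed here is the "storage nodes"
attribute of the client resource. Assuming a client named client1 and a
server backupsrv - placeholder names - its ordering can be checked with
nsradmin along these lines:

 nsradmin -s backupsrv
 nsradmin> . type: NSR client; name: client1
 nsradmin> show storage nodes
 nsradmin> print

The entries are tried top-down as described in the quote above.)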

Unfortunately, I would like to have the other behavior
(use the next storage node if the 1st is busy),
but I have not tested this yet
(the additional storage node will go into production at the end of the month).

Pls. correct me if I'm wrong.

Thx -sg-
--
Steffen Gattert; VISIOplant Hamburg
