Subject: Re: [Networker] Balancing load between storage nodes
From: Jim Ruskowsky <jimr AT JEFFERIES DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 8 May 2006 09:58:59 -0400
Thierry -

The multiple paths were set up simply because they are there and visible 
and we have the DDS licenses.
Eventually we will use DDS as we migrate larger clients to 
dedicated storage nodes.

My thought was to first see whether Legato deals with multiple path 
choices to the same devices in an intelligent fashion, or at least 
picks the next device randomly (which would, on average, give a proper 
load balance).

Thanks for your input.
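For reference, Thierry's static-sharing suggestion (quoted below) would amount to a device split along these lines - a hypothetical sketch only, with drive numbers and the 2/3 HBA split invented for illustration:

```
# Static library sharing, no DDS: each physical drive defined on exactly
# one host, so nsrd never has to choose among redundant paths.
#
# Backup server (5 drives, split 2/3 across its two HBAs):
/dev/rmt/0cbn   /dev/rmt/1cbn                    # HBA 1
/dev/rmt/2cbn   /dev/rmt/3cbn   /dev/rmt/4cbn    # HBA 2
#
# Storage node jcnetworker2 (remaining 5 drives, also split 2/3):
rd=jcnetworker2:/dev/rmt/5cbn   rd=jcnetworker2:/dev/rmt/6cbn   # HBA 1
rd=jcnetworker2:/dev/rmt/7cbn   rd=jcnetworker2:/dev/rmt/8cbn   rd=jcnetworker2:/dev/rmt/9cbn   # HBA 2
```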

"Faidherbe, Thierry" <Thierry.Faidherbe AT hp DOT com> wrote on 05/08/2006 
09:50:36 AM:

> Storage node affinity is based on a top-down preference: Legato
> selects the next entry in the list if a mount request cannot
> be satisfied.
> 
> Now, my 2 cents ... I am just curious about such a setting :
> on one host you defined the same physical device twice, as 2 
> different logical devices that Legato treats as hardware-shared? 
> And the same again across 2 hosts (BS+SN)? Why?
> 
> Why DDS-share all 10 of your physical drives (as 40 Legato devices)
> just to load balance the backups? Define 5 devices 
> on the backup server and 5 on the storage node (static library sharing)
> and I am sure you will avoid a lot of problems ! Define 2 devs on one HBA
> and 3 on the second one for load balancing. Imagine the nsrd overhead 
> of selecting among all those devices !
> 
> HTH
> 
> Th 
> 
> 
> 
> 
> 
> Kind regards - Bien cordialement - Vriendelijke groeten,
> 
> Thierry FAIDHERBE
> 
> HP Services - Storage Division
> Tru64 Unix and Legato Enterprise Backup Solutions Consultant
> 
>  *********       *********   HEWLETT - PACKARD
>  *******    h      *******   1 Rue de l'aeronef/Luchtschipstraat
>  ******    h        ******   1140 Bruxelles/Brussel/Brussels
>  *****    hhhh  pppp ***** 
>  *****   h  h  p  p  *****   100/102 Blv de la Woluwe/Woluwedal
>  *****  h  h  pppp   *****   1200 Bruxelles/Brussel/Brussels
>  ******      p      ******   BELGIUM
>  *******    p      ******* 
>  *********       *********   Phone :    +32 (0)2  / 729.85.42 
>                              Mobile :   +32 (0)498/  94.60.85 
>                              Fax :      +32 (0)2  / 729.88.30 
>      I  N  V  E  N  T        Email/MSN : thierry.faidherbe(at)hp.com
>                              Internet  : http://www.hp.com/ 
> -----Original Message-----
> From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT 
> EDU]
> On Behalf Of Jim Ruskowsky
> Sent: Monday, May 08, 2006 3:41 PM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Balancing load between storage nodes
> 
> Thanks Brian - 
> 
> I already have both nsrserverhost and jcnetworker2 (the storage node) 
> listed for each client.  Should I just switch the order on half my 
> clients?
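If the top-down list order drives the affinity as described elsewhere in this thread, switching it on half the clients could be done with nsradmin - a hypothetical sketch, with client names invented; the order of the "storage nodes" attribute is what NetWorker consults first:

```
# nsradmin session (hypothetical client names):
nsradmin> . type: NSR client; name: client01
nsradmin> update storage nodes: jcnetworker2, nsrserverhost
# ...and the reverse order on the other half of the clients:
nsradmin> . type: NSR client; name: client02
nsradmin> update storage nodes: nsrserverhost, jcnetworker2
```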
> 
> "Brian Narkinsky" <BNarkinsky AT cclaflorida DOT org> wrote on 05/08/2006 
> 09:36:53 AM:
> 
> > You would need to specify a storage node in the client config.  This
> > will force those clients to back up to the storage node. 
> > 
> > -----Original Message-----
> > From: Legato NetWorker discussion
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
> > On Behalf Of Jim Ruskowsky
> > Sent: Monday, May 08, 2006 9:28 AM
> > To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> > Subject: [Networker] Balancing load between storage nodes
> > 
> > Hello list
> > 
> > Maybe somebody can offer some advice on the following situation.
> > 
> > The setup....
> > 
> > We have 10 tape drives in a single library attached to a fibre switch.
> > 
> > We have two networker servers (master and storage node) attached to
> > that same switch - each server has two paths to that same switch.
> > 
> > The fabric is configured so that each tape drive can be seen on both
> > paths on each networker server.  We have dynamic drive sharing
> > licensed for all 10 drives.
> > 
> > So for example, drive #0 has four distinct paths according to networker:
> >         /dev/rmt/0cbn   (master server, first fibre channel)
> >         /dev/rmt/10cbn  (master server, second fibre channel)
> >         rd=jcnetworker2:/dev/rmt/0cbn   (storage node, first fibre channel)
> >         rd=jcnetworker2:/dev/rmt/10cbn  (storage node, second fibre channel)
> > 
> > The tape pool "DAILY" is set up as default with no specific devices
> > checked off (so all drives should be used)
> > 
> > The problem....
> > 
> > When a savegroup runs, the server only uses the drives attached to the
> > master server - ignoring the existence of the storage node.  I ended
> > up trying to write to 10 LTO3 drives down a single fibre channel.
> > What is the best way to load balance between all my paths?  I've tried
> > checking off specific paths to devices in the tape pool
> > setup, but then it just picks a path to the master server and ignores
> > the rest.
> > 
> > Thanks for any help.
> > 
> > Jim
> > 
> > 
> > 
> > 
> > Jefferies archives and reviews outgoing and incoming e-mail.  It may
> > be produced at the request of regulators or in connection with civil
> > litigation. 
> > Jefferies accepts no liability for any errors or omissions arising as
> > a result of transmission. Use by other than intended recipients is
> > prohibited.
> > 
> > To sign off this list, send email to listserv AT listserv.temple DOT edu and
> > type "signoff networker" in the body of the email. Please write to
> > networker-request AT listserv.temple DOT edu if you have any problems with
> > this list. You can access the archives at
> > http://listserv.temple.edu/archives/networker.html or via RSS at
> > http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
> > 
> 
> 
> 
> 
> 
> 




