Subject: Re: [Networker] Using multiple storage nodes for same client?
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 8 Mar 2005 14:14:29 -0500
Excellent point!

BTW, I did verify that you can get dynamic storage node selection for
both the client's backup data and its index *IF* you hard code the
devices for the given pools. So, if I have devices /dev/nst0 and nst1 on
library 1 on snode1 selected for my full pool, and /dev/nst2 and nst3 on
library 2 on snode2 selected for, say, my incr pool, and I then list:

snode1
snode2

under the Storage Nodes field on the affected clients, including the
primary server, and launch a backup, NetWorker will use whichever snode
applies to that pool for both the client's data and the index, and it
will load more tapes on that pool's devices as needed. This works great
but does force me to dedicate certain devices to certain pools. To get
real balancing, I think one would need to either dedicate half of the
devices in each library to each of our two pools, or dedicate certain
clients to one snode and the rest to the other snode and allow all
devices to serve all pools. Alternatively, I suppose I could dedicate
one library to one pool and the other library to the other pool, but
that seems even more restrictive.
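
For anyone who wants to reproduce this, the gist in nsradmin looks
roughly like the following - the pool, device, and storage node names
are just placeholders from my setup, and I'm quoting the attribute
names from memory, so verify them against your own resource listing
first:

    nsradmin> . type: NSR pool; name: Full
    nsradmin> update devices: rd=snode1:/dev/nst0, rd=snode1:/dev/nst1
    nsradmin> . type: NSR pool; name: Incr
    nsradmin> update devices: rd=snode2:/dev/nst2, rd=snode2:/dev/nst3

With the devices pinned to the pools like that, whichever snode owns the
eligible devices for a pool is the one that ends up taking both the data
and the index, which matches what I saw in testing.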

I guess I'll just dedicate certain clients to snode1 and the rest to
snode2 and keep all the devices open to all the pools, making it a "free
for all". I'll just have to keep enough of the required tapes in both
libraries. Up until recently, when both libraries were on the same
snode, I could get away with letting the smaller library lapse a bit on
available tapes, since both libraries could be used and NetWorker could
always fall back to the larger one.
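
Splitting the clients up would just be a matter of setting each client's
affinity list, something along these lines (the client names are
placeholders, and again I'm going from memory on the attribute name, so
check it against your own resources):

    nsradmin> . type: NSR client; name: clientA
    nsradmin> update storage nodes: snode1, snode2
    nsradmin> . type: NSR client; name: clientB
    nsradmin> update storage nodes: snode2, snode1

Each client should then prefer its own snode first but could still land
on the other one if the first has no usable devices when the group
kicks off.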

George



"Wood, R A (Bob)" wrote:
> 
> Just as an aside, when working out where data should go, it is worth
> noting who 'owns' the data. The client data is coming from the client,
> so it will obey the client settings. The client indexes, however,
> belong to the NetWorker server and so will follow the server settings.
> 
> :)
> 
> -----Original Message-----
> From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT 
> EDU]
> On Behalf Of George Sinclair
> Sent: Tuesday, March 08, 2005 3:29 PM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Using multiple storage nodes for same client?
> 
> Thanks for your response, and sorry to be so long-winded. I think the
> utter frustration forced it out of me. LOL!
> 
> What, then, is the point of listing more than one snode? I have
> discovered that in the case of the primary server -- and this is
> probably no surprise, I'm sure -- it will send the client's index to
> whichever snode is listed first. So, for example, if a client lists
> the following under the Storage Nodes field:
> snode2
> snode1
> 
> But the primary server lists:
> snode1
> snode2
> 
> then the client will insist on using snode2 to back up its data, and
> its index will go to snode1. It's a hard-coded decision: if I want to
> vary it, I have to change it myself; otherwise it's always the same.
> It seems weird that NetWorker can't load balance here, or maybe I
> should say allow for dynamic snode selection.
> 
> Seems, therefore, that we have three choices:
> 1. Hard code the clients that we want to use snode1 versus snode2 and
> make sure to have enough of the required pool tapes in both libraries,
> assuming we're going to use two snodes and not dedicate any one of them
> to a specific pool.
> 
> 2. Hard code the pools to use certain devices; whichever snode has
> devices available for the pool is the one that gets used. I'm thinking
> this would work, but it does force you to dedicate the devices to
> certain pools.
> 
> 3. Move both libraries to the new storage node rather than having one
> library on the old snode and the other on the new snode.
> 
> Still seems odd that NetWorker will use dynamic sharing of devices
> between libraries on the same snode but not between said devices on
> different snodes. Hmm ...
> 
> George
> 
> > Riaan Louwrens wrote:
> >
> > Hi George,
> >
> > The way I understand it is that you cannot use two different SNs at
> > the same time for a backup from a single client (even though they
> > are in the same pool).
> >
> > The client will try to connect to the Storage Node Host (as you
> > know), and once that requirement has been met (with the first
> > available one going down the list), that attribute is set and only
> > that host will be used. Meaning, if halfway through the tape on that
> > node fills up or the device fails (etc.), that backup will fail (or
> > will be pending).
> >
> > So your understanding is the same as mine - as your tests would have
> > shown.
> >
> > I am not sure if there is another way around it - apart from having
> > to manually redo the storage node affinity list every time you want
> > the priority to change.
> >
> > Regards,
> > Riaan
> >
> > -----Original Message-----
> > From: Legato NetWorker discussion
> > [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]On Behalf Of George Sinclair
> > Sent: Tuesday, March 08, 2005 1:17 AM
> > To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> > Subject: [Networker] Using multiple storage nodes for same client?
> >
> 

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listserv.temple DOT edu or visit the list's Web site at
http://listserv.temple.edu/archives/networker.html where you can
also view and post messages to the list. Questions regarding this list
should be sent to stan AT temple DOT edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=