Re: [Networker] Using multiple storage nodes for same client?
2005-03-08 14:30:24
George Sinclair wrote:
> Thanks for your response and sorry to be so long winded. I think the
> utter frustration forced it out of me. LOL!
> What then is the purpose or point of listing more than one snode?
There are several reasons you might do this, depending on your
environment. I hate to say RTFM to a seasoned NetWorker user such as
yourself, but all of this is covered in the admin guide.
In overview, the second and subsequent values in the storage node
affinity list are used when there are no devices available to write to
on the first storage node listed. There can be a variety of reasons why
a device is not available on the first node, such as:
(1) The first node is unreachable.
(2) None of the devices on the first node are currently enabled.
(3) None of the devices on the first node are currently available FOR
USE IN THIS POOL. This can be useful in certain circumstances, enabling
you to write to different storage nodes simultaneously by virtue of
the fact that the backups are going to different pools. OK, I know we
all advise against placing too many restrictions on pools, but this is a
powerful tool when used carefully and selectively.
(4) Finally, and I suspect this is what you have overlooked: when a mount
request is not satisfied by the first storage node within a specified
time. This time is defined by the "save mount timeout" and "save
lockout" attributes of the device resource. Read the admin guide
carefully so that you don't get any surprises when using these. This is
detailed under the section cunningly and obscurely ;-) entitled: "How
to set timeouts for storage node devices".
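To make the fall-through behaviour concrete, here is a small Python sketch of the selection logic described in points (1)-(4). This is purely my own illustration, not NetWorker code; the node names, device fields, and function name are all made up for the example:

```python
# Illustrative sketch of storage node affinity fall-through (NOT NetWorker
# code). A node is skipped when it is unreachable (no devices visible),
# when none of its devices are enabled, or when no enabled device serves
# the target pool -- mirroring reasons (1)-(3) above.

def pick_storage_node(affinity, devices, pool):
    """Return the first node in the affinity list with a usable device.

    affinity : ordered list of storage node names (the affinity list)
    devices  : dict mapping node name -> list of device dicts, each with
               an 'enabled' flag and a 'pools' set (hypothetical fields)
    pool     : name of the pool the save set is destined for
    """
    for node in affinity:
        # An unreachable node simply contributes no devices here.
        for dev in devices.get(node, []):
            if dev["enabled"] and pool in dev["pools"]:
                return node
    # No node qualifies: in real life the mount request would pend and
    # eventually hit the "save mount timeout" (reason 4).
    return None

# snode1's only device is disabled, so the backup falls through to snode2.
devices = {
    "snode1": [{"enabled": False, "pools": {"Default"}}],
    "snode2": [{"enabled": True, "pools": {"Default", "Clone"}}],
}
print(pick_storage_node(["snode1", "snode2"], devices, "Default"))  # snode2
```

The real product evaluates considerably more state than this, of course, but the ordered-list-with-fallback idea is the heart of it.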
I hope that this removes your frustration. It all seems quite logical to me.
--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listserv.temple DOT edu or visit the list's Web site at
http://listserv.temple.edu/archives/networker.html where you can
also view and post messages to the list. Questions regarding this list
should be sent to stan AT temple DOT edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=