Subject: Re: [Networker] Balancing traffic?
From: "Faidherbe, Thierry" <Thierry.Faidherbe AT HP DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Mon, 30 Dec 2002 20:36:37 +0100
Up to now, file devices are used just like tape devices:
one read or write at a time.

To achieve what you want to do, yes, there is a solution:

NetWorker uses a value called "Target Sessions" to balance
save sessions across the backup devices (tape and file). Once a
device's target session count is reached, NetWorker assigns new
save sessions to another backup device, until it can no longer
fork a new backup session: either the per-storage-node session
limit (which depends on the server license, Power or Network
Edition) has been reached, the server parallelism setting
prevents it, or a group parallelism limit applies. When no other
device is available but the above limits have not yet been
reached, NetWorker starts exceeding the Target Sessions value
on each matching backup device (chosen by pool criteria).

So, in your case, decrease the "Target Sessions" value on each
file/tape device to the point at which you want NetWorker to
start balancing save sessions across your six file devices.
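As a sketch, the attribute can also be lowered from the command line with nsradmin in scripted mode. The server name and device path below are hypothetical examples, and the exact attribute selection syntax should be checked against your NetWorker release; the administration GUI works just as well.

```shell
# Hypothetical sketch: lower Target Sessions to 1 on one file device
# via nsradmin's scripted mode (-i reads commands from a file).
# Server name and device path are examples only.
cat > lower_ts.nsradmin <<'EOF'
. type: NSR device; name: /backup/file1
update target sessions: 1
EOF
nsradmin -s backupserver -i lower_ts.nsradmin
```

Repeat (or add a selection per device) for each of the six file devices.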


Hope that helps,

Thierry

Kind regards - Bien cordialement - Vriendelijke groeten,

Thierry FAIDHERBE

HPCI - Storage & Server Integration Practice 
Tru64 Unix and Legato EBS Consultant
                                   
 *********       *********   HEWLETT - PACKARD
 *******    h      *******   1 Rue de l'aeronef/Luchtschipstraat
 ******    h        ******   1140 Bruxelles/Brussel/Brussels
 *****    hhhh  pppp *****   
 *****   h  h  p  p  *****   100/102 Blv de la Woluwe/Woluwedal
 *****  h  h  pppp   *****   1200 Bruxelles/Brussel/Brussels
 ******      p      ******   BELGIUM
 *******    p      *******                              
 *********       *********   Phone :    +32 (0)2  / 729.85.42   
                             Mobile :   +32 (0)498/  94.60.85 
                             Fax :      +32 (0)2  / 729.88.30   
     I  N  V  E  N  T        Email :    thierry.faidherbe AT hp DOT com
                             Internet : http://www.hp.com/
________________________________________________________________________

MOBISTAR SA/NV 

SYSTEM Team Charleroi, Mermoz 2 Phone : +32 (0)2  / 745.75.81  
Avenue Jean Mermoz, 32          Fax :   +32 (0)2  / 745.89.56  
6041 GOSSELIES                  Email : tfhaidhe AT mail.mobistar DOT be
BELGIUM                         Web :   http://www.mobistar.be/
________________________________________________________________________

  


-----Original Message-----
From: Robert L. Harris [mailto:Robert.L.Harris AT RDLG DOT NET] 
Sent: Monday, December 30, 2002 2:51 PM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: [Networker] Balancing traffic?


  I've got 6 "file" devices that are used for onsite backups.  It seems
NetWorker likes to fill them up completely, in order, like tapes.  Is
there any way to balance them or have the system spread the traffic out?


:wq!
---------------------------------------------------------------------------
Robert L. Harris                     | PGP Key ID: FC96D405

DISCLAIMER:
      These are MY OPINIONS ALONE.  I speak for no-one else.
FYI:
 perl -e 'print
$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

--
Note: To sign off this list, send a "signoff networker" command via
email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

