Barb:
This will happen, since NetBackup's whole mission in
life is to back up the data as quickly as it possibly
can. In this case, though, it sounds like something
else is up. You can do a few things to control and
troubleshoot this situation:
1. Max Jobs Per Class=99 is the default. You may want
to pull this back to, say, four, then bump it up to
six, eight, etc., and see how the performance changes.
2. Are you multiplexing any of these backups? If so,
you can also control the number of multiplexed streams
that can be active per drive (a storage unit (STU)
configuration change).
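If you'd rather script that change than go through the
GUI, the multiplexing cap lives on the storage unit. A
minimal sketch, assuming a hypothetical storage unit
labeled "dlt-stu" and that bpsturep is on the master
server's PATH (check your release's man page for the
exact options):

```shell
#!/bin/sh
# Hypothetical sketch: cap multiplexed streams per drive
# on a storage unit. "dlt-stu" and MPX=4 are assumptions
# for illustration; adjust to your environment.
STU_LABEL="dlt-stu"
MPX=4
if command -v bpsturep >/dev/null 2>&1; then
    # Run on the NetBackup master server.
    bpsturep -label "$STU_LABEL" -mpx "$MPX"
    CMD_RUN=yes
else
    echo "bpsturep not in PATH; on the master, try:" \
         "bpsturep -label $STU_LABEL -mpx $MPX"
    CMD_RUN=no
fi
```

Start low (four or so) and step it up the same way you
would Max Jobs Per Class, watching the network each time.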
3. Did you use the ALL_LOCAL_DRIVES directive? If
not, then perhaps you are inadvertently backing up some
NFS mount points. Check your class attributes to see
whether you do in fact have Cross Mount Points and
Follow NFS selected. Incidentally, a file list entry of
/* with Cross Mount Points=Yes and Follow NFS=No has had
some questionable results at some of my client sites;
I have actually seen it back up NFS mount points in the
past.
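Before blaming the class attributes, it can help to see
exactly which NFS mount points a /* file list could
sweep in. A rough sketch for listing them on the client
(the mnttab and /proc/mounts locations are platform
assumptions for Solaris and Linux respectively):

```shell
#!/bin/sh
# Hypothetical sketch: list NFS mount points on the
# client that a cross-mount-point backup might pick up.
if [ -r /etc/mnttab ]; then
    # Solaris keeps the mount table here:
    # device  mountpoint  fstype  options  time
    NFS_MOUNTS=$(awk '$3 == "nfs" { print $2 }' /etc/mnttab)
elif [ -r /proc/mounts ]; then
    # Linux equivalent (fstype may be nfs, nfs3, nfs4...)
    NFS_MOUNTS=$(awk '$3 ~ /^nfs/ { print $2 }' /proc/mounts)
else
    # Last-resort fallback: parse mount(1) output.
    NFS_MOUNTS=$(mount | grep -w nfs || true)
fi
echo "NFS mount points found: ${NFS_MOUNTS:-none}"
```

Anything that shows up here and also falls under your
file list is a candidate for getting streamed over the
wire twice.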
4. You can use throttling, or bandwidth limiting. This
is a NetBackup feature that restricts the bandwidth a
NetBackup client may consume.
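Bandwidth limiting is set with LIMIT_BANDWIDTH entries
in the master server's bp.conf. A hypothetical example
(the address range and the 500 KB/sec cap are made up
for illustration; check the syntax against your release's
documentation):

```
# /usr/openv/netbackup/bp.conf on the master server
# LIMIT_BANDWIDTH = <first IP> <last IP> <KB/sec>
LIMIT_BANDWIDTH = 192.168.10.1 192.168.10.50 500
```

Each entry throttles all clients whose addresses fall in
the given range, so you can rein in just the one subnet
that is getting hammered.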
5. Are you positive you are at 100 FULL on the
client? If you have a Solaris client and it is set to
auto-negotiate, it's been known to have problems. I'd
do a "dmesg | grep hme" just to make sure that it is
indeed FULL DUPLEX. I had this happen at one of my
client sites, and it kept us troubleshooting for days
until I decided to check everything top to bottom.
Good Luck Barb,
David Chapa
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
David A. Chapa 847 413 1144 p
Consulting Practice Mgr. 847 413 1168 f
DataStaff, Inc. http://www.datastaff.com
Quoting "Cote, Barbara L."
<Barbara_L_Cote AT tvratings DOT com>:
> HELP! If anyone has experienced the following or
> similar network problem while using Multiple Data
> Streams, we are anxious to hear of your experience
> and whether you have found a solution!
>
> We have recently started to test using multiple data
> streams, and so far it has been a network nightmare.
> We tested using one class which had one client and 15
> filesystems defined in the file list, totaling
> approximately 203 GB of data. We did not use the
> NEW_STREAM directive but let each path in the file
> list become a separate stream, which created 15
> separate jobs. The problem is that when these 15
> backups are started, the network is adversely
> affected until it is basically brought to its knees.
> Network pings drop approximately 50% of packets, and
> the network appears to get progressively worse the
> longer the backups run. After approximately 20
> minutes, we must kill all 15 backups to recover the
> network. These are the only backups running at the
> time, so the amount of data should not be an issue,
> as we have pushed much more data than this at a given
> time. Our NetBackup master server is running with
> gigabit ethernet. The client is 100 baseT full
> duplex.
>
> Thanks for any insight into this problem.
>
> Barb Cote'
> UNIX System Administrator
> Nielsen Media Research
> _______________________________________________
> Veritas-bu maillist -
> Veritas-bu AT mailman.eng.auburn DOT edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu