Subject: [Veritas-bu] NetBackup Performance Tuning
From: william.d.brown AT gsk DOT com (william.d.brown AT gsk DOT com)
Date: Wed, 24 Aug 2005 20:31:25 +0100
I believe the Veritas recommendation is that the Windows client network 
buffer size should be the same as the media server network buffer size + 
1k, so we use 65k on the Windows clients.  I have to say that on newer 
clients I can't see any reason not to go higher still and raise all the 
network buffer sizes based on the tape buffer size; however, we tend to 
have a mix of tape drives in the environment, and (older) DLT drives 
really don't seem to like tape buffers above 64k.
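
To put numbers on that (the GUI path below is from memory of the 5.x 
Windows client, so treat it as approximate):

# Media server network buffer: NET_BUFFER_SZ = 65536 bytes = 64k
# Windows client: 64k + 1k = 65k; the client GUI takes kilobytes, so enter 65
#   (Backup, Archive, and Restore -> File -> NetBackup Client Properties
#    -> Network tab -> Communications buffer size)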

From what you say, you are definitely not getting data in fast enough from 
the clients.  You may have to multiplex more streams together too.  If 
bptm is waiting, chances are your tape drives are not streaming 
either... and you will get really poor performance.

You don't need more buffers until the 'waited for empty' count starts 
rising.  How many buffers you can afford depends on system (shared 
memory) RAM.
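
As a rough sizing sketch (the multiplier is the usual tuning-guide 
formula; the MPX of 4 below is only an assumed example, and the shmmax 
value is illustrative):

# Approximate shared memory bptm needs on the media server:
#   SIZE_DATA_BUFFERS x NUMBER_DATA_BUFFERS x drives x MPX
#   262144 x 16 x 8 x 4 (assumed MPX) = 128 MB
# Solaris 9 takes its shared memory caps from /etc/system (reboot to apply):
set shmsys:shminfo_shmmax=134217728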

William D L Brown




"Larsen, Errin M HMMA/IT" <errinlarsen AT hmmausa DOT com> 
Sent by: veritas-bu-admin AT mailman.eng.auburn DOT edu
24-Aug-2005 19:49
 
To
veritas-bu AT mailman.eng.auburn DOT edu
cc

Subject
[Veritas-bu] NetBackup Performance Tuning






Hi Everyone,

  I'm trying to tune my NetBackup Master server to be more efficient.  I
found some older docs that described examining my bptm logs for
"waited for empty" (WFE) and "waited for full" (WFF) entries.  These
docs described only a few possibilities:

WFE > WFF
WFE = WFF, but both are very large
WFE = WFF, but both are relatively small
WFE < WFF

Ok, so, I've been watching and it seems that my "Waited for Empty"
numbers are ALWAYS much, much lower than my "Waited for Full" numbers.
So, it seems that the parent bptm process is constantly waiting for a
full buffer.  It seemed to me that I needed to tweak my buffer settings.

What I'm really looking for is what any of you might recommend for
shared buffer sizes, number of buffers and size of network buffers.

Currently, I have a Solaris 9, NBU 5.1 Master server.  I have many
Solaris clients and many Windows clients.  The master server is fiber
connected to an L180 with 8 LTO Ultrium 1 tape drives.  All clients are
connected to the network with 1 Gbps connections.  My shared buffers are
set to 262144 bytes, with 16 shared buffers configured.  My network
buffer is set to 65536 and my Windows Communications Buffer Size is set
to 16k.

NET_BUFFER_SZ = 65536
SIZE_DATA_BUFFERS = 262144
NUMBER_DATA_BUFFERS = 16

(on the windows clients)
Communications Buffer SZ = 16k (or, 16384)

The NET_BUFFER_SZ on all my Solaris clients is also set to 65536.
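
For reference, these all live as plain touch files on the master/media 
server (assuming the standard /usr/openv layout):

cat /usr/openv/netbackup/NET_BUFFER_SZ                  # 65536
cat /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS    # 262144
cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS  # 16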

Any advice?  Any links to good guides on tuning this stuff?  Any
rules of thumb for fiber-attached LTO Ultrium 1 drives and the
SIZE_DATA_BUFFERS setting?

Thanks,

--Errin Larsen

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu




