Subject: [Veritas-bu] NetBackup Performance Tuning
From: errinlarsen AT hmmausa DOT com (Larsen, Errin M HMMA/IT)
Date: Wed, 24 Aug 2005 13:49:21 -0500
Hi Everyone,

  I'm trying to tune my NetBackup Master server to be more efficient.  I
found some older docs that described examining my bptm logs for
"waited for empty" (WFE) and "waited for full" (WFF) entries.  Those
docs said there are only a few possibilities (I've sketched a quick way
to pull the counts out of the logs after the list):

WFE > WFF
WFE = WFF, but both are very large
WFE = WFF, but both are relatively small
WFE < WFF
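
In case anyone wants to check their own logs the same way: bptm writes
summary lines like "waited for full buffer N times, delayed M times" at
the end of each job.  Something like this totals them up for a day (the
log path assumes a default install with bptm logging enabled, and the
awk assumes that exact wording, so adjust as needed):

  # Show bptm's buffer-wait summary lines for one day's log:
  grep "waited for" /usr/openv/netbackup/logs/bptm/log.082405

  # Or total them across all the day's jobs -- assumes the usual
  # "waited for ... buffer N times, delayed M times" wording:
  awk '/waited for empty buffer/ { wfe += $(NF-4) }
       /waited for full buffer/  { wff += $(NF-4) }
       END { printf "WFE total: %d  WFF total: %d\n", wfe, wff }' \
    /usr/openv/netbackup/logs/bptm/log.082405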

OK, so I've been watching, and my "Waited for Empty" numbers are ALWAYS
much, much lower than my "Waited for Full" numbers.  So it seems the
parent bptm process (the one that writes to tape) is constantly waiting
for a full buffer -- that is, data isn't arriving fast enough to keep
the drives busy.  That suggested to me that I needed to tweak my buffer
settings.

What I'm really looking for is what any of you might recommend for
shared buffer sizes, number of buffers and size of network buffers.

Currently, I have a Solaris 9, NBU 5.1 Master server.  I have many
Solaris clients and many Windows clients.  The master server is
fiber-connected to an L180 with 8 LTO Ultrium 1 tape drives.  All
clients are connected to the network with 1 Gbps connections.  My
shared data buffers are set to 262144 bytes, with 16 shared buffers
configured.  My network buffer is set to 65536, and my Windows
Communications Buffer Size is set to 16k.

NET_BUFFER_SZ = 65536
SIZE_DATA_BUFFERS = 262144
NUMBER_DATA_BUFFERS = 16
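
For reference, those live as touch files on the master/media server
(standard install paths shown below; double-check on your system).
That works out to 262144 x 16 = 4 MB of shared buffer space per active
drive, so roughly 32 MB with all 8 drives streaming:

  # Standard NetBackup touch-file locations (default /usr/openv install):
  echo 65536  > /usr/openv/netbackup/NET_BUFFER_SZ
  echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
  echo 16     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS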

(on the Windows clients)
Communications Buffer Size = 16k (i.e., 16384)

The NET_BUFFER_SZ on all my Solaris clients is also set to 65536.
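
(The hostnames below are placeholders -- I just loop over my client
list to spot-check that file; it assumes ssh access and the default
install path on each client:)

  # Verify NET_BUFFER_SZ on each Solaris client:
  for host in client1 client2 client3; do
    printf "%s: " "$host"
    ssh "$host" cat /usr/openv/netbackup/NET_BUFFER_SZ
  done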

Any advice?  Any links to good guides on tuning this stuff?  Any rules
of thumb for SIZE_DATA_BUFFERS with fiber-attached LTO Ultrium 1
drives?

Thanks,

--Errin Larsen

