Subject: [Veritas-bu] bptm log: buffer settings
From: rfang AT coke.umuc DOT edu (Rongsheng Fang)
Date: Sun, 18 Jul 2004 21:18:37 -0400
Hi,

We are running a Sun StorEdge L700 tape library which is connected to a
Sun Fire V880 (master+media server) directly via fibre. There are 6 IBM
Ultrium (Gen 2) tape drives installed in the L700. The V880 runs Solaris 9
and NetBackup DataCenter 4.5MP6. Based on the performance guide, I have
configured the following settings:

On Master/Media: 

NET_BUFFER_SZ: 256K
SIZE_DATA_BUFFERS: 256K
NUMBER_DATA_BUFFERS: 256

tcp_recv_hiwat: 256K
tcp_xmit_hiwat: 256K

On clients:

NET_BUFFER_SZ: 256K
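
In case it helps, this is roughly how values like these are applied on a
default /usr/openv install (the ndd settings do not survive a reboot, so
they have to be reapplied from a startup script):

    # master/media server
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ
    ndd -set /dev/tcp tcp_recv_hiwat 262144
    ndd -set /dev/tcp tcp_xmit_hiwat 262144

    # each client
    echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ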


But the bptm log below shows that the receive network buffer still ended up
at 64240 bytes, not 256K:

...
18:08:49.727 [9884] <2> io_set_recvbuf: setting receive network buffer to 262144 bytes
18:08:49.727 [9884] <2> io_set_recvbuf: receive network buffer is 64240 bytes
18:08:50.417 [9713] <2> tapelib: SIGUSR1 from bpbrm while waiting for tape mount
...
18:08:50.418 [9713] <2> io_init: using 262144 data buffer size
18:08:50.418 [9713] <2> io_init: CINDEX 6, sched Kbytes for monitoring = 20000
18:08:50.418 [9713] <2> io_init: using 256 data buffers
18:08:50.418 [9713] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
18:08:50.419 [9713] <2> getsockconnected: host=backupsrv service=bpdbm address=10.15.0.51 protocol=tcp non-reserved port=13721
18:08:50.419 [9713] <2> bind_on_port_addr: bound to port 59831
18:08:50.420 [9713] <2> logconnections: BPDBM CONNECT FROM 10.15.0.20.59831 TO 10.15.0.20.13721
18:08:50.422 [9713] <2> check_authentication: no authentication required
18:08:50.491 [9713] <2> mpx_setup_shm: buf control for CINDEX 6 is 0xfed89018
...

Did I miss anything in the configuration (on either the Solaris kernel side
or the NetBackup side)? Or is there an OS limit on the size of the receive
network buffer? What needs to be done to ensure NetBackup can set and use a
256K receive network buffer?
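
If it is an OS limit, I assume something like this would show the relevant
ceilings (as far as I know, tcp_max_buf is what caps the buffer size an
application can request with SO_RCVBUF/SO_SNDBUF on Solaris):

    ndd -get /dev/tcp tcp_recv_hiwat
    ndd -get /dev/tcp tcp_xmit_hiwat
    ndd -get /dev/tcp tcp_max_buf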

Thanks for your help,

Rongsheng
