2007-11-21 19:23:55
Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
From: Justin Piszcz <jpiszcz AT lucidpixels DOT com>
To: "Peters, Devon C" <Peters.Devon AT con-way DOT com>
Date: Wed, 21 Nov 2007 19:04:37 -0500 (EST)
Has anyone here run benchmarks to see what kind of speed-up is
gained with the NUMBER_DATA_BUFFERS_RESTORE directive?
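One way to measure it would be a before/after timing of the same restore. A sketch (not a tested procedure; the touch-file path pattern matches the NUMBER_DATA_BUFFERS files quoted later in the thread, and CFG points at a scratch directory here so the commands are safe to run anywhere — on a real media server it would be /usr/openv/netbackup/db/config):

```shell
CFG=${CFG:-/tmp/nbu-config-demo}   # real path: /usr/openv/netbackup/db/config
mkdir -p "$CFG"

# Baseline: note the current value (an absent file means NetBackup's default).
cat "$CFG/NUMBER_DATA_BUFFERS_RESTORE" 2>/dev/null || echo "unset (default)"

# Raise it, then re-run the identical restore and compare elapsed times.
echo 128 > "$CFG/NUMBER_DATA_BUFFERS_RESTORE"
cat "$CFG/NUMBER_DATA_BUFFERS_RESTORE"
```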

On Wed, 21 Nov 2007, Peters, Devon C wrote:

> I just did a test, and it looks like the duplication process uses
> NUMBER_DATA_BUFFERS for both read and write drives.  I'm guessing that
> there's just a single set of buffers used by both read and write
> processes, rather than a separate set of buffers for each process...
>
> Config on the test system:
>
> # cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
> 256
> # cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE
> 128
>
>
> Here's the bptm io_init info from the duplication - PID 22020 is the
> write process,  PID 22027 is the read process:
>
> 10:43:20.523 [22020] <2> io_init: using 262144 data buffer size
> 10:43:20.523 [22020] <2> io_init: CINDEX 0, sched Kbytes for monitoring = 20000
> 10:43:20.524 [22020] <2> io_init: using 256 data buffers
> 10:43:20.524 [22020] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
> 10:43:20.524 [22020] <2> io_init: shm_size = 67115012, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800
> 10:43:21.188 [22027] <2> io_init: using 256 data buffers
> 10:43:21.188 [22027] <2> io_init: buffer size for read is 262144
> 10:43:21.188 [22027] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
> 10:43:21.188 [22027] <2> io_init: shm_size = 67115060, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800, res_cntl = 0xf79b9804
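As a sanity check on those io_init lines (my arithmetic, not anything from the bptm source): buffers times buffer size accounts for almost all of the reported shm_size, and the remainder is a few KB of buffer-control bookkeeping.

```shell
# Values copied from the io_init log lines above.
echo $((256 * 262144))              # data area: 67108864 bytes
echo $((67115012 - 256 * 262144))   # write-side (PID 22020) overhead: 6148
echo $((67115060 - 256 * 262144))   # read-side (PID 22027) overhead: 6196
```

The read side carries slightly more overhead, consistent with the extra res_cntl pointer it reports.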
>
>
> Also, there are no lines in the bptm logfile showing
> "mpx_setup_restore_shm" for these PIDs...
>
> -devon
>
> ________________________________
>
> From: Mike Andres [mailto:mandres AT Brocade DOT COM]
> Sent: Wednesday, November 21, 2007 9:49 AM
> To: Justin Piszcz
> Cc: Peters, Devon C; VERITAS-BU AT mailman.eng.auburn DOT edu
> Subject: RE: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
> Thanks.  I guess my question could be stated more specifically as "does
> the duplication process utilize NUMBER_DATA_BUFFERS_RESTORE or
> NUMBER_DATA_BUFFERS?"  I don't have a system in front of me to test.
>
> ________________________________
>
> From: Justin Piszcz [mailto:jpiszcz AT lucidpixels DOT com]
> Sent: Wed 11/21/2007 8:58 AM
> To: Mike Andres
> Cc: Peters, Devon C; VERITAS-BU AT mailman.eng.auburn DOT edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Buffers in memory to disk would be dependent on how much cache the RAID
> controller has, yeah?
>
> Justin.
>
> On Wed, 21 Nov 2007, Mike Andres wrote:
>
>> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication
>> performance as well.  Anybody know this definitively?
>>
>> ________________________________
>>
>> From: veritas-bu-bounces AT mailman.eng.auburn DOT edu on behalf of
>> Peters, Devon C
>> Sent: Tue 11/20/2007 1:32 PM
>> To: VERITAS-BU AT mailman.eng.auburn DOT edu
>> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>>
>>
>>
>> Chris,
>>
>> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is
>> about all we ever got out of 1Gb fibre back in the day).  Are there
>> any ISLs between your tape drive, your switch, and your server's HBA?
>> Also, have you verified that your tape drives have negotiated onto
>> the fabric at 2Gb and not 1Gb?
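For reference, the rough numbers behind the "90MB/s smells like 1Gb" hunch (textbook Fibre Channel figures, not measurements from this setup): 1GFC signals at 1.0625 Gbaud with 8b/10b encoding, so the payload ceiling is roughly 106 MB/s before framing overhead, i.e. about 90-100 MB/s of real data.

```shell
# Back-of-envelope 1GFC ceiling: line rate * 8b/10b efficiency, in MB/s.
awk 'BEGIN { printf "%.2f MB/s\n", 1.0625e9 * 0.8 / 8 / 1e6 }'
```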
>>
>> When we had 2Gb LTO-3 drives on our T2000s, throughput to a single
>> drive topped out around 160MB/s.  When we upgraded the drives to 4Gb
>> LTO-3, throughput to a single drive went up to 260MB/s.  Our data is
>> very compressible, and these numbers are what I assume to be the
>> limitation of the IBM tape drives.
>>
>> Regarding buffer settings, my experience may not apply directly since
>> we're doing disk (filesystems on fast storage) to tape backups, rather
>> than VTL to tape.  With our setup we see the best performance with a
>> buffer size of 1048576 and 512 buffers.  For us these buffer settings
>> are mostly driven by filesystem performance, since we get better disk
>> throughput with 1MB I/Os than with smaller ones...
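A sketch of applying those values (using a hypothetical scratch directory so the commands can be run anywhere; on a real media server CFG would be /usr/openv/netbackup/db/config, the same directory shown elsewhere in this thread):

```shell
CFG=${CFG:-/tmp/nbu-config-demo}   # real path: /usr/openv/netbackup/db/config
mkdir -p "$CFG"
echo 1048576 > "$CFG/SIZE_DATA_BUFFERS"    # 1 MiB per buffer
echo 512     > "$CFG/NUMBER_DATA_BUFFERS"  # 512 buffers
# Shared memory per active drive implied by these settings:
echo $((1048576 * 512))                    # 536870912 bytes = 512 MiB
```

Note the memory cost: 512 MiB of shared memory per active drive adds up quickly on a busy media server.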
>>
>> I'm also curious whether anyone knows if the
>> NUMBER_DATA_BUFFERS_RESTORE parameter is used when doing duplications.
>> I would assume it is, but I don't know for sure.  If it is, then the
>> bptm process reading from the VTL would be using the default 16 (?)
>> buffers, and you might see better performance with a larger number.
>>
>>
>> -devon
>>
>>
>> -------------------------------------
>> Date: Fri, 16 Nov 2007 10:00:18 -0800
>> From: Chris_Millet <netbackup-forum AT backupcentral DOT com>
>> Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3
>> To: VERITAS-BU AT mailman.eng.auburn DOT edu
>> Message-ID: <1195236018.m2f.181149 AT www.backupcentral DOT com>
>>
>>
>> I'm starting to experiment with using T2000s as media servers.
>> The backup server is a T2000 with 8 cores and 18GB of memory.  A
>> QLogic QLE2462 PCI-E dual-port 4Gb adapter in the system plugs into a
>> QLogic 5602 switch.  From there, one port is zoned to an EMC CDL 4400
>> (VTL) and a few HP LTO3 tape drives.  Connectivity is 4Gb from host
>> to switch and from switch to the VTL.  The tape drives are 2Gb.
>>
>> So when using NetBackup Vault to copy a backup from the VTL to a real
>> tape drive, performance tops out at about 90MB/sec.  If I spin up two
>> jobs to two tape drives, they each run at about 45MB/sec.  It seems
>> I've hit a 90MB/sec bottleneck somehow.  I have v240s performing
>> better!
>>
>> Write performance to the VTL from incoming client backups over the
>> WAN exceeds the vault performance.
>>
>> My next step is to zone the tape drives on one of the HBA ports and
>> the VTL on the other port.
>>
>> I'm using:
>> SIZE_DATA_BUFFERS = 262144
>> NUMBER_DATA_BUFFERS = 64
>>
>> Any other suggestions?
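For scale (my arithmetic from the settings quoted above, not a recommendation): those values put only a modest amount of shared memory behind each drive, so an 18GB box has plenty of headroom to experiment with more or larger buffers.

```shell
# Shared-memory footprint implied by SIZE_DATA_BUFFERS=262144 and
# NUMBER_DATA_BUFFERS=64 (the values from the post above):
echo $((262144 * 64))   # 16777216 bytes = 16 MiB per active drive
```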
>>
>>
>
>
>
_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu