[Veritas-bu] Fast backup to tape but slow backup to disk on NBU 5.1MP3

Subject: [Veritas-bu] Fast backup to tape but slow backup to disk on NBU 5.1MP3
From: jimh AT federaledge DOT com (Jim Horalek)
Date: Mon, 15 Aug 2005 14:25:43 -0700
My observation is that if you group your DSSUs together as a storage
unit group, NetBackup will use them in a circular (round-robin) fashion
for each job. Hence, if you're backing up 3 clients and you have 3
DSSUs, each job for each client will hit a different DSSU. (5.1MP3)

I haven't fully tested this, or looked at what the ramifications are
when the staged images are written out to tape.
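
(A toy sketch of that selection behavior, in Python - purely
illustrative, not how NetBackup actually implements it:)

    # Round-robin assignment of jobs to DSSUs in a storage unit group.
    from itertools import cycle

    dssus = ["dssu1", "dssu2", "dssu3"]          # members of the group
    clients = ["clientA", "clientB", "clientC"]

    rotation = cycle(dssus)
    for client in clients:
        # each new job gets the next DSSU in the rotation
        print(client, "->", next(rotation))
    # clientA -> dssu1, clientB -> dssu2, clientC -> dssu3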

jim

-----Original Message-----
From: veritas-bu-admin AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-admin AT mailman.eng.auburn DOT edu] On Behalf Of Paul 
Keating
Sent: Monday, August 15, 2005 11:41 AM
To: Tim Berger
Cc: veritas-bu AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] Fast backup to tape but slow backup to disk on NBU
5.1MP3


So I suppose with disk-based storage, NetBackup tries to organize
things in such a way that data from a given job is kept packed in one
location? It seems that if it can take data from 4 clients and pack it
into a single stream to throw at a tape drive, it ought to be able to
send a single stream of data to a disk volume in the same way...

It seems that in order to replace one tape drive, I would need 4 DSSU
volumes, since I can currently multiplex 4 jobs to a tape. Now if I
have 6 tape drives, that means 24 DSSUs... an administrative
nightmare: load-balancing jobs between DSSUs, etc.

For some slower, older clients on 100Mb/s Ethernet, I'm multiplexing at
that level (4 jobs per drive), since the clients are not fast enough
for fewer jobs to keep an LTO2 drive streaming...

Paul


> -----Original Message-----
> From: Tim Berger [mailto:tim.berger AT gmail DOT com]
> Sent: August 15, 2005 2:33 PM
> To: Paul Keating
> Cc: veritas-bu AT mailman.eng.auburn DOT edu
> Subject: Re: [Veritas-bu] Fast backup to tape but slow backup 
> to disk on NBU 5.1MP3
> 
> 
> This is not production yet - just trying to figure out what RAID
> configs can approach LTO3 demands.  Since hard drives can only do one
> thing at a time, I don't think it would be good to throw multiple
> streams at them unless you have really slow clients.  Heads will
> thrash and overall write performance will be worse.  Once it reaches
> production, I'll share what I find.
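> 
> (A quick way to see the effect - hypothetical micro-benchmark; the
> paths and sizes are made up:)
> 
>     # Write 4 files one at a time vs. interleaved on the same array;
>     # interleaving forces the heads to seek between file extents.
>     import os, time
> 
>     CHUNK = os.urandom(1024 * 1024)   # 1 MiB buffer, generated once
>     COUNT = 512                       # chunks per file (512 MiB each)
> 
>     def run(paths, interleave):
>         files = [open(p, "wb") for p in paths]
>         start = time.time()
>         if interleave:
>             for _ in range(COUNT):
>                 for f in files:       # round-robin across open files
>                     f.write(CHUNK)
>         else:
>             for f in files:           # drain one file at a time
>                 for _ in range(COUNT):
>                     f.write(CHUNK)
>         for f in files:
>             f.flush()
>             os.fsync(f.fileno())
>             f.close()
>         mb_written = len(paths) * COUNT
>         return mb_written / (time.time() - start)
> 
>     paths = ["/dssu/bench.%d" % i for i in range(4)]   # made-up mount
>     print("one at a time: %.0f MB/sec" % run(paths, False))
>     print("interleaved:   %.0f MB/sec" % run(paths, True))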
> 
> Hardware, by the way, is a Sovereign 4870 from ASL.  Nice box.
> http://www.aslab.com/products/storage/sovereign4870.html
> 
> On 8/15/05, Paul Keating <pkeating AT bank-banque-canada DOT ca> wrote:
> > How does that throughput value change if you increase the number of
> > jobs writing to the volume?
> > 
> > Paul
> > 
> > > -----Original Message-----
> > > From: veritas-bu-admin AT mailman.eng.auburn DOT edu
> > > [mailto:veritas-bu-admin AT mailman.eng.auburn DOT edu] On Behalf Of Tim 
> > > Berger
> > > Sent: August 15, 2005 2:02 PM
> > > To: Eric Ljungblad
> > > Cc: Dean; Matt Clausen; veritas-bu AT mailman.eng.auburn DOT edu
> > > Subject: Re: [Veritas-bu] Fast backup to tape but slow backup to 
> > > disk on NBU 5.1MP3
> > >
> > >
> > > For a 6-drive RAID 10, I got about 140MB/sec reads & 95MB/sec
> > > writes.  It's a shame that it takes so many disks to get good write
> > > performance on a redundant RAID.
> > >
> > > These are all 400GB SATA disks.
> > >
> > > On 8/14/05, Eric Ljungblad <Eric.Ljungblad AT copleypress DOT com> wrote:
> > > > Good testing,
> > > >
> > > > Have you tried RAID 10 (1/0), or 0+1?
> > > >
> > > > From: veritas-bu-admin AT mailman.eng.auburn DOT edu
> > > > [mailto:veritas-bu-admin AT mailman.eng.auburn DOT edu] On Behalf Of 
> > > > Dean
> > > >  Sent: Sunday, August 14, 2005 5:15 AM
> > > >  To: Tim Berger
> > > >  Cc: Matt Clausen; veritas-bu AT mailman.eng.auburn DOT edu
> > > >  Subject: Re: [Veritas-bu] Fast backup to tape but slow backup to
> > > > disk on NBU 5.1MP3
> > > >
> > > > "Matt, writing multiple concurrent streams to the same set
> > > of disks may
> > > >  be hurting performance.  One at a time may yield
> better results."
> > > >
> > > >  I believe Tim's got it right. SATA is best at serial writes. If
> > > > you feed it two or more streams, that is effectively random writes,
> > > > and performance suffers badly.
> > > >
> > > > On 8/12/05, Tim Berger <tim.berger AT gmail DOT com> wrote:
> > > >
> > > > Matt, writing multiple concurrent streams to the same set of disks
> > > > may be hurting performance.  One at a time may yield better
> > > > results.
> > > >
> > > >  I'm in the process of building out some staging servers myself
> > > >  for NBU 5.1 - I've been doing a bunch of bonnie++ benchmarks with
> > > >  various configs for Linux using a SATA 3ware controller.
> > > >
> > > >  On Fedora Core 3 (I know it's not supported):
> > > >
> > > >  RAID5 with 5 disks, I got ~30MB/sec writes & 187MB/sec reads.
> > > >  RAID50 with striping over 3 4-disk RAID5s got 49MB/sec writes,
> > > >  120MB/sec reads. For RAID0 w/10 disks, I got a nice 158MB/sec
> > > >  writes and 190MB/sec reads.
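> > > >
> > > >  (For comparison, naive streaming-rate ceilings, assuming roughly
> > > >  50MB/sec of sequential throughput per SATA spindle and ideal
> > > >  full-stripe writes - the measured numbers fall well short of
> > > >  these, which suggests the controller or bus, not the raw
> > > >  spindles, is the limit:)
> > > >
> > > >      PER_DISK = 50.0  # MB/sec, assumed per-spindle sequential rate
> > > >
> > > >      def raid0_write(n):   # pure striping: every spindle carries data
> > > >          return n * PER_DISK
> > > >
> > > >      def raid5_write(n):   # one spindle's worth goes to parity
> > > >          return (n - 1) * PER_DISK
> > > >
> > > >      print(raid5_write(5))    # 200.0 - vs. the ~30 measured
> > > >      print(raid0_write(10))   # 500.0 - vs. the 158 measured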
> > > >
> > > >  I'm partial to RAID5 for high availability even with poor write
> > > >  performance.  I need to stream to LTO3, which tops out at 80MB/sec
> > > >  native (around 160MB/sec with compression).
> > > >  If I went with RAID0 and lost a disk, then a media server would
> > > >  take a dive, backups would fail, and I'd have to figure out what
> > > >  data failed to make it off to tape.  I'm not sure how I'd
> > > >  reconcile a lost DSSU with NetBackup.  If I wanted to use the
> > > >  DSSUs for doing synthetic fulls, then that further complicates
> > > >  things if a staging unit is lost.
> > > >
> > > >  Any thoughts on what the NetBackup fallout might be on a DSSU
> > > >  loss?
> > > >
> > > >  Even though it's not supported yet, I was thinking of trying out
> > > >  Red Hat Enterprise Linux 4, but I'm seeing really horrible disk
> > > >  performance (e.g. 100MB/sec reads for RAID5 vs. the 187MB/sec on
> > > >  FC3).
> > > >
> > > >  Maybe I should try out the supported RHEL3 distribution. ;-)  I
> > > >  don't have high hopes of that improving performance at the moment.
> > > >
> > > >  On 8/10/05, Ed Wilts <ewilts AT ewilts DOT org > wrote:
> > > >  > On Wed, Aug 10, 2005 at 12:43:39PM -0400, Matt Clausen wrote:  
> > > >  > > Yet when I do a backup to disk, I see decent performance on
> > > >  > > one stream (about 8,000KB/s or so) but the other streams will
> > > >  > > drop to around 300-500KB/s.
> > > >  > >
> > > >  > > NUMBER_DATA_BUFFERS = 16
> > > >  > > NUMBER_DATA_BUFFERS_DISK = 16
> > > >  > >
> > > >  > > SIZE_DATA_BUFFERS = 262144
> > > >  > > SIZE_DATA_BUFFERS_DISK = 1048576
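> > > >  > >
> > > >  > > (Sanity check on what those settings mean - if I have the
> > > >  > > tuning right, shared memory per data stream is NUMBER * SIZE:)
> > > >  > >
> > > >  > >     print(16 * 262144 // 2**20)    # = 4 MB per tape stream
> > > >  > >     print(16 * 1048576 // 2**20)   # = 16 MB per disk stream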
> > > >  > >
> > > >  > > and I see this performance on both the master server disk
> > > >  > > pool AND a media server disk pool. The master server is a
> > > >  > > VxVM concat volume set of 3x73GB 10,000RPM disks and the
> > > >  > > media server is an external RAID5 volume of 16x250GB SATA
> > > >  > > disks.
> > > >  >
> > > >  > I don't believe you're going to get good performance on a
> > > >  > 16-member RAID5 set of SATA disk.  You should get better with a
> > > >  > pair of 8-member RAID sets, but SATA is not fast disk, and
> > > >  > large RAID5 sets kill you on write performance.  If you're
> > > >  > stuck with the SATA drives, configure them as 3 4+1 RAID5 sets
> > > >  > and use the 16th member as a hot spare.  You'll have 3TB of
> > > >  > disk staging instead of about 3.8TB, but it will perform a lot
> > > >  > better.
> > > >  >
> > > >  > --
> > > >  > Ed Wilts, Mounds View, MN, USA
> > > >  > mailto:ewilts AT ewilts DOT org
> > > >
> > > >
> > > >  --
> > > >  -Tim
> > > >
> > >
> > > --
> > > -Tim
> > >
> 
> 
> -- 
> -Tim
> 

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu