Veritas-bu

[Veritas-bu] Fast backup to tape but slow backup to disk on NBU 5.1MP3

Subject: [Veritas-bu] Fast backup to tape but slow backup to disk on NBU 5.1MP3
From: dean.deano AT gmail DOT com (Dean)
Date: Sun, 14 Aug 2005 22:14:42 +1000

"Matt, writing multiple concurrent streams to the same set of disks may
be hurting performance. One at a time may yield better results."

I believe Tim's got it right. SATA is best at serial writes. If you feed
it two or more streams, the writes become effectively random, and
performance suffers badly.
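
If you want to test that theory, force one stream at a time onto the
staging disk by dropping the storage unit's concurrent job limit.
Something like the following should do it - the label "dssu1" is just a
placeholder (check bpstulist for yours), and I'm going from memory on
the -cj flag:

    # List the configured storage units to find the DSSU's label
    /usr/openv/netbackup/bin/admincmd/bpstulist -U
    # Cap the disk storage unit at one concurrent job
    /usr/openv/netbackup/bin/admincmd/bpsturep -label dssu1 -cj 1

Then re-run the multi-stream backup and compare throughput.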


On 8/12/05, Tim Berger <tim.berger AT gmail DOT com> wrote:
>
> Matt, writing multiple concurrent streams to the same set of disks may
> be hurting performance. One at a time may yield better results.
>
> I'm in the process of building out some staging servers myself for nbu
> 5.1 - been doing a bunch of bonnie++ benchmarks with various configs
> for Linux using a sata 3ware controller.
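
For anyone wanting to reproduce numbers like these, a minimal bonnie++
run looks something like this - assuming the array is mounted at
/mnt/test and the box has 2GB of RAM, hence the 4g file size to keep
the page cache from skewing the result:

    # -d test directory, -s total file size (~2x RAM), -f skips the
    # slow per-character tests, -u user to drop to when run as root
    bonnie++ -d /mnt/test -s 4g -f -u nobody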
>
> On fedora core 3 (I know it's not supported):
>
> With raid5 over 5 disks I got ~30MB/sec writes & 187MB/sec reads. Raid 50
> with striping over 3 4-disk raid5's got 49MB/sec writes, 120MB/sec reads.
> For raid0 w/10 disks I got a nice 158MB/sec writes and 190MB/sec reads.
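
Tim ran these on the 3ware controller; purely for illustration, the
equivalent raid50 layout in Linux software raid would look roughly like
this (all device names are placeholders):

    # Three 4-disk raid5 sets
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[b-e]
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[f-i]
    mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/sd[j-m]
    # Stripe across the three raid5 sets to form the raid50
    mdadm --create /dev/md0 --level=0 --raid-devices=3 \
          /dev/md1 /dev/md2 /dev/md3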
>
> I'm partial to raid5 for high availability even with the poor write
> performance. I need to stream to lto3, which tops out at 180 MB/sec.
> If I went with raid0 and lost a disk, then a media server would take a
> dive, backups would fail, and I'd have to figure out what data failed
> to make it off to tape. I'm not sure how I'd reconcile a lost dssu
> with netbackup. If I wanted to use the dssu's for doing synthetic
> fulls, that further complicates things if a staging unit is lost.
>
> Any thoughts on what the netbackup fallout might be on a dssu loss?
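
I don't know of a clean way either. The best I can think of is auditing
the images written during the at-risk window and checking which copies
actually reached tape, along these lines (the dates are placeholders):

    # Verbose listing of images written in the window; inspect each
    # image's copies/fragments for ones that never made it to tape
    /usr/openv/netbackup/bin/admincmd/bpimagelist -L \
        -d 08/12/2005 -e 08/13/2005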
>
> Even though it's not supported yet, I was thinking of trying out
> redhat enterprise linux 4, but I'm seeing really horrible disk
> performance (e.g. 100MB/sec reads for raid5 vs. the 187MB/sec on fc3).
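
Before blaming the distro, it might be worth a quick raw-read sanity
check outside bonnie++ (device name is a placeholder):

    # Sequential read straight off the block device, bypassing the fs
    dd if=/dev/sda of=/dev/null bs=1M count=4096
    # Kernel's own buffered-read timing for comparison
    hdparm -t /dev/sda

If the raw numbers match fc3, the regression is in the filesystem or
higher up the stack rather than in the driver.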
>
> Maybe I should try out the supported rhel3 distribution. ;-) I
> don't have high hopes of that improving performance at the moment.
>
> On 8/10/05, Ed Wilts <ewilts AT ewilts DOT org> wrote:
> > On Wed, Aug 10, 2005 at 12:43:39PM -0400, Matt Clausen wrote:
> > > Yet when I do a backup to disk, I see decent performance
> > > on one stream (about 8,000KB/s or so) but the other streams will drop to
> > > around 300-500KB/s.
> > >
> > > NUMBER_DATA_BUFFERS = 16
> > > NUMBER_DATA_BUFFERS_DISK = 16
> > >
> > > SIZE_DATA_BUFFERS = 262144
> > > SIZE_DATA_BUFFERS_DISK = 1048576
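
For reference, those settings live as one-line touch files under
/usr/openv/netbackup/db/config on the media server, so they can be set
like so (new values are picked up by the next backup job):

    cd /usr/openv/netbackup/db/config
    echo 262144  > SIZE_DATA_BUFFERS
    echo 1048576 > SIZE_DATA_BUFFERS_DISK
    echo 16      > NUMBER_DATA_BUFFERS
    echo 16      > NUMBER_DATA_BUFFERS_DISK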
> > >
> > > and I see this performance on both the master server disk pool AND a
> > > media server disk pool. The master server is a VxVM concat volume set of
> > > 3x73GB 10,000RPM disks and the media server is an external raid 5 volume
> > > of 16x250GB SATA disks.
> >
> > I don't believe you're going to get good performance on a 16-member
> > RAID5 set of SATA disk. You should get better results with a pair of
> > 8-member raid sets, but SATA is not fast disk and large raid5 sets
> > kill you on write performance. If you're stuck with the SATA drives,
> > configure them as 3 4+1 RAID5 sets and use the 16th member as a hot
> > spare. You'll have 3TB of disk staging instead of about 3.8TB, but it
> > will perform a lot better.
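
The capacity math there: three 4+1 raid5 sets give 3 x 4 x 250GB = 3TB
usable, versus 15 x 250GB = roughly 3.8TB for one big 16-disk raid5.
In software-raid terms (device names again placeholders), Ed's layout
would be:

    # Three 5-disk (4 data + 1 parity) raid5 sets
    mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[b-f]
    mdadm --create /dev/md2 --level=5 --raid-devices=5 /dev/sd[g-k]
    # 16th disk attached as a hot spare on the third set; sharing it
    # across all three needs a spare-group entry in mdadm.conf
    mdadm --create /dev/md3 --level=5 --raid-devices=5 \
          --spare-devices=1 /dev/sd[l-p] /dev/sdq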
> >
> > --
> > Ed Wilts, Mounds View, MN, USA
> > mailto:ewilts AT ewilts DOT org
> > _______________________________________________
> > Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
> > http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
> >
>
>
> --
> -Tim
>
> _______________________________________________
> Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
