Veritas-bu

[Veritas-bu] Command or clicky box to see how many times a tape has been written to?

Subject: [Veritas-bu] Command or clicky box to see how many times a tape has been written to?
From: Aren.Parisi AT wwireless DOT com (Parisi, Aren)
Date: Fri, 1 Aug 2003 11:25:51 -0700
HP-UX 11i master/media server, NB-DC 3.4.1, Adic Scalar 10k IBM Ultrium LTO
I know there is a maximum number of times an LTO tape can be written to
before it begins to degrade. First of all, does anybody know the maximum
number of writes for an Ebtec 100/200GB and a Fuji 100/200GB cartridge?
Our database backups rotate between two tapes, as configured, but we have
used the same two tapes for a year. Does anyone have a recommendation?
Thanks all!
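(For what it's worth, on the NetBackup side, bpmedialist under
/usr/openv/netbackup/bin/admincmd reports per-media statistics, including
mount counts, which on a dedicated two-tape rotation is a fair proxy for
write count; verify the flags against your 3.4.1 man pages. From there a
lifespan estimate is simple arithmetic. The rated-write figure below is a
placeholder, not a real spec; substitute the number from the vendor's
data sheet:)

```shell
# Back-of-envelope lifespan check. rated_writes is a PLACEHOLDER --
# substitute the figure from the Ebtec/Fuji spec sheet for your media.
rated_writes=5000        # hypothetical rated full-tape writes
writes_so_far=182        # assumes ~daily backups alternating two tapes for a year
echo $(( rated_writes - writes_so_far ))   # writes left at that rating -> 4818
```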

-----Original Message-----
From: veritas-bu-request AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-request AT mailman.eng.auburn DOT edu] 
Sent: Friday, August 01, 2003 10:05 AM
To: veritas-bu AT mailman.eng.auburn DOT edu
Subject: Veritas-bu digest, Vol 1 #2409 - 2 msgs


Send Veritas-bu mailing list submissions to
        veritas-bu AT mailman.eng.auburn DOT edu

To subscribe or unsubscribe via the World Wide Web, visit
        http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
or, via email, send a message with subject or body 'help' to
        veritas-bu-request AT mailman.eng.auburn DOT edu

You can reach the person managing the list at
        veritas-bu-admin AT mailman.eng.auburn DOT edu

When replying, please edit your Subject line so it is more specific than
"Re: Contents of Veritas-bu digest..."


Today's Topics:

   1. Tape Drive Speed (J Glacius)
   2. Re: Tape Drive Speed (Karl.Rossing AT Federated DOT CA)

--__--__--

Message: 1
Date: Fri, 1 Aug 2003 09:34:59 -0700 (PDT)
From: J Glacius <backup_acct_101 AT yahoo DOT com>
To: VERITAS-BU <veritas-bu AT mailman.eng.auburn DOT edu>
Subject: [Veritas-bu] Tape Drive Speed


I'm curious to know what others are experiencing with their tape drive
speeds.
 
We have an array of tape drives, but the ones I am most interested in right
now are:
 
LTO-1
LTO-2
 
I'm getting what I think is rather poor performance from my LTO-2 drives.
NetBackup is running in its "stock" configuration because some of the media
servers are application servers as well, and they don't want me to negatively
affect the application servers' performance.
 
I'm seeing between 4 and 9 MB/sec.  
 
Any ideas?
J



--__--__--

Message: 2
To: J Glacius <backup_acct_101 AT yahoo DOT com>
Cc: VERITAS-BU <veritas-bu AT mailman.eng.auburn DOT edu>
Subject: Re: [Veritas-bu] Tape Drive Speed
From: Karl.Rossing AT Federated DOT CA
Date: Fri, 1 Aug 2003 11:46:26 -0500


veritas-bu-admin AT mailman.eng.auburn DOT edu wrote on 08/01/2003 11:34:59 AM:

> 
> I'm seeing between 4 and 9 MB/sec.
> 

Me Too!

I'm guessing that you're not using multiplexing/multistreaming (i.e., writing
two or more jobs to the same tape drive at the same time).

That leads into the performance tuning of NetBackup, which is a fun
learning curve.
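(The arithmetic behind that guess, as a rough sketch: the stream rate below
is the midpoint of the 4-9 MB/sec reported above, and the LTO-2 native rate
is an assumed ballpark figure, so check your drive's spec sheet:)

```shell
# How many interleaved streams it would take to feed the drive at its
# native speed, rounding up. All rates in MB/s; integer math is fine.
stream_rate=6      # one client stream (midpoint of the 4-9 MB/s reported)
drive_rate=35      # assumed LTO-2 native (uncompressed) rate -- verify
echo $(( (drive_rate + stream_rate - 1) / stream_rate ))   # streams needed -> 6
```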

Larry Kingery posted the following yesterday, which opened my eyes (I liked
the part about "Avoid the common mistake of trying to 'tune the buffers'").

> policy to 4.  Each mount point has about 175GB of data to be backed
> up. The policy kicks off 4 jobs as it should, but is taking 10 to 12
> hours to complete.  I looked in my bptm logs and am seeing the
> following:
>
> 05:00:51.795 [26600] <2> write_data: waited for full buffer 114308 times, delayed 131449 times
>
> 05:06:11.186 [26608] <2> write_data: waited for full buffer 117920 times, delayed 138141 times
>

These messages tell us that the tape drive had to stop writing a
number of times and wait for the input to catch up.  You can't really
quantify the amount of time based only on the above, except to say it
was at least an hour in total.  This gives us a clue as to where to
start looking to improve performance.

You don't mention whether this is a media server or client, network
speed, use of mpx, etc.  I suppose the place to start then would be to
measure how quickly the data can be read from disk.

# time ./bpbkar -dt 0 -r 8888 -nocont DISK-PATH > /dev/null

You'll probably want to try running this in various combinations
(e.g. one for each filesystem, concurrently and separately) to find
the best performance combination.  Also you might compare this to
using dd to read raw disk to analyze filesystem overhead.
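(A sketch of that dd comparison; a scratch file stands in for a real
filesystem path or raw device here, so on the real system you would point
if= at the disk under test and compare the timing with the bpbkar run:)

```shell
# Write a 64 MB scratch file, then time reading it back. Running the same
# read against your real data disks isolates raw read performance from
# filesystem overhead when compared with the bpbkar timing above.
dd if=/dev/zero of=/tmp/readtest.dat bs=1048576 count=64 2>/dev/null
time dd if=/tmp/readtest.dat of=/dev/null bs=262144 2>/dev/null
rm -f /tmp/readtest.dat
```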

From there, assuming you can get better than 10-12 hours, move on to
analyze network, etc.

Avoid the common mistake of trying to "tune the buffers".  For
example, you COULD decrease the buffer size and probably make these
numbers go down.  However, all that would do is slow the tape
drives down to the level of the input.  It would make no difference to
overall performance, which is what you're actually after.
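(For reference only, since the point above is to speed up the input rather
than reach for the buffer knobs: the bptm buffer tunables live in touch
files on the media server. The paths are the standard Unix install
locations, and the values shown are examples, not recommendations:)

```shell
# Touch-file tunables read by bptm at job start; create/edit on the
# media server. Example values only -- measure before changing anything.
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS    # bytes per buffer
echo 16     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS  # buffers per drive
```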



--__--__--

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


End of Veritas-bu Digest
