Subject: Re: [Networker] st.conf
From: Vernon Harris <harriv00 AT YAHOO DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Sat, 3 Nov 2007 06:28:49 -0700
Paul,

The native st.conf should support LTO-2 drives.  Are
these drives SCSI or Fibre Channel?  If Fibre Channel,
check your HBA driver levels, and also verify your tape
drive and library firmware levels.
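For reference, here is a sketch of the kind of st.conf entry people typically add for IBM LTO-2 drives on Solaris.  The inquiry string, class name, and option flags below are assumptions based on commonly circulated examples, not settings verified for this site; check them against your drives' actual SCSI inquiry data and IBM's tape driver documentation before using them:

```
# /kernel/drv/st.conf -- hypothetical example entry for IBM LTO-2 drives.
# The inquiry string must match the drive exactly (vendor field padded
# to 8 characters), and the flag values are commonly cited examples only.
tape-config-list =
    "IBM     ULTRIUM-TD2", "IBM Ultrium-2 LTO", "CLASS_LTO2";
# Fields: version, drive type, block size (0 = variable), options
# bitmask, number of densities, density codes..., default density index.
CLASS_LTO2 = 1,0x3B,0,0x18659,4,0x40,0x40,0x42,0x42,3;
```

After editing, the configuration can be re-read with update_drv -f st (or a reconfiguration boot).  Note that a syntax slip in tape-config-list, such as a missing semicolon or an unmatched quote, is a common reason for update_drv to fail.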


--- Paul messner <paul_messner AT STARKEY DOT COM> wrote:

> Hi,
> 
> This is my first post.
> 
> I have just moved to a new backup server, a Sun V445
> running Solaris 10, 
> and using Legato 7.2.2.  I have noticed that when a
> new tape is being
> labeled, and when a tape becomes full, I get I/O
> errors.  Last night a tape
> filled, gave an I/O error, and then my backup jobs
> hung.  Legato and Sun
> believe the st.conf file is not configured
> properly.  Our current
> st.conf is the default.  Every time I added
> options I got an error
> running the update_drv command.  Sorry, I don't have
> the error available at
> the moment.  Our tape library is an L700e with LTO2
> IBM drives.  I was
> wondering if anyone had an example of st.conf with
> the options in it, or
> had any other ideas that could fix my I/O errors.
> 
> Our old backup server is an SGI Origin 300.  We
> moved to a new OS for our
> NetWorker environment.
> 
> Tape is being labeled:
> 11/01/07 17:05:54 nsrd: media info: loading volume -
> into /dev/rmt/S0cbn
> 11/01/07 17:05:54 nsrd: media info: loading volume
> 207379 
> into /dev/rmt/S5cbn
> 11/01/07 17:07:02 nsrd: /dev/rmt/S0cbn Verify label
> operation in progress
> 11/01/07 17:07:04 nsrd: media warning:
> /dev/rmt/S0cbn reading: I/O error
> 11/01/07 17:07:04 nsrd: media warning:
> /dev/rmt/S0cbn reading: Tape label 
> read for volume ? in pool ?, is not recognised by
> Networker: I/O error
> 11/01/07 17:07:05 nsrd: /dev/rmt/S0cbn Label without
> mount operation in 
> progress
> 11/01/07 17:07:05 nsrd: media info: LTO Ultrium-2
> tape  will be over-written
> 11/01/07 17:07:13 nsrd: /dev/rmt/S0cbn Mount
> operation in progress
> 
> Got this error when the backup jobs hung:
> 11/02/07 21:28:30 nsrd: media notice: LTO Ultrium-2
> tape 207142 
> on /dev/rmt/S9cbn is full
> 11/02/07 21:28:30 nsrd: media notice: LTO Ultrium-2
> tape 207142 used 322 GB 
> of 200 GB capacity
> 11/02/07 21:29:30 nsrd: media warning:
> /dev/rmt/S9cbn reading: I/O error
> 11/02/07 21:29:53 nsrd:
> bearcat.starkey.com:RMAN:FULL_GLP_62j03853 done 
> saving to pool 'Ora Apps' (207129) 4403 MB
> 11/02/07 21:30:00 nsrd: savegroup info: starting 
> Netware_02 (with 2 client
> (s))
> 11/02/07 21:30:02 nsrd: media notice: Volume
> "207142" on 
> device "/dev/rmt/S9cbn": Block size is 50331649
> bytes not 67108864 bytes. 
> Verify the device configuration
> . Tape positioning by record is disabled.
> 
> I do have a ticket open with Legato.  That person
> thought it was a bunch of
> bad tapes in the library.  The ticket is being kept
> open until Monday.  We saw
> some block size messages, and they then recommended
> I add the following line
> to the /etc/init.d/networker file:
>  
> 



To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
