I would be very hesitant about creating this file. I am running VNB 3.4_4, and I
was told to create this file in response to an RMAN
error 54 problem; the RMAN backups ran fine, but the master server froze up. It
turns out that this file disables the governing mechanism for system resources
as it pertains to VNB. When I spoke to VNB
systems support, they were horrified that I had been told to do this (by their
brother department in RMAN support). Unless 4.5 handles this differently, I would
get verification before you create this file.
tx
john
-----Original Message-----
From: Johnie Stafford [mailto:stafforj AT core.afcc DOT com]
Sent: Tuesday, June 17, 2003 3:12 PM
To: Donaldson, Mark
Cc: Veritasbu (E-mail)
Subject: Re: [Veritas-bu] Error 134
>>> On Thu, 12 Jun 2003 11:19:15 -0600, "Donaldson, Mark" <Mark.Donaldson AT
>>> experianems DOT com> said:
dm> Sol 8, NB v4.5 MP4 SSO with four media servers.
dm> I've been getting error 134's. Here's the description from bperror:
dm> # bperror -S 134 -r
dm> unable to process request because the server resources are busy
dm> Status code 134 is an informational message indicating that all drives in
dm> the storage unit are currently in use. If this occurs, NetBackup
dm> automatically tries another storage unit; if one is not available,
dm> NetBackup requeues the job with a status of 134 and retries it later.
dm> Disable automatic retry using another storage unit and create the following
dm> file on the NetBackup media server prior to running the backups:
dm> /usr/openv/volmgr/DISABLE_RESOURCES_BUSY
dm> If you have already attempted the backup and see this error, then create
dm> the file and rerun the backups.
dm> OK - here's the question. Why is the job running at all if the storage
dm> unit's drives are not available? In a non-SSO environment, the job will
dm> simply queue if the drives are busy. Why is this job even going active?
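(For reference, the step described in the bperror text quoted above is just a
touch file on the media server. A minimal sketch, with the directory
parameterized here purely for illustration; on a real media server the path is
the documented /usr/openv/volmgr. Per John's warning, check with Veritas
support before leaving the file in place.)

```shell
# Directory holding the touch file; /usr/openv/volmgr on a real media server.
VOLMGR_DIR="${VOLMGR_DIR:-/usr/openv/volmgr}"

# Create the file to disable the automatic retry-on-another-storage-unit
# behavior described in the status 134 text:
touch "$VOLMGR_DIR/DISABLE_RESOURCES_BUSY"

# To back the change out later, simply remove the file:
rm -f "$VOLMGR_DIR/DISABLE_RESOURCES_BUSY"
```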
We are having real problems with this today. We run a Sol8, NB4.5 MP3
with SSO. We only have one media server, with 5 dedicated 9840b's and
5 SSO 9840b's. 3 of the SSO drives are idle. The media server is 98%
idle. Yet we've got jobs that appear to be stuck in a loop of 134's.
The tries are failing so fast that often the try end time shows as 1
second before the try start time. We had servers getting 100+ 134's
(for 4 to 8 hours).
Anybody have a clue what's going on here?
Johnie
_______________________________________________
Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu