Subject: Re: [Veritas-bu] same job keeps hanging
From: rarmstr0 AT att DOT net
To: VERITAS-BU AT mailman.eng.auburn DOT edu
Date: Sat, 14 Jul 2007 23:50:16 +0000
Aaron,
Looks like compression may be the killer ... clear compression (and TIR) in the policy and crash the car again.  The idea is to simplify the processing as much as possible, since compression (and TIR) add overhead.  And because you're running this job on the master to a local tape drive (which also does compression), I don't see any gain from doing client compression anyway.
 
If it still fails, then create this empty file on the system where bpbkar runs
  # touch /usr/openv/netbackup/bpbkar_path_tr
It will add a SelectFile message to the bpbkar log at the start of processing for each file.
 
Setting VERBOSE = 5 in bp.conf triggers the bpbkar PrintFile messages indicating it's done handling each file.
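For reference, that's just a line in the client's bp.conf (the path below is the default Unix install location; adjust if yours differs):

```
# /usr/openv/netbackup/bp.conf on the client
VERBOSE = 5
```

Remember to drop it back down afterward, since level 5 logging gets bulky fast.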
 
Make sure you have bpbrm logging enabled, too.
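The legacy daemons only write logs when their log directory exists, so create them if you haven't already. A sketch for the standard Unix install path (run as root on the appropriate host, bpbrm on the master/media server and bpbkar on the client):

```shell
# Legacy NetBackup daemons log only if their per-daemon log directory
# exists.  /usr/openv/netbackup/logs is the default Unix location;
# adjust the path if your install differs.
msg=""
for d in bpbrm bpbkar; do
  mkdir -p "/usr/openv/netbackup/logs/$d" 2>/dev/null \
    || msg="could not create logs/$d (need root on the NetBackup host)"
done
if [ -n "$msg" ]; then echo "$msg"; fi
```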
 
When the job ends with status 41, look in the bpbrm log for the timestamp when your 3600-second CLIENT_READ_TIMEOUT expires.  Then take a look at the bpbkar log and see which files were being handled around that time.  Look for time gaps between SelectFile and PrintFile, then see if there is something special about that file (big, open, locked, active database, sparse, etc).
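To spot those gaps without eyeballing thousands of lines, something like this awk sketch can help.  The leading HH:MM:SS timestamp field, the sample log lines, and the 60-second threshold are all assumptions here; adjust them to match your actual bpbkar log, and point it at the real log file instead of the sample:

```shell
# Sketch: flag long SelectFile -> PrintFile gaps in a bpbkar log.
# Assumes each line starts with an HH:MM:SS timestamp (hypothetical
# sample format below -- check yours).  Substitute the real log, e.g.
# /usr/openv/netbackup/logs/bpbkar/log.MMDDYY, for the sample file.
cat > /tmp/bpbkar_sample.log <<'EOF'
23:00:01 [1234] <4> bpbkar SelectFile: /data/bigfile
23:58:30 [1234] <4> bpbkar PrintFile: /data/bigfile
23:58:31 [1234] <4> bpbkar SelectFile: /data/smallfile
23:58:32 [1234] <4> bpbkar PrintFile: /data/smallfile
EOF
gaps=$(awk '
/SelectFile/ { split($1, t, ":"); start = t[1]*3600 + t[2]*60 + t[3]; sel = $0 }
/PrintFile/ && sel != "" {
    split($1, t, ":"); gap = t[1]*3600 + t[2]*60 + t[3] - start
    if (gap > 60) printf "%ds gap after: %s\n", gap, sel
    sel = ""
}' /tmp/bpbkar_sample.log)
echo "$gaps"
```

Note this simple version doesn't handle a gap that spans midnight; for a run that crosses days you'd want to fold the log date into the arithmetic.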
 
When you ran the interactive bpbkar to /dev/null, you weren't doing compression, and it completed in just over an hour, while your scheduled run with compression took nearly 5 hours.  It's the "-Z" on the bpbkar call that tells bpbkar to do compression.  Compression is a double-edged sword.  I personally prefer to let the tape drive deal with it in most situations, although I might enable client compression when writing to an undersized disk or disk staging storage unit.
 
--- TTFN
_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu