We started experiencing this issue when we moved the catalog from local disk to a SAN-attached LUN. When we back up the catalog using a regular policy (with the buffer tuning in place), it flies to tape in no time.
However, the NBU catalog backup still takes forever and a day; well, not literally, but it sure feels like it.
What I have decided to do is present two new LUNs, write the catalog to disk, and then fire off backup jobs of that disk to another NetBackup environment once it completes.
I have started to write a script to do all this for me, but when I run the bpbackupdb command-line option the catalog backup completes in a timely fashion; however, the job stays active in the Activity Monitor for ages and then dies with a status 50 (client process aborted). Has anyone seen this? Why is it not as straightforward as the Unix manual page says it is?
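The staged approach described above might be sketched as follows. The STAGE mount point is hypothetical, and the -dpath (dump-to-disk-path) form of bpbackupdb is an assumption based on the NBU 5.x command set, so check it against your command reference:

```shell
#!/bin/sh
# Sketch of the staged catalog backup described above. STAGE is a
# hypothetical mount point for one of the new LUNs; bpbackupdb -dpath
# (dump the catalog to a disk path) is assumed from the NBU 5.x command set.
STAGE=${STAGE:-/catalog_stage}
if command -v bpbackupdb >/dev/null 2>&1; then
    # Dump the catalog to the staging LUN; a second NetBackup environment
    # then backs up $STAGE with an ordinary file system policy.
    bpbackupdb -dpath "$STAGE" || exit 1
else
    echo "bpbackupdb not found; run this on the master server" >&2
fi
```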
Dave
From: Dennis Naidoo
[mailto:dnaidoo AT stortech.co DOT za]
Sent: 10 July 2007 07:49
To: Clooney, David;
veritas-bu AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] Catalog performance to tape
The reason the catalog backup doesn't use the buffer sizes is that, in the event of recovering your catalog to a new NetBackup install (a DR scenario), you may not have your buffer files created. If you don't have the SIZE_DATA_BUFFERS file created, NetBackup will use the default block size, which is 32 KB. If you had written the catalog with 128 KB blocks, for example, this would create problems restoring the catalog.
So NetBackup always uses the default block sizes to back up your catalog, to allow easy recovery. The buffer files would also be restored with the catalog, since they live in the /usr/openv/netbackup/db/config directory.
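To make that concrete: the buffer files referred to here are just plain text files containing a single number. A minimal sketch of creating them, written to a scratch directory so it is safe to run anywhere; on a real master server the directory is /usr/openv/netbackup/db/config, and 262144 is only an example value:

```shell
# The SIZE_DATA_BUFFERS / NUMBER_DATA_BUFFERS touch files each hold one
# number. A scratch directory stands in for /usr/openv/netbackup/db/config.
DB_CONFIG=$(mktemp -d)
echo 262144 > "$DB_CONFIG/SIZE_DATA_BUFFERS"   # 256 KB tape block size (example)
echo 16 > "$DB_CONFIG/NUMBER_DATA_BUFFERS"     # buffer count (example)
cat "$DB_CONFIG/SIZE_DATA_BUFFERS"
```

Whatever value you pick for SIZE_DATA_BUFFERS should be a multiple of 1024, and as explained above, the catalog backup itself will still ignore it.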
If you want to increase catalog backup performance, you can try removing the /usr/openv/netbackup/db directory from the catalog path and adding just the following directories instead: “/usr/openv/netbackup/db/images/<Master server name>” and “/usr/openv/netbackup/db/config”.
Create a normal file system policy for your
master server and back up the /usr/openv/netbackup/db directory with it.
Schedule this policy to run in the morning when your backups are done, and then
let your catalog backup run after this policy.
You will then benefit from improved performance on your images directory thanks to the larger block sizes.
In the event of a disaster, you can recover your catalog as per normal, which will bring back the catalog info for your images backup and your buffer files; you can then do a selective restore, as per normal, of your /usr/openv/netbackup/db directory.
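Spelled out, the catalog backup's path list then contains only the following (the <Master server name> placeholder is the original author's, left as-is):

```
/usr/openv/netbackup/db/images/<Master server name>
/usr/openv/netbackup/db/config
```

The new file system policy's backup selection is simply /usr/openv/netbackup/db, which gets backed up with your normal tuned buffers.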
From: Kevin Whittaker [mailto:Kevin.Whittaker AT syniverse DOT com]
Sent: 09 July 2007 21:04
To: Clooney, David;
veritas-bu AT mailman.eng.auburn DOT edu
Subject: Re: [Veritas-bu] Catalog performance to tape
I wish I had an answer for you, but alas I have almost the same issue.
I am running Solaris 9, NB 5.1 MP6 with a catalog size of 40 GB. The catalog backup takes around 2.5 hours. I even turned up logging and found it spent all of its time in the images directory. And that is with SAN-attached 9940B tape drives.
If I cannot find an answer, I am considering using SRDF from EMC to duplicate the file system holding the catalog images. I just have to figure out how to pause backups long enough to break the SRDF connection and then restart them. Then I can back up the duplicate, which will be located in our DR site.
Anyway, does anybody have any idea how to speed up the
backups of the catalog?
From:
veritas-bu-bounces AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Clooney, David
Sent: Wednesday, July 04, 2007
9:34 AM
To:
veritas-bu AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] Catalog performance to tape
Hi all
Apologies for not participating in this list as much as I used to; I have been totally inundated.
Was wondering whether someone could shed some light on an issue I am currently faced with.
One of our NetBackup environments, currently residing on Solaris 8 64-bit with NBU 5.1 MP6, has a catalog size of approximately 50 GB.
The NBU catalog backup is taking over three hours to complete, which is madness.
What I have discovered is that when the catalog backup runs, the SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS settings are excluded. I can understand this, as the catalog should essentially sit outside the realms of NBU.
When I create a policy-based backup of the catalog mount point, the backup flies through in no time at all, as I presume the buffer settings are utilised.
For a further test I used Solaris's tar to back up the same mount point to tape, and this too takes forever and a day.
So I'm sort of concluding that the OS, in some way, shape or form, is letting me down.
1. Can anyone think of anything, or point me in the right direction, on tuning the OS so I can increase throughput to our SAN-attached 9940Bs?
2. On another topic, I was going to try to set up rsync to take copies of the catalog. Would it be detrimental to the NBU environment if I gave the rsync user's group read access to the catalog mount point?
Thanks in advance
Dave