
[Veritas-bu] question regarding netbackup catalogue sizes and offsite backups

Subject: [Veritas-bu] question regarding netbackup catalogue sizes and offsite backups
From: mpreston AT soe.sony DOT com (Preston, Mark)
Date: Mon, 1 Jul 2002 21:40:04 -0700
Hi Andrew,

thanks for your insight, that's very helpful. I do have a couple of follow-on
questions, though, if you don't mind.

Yes, I was talking about expiring images from the NetBackup database to free up
space. I thought there was a fixed limit, with 3.4 at least, that the whole
NetBackup database HAS to fit onto a single tape in order for the automatic
daily catalog backup to complete successfully (although I guess you could dump
it to a filesystem partition and then ufsdump that onto a number of tapes if
need be).
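
Something like this is what I had in mind, though I haven't actually tried it -
the bpbackupdb path and the idea that the catalog backup destination can just
point at a disk directory are from memory, so treat it as a sketch:

    # run a manual catalog backup, assuming its destination has been
    # configured as a disk path (say /catalog_dump) rather than a tape
    /usr/openv/netbackup/bin/admincmd/bpbackupdb

    # then dump that partition to tape; ufsdump will prompt for more
    # tapes if the dump doesn't fit on one
    ufsdump 0uf /dev/rmt/0n /catalog_dump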

I'm particularly interested in the import process: does it work with
multiplexed tapes? And NDMP backup tapes? And how slow is slow? heh.

It sounds like keeping the database information online for the life cycle of
the tape is the way to go - although I'm gonna need a bigger boat.

thanks again.

- Mark

-----Original Message-----
From: Fabbro, Andrew P [mailto:Fabbro.Andrew AT cnf DOT com]
Sent: Monday, July 01, 2002 9:03 PM
To: 'Preston, Mark'; 'veritas-bu AT mailman.eng.auburn DOT edu'
Subject: RE: [Veritas-bu] question regarding netbackup catalogue sizes
and offsite backups


Depends on what you mean by "very large" - some people's catalogs are over 100
GB ;)

I don't think you can selectively delete parts of it (as of 3.4).  You could
expire images, which would free space, but if you ever needed to use those
images, you would need to import the tapes, which takes a looooong time.  If
you ship a database backup tape with it, then you're stuck with loading that
database tape, and wiping out your live database in the process (i.e.,
effectively reverting to that point in time).
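
For what it's worth, the commands involved look roughly like this. This is from
memory and the media ID / backup ID below are made up, so check the man pages
(bpexpdate and bpimport) before trusting the exact flags:

    # expire an image out of the catalog (frees catalog space; the
    # data on the tape itself is untouched)
    bpexpdate -backupid client1_1025222400 -d 0

    # getting it back later is a two-phase import: phase 1 reads the
    # tape and recreates the image-level entries...
    bpimport -create_db_info -id A00001

    # ...phase 2 rebuilds the file-level catalog info - this is the
    # part that takes a looooong time
    bpimport -backupid client1_1025222400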

You could try playing with the "Delay to compress database" setting.  It's
specified in days.  If you set it to, say, 30 days, then the data for images
older than 30 days would be compressed in the catalog.  The downside is that if
you need to browse those images for restores, then NetBackup has to uncompress
that information, which can be slow (depending on your master's disk, CPUs,
etc.).  I suppose you could set it very low (maybe to one day) to compress like
a fiend if you rarely restore (or don't mind taking 3-4x as long to browse and
set up a restore job).
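
If you want a feel for how much of the image catalog is actually getting
compressed, something like the following works - I'm going from memory that the
compressed files-catalogs end up as .Z files under db/images, so verify on your
own master first:

    # total size of the image catalog
    cd /usr/openv/netbackup/db/images
    du -sk .

    # size of just the compressed portion (the .Z files)
    find . -name '*.Z' -exec du -k {} \; | awk '{s += $1} END {print s " KB compressed"}'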

I imagine that the index level adds space (see the NB Performance Tuning Guide
- it's /usr/openv/netbackup/db/INDEXLEVEL on Unix).  More indices means more
disk space to hold them, I would guess, but also faster browsing for restores.
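
If you want to experiment with it, INDEXLEVEL is just a number in a flat file,
so something like this is all it takes - the value 9 below is only an example,
check the tuning guide for what values actually make sense:

    # see the current index level (if the file doesn't exist, you're
    # at the default)
    cat /usr/openv/netbackup/db/INDEXLEVEL

    # bump it up - higher values build more index files, trading disk
    # space for faster browse times
    echo 9 > /usr/openv/netbackup/db/INDEXLEVEL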

NB 4.5 uses a binary-format database, which is supposed to be faster and
smaller.

I don't think there's an easy answer... I think just mathematically, your
catalogs are going to be big.  For every file on every image, you have to store
a lot of metadata (owner, permissions, last mod time, etc.), plus the overhead
of indexing it all.
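
If you want a rough feel for the numbers, here's a quick back-of-the-envelope.
The ~150 bytes of catalog data per file per image is an assumption on my part
(a rule of thumb I've seen quoted, not an official figure), and the file counts
are just examples:

    awk 'BEGIN {
        files_per_image = 5000000   # files in one full backup (example)
        images_online   = 12        # images kept unexpired in the catalog
        bytes_per_file  = 150       # assumed per-file catalog overhead
        printf "~%.1f GB of catalog\n",
               files_per_image * images_online * bytes_per_file / (1024^3)
    }'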

--
Drew Fabbro
fabbro.andrew AT cnf DOT com 
Desk: 503-450-3374    Cell: 503-701-0469
"There is no such word as 'maturity'.  There
is only 'maturing' and 'dead'." -- Bruce Lee