Mark:
I've used this approach many times. The only caveat is that you need to
build enough intelligence into your script *not* to expire images that
did not successfully duplicate.

I have some non-critical backups of a slow machine that I send to disk
because it isn't fast enough to stream the drive. This is my
quick-and-dirty destage script; it should give you a basic idea of how
such a script works (note that this script is not intelligent per my
definition above... you'll need to modify it to suit).

I run this from cron once a day. That also means my drive is only needed
once a day even though backups run once an hour... hence less wear and
tear. :)
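
Something like this in root's crontab kicks it off (the script path and
time here are just examples; adjust to taste):

0 6 * * * /usr/local/bin/destage.ksh >/dev/null 2>&1
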
--PLB

#!/bin/ksh
# Quick-and-dirty destage script: duplicate the disk-based images to
# tape, then expire the disk copies.  Note: no success checking -- see
# the caveat above.

PATH=/usr/bin\
:/sbin\
:/usr/sbin\
:/usr/ccs/bin\
:/usr/ucb\
:/usr/local/bin\
:/opt/sfw/bin\
:/usr/openwin/bin\
:/usr/dt/bin\
:/opt/openv/netbackup/bin\
:/opt/openv/netbackup/bin/goodies\
:/opt/openv/netbackup/bin/admincmd\
:/opt/openv/netbackup/vault/scripts\
:/opt/openv/volmgr/bin\
:/opt/openv/volmgr/bin/goodies\
:.

SPOOLDIR=/iotk/spool            # disk storage unit directory
BIDFILE=/tmp/destage.$$         # list of backup IDs to duplicate
EXPIREFILE=/tmp/expire.$$       # list of backup IDs to expire
DSTUNIT=L280                    # destination (tape) storage unit
DSTPOOL=NetBackup               # destination volume pool
OUTPUTFILE=/tmp/bpduplicate.$$  # bpduplicate progress log (kept for review)

# Build the backup-ID list from the image files in the spool directory
# by stripping the _C1... copy/fragment suffix and de-duplicating.
ls $SPOOLDIR | sed 's/_C1.*//' | sort -u | tee $BIDFILE $EXPIREFILE

# Duplicate every listed image to the tape storage unit.
bpduplicate -dstunit $DSTUNIT -v -Bidfile $BIDFILE -L $OUTPUTFILE -dp $DSTPOOL

# Expire copy 1 (the disk copy) of each image to free the spool space.
for image in $(cat $EXPIREFILE)
do
    echo "Expiring $image"
    bpexpdate -backupid $image -d 0 -copy 1 -force
done

rm $BIDFILE $EXPIREFILE
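
To add the intelligence I mentioned, the simplest check is bpduplicate's
exit status: only expire when it returns zero. Roughly (untested; this
would replace the bpduplicate call and the expire loop above):

if bpduplicate -dstunit $DSTUNIT -v -Bidfile $BIDFILE -L $OUTPUTFILE -dp $DSTPOOL
then
    # Duplication reported overall success; expire the disk copies.
    for image in $(cat $EXPIREFILE)
    do
        echo "Expiring $image"
        bpexpdate -backupid $image -d 0 -copy 1 -force
    done
else
    echo "bpduplicate failed; leaving disk copies in place" >&2
fi

A stricter version would verify each image individually (e.g. check its
copies with bpimagelist -backupid before expiring it).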
At 03:12 PM 04/22/2002 -0600, you wrote:
>We had a library failure over the weekend and it underscored the degree to
>which our library is a single point of failure for production
>processes. (Oracle redo logs are archived to tape via NetBackup: when the
>library fails, the archive fails, the DB filesystems fill up and the DB
>crashes. Nice, huh?)
>
>Anyway, to compensate, I want to build a disk-based storage unit to be
>used for critical files. It'll only be a couple hundred gig for our
>most important stuff (like archived redo logs).
>
>I have this idea that I can back up to the disk, then periodically do a
>bpduplicate to tape, rename the duplicate copy to be the primary, then
>delete the disk image. When the library is out of service, I'll let the
>images accumulate on disk until it's available again.
>
>Has anybody tried this? Any caveats or experiences you'd like to share?
>
>-Mark
>
>-----------------------------------------------------------------------
> Mark Donaldson - Sr. Systems Engineer
> Experian EMS - Denver Colorado
>-----------------------------------------------------------------------
> Linux? Wasn't he the kid with the blanket?
>-----------------------------------------------------------------------
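
P.S. For your "rename the duplicate copy to be the primary" step: I
believe bpduplicate's -npc (new primary copy) option does it, something
like

bpduplicate -npc 2 -backupid <backupid>

but check the man page on your NetBackup version before trusting my
memory.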