Subject: Re: [Networker] Backup to Disk and then tape
From: Len Philpot <Len.Philpot AT CLECO DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 31 Aug 2009 11:20:04 -0500
> bghatora 
> 
> Well, I need to back up to disk first and then to tape, and also 
> configure staging so that I can reclaim the disk space in my /backup 
> directory. 
> 
> Can anyone give me the steps on how to do it, or point me in some 
> direction?

I'm not trying to be flippant, but read the documentation. We're just now 
getting our feet wet with disk backups and staging, and there are indeed 
many questions. But there's no substitute for reading the Networker docs, 
as fragmented as they are, IMO. I realize a 1000+ page admin guide is no 
small read, but it's in there. Check out chapters 3, 10 and 11 
in particular.

What you're asking would be well beyond just a simple answer here on the 
list, IMO (but maybe I'm just dense :-). There are a number of steps, from 
creating your file device(s), disk pool/volumes, stage pool, stage policy, 
and so on; too many to go into detail on here, since they're all covered 
in the docs, in one location or another. Usually my biggest challenge has 
been finding all the separate sections to read before I get the 
process(es) down, since EMC/Legato has never been too interested in 
recipe-style docs (i.e., "to do task X, here are the 47 steps from start 
to finish").

But, I got disk backups running (in a test environment) and my driver's 
license doesn't read "A. Einstein", so it can be done.  :-)

