Subject: Re: [Networker] RFE for cloning - your opinion, please
From: Robert Maiello <robert.maiello AT MEDEC DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Thu, 20 Mar 2003 09:43:12 -0500

Terry,

We do a lot of cloning, and I must say the cloning is not ideal.  The feature
I always wanted added to nsrclone is a simple -p option that would tell
nsrclone to read x tapes at a time and write to x drives; i.e., I'd like
Legato to do the work of parallel cloning.  I think in NetWorker 7, if one
backs up to a disk device, it can clone from the disk to several tapes at
once?
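
In the meantime, one way to fake some of that parallelism would be to split
the uncloned savesets across several nsrclone processes run in the
background.  A rough, untested ksh sketch of the idea; the pool name, the
parallelism count, and the exact mminfo/nsrclone query syntax are my
assumptions, so check them against the man pages on your release:

#!/bin/ksh
# Rough sketch only: fan the uncloned full savesets out across several
# nsrclone processes so more than one drive pair stays busy.
PARALLEL=4                          # number of clone streams to run at once
POOL="Default Clone"                # destination clone pool (assumed name)

# Full savesets that still have only one copy (i.e. not yet cloned).
mminfo -q "level=full,copies=1" -r ssid 2>/dev/null | sort -u > /tmp/ssids.$$

count=$(wc -l < /tmp/ssids.$$)
[ "$count" -eq 0 ] && exit 0        # nothing left to clone

# Split the ssid list into PARALLEL roughly equal chunks and clone each
# chunk in the background.
split -l $(( count / PARALLEL + 1 )) /tmp/ssids.$$ /tmp/chunk.$$.

for chunk in /tmp/chunk.$$.*; do
    nsrclone -b "$POOL" -S $(cat $chunk) &
done
wait                                # block until every clone stream is done
rm -f /tmp/ssids.$$ /tmp/chunk.$$.*

It wouldn't balance by tape the way a real -p option could, but it would keep
the drives busier than one serial nsrclone run.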

For your situation, I would think you need many groups and many clone pools
(one for each of your retentions) if you want to use Legato's model.  I'd
probably have several clone pools (with auto media management and the pools
sharing tapes) and scripts like yours to find the data.  I'd try to cut
down the number of retentions you have, though (at the cost of tape, I realize).

It would be really nice if one ran a group and then the cloning sent each
client to a different clone pool (based on retention).  Sounds like a pretty
complex feature to build in though...
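
Until Legato builds something like that in, the marker-group approach you
describe below could probably be scripted fairly compactly.  A rough,
untested ksh sketch (the '1_Year'..'10_Year' group and pool names come from
your description; the nsradmin/mminfo/nsrclone syntax here is from memory,
so verify it on your server before trusting it):

#!/bin/ksh
# Rough sketch only: clone last night's uncloned full savesets to a clone
# pool chosen per client, keyed off a marker savegroup named '<n>_Year'.

for period in 1_Year 2_Year 5_Year 7_Year 10_Year; do

    # Ask nsradmin which clients carry this marker group.
    cmdfile=/tmp/nsradm.$$
    print "show name"                               > $cmdfile
    print "print type: NSR client; group: $period" >> $cmdfile
    clients=$(nsradmin -i $cmdfile | sed -n 's/^ *name: \(.*\);$/\1/p')
    rm -f $cmdfile

    [ -z "$clients" ] && continue

    for client in $clients; do
        # Full savesets from this client that still have only one copy.
        ssids=$(mminfo -q "client=$client,level=full,copies=1" -r ssid 2>/dev/null)
        [ -n "$ssids" ] && nsrclone -b "$period" -S $ssids
    done
done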


Robert Maiello
Thomson Healthcare

On Wed, 19 Mar 2003 17:11:40 -0500, Terry Lemons <lemons_terry AT EMC DOT COM> 
wrote:

>Hi
>
>I'm considering submitting a Request for Enhancement (RFE) to Legato for
>NetWorker's cloning function.  Several years ago, I ended up writing a
>rather complex script to handle our group's cloning needs.  At the time (and
>now), I thought that this could (maybe should) be built into the standard
>NetWorker product.
>
>Our group did backups of ~160 servers into a single backup pool (the Default
>pool, in fact).  We used a single pool for the obvious reasons: it simplified
>our NetWorker setup and ensured the best utilization of the tape media (if
>you have multiple pools, you have a greater chance of one pool running out
>of media while another pool has plenty, etc.).
>
>Our site had a data retention strategy that dictated 5 different retention
>periods (1 year, 2 year, 5 year, 7 year, and 10 year, I think), based on the
>criticality of the data and on legal, governmental, and corporate
>requirements, etc.  So, we could not use NetWorker's automatic cloning
>mechanism, because it places all savesets in the same backup clone pool.  In
>almost all cases, the retention period for all data backed up from any one
>system was the same.
>
>Our cloning strategy was to clone all full backups, and to clone these full
>backup savesets to clone pool media that matched the retention period.
>Because of the volume of data saved and the price of media, we decided it
>wasn't cost-effective to clone all data and send it away for 10 years (the
>maximum retention period), since most of the data would expire after 1 to 2
>years.
>
>So, if system FOO's data needed to be saved for 2 years, then we would clone
>FOO's full savesets to the '2_Year' backup clone pool.
>
>Also, we didn't want to have any more machinery than necessary (tables
>outside of NetWorker, self-modifying scripts, etc.).
>
>So, here's what the script that I wrote did:
>
>*       created backup clone pools called '1_Year', '2_Year', '5_Year',
>'7_Year' and '10_Year', and labeled media into them;
>*       created savegroups with the same names as the backup clone pools
>above.  These savegroups were never executed; rather, they were just
>'markers' we used to assign a backup clone pool name to a client;
>*       assigned one of these savegroups to every backup client [because
>this is the only way I could think of to equate a backup client with a
>particular clone pool; more on that later];
>*       wrote a single ksh script that:
>        *       ran every morning at the conclusion of nightly backups;
>        *       used mminfo to determine which full backups had not yet
>been cloned, and which backup clients had created those savesets;
>        *       used nsradmin to determine which savegroup with 'Year' in
>the name was assigned to each of these backup clients;
>        *       looped: for every retention period, cloned those savesets
>belonging to the clients whose savegroup matched the 'n_Year' currently
>being processed to the corresponding backup clone pool.
>
>This ran very well for many months, until I left that job.  But creating
>this machinery and writing the script took a long time, and it was very
>difficult for anyone other than me to understand.
>
>So, here's the question:  is this a useful idea, so useful that NetWorker
>should be enhanced to do it?  Do you need to clone to different clone pools?
>Does your data have different retention periods?
>
>Comments, please!
>
>Thanks
>tl
>
>Terry Lemons
>CLARiiON Applications Integration Engineering
>        EMC²
>where information lives
>
>4400 Computer Drive, MS D239
>Westboro MA 01580
>Phone: 508 898 7312
>Email: Lemons_Terry AT emc DOT com
>

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=