Networker

Subject: Re: [Networker] backup scheme
From: Matt Temple <mht AT RESEARCH.DFCI.HARVARD DOT EDU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 27 Sep 2006 10:12:28 -0400

Bill,

If you /really/ never delete data, you might consider mirroring to a live
server.  One thing, though: you haven't indicated how much data is being
backed up, from how many clients, or to what sort of library.  In short,
are there logistical reasons why you don't perform full backups more
frequently?

We have about 30 TB of data and we use a scheme similar to yours, except
we perform full backups quarterly.  But we also mirror our servers with
rsync on a frequent basis, so there is always an on-line, current copy of
our assets.  One of the biggest forks in the road when choosing a backup
method is whether you need to be able to retrieve the history of your site
or just its current state.
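
For what it's worth, the mirroring side can be as simple as a cron job that
drives rsync.  Here is a minimal sketch of the idea; the host names and
paths are invented for illustration, and this is not meant as our exact
setup:

#!/usr/bin/env python
# Minimal rsync mirror pass, meant to be run from cron on the mirror host.
# Host names and paths below are made up for illustration.
import subprocess
import sys

SOURCE = "fileserver:/export/data/"   # trailing slash: copy the contents of the dir
MIRROR = "/mirror/data/"

cmd = [
    "rsync",
    "-aH",            # archive mode, preserve hard links
    "--delete",       # make the mirror track deletions on the source
    "--numeric-ids",  # keep numeric uid/gid instead of mapping names
    SOURCE,
    MIRROR,
]

status = subprocess.call(cmd)
if status != 0:
    sys.exit("rsync exited with status %d" % status)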

Saying that you "never delete data" is also a strong claim.  I assume you
mean that you never /deliberately/ delete data.  What would happen if,
after running your site for 5 years, someone accidentally deleted data
that is four years old?  In your current regime, you would have to go back
to a tape that is a year old.  Now, to be sure, I've used 5-year-old tapes
to help people prove that DNA sequencing was completed by a particular
date.  But maybe you don't need that sort of thing.
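
To make the dependency concrete: under a yearly full, monthly level 1,
weekly level 5, nightly level 9 scheme, any point-in-time recovery walks
back through one save set of each level, and everything ultimately hangs
off that single yearly full.  Here is a back-of-the-envelope model of that
chain.  It is not NetWorker, just an illustration of the level logic, and
the schedule dates are assumptions:

#!/usr/bin/env python
# Toy model of level-based restore dependencies (not NetWorker itself).
# Scheme assumed: full on Jan 1, level 1 on the 1st of each month,
# level 5 on Sundays, level 9 every other night.
from datetime import date, timedelta

def backups_up_to(target, start=date(2006, 1, 1)):
    """Yield (day, level) for every backup taken from 'start' through 'target'."""
    day = start
    while day <= target:
        if day.month == 1 and day.day == 1:
            yield day, "full"
        elif day.day == 1:
            yield day, "1"
        elif day.weekday() == 6:
            yield day, "5"
        else:
            yield day, "9"
        day += timedelta(days=1)

def restore_chain(target):
    """Latest full, then the latest 1, 5 and 9 taken after it (all <= target)."""
    chain, floor = [], None
    for level in ("full", "1", "5", "9"):
        candidates = [d for d, lvl in backups_up_to(target)
                      if lvl == level and (floor is None or d > floor)]
        if candidates:
            floor = max(candidates)
            chain.append((floor, level))
    return chain

if __name__ == "__main__":
    # Recovering to Dec 20 still depends on the full taken back on Jan 1.
    for day, level in restore_chain(date(2006, 12, 20)):
        print("%s  level %s" % (day, level))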

By the way, even with rsync you have to make decisions about how far back
you want to be able to go.  Much of the response from this group comes
from hard experience.
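
If you do want some history out of rsync itself, the usual trick is dated,
hard-linked snapshots via --link-dest plus a pruning policy, and that
pruning policy is exactly where the "how far back" decision shows up.
Again, only a sketch with invented paths:

#!/usr/bin/env python
# Sketch of dated, hard-linked snapshots with rsync --link-dest.
# Unchanged files become hard links into the previous snapshot, so each
# extra day of history costs only the changed data.  Paths are invented.
import os
import subprocess
import time

SOURCE   = "fileserver:/export/data/"
SNAPROOT = "/mirror/snapshots"

dest   = os.path.join(SNAPROOT, time.strftime("%Y-%m-%d"))
latest = os.path.join(SNAPROOT, "latest")

cmd = ["rsync", "-aH", "--delete"]
if os.path.isdir(latest):
    cmd.append("--link-dest=" + latest)   # hard-link files unchanged since the last run
cmd += [SOURCE, dest]

if subprocess.call(cmd) == 0:
    if os.path.islink(latest):
        os.remove(latest)
    os.symlink(dest, latest)              # repoint "latest" at the new snapshot

# Pruning (how many dated directories you keep under SNAPROOT) is the
# retention decision; anything older than N days can simply be removed.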

                                                                Matt Temple

William M. Fennell wrote:
> Hello.  This scheme may sound risky, but the alternative is outsourcing
> to a group that takes one full backup and then does incremental backups
> every day; they never do another full.  Maybe I could do a full every
> six months.  Our particular issue is that we never delete data; it just
> stays on the network in case it is ever needed.  That means we
> continually back up chunks of data that never change once added to the
> network.
>
> Bill
>
>
> Conrad Macina wrote:
>
>> Davina is right. You could mitigate the risk somewhat by cloning
>> the annual full, but a year is an awfully long time for tapes to
>> be lost, damaged or just "go bad".
>>
>> Conrad Macina Pfizer, Inc.
>>
>>
>> On Wed, 27 Sep 2006 11:56:54 +0100, Davina Treiber
>> <DavinaTreiber AT PEEVRO.CO DOT UK> wrote:
>>
>>> William M. Fennell wrote:
>>>
>>>> Hi,
>>>>
>>>> We're thinking of doing full backups once yearly, level 1
>>>> monthly, level 5 weekly and level 9 nightly. Are there any
>>>> Networker gotchas that would present a problem with this
>>>> scheme?
>>>>
>>> From a NetWorker point of view, your scheme would work. You
>>> might also want to consider assigning different retention
>>> periods to these level backups, and putting them in pools that
>>> correspond to the different retention periods. Suddenly your
>>> config has got a whole lot more complicated.
>>>
>>>
>>> From a practical point of view, I would NEVER do this. If I were
>>> your boss I would fire you for this.  ;-) You would be placing
>>> a huge reliance on one full backup. If a tape goes bad from
>>> your full backup you have lost the capability to restore all
>>> your backups for up to a year. I get jittery about having full
>>> backups less frequently than weekly, but on problem clients
>>> (such as perhaps those backing up over slow WAN links) I might
>>> consider a schedule with a monthly full and a weekly level 5 or
>>> similar. I would never risk anything less than a monthly full.
>>> Most companies' data is far too precious.
>>>
>>>
>>> Good luck - you might be needing it.


--
=============================================================
Matthew Temple                Tel:    617/632-2597
Director, Research Computing  Fax:    617/582-7820
Dana-Farber Cancer Institute  mht AT research.dfci.harvard DOT edu
44 Binney Street,  LG300/300  http://research.dfci.harvard.edu
Boston, MA 02115              Choice is the Choice!

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with
this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
