Subject: Re: [Veritas-bu] NetBackup protecting 30TB - 60TB NetApp over NDMP.
From: Jim Hall <james_f_hall AT yahoo DOT com>
To: veritas-bu AT mailman.eng.auburn DOT edu
Date: Mon, 21 Apr 2008 06:55:17 -0700 (PDT)
What if I run NDMP backups to a DSU on a NetBackup media server instead?
Are synthetic backups still unavailable in that case?

As for SnapMirror, I believe that would require us to invest
more money with NetApp.

Thanks,

Jim

--- "Tharp, Trey" <Trey.Tharp AT allstate DOT com> wrote:

> Short answer = SnapMirror, then back up that mirror
> filer; the copy to tape can take days or weeks if it
> needs to. Once your initial mirror is done, it's only
> the block-level changes from that point. Also, if you
> are using NDMP direct backups, where the filer has tape
> connectivity, then synthetic backups are not available.
> 
> NDMP is a horrible protocol, and once your filers
> start to grow to these sizes and beyond, backups become
> difficult at best. SnapMirror works great, and once you
> get the data onto that cheaper-disk destination filer,
> you can spin it off to tape.
> 
> -Trey
> 
> -----Original Message-----
> From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
> [mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu]
> On Behalf Of Jim Hall
> Sent: Friday, April 18, 2008 8:38 AM
> To: veritas-bu AT mailman.eng.auburn DOT edu
> Subject: [Veritas-bu] NetBackup protecting 30TB -
> 60TB NetApp over NDMP.
> 
> I am a "newb" to the list. I have some experience
> with NetBackup 5.1
> MP4.
> 
> Anyway, I have an interesting problem. I have a NetApp
> GX system that needs to be protected. It looks like the
> best method would be to back up the unit using NDMP. The
> problem we are running into, theoretically at this moment
> as we design the backup system, is the limitation of the
> cluster interconnect and the ability to move up to 30TB,
> and in time 60TB, of data across a 2Gb FC link to a tape
> library. Doing the math, we would never be able to use
> the NetApp for what it was designed for, as the cluster
> interconnects would be continuously saturated. The reason
> for this (and this is something I inherited) is that even
> though there are four heads, one head owns a metadata
> volume that pretty much encompasses the entire unit, so
> 75% of the data has to come across the cluster
> interconnect (which is limited to 2Gb).
> 
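> (Rough back-of-envelope Python for the above; the 2Gb link
> and the ~75% of data crossing it are from the setup I just
> described, and the ~80% link efficiency is only a guess:)
> 
>     # time for ~75% of a 30TB full to cross a 2Gb/s interconnect
>     link_gbps = 2
>     usable_MBps = link_gbps * 1000 / 8 * 0.8   # ~200 MB/s at ~80% efficiency
>     data_MB = 30e6 * 0.75                      # ~22.5TB crosses the interconnect
>     print(data_MB / usable_MBps / 3600)        # roughly 31 hours with the link saturated
> 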
> We could re-architect the file system to balance across
> all four head nodes, but we would still have a 2Gb
> limitation per node, as that is the fastest FC card
> available for these systems.
> 
> My idea is to implement some kind of synthetic full
> strategy. That is, move as much data as possible for an
> initial full to tape (we have a two-week outage coming up
> in a couple of months), then create a disk stage where we
> can store incrementals. As long as the daily change rate
> allows us to move the incrementals in, say, 8 hours or so
> (across GbE or FC), I think we would be fine. The question
> I have for everyone is: how long should I expect it to
> take for four LTO-4 drives, in combination with
> incrementals on, say, a Thumper, to generate a weekly
> full to tape (let's be harsh and say we have a 10% change
> rate over the week)? I want to start at 30TB today and
> scale to 60TB over the next 18-24 months.
> 
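> (Back-of-envelope math for the tape side; LTO-4 native
> throughput is about 120 MB/s per drive, and the rest
> assumes all four drives can be kept streaming, which is
> really the question I'm asking:)
> 
>     # time to write a 30TB weekly full with four LTO-4 drives
>     lto4_MBps = 120                  # native rate, no compression assumed
>     drives = 4
>     full_MB = 30e6
>     print(full_MB / (lto4_MBps * drives) / 3600)   # ~17 hours if the drives stream
> 
> In practice I assume the synthetic full would be gated by
> how fast the disk stage can feed the drives, so I would
> treat that number as a floor rather than an estimate.
> 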
> Anyone have similar experiences?
> 
> What I am looking at is possibly using an x4500 as a
> combined media server and disk storage unit.
> 
> Thanks,
> 
> Jim
> 

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu