Subject: Re: [Veritas-bu] NetBackup protecting 30TB - 60TB NetApp over NDMP.
From: Raymond Wong <Raymond.Wong AT efi DOT com>
To: "Staub, Doug" <rstaub AT amgen DOT com>, "Tharp, Trey" <Trey.Tharp AT allstate DOT com>, Jim Hall <james_f_hall AT yahoo DOT com>, "veritas-bu AT mailman.eng.auburn DOT edu" <veritas-bu AT mailman.eng.auburn DOT edu>
Date: Wed, 23 Apr 2008 10:22:23 -0700
Another problem I run into with NDMP backups is restores.

You cannot restore a single folder without the restore job reading through the
entire backup image.
For example, I back up my filers by running a backup stream for each volume.
So if the volume is 2TB and I try to restore a single 20MB folder in this
volume, the restore job needs to read through the entire 2TB backup image
before it can successfully recover that 20MB folder.
This means a 20MB restore job can take days to complete.

I think this problem is resolved in NBU 6.5, but I haven't tried it yet.
I'm currently running NBU 6.0 MP6.
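
A quick back-of-the-envelope sketch (Python; the effective read rate is an
assumed figure, not a measurement) shows why the image scan, not the 20MB
payload, dictates the restore time:

    # Single-folder restore when NDMP must scan the whole backup image
    # sequentially (no Direct Access Recovery). Rates are assumptions.
    TB = 1024**4
    MB = 1024**2

    image_size = 2 * TB        # backup image for the whole volume
    scan_rate = 30 * MB        # assumed effective NDMP read rate, B/s

    scan_hours = image_size / scan_rate / 3600
    print(f"Full-image scan at 30 MB/s: ~{scan_hours:.0f} h")  # ~19 h

At lower effective rates (shared drives, a busy filer) that scan stretches
into the multi-day range described above. The 6.5 fix is, as far as I know,
NDMP Direct Access Recovery (DAR), which seeks straight to the file's offset
on tape instead of scanning the whole image.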



-----Original Message-----
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu 
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Staub, 
Doug
Sent: Monday, April 21, 2008 8:18 AM
To: Tharp, Trey; Jim Hall; veritas-bu AT mailman.eng.auburn DOT edu
Subject: Re: [Veritas-bu] NetBackup protecting 30TB - 60TB NetApp over NDMP.

NDMP is not "horrible", but I would agree that SnapMirror is the best
alternative at these sizes.  We have 2 Gb FC connections to VTLs enabled with
NDMP drives for filers, and we have seen 3 TB volumes back up in hours via NDMP
compared to 4-5 days via CIFS...now that is a horrible protocol...

The one caveat with NDMP is that you are limited to 16 or so concurrent
sessions (ONTAP version specific) and it can severely impact the filer (which
is why Trey's suggestion of SnapMirror is a good one: you won't degrade your
source filer trying to back it up).
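
For a sense of scale, here is the throughput implied by those numbers
(Python; the window lengths are assumptions, since "hours" and "4-5 days"
are approximate):

    # Effective throughput implied by 3 TB via NDMP vs. via CIFS.
    TB = 1024**4
    MB = 1024**2

    volume = 3 * TB
    ndmp_secs = 8 * 3600         # assume "hours" means an 8-hour window
    cifs_secs = 4.5 * 24 * 3600  # midpoint of 4-5 days

    print(f"NDMP: ~{volume / ndmp_secs / MB:.0f} MB/s")  # ~109 MB/s
    print(f"CIFS: ~{volume / cifs_secs / MB:.0f} MB/s")  # ~8 MB/s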

-Doug
-----Original Message-----
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu 
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Tharp, 
Trey
Sent: Monday, April 21, 2008 6:19 AM
To: Jim Hall; veritas-bu AT mailman.eng.auburn DOT edu
Subject: Re: [Veritas-bu] NetBackup protecting 30TB - 60TB NetApp over NDMP.

Short answer = SnapMirror, then back up that mirrored filer for days/weeks
if you need to. Once your initial mirror is done, it's only the
block-level changes from that point. Also, if you are using NDMP direct
backups, where the filer has tape connectivity, then synthetic backups
are not available.
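
To put numbers on those incremental updates (Python; the usable GbE rate is
an assumption, and the change rate is derived from Jim's 10%-per-week
figure below):

    # Daily SnapMirror update after the baseline: only changed blocks move.
    TB = 1024**4
    MB = 1024**2

    source = 30 * TB
    daily_change = 0.015       # ~10% weekly change spread over 7 days
    gbe_rate = 100 * MB        # assumed usable rate on a GbE link

    delta = source * daily_change
    hours = delta / gbe_rate / 3600
    print(f"Daily delta: {delta / TB:.2f} TB, ~{hours:.1f} h")  # ~1.3 h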

NDMP is a horrible protocol, and once your filers start to grow to these
sizes and beyond, backups become difficult at best. SnapMirror works
great, and once you get the data to that cheaper-disk destination filer,
you can spin it to tape.

-Trey

-----Original Message-----
From: veritas-bu-bounces AT mailman.eng.auburn DOT edu
[mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Jim Hall
Sent: Friday, April 18, 2008 8:38 AM
To: veritas-bu AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] NetBackup protecting 30TB - 60TB NetApp over NDMP.

I am a "newb" to the list. I have some experience with NetBackup 5.1
MP4.

Anyway, I have an interesting problem. I have a NetApp GX system that
needs to be protected. It looks like the best method would be to back up
the unit using NDMP. The problem we are running into, theoretically at
this moment as we design the backup system, is the limitation of the
cluster interconnect and the ability to move up to 30TB, and in time
60TB, of data across a 2 Gb FC link to a tape storage library. Doing the
math, we would never be able to use the NetApp for what it was designed
for, as the cluster interconnects would be continuously saturated. The
reason for this (and this is something I inherited) is that even though
there are 4 heads, 1 head is the owner of a metadata volume that pretty
much encompasses the entire unit, so 75% of the data has to come across
the cluster interconnect (which is limited to 2 Gb).
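
A rough sketch of that math (Python; the usable link rate is an assumption):

    # 75% of 30 TB forced across a 2 Gb/s cluster interconnect.
    TB = 1024**4
    MB = 1024**2

    data = 0.75 * 30 * TB
    link_rate = 200 * MB       # assume ~200 MB/s usable out of 2 Gb/s

    hours = data / link_rate / 3600
    print(f"~{hours:.0f} h ({hours / 24:.1f} days) at full line rate")

That is roughly 33 hours with the interconnect fully saturated the entire
time, which is exactly the usability problem.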

We could re-arch the FS to balance across all 4 head nodes, but we still
have a 2 Gb limitation per node, as that is the fastest FC card available
for these guys.

My idea is to implement some kind of synthetic full strategy. That is,
move as much data as possible for an initial full to tape (we have a
two-week outage coming up in a couple of months), then create a disk stage
where we can store incrementals. As long as the daily change rate allows
us to move the incrementals in, say, 8hrs or so (across GbE or FC), I
think we would be fine. The question I have for everyone is: how long
should I expect it to take for 4 LTO4 drives, in combination with
incrementals on, say, a Thumper, to generate a weekly full to tape (let's
be harsh and say we have a 10% change rate throughout the week)? I want
to start at 30TB today and scale to 60TB over the next 18-24 months.
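
Here is one way to ballpark it (Python; the native LTO4 rate is an
assumption, and this presumes the disk stage can feed all four drives at
once, with the synthetic built on the media server rather than via direct
NDMP, per Trey's caveat):

    # Weekly synthetic full: write a new 30 TB image across 4 LTO4 drives.
    TB = 1024**4
    MB = 1024**2

    full = 30 * TB
    drives = 4
    lto4_native = 120 * MB     # assumed native LTO4 streaming rate

    incr = full * 0.10         # weekly incrementals on the disk stage
    print(f"Weekly incrementals at 10%: {incr / TB:.0f} TB")

    write_hours = full / (drives * lto4_native) / 3600
    print(f"30 TB full: ~{write_hours:.0f} h; "
          f"60 TB: ~{2 * write_hours:.0f} h")  # ~18 h / ~36 h

Compression, multiplexing, and the read side of the synthetic all move that
number, but it suggests a weekly full is at least feasible inside a weekend
window, at 60TB as well as 30TB.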

Anyone have similar experiences?

What I am looking at is possibly using a x4500 as a combined media
server and disk storage unit.

Thanks,

Jim


_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu