Subject: Re: [Networker] Netapp NDMP backup setup
From: Joel Fisher <jfisher AT WFUBMC DOT EDU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 4 Dec 2008 11:29:20 -0500
We are 100% qtrees, so I can take a list from qtree status and verify
against it.  I'm in the process of writing a script to do that now; a rough
sketch of the idea follows.
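
Something like the following Python sketch is what I have in mind.  It
assumes the output of qtree status has been captured to qtree_status.txt
(e.g. via rsh to the filer) and that the configured saveset paths have been
dumped one per line to savesets.txt; both file names, and the 7-mode qtree
status column layout, are assumptions on my part.

#!/usr/bin/env python
# Sketch: flag qtrees that exist on the filer but are missing from the
# configured NetWorker savesets.  The input file names are illustrative.

def filer_qtrees(status_file):
    """Parse 'qtree status' output into /vol/<volume>/<qtree> paths."""
    paths = set()
    with open(status_file) as f:
        for line in f:
            fields = line.split()
            # Qtree rows carry five columns: Volume, Tree, Style, Oplocks,
            # Status.  Volume-root rows lack the Tree column (four fields);
            # the header and dashed separator rows are filtered out below.
            # Qtree names containing spaces would need smarter parsing.
            if (len(fields) == 5 and fields[0] != "Volume"
                    and not fields[0].startswith("-")):
                paths.add("/vol/%s/%s" % (fields[0], fields[1]))
    return paths

def configured_savesets(saveset_file):
    """Read the configured saveset paths, one per line."""
    with open(saveset_file) as f:
        return set(line.strip() for line in f if line.strip())

if __name__ == "__main__":
    missing = filer_qtrees("qtree_status.txt") - configured_savesets("savesets.txt")
    for path in sorted(missing):
        print("NOT BACKED UP: %s" % path)

Anything it prints is a qtree with no saveset behind it, so run from cron it
becomes a daily warning about exactly the gap described in my original
message below.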

About the overcomplication part: I don't see a less complicated way to do it
that achieves the same result.  A client or group can have an associated
schedule, but as far as I know a saveset inherits its schedule from the
client or group it is assigned to, so I can't have one or two client entries
that split the savesets across the whole week... Of course, ignorance on my
part is an option, so I'm open to learning if you can tell me how to do it.

Thanks!

Joel

From: Fazil.Saiyed AT anixter DOT com [mailto:Fazil.Saiyed AT anixter DOT com] 
Sent: Thursday, December 04, 2008 10:49 AM
To: EMC NetWorker discussion; Joel Fisher
Cc: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: Netapp NDMP backup setup

Hello, 
If your volumes are not large, then just avoid qtree backups; if they are,
then you need to maintain and review your savesets, perhaps by dumping the
qtree info from the filer and comparing it against your configured
clients/savesets.
I only do backups at the qtree level, but our environment does not have
hundreds of qtrees.
You may be able to write a perl script to list the qtrees and compare them
against the configured savesets.  I do feel that you are overcomplicating
your environment by defining a client for each day; at most, configure two
clients for each filer and control the backups via a central schedule,
dividing up the qtrees to match your backup schedule, as in the sketch
below.
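
A minimal Python stand-in for that perl idea, covering just the dividing
step: a stable checksum assigns each qtree path to one of the clients, so a
qtree stays in the same bucket from run to run and only newly created qtrees
ever change the configuration.  The two-way split and the example paths are
illustrative.

#!/usr/bin/env python
# Sketch: split qtree paths into a fixed number of groups, one per
# NetWorker client resource / schedule.  The two-way split is illustrative.

import zlib

def split_qtrees(paths, ngroups=2):
    """Assign each /vol/<volume>/<qtree> path to one of ngroups buckets."""
    groups = [[] for _ in range(ngroups)]
    for path in sorted(paths):
        # crc32 is stable across runs, unlike Python's built-in hash().
        groups[zlib.crc32(path.encode("ascii")) % ngroups].append(path)
    return groups

if __name__ == "__main__":
    qtrees = ["/vol/vol0/qtree0", "/vol/vol0/qtree1", "/vol/vol1/qtree0"]
    for i, members in enumerate(split_qtrees(qtrees)):
        # Each bucket becomes the saveset list of one client resource.
        print("client %d savesets: %s" % (i, ", ".join(members)))
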
HTH 



Joel Fisher <jfisher AT WFUBMC DOT EDU>
Sent by: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>
Date: 12/04/2008 08:50 AM
Please respond to: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>; Joel Fisher <jfisher AT WFUBMC DOT EDU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Netapp NDMP backup setup

Hey Guys,



We have a FAS3040c configured as below.



Aggr0
               Vol0
                               Qtree0
                               Qtree1
                               Qtree2
                               ...
               Vol1
                               Qtree0
                               Qtree1
                               ...
...



Backups are working just fine, but I would like to set them up in a safer
way.  Currently I have a client entry for each day of the week, and each
client has some subset of the qtrees defined in it, so they don't all run
fulls on the same day.  I don't do it at the volume level, because our
volumes are very large.



I'd like to do something like an All saveset on each one, then skip certain
qtrees on each client for a given day.  Unfortunately, All resolves to a
list of volumes, not qtrees.



I'm just wondering if anyone has been able to set up qtree-level backups in
a way that prevents data loss in the event that a qtree is added but
mistakenly left off the backup list.



Thanks!


Joel



To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
