Subject: Re: Backup of large fileservers
From: Orville Lantto <orville.lantto AT GLASSHOUSE DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 21 Jun 2006 08:36:02 -0400
At this location there are no general monthly backups for the NAS data.
Anything with a retention requirement over 35 days is treated as an
archive and is archived selectively.
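
For what it's worth, a minimal sketch of that routing rule. It assumes
the standard TSM backup/archive client command "dsmc archive"; the path,
retention figure and description are hypothetical:

    # Sketch: route a protection request by its required retention.
    # Assumes the TSM B/A client (dsmc) is on the PATH; the example
    # path and description are hypothetical.
    import subprocess

    SNAPSHOT_WINDOW_DAYS = 35   # daily SnapShots already cover this window

    def protect(path, retention_days):
        if retention_days <= SNAPSHOT_WINDOW_DAYS:
            # Short retention: the filer's SnapShots cover it.
            print("%s: restore from SnapShot, no TSM action" % path)
        else:
            # Longer retention: do a selective TSM archive of the tree.
            subprocess.run(
                ["dsmc", "archive", path + "/*", "-subdir=yes",
                 "-description=retention %d days" % retention_days],
                check=True)

    protect("/nas/proj/finance", retention_days=365)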
 
Orville L. Lantto
Glasshouse Technologies, Inc.
 

________________________________

From: ADSM: Dist Stor Manager on behalf of Henrik Wahlstedt
Sent: Wed 6/21/2006 02:33
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] Backup of large fileservers




Hi Orville,

Thanks for the answer. I have one additional question: don't you do
monthly backups?


//Henrik


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Orville Lantto
Sent: 20 June 2006 16:33
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Backup of large fileservers

The slickest way to handle this problem is to use NetApp SnapShots for
your backups.  For offsite, use a remote filer and SnapMirror. 

We manage 39 TB of data on NetApp here.  We use a local cluster and a
remote cluster.  The retention is hourly SnapShots for 24 hours and
daily SnapShots for 35 days.  SnapMirror runs asynchronously at least
once an hour over Gigabit Ethernet to the remote site.  We run NDMP backups
at the remote site "just in case", but meet our RPO and RTO requirements
with the SnapShots.
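
As a rough model of that retention rule (the filers enforce it natively
through their SnapShot schedules; this sketch just spells the policy
out, and treating midnight as the "daily" copy is my assumption):

    # Sketch of the retention policy: hourly SnapShots for 24 hours,
    # daily SnapShots for 35 days. Which hourly copy counts as the
    # daily one (midnight here) is an assumption.
    from datetime import datetime, timedelta

    def keep(snap_time, now):
        age = now - snap_time
        if age <= timedelta(hours=24):
            return True                  # every hourly copy for a day
        if age <= timedelta(days=35):
            return snap_time.hour == 0   # only the daily copy after that
        return False                     # older than 35 days: expired

    now = datetime(2006, 6, 21, 8, 0)
    print(keep(datetime(2006, 6, 20, 15, 0), now))   # True  (hourly window)
    print(keep(datetime(2006, 5, 25, 0, 0), now))    # True  (daily window)
    print(keep(datetime(2006, 5, 1, 0, 0), now))     # False (expired)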

SnapShots take TSM out of the picture for these backups and have erased
our former biggest headache, backing up millions of small files.


Orville L. Lantto
Glasshouse Technologies, Inc.


________________________________

From: ADSM: Dist Stor Manager on behalf of Henrik Wahlstedt
Sent: Tue 6/20/2006 02:30
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Backup of large fileservers



Hi,

I have some questions regarding backing up large fileservers.

We have 20+ filers with a total of 9 TB of home folders and 15 TB of
common folders. The problem is that when the filer admins decide to move
or split volumes, we get a new full backup... And after some time it
becomes harder to track the data down: where was it stored, etc.

Currently we back up at the qtree level, e.g. \\Filername\w0\c00 on the
NetApp. If a volume gets full, we have to split the data between
\\Filername\w0\c00 and \\Filername\w0\c01, which forces a new full backup.

However, all (common) folders have DFS links, so I could instead back up
e.g. \\Filername\w0\c00\Proj. This would give me some 10,000 filespaces,
but I wouldn't need to worry about the splits/moves.
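
To make that concrete, something along these lines could build the
per-folder target list, so a volume split only changes the qtree prefix
and not the logical list (the filer name and paths are hypothetical):

    # Sketch: enumerate the project folders under each qtree so backup
    # targets follow the DFS-link level rather than the volume layout.
    import os

    QTREES = [r"\\Filername\w0\c00", r"\\Filername\w0\c01"]

    def project_folders():
        for qtree in QTREES:
            for entry in os.scandir(qtree):
                if entry.is_dir():
                    yield entry.path     # e.g. \\Filername\w0\c00\Proj

    # Each yielded path would become its own filespace / backup target.
    for folder in project_folders():
        print(folder)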


So my question is: how do you handle backups on large filers when data
moves? Are there any best practices?


Thanks
Henrik








