Subject: Re: [Networker] Very large filesystems backup
From: Fazil Saiyed <Fazil.Saiyed AT ANIXTER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 10 Nov 2008 11:05:32 -0600
Hello,
Besides the SnapImage module and the creative ideas already offered, look into
moving the data to an NDMP NAS and backing it up at block level. That should
shorten backup times, and you can use snapshots resident on the NAS for
near-instant restores. Budget for additional disk space to house the
snapshots, which can take up to 100% of the volume's size in a fast-changing
environment, and put some administrative scripts in place to handle the
out-of-space conditions that can result; backups may also fail if there is no
free space on the volume, but with careful planning that should be a rare
occurrence.
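
As a rough illustration of the kind of administrative script I mean, here is a
small Python sketch (only a sketch: the filer hostname, volume name, threshold
and the exact df / snap list output parsing are assumptions you would adjust
for your own filer):

#!/usr/bin/env python
# Hypothetical sketch: trim the oldest snapshots when a volume's snapshot
# reserve runs low, so backups don't die on an out-of-space condition.
# Hostname, volume, threshold and the command-output parsing are assumptions.
import subprocess

FILER = "filer1.example.com"      # hypothetical filer hostname
VOLUME = "docimg"                 # hypothetical volume name
MAX_USED_PCT = 90                 # act once the snap reserve is this full

def ssh(cmd):
    """Run a command on the filer over SSH and return its stdout."""
    return subprocess.check_output(["ssh", FILER, cmd], text=True)

def snap_reserve_used_pct():
    # 'df' on the filer prints the volume plus a .snapshot line; the column
    # layout (used% assumed in the fifth field) may differ per release.
    for line in ssh("df " + VOLUME).splitlines():
        if ".snapshot" in line:
            return int(line.split()[4].rstrip("%"))
    raise RuntimeError("no snapshot line found for volume " + VOLUME)

def oldest_snapshots():
    # 'snap list' prints newest first on the systems we used; data lines
    # contain "(..%)" and end with the snapshot name. Reverse for oldest-first.
    names = [l.split()[-1] for l in ssh("snap list " + VOLUME).splitlines()
             if "(" in l]
    return list(reversed(names))

if __name__ == "__main__":
    while snap_reserve_used_pct() > MAX_USED_PCT:
        victims = oldest_snapshots()
        if not victims:
            break
        print("deleting snapshot", victims[0])
        ssh("snap delete %s %s" % (VOLUME, victims[0]))
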
I would also plan the volumes to allow for data migration, archiving and
separation (e.g. qtrees on NetApp) to give your backup environment some
flexibility.
HTH



Oscar Olsson <spam1 AT QBRANCH DOT SE>
Sent by: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>
11/10/2008 09:13 AM

Please respond to
EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>;
Oscar Olsson <spam1 AT QBRANCH DOT SE>

To
NETWORKER AT LISTSERV.TEMPLE DOT EDU
cc

Subject
Re: Very large filesystems backup

On 2008-11-10 15:54, Browning, David revealed:

BD> Just curious as to what everyone else does out there for their VERY
BD> large filesystem backups.
BD> 
BD> Our document imaging file server has gradually been increasing in size
BD> over the past year, and is now up to 21+ million files. Data size is
BD> under 1TB, so size isn't an issue, it's simply the 21 million files - it
BD> takes 48 hours to backup.
BD> 
BD> We have a couple of other file servers that are large (3 - 5 million),
BD> but nothing this size.
BD> 
BD> Are people using some kind of snapshot system, or something else?

Well, in essence, there is really no good way to handle this with
NetWorker. We have used two approaches in the past:

1. Take a snapshot of the filesystem using savepnpc. This requires some
scripting, and it's hard to manage errors and to ensure that the correct
data really gets backed up in case of hidden failures.
2. Specify several directories explicitly. The drawback is that the "All"
saveset can't be used, which can cause trouble later when paths/file
systems are added, removed or changed, since data in new paths doesn't get
backed up (see the sketch after this list).
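
A small check along these lines helps catch that. Rough Python sketch only:
the mount point and the saveset list are made-up values, and you would feed
the list from your own client resource (e.g. a copy of its saveset attribute
kept under version control):

#!/usr/bin/env python
# Hypothetical sketch: warn when a top-level directory exists on disk but is
# not covered by any explicitly configured saveset path. MOUNT_POINT and
# SAVESETS are assumptions, not anything read out of NetWorker itself.
import os
import sys

MOUNT_POINT = "/export/docimg"            # hypothetical file server mount
SAVESETS = [                              # paths currently in the client's saveset
    "/export/docimg/2007",
    "/export/docimg/2008",
    "/export/docimg/index",
]

def covered(path, savesets):
    """A path is covered if it equals or lives under a configured saveset."""
    return any(os.path.commonpath([path, s]) == s for s in savesets)

def main():
    missing = [os.path.join(MOUNT_POINT, d)
               for d in sorted(os.listdir(MOUNT_POINT))
               if os.path.isdir(os.path.join(MOUNT_POINT, d))
               and not covered(os.path.join(MOUNT_POINT, d), SAVESETS)]
    if missing:
        print("WARNING: directories not covered by any saveset:")
        for path in missing:
            print("  " + path)
        sys.exit(1)

if __name__ == "__main__":
    main()
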

Or, there's our current approach... We recently migrated from NetWorker to
Commvault Simpana, and in essence the difference is of the magnitude
between the DDR and West Germany in 1985 (in Commvault's favour). Commvault
allows you to use several data readers on the same mount point, which
really speeds things up for large file systems. Beyond that, there are
about a million other reasons to switch, but those are outside the scope
of this list.
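
Not Commvault code of course, but a tiny Python illustration of why several
readers on one mount point matter for a tree like that: most of the time goes
into crawling metadata, and walking independent subtrees concurrently overlaps
that work (the mount point and reader count here are made up):

#!/usr/bin/env python
# Illustration only: scan one dense filesystem with several concurrent
# "readers", one per top-level subtree, the way a multi-stream backup would.
import os
from concurrent.futures import ThreadPoolExecutor

MOUNT_POINT = "/export/docimg"   # hypothetical mount with millions of files
READERS = 4                      # number of concurrent streams

def scan(subtree):
    """Walk one subtree and return how many files it contains."""
    count = 0
    for _root, _dirs, files in os.walk(subtree):
        count += len(files)
    return count

if __name__ == "__main__":
    subtrees = [os.path.join(MOUNT_POINT, d) for d in os.listdir(MOUNT_POINT)
                if os.path.isdir(os.path.join(MOUNT_POINT, d))]
    with ThreadPoolExecutor(max_workers=READERS) as pool:
        totals = list(pool.map(scan, subtrees))
    print("files seen:", sum(totals))

How much that buys you depends on how evenly the subtrees split up and on what
the storage underneath can sustain, of course.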

//Oscar - has seen the light.

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER