Re: [Networker] Very large filesystems backup
2008-11-10 11:01:19
We solved this problem by having two client definitions. The first
client has the save sets to be backed up listed explicitly, and its
directive is a general one appropriate for the OS and file system. The
second client definition uses the ALL save set, but with a custom
directive that skips all of the save sets listed on the first client.
This way the second client catches any new save sets that were not
specified. It handles communication gaps between the system admins and
the backup admins, creates parallelism for backups, and generally
covers one's backside. The worst thing that can happen is that
something gets backed up that you don't want.
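[The skip directive on the second client could look roughly like the
sketch below. This assumes NetWorker's standard directive syntax, where
a << path >> header scopes the directives that follow it; the save set
paths (/export/data1, /export/data2, /opt/imaging) are hypothetical
stand-ins for whatever the first client backs up explicitly.]

```
<< /export >>
skip: data1 data2

<< /opt >>
skip: imaging
```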
Patti Clark
DOE/OSTI
> -----Original Message-----
> From: EMC NetWorker discussion
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Matthew Huff
> Sent: Monday, November 10, 2008 10:19 AM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Very large filesystems backup
>
> I'd recommend a solution based on option #2. Use Perl or another
> scripting language to create a mountpoint/directory list as the save
> set list and pipe it into nsradmin in a job that runs before backups.
> This gives you the advantage of something like an ALL save set, so
> you won't miss new structures that get created, and yet it will still
> create parallel save set streams.
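[A rough sketch of that idea: enumerate the top-level directories under
the mount point and emit an nsradmin batch update for the client's save
set list. The client name, mount point, and exact nsradmin attribute
spelling here are illustrative; verify them against your own server
before relying on this.]

```python
import os

def build_nsradmin_input(client, mount_root):
    """Return nsradmin batch commands that set the client's save set
    list to the top-level directories under mount_root."""
    # Collect immediate subdirectories only; each becomes its own
    # save set, which is what gives NetWorker parallel streams.
    subdirs = sorted(
        os.path.join(mount_root, name)
        for name in os.listdir(mount_root)
        if os.path.isdir(os.path.join(mount_root, name))
    )
    savesets = ", ".join('"%s"' % d for d in subdirs)
    return (
        ". type: NSR client; name: %s\n" % client
        + "update save set: %s\n" % savesets
    )

if __name__ == "__main__":
    # Typically run from cron shortly before the backup window, e.g.:
    #   python gen_savesets.py > /tmp/savesets.nsr
    #   nsradmin -s backupserver -i /tmp/savesets.nsr
    print(build_nsradmin_input("bigfs.example.com", "/export/images"))
```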
>
>
>
> -----Original Message-----
> From: EMC NetWorker discussion
> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Oscar Olsson
> Sent: Monday, November 10, 2008 10:13 AM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Very large filesystems backup
>
> On 2008-11-10 15:54, Browning, David revealed:
>
> BD> Just curious as to what everyone else does out there for their
> BD> VERY large filesystem backups.
> BD>
> BD> Our document imaging file server has gradually been increasing
> BD> in size over the past year, and is now up to 21+ million files.
> BD> Data size is under 1TB, so size isn't an issue, it's simply the
> BD> 21 million files - it takes 48 hours to back up.
> BD>
> BD> We have a couple of other file servers that are large (3 - 5
> BD> million files), but nothing this size.
> BD>
> BD> Are people using some kind of snapshot system, or something else?
>
> Well, in essence, there is really no good way to handle this with
> NetWorker. We have used two approaches in the past:
>
> 1. Take a snapshot of the filesystem using savepnpc. This requires
> some scripting, and it's hard to manage errors and ensure that the
> correct data gets backed up in case of hidden failures.
> 2. Specify several directories. This approach has the drawback that
> the "All" save set can't be used, which can create trouble later
> when paths/file systems get added/removed/changed, since data in new
> paths doesn't get backed up.
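[For reference, approach 1 usually hangs off a savepnpc resource file
named after the backup group, roughly /nsr/res/<group>.res. A minimal
sketch follows; the snapshot script paths are hypothetical, and the
exact attribute names should be checked against your NetWorker
version's savepnpc documentation.]

```
type: savepnpc;
precmd: "/usr/local/sbin/create_snapshot.sh";
pstcmd: "/usr/local/sbin/remove_snapshot.sh";
timeout: "12:00pm";
abort precmd with group: No;
```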
>
> Or, there's our current approach... We recently migrated from
> NetWorker to CommVault Simpana, and in essence the difference is of
> the magnitude between the DDR and West Germany in 1985 (in
> CommVault's favour). CommVault allows you to use several data
> readers in the same mount point, which really makes things faster
> for larger file systems. Beyond that, there are a million other
> reasons to switch, but that's outside the scope of this list.
>
> //Oscar - has seen the light.
>
> To sign off this list, send email to
> listserv AT listserv.temple DOT edu and type "signoff networker" in
> the body of the email. Please write to
> networker-request AT listserv.temple DOT edu if you have any
> problems with this list. You can access the archives at
> http://listserv.temple.edu/archives/networker.html or
> via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
>