Subject: Re: [Networker] Very large filesystems backup
From: Bruce Breidall <Bruce.Breidall AT CONCUR DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 10 Nov 2008 13:05:30 -0800
I kind of replied with my sentences out of order, but hopefully you
understand what I was saying...

-----Original Message-----
From: Bruce Breidall 
Sent: Monday, November 10, 2008 3:02 PM
To: 'EMC NetWorker discussion'; 'Jonathan Loran'
Subject: RE: [Networker] Very large filesystems backup

I think you are only limited by the space you have in your catalog to
hold the indexes.

I have one tree that is over 90 million files. I have a very similar
situation, in that these large file systems are closed and static now,
so I only have to run one backup, with a really long retention. I also
have a clone copy for DR.

So, for example, the index directory for that server is approx. 18 GB.

Hope that is useful.

-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of Jonathan Loran
Sent: Monday, November 10, 2008 2:53 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] Very large filesystems backup

Hi List,

This discussion is of some relevance to my current backup plans.  We 
have a number of large (huge) data stores that I want to back up very 
infrequently for disaster recovery.  These data stores are basically 
write-once, read-many repositories, and in theory all of the data within
them can be recreated from secondary/tertiary data sources.  I'm 
expecting the full backups to take several days to run, but that is OK, 
since we will only do fulls a few times a year at most. 

Does anyone know the current limits on how large a file system, or how 
many files, NetWorker can back up?  We are using 7.4.1, upgrading to 
the latest version when we start in a couple of months.  The largest 
file system right now is holding 30 TB, with just under 30 million files 
(and growing).  If we will run into a size/file-count limit, I need to 
make other plans.

Thanks,

Jon

Matthew Huff wrote:
> 100 savesets.
>
> Having 100 savesets is the least of your problems if you have 21
> million files to restore :)
>
>
> From: Fazil.Saiyed AT anixter DOT com [mailto:Fazil.Saiyed AT anixter DOT com]
> Sent: Monday, November 10, 2008 12:07 PM
> To: EMC NetWorker discussion; Matthew Huff
> Cc: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: Very large filesystems backup
>
>
> What is the implication of such an approach during restore: do you have
> 100 savesets to restore, or does it still show up as one saveset?
> Thanks
>
> Matthew Huff <mhuff AT OX DOT COM>
> Sent by: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>
> 11/10/2008 09:19 AM
> Please respond to: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>; Matthew Huff <mhuff AT OX DOT COM>
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: Very large filesystems backup
>
> I'd recommend a solution based on option #2. Use Perl or another
> scripting language to create a mountpoint/directory list as the saveset
> list, and pipe it into nsradmin in a job that runs before backups. This
> gives you the advantage of something like a saveset of All, so that you
> won't miss new structures that get created, and yet it will still
> create parallel saveset streams.
>
>
>
> -----Original Message-----
> From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
On Behalf Of Oscar Olsson
> Sent: Monday, November 10, 2008 10:13 AM
> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> Subject: Re: [Networker] Very large filesystems backup
>
> On 2008-11-10 15:54, Browning, David revealed:
>
> BD> Just curious as to what everyone else does out there for their VERY
> BD> large filesystem backups.
> BD>
> BD> Our document imaging file server has gradually been increasing in
> BD> size over the past year, and is now up to 21+ million files.  Data
> BD> size is under 1 TB, so size isn't an issue; it's simply the 21
> BD> million files - it takes 48 hours to back up.
> BD>
> BD> We have a couple of other file servers that are large (3-5 million
> BD> files), but nothing this size.
> BD>
> BD> Are people using some kind of snapshot system, or something else?
>
> Well, in essence, there is really no good way to handle this with
> NetWorker. We have used two approaches in the past:
>
> 1. Take a snapshot of the filesystem using savepnpc. This requires some
> scripting, and it's hard to manage errors and ensure that the correct
> data gets backed up in case of hidden failures.
> 2. Specify several directories. This approach has the drawback that the
> "All" saveset can't be used, which can create trouble later when
> paths/file systems get added/removed/changed, since data in new paths
> doesn't get backed up.
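For reference, approach #1 typically pairs snapshot scripts with a savepnpc resource file for the group. The exact keywords and file location vary by NetWorker release, and the script paths below are hypothetical, but a /nsr/res/<group>.res file along these lines drives the pre/post commands:

```
type: savepnpc;
precmd: "/usr/local/sbin/make_snapshot.sh";
pstcmd: "/usr/local/sbin/drop_snapshot.sh";
timeout: "12:00pm";
```

Here precmd creates and mounts the snapshot before the save runs, pstcmd tears it down afterwards, and timeout bounds how long savepnpc waits. A failure inside the snapshot scripts still needs the kind of explicit error handling Oscar describes.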
>
> Or, there's our current approach... We recently migrated from
> NetWorker to CommVault Simpana, and in essence the difference is of the
> magnitude between the DDR and West Germany in 1985 (in CommVault's
> favour). CommVault allows you to use several data readers on the same
> mount point, which really speeds things up for larger file systems.
> Beyond that, there are a million other reasons to switch, but that's
> outside the scope of this list.
>
> //Oscar - has seen the light.
>
> To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
> via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

-- 


-     _____/     _____/      /           - Jonathan Loran -           -
-    /          /           /                IT Manager               -
-  _____  /   _____  /     /     Space Sciences Laboratory, UC Berkeley
-        /          /     /      (510) 643-5146 jloran AT ssl.berkeley DOT edu
- ______/    ______/    ______/           AST:7731^29u18e3
                                 

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
