It's a pretty interesting question... The most clients I've seen on a single
backup server is 700. Unfortunately, I can't remember how much data that was.
But again, since it's a general question, the raw number really wouldn't
matter, because then you'd have to get into questions like the backup
schedule, how many fulls, and so on. Properly balanced, with the right
schedule and the right equipment/setup, I'd say the Legato software is very
capable of handling that and more.
The main problem I remember with backing up that many clients at a time was
that we had to raise the limit on open file descriptors at startup (it was a
Solaris box).
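For anyone hitting the same wall, a rough sketch of that kind of tweak looks
like the following. The specific values are illustrative only, not what we
actually ran with; tune them to your own client count:

```shell
# Sketch only -- exact limits depend on your OS release and workload.
# Show the current soft limit on open file descriptors for this shell:
ulimit -n

# Raise the soft limit for this shell and any daemons it starts
# (the hard limit must already permit the new value):
ulimit -n 4096

# For a permanent, system-wide change on Solaris, the equivalent
# tunables go in /etc/system (reboot required), for example:
#   set rlim_fd_cur=4096
#   set rlim_fd_max=8192
```

The `ulimit` route only affects processes started from that shell, which is
why it has to happen in the startup script rather than interactively.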
But at its heart, that's not really the issue.
For large environments it comes down to a simple question: How many eggs do you
want in one basket? (note, I didn't say "Don't put all of your eggs in one
basket")
The thing is, doing heavy network backups is not a great solution. With 700
clients, for example, at least *one* of them is bound to have problems and
fail each night, which makes for an IT management nightmare: resolving issues
across multiple groups, tons of problem tickets to chase down, and so on.
Now if all of your data is centralized on a SAN or NAS unit, and instead of
hundreds of 10GB clients you have just a few large multi-TB clients, you have
far less to troubleshoot.
Perhaps this is beyond your control, but using your numbers below...
Would you rather have:
400 clients with 11TB per day
--OR--
4 clients with 11TB per day
I bet you already know the answer, given the NetApp array you are backing up.
That one centralized array is a lot easier to manage than those other 399
clients, right?
Just food for thought.
-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]On
Behalf Of Joel Fisher
Sent: Tuesday, September 30, 2008 2:57 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Datazone Size...
Hey All...
I've asked this in the past and gotten no response... I could really use
some insight from those of you at really large NetWorker shops. It's budget
time again, so I have to look 18 months out and either budget to expand
my current environment or hope it will be able to handle the growth.
What is the largest single datazone you've seen? I know this is a very
general question for a complex system, but I'm asking for rules of thumb and
personal experience.
By largest I mean, number of clients and/or the amount of data.
Our current config:
T5220/16GB memory server
T2000 Storage Node
~400 clients
1 (60TB) NetApp array that we back up direct to tape via NDMP
10 x 9940B drives
105TB of adv_file storage
20TB(raw) of diligent VTL storage
300TB per month of backups/~11TB per day/~70TB per week
Last 3 years we've averaged 42% growth
What are the details of the largest single datazone you've seen?
What is your rule of thumb for large environments before you break off
to another datazone?
Any help would be much appreciated.
Thanks!
Joel
To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER