Subject: Re: [Networker] Hardware upgrade
From: "Browning, David" <DBrown AT LSUHSC DOT EDU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 7 Oct 2009 19:40:28 -0500
Well, with 1.5 or even 3 TB, it might not be so bad. 

This past weekend, during our once-a-month FULL backups, we backed up 45
TB at just one data center, and another 18 TB at the other data center.
That's why a backup-to-disk option was going to cost us so much.

The problem we all share is that just about anyone can get cheap
disk storage systems for their servers.  They can go out and buy a 1 or 2
TB system, and then ask why it takes so long to back up.

Multiply that by 10 or 15 large server/SAN systems, plus another 200
servers with a couple of hundred gigs each, and all of a sudden you have
45 TB that has to be backed up in 3 days.

We use a Spectra Logic T950 with 13 LTO-3 drives, with 1 server, 1
storage node, and 6 dedicated storage nodes.  Each of our Exchange
servers gets its own drive, and while one server can't drive an LTO-3 at
100% by itself, it can come close.  Then the main server has 4 drives, and
the storage node has 4 drives.  It's quite a mess once a month, with
everyone hammering away at the 13 drives.  The rest of the time, it's
not so bad.
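A quick back-of-the-envelope sketch of the numbers above makes the point: the 45 TB figure and the 3-day window come from this message, and LTO-3's native transfer rate is roughly 80 MB/s. The library's aggregate speed isn't the problem; keeping the drives fed is.

```python
# Sketch of the monthly full-backup window: 45 TB in 3 days across a
# 13-drive LTO-3 library (native rate ~80 MB/s per drive).
# All figures use decimal MB (10**6 bytes).

MB_PER_TB = 1_000_000

full_backup_mb = 45 * MB_PER_TB      # monthly full at the larger data center
window_s = 3 * 24 * 3600             # 3-day backup window, in seconds
required_rate = full_backup_mb / window_s

lto3_native = 80                     # MB/s per drive, uncompressed
drives = 13
library_rate = drives * lto3_native

print(f"required aggregate rate: {required_rate:.0f} MB/s")
print(f"13 x LTO-3 native rate:  {library_rate} MB/s")
# The library has several times the needed headroom, so the constraint is
# the clients: a single server often can't stream fast enough to keep one
# drive busy, which is what the dedicated storage nodes are compensating for.
```

Roughly 174 MB/s sustained is needed, against about 1,040 MB/s of native drive bandwidth.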

Good luck with whatever you decide. 

David M. Browning Jr.
IT Project Coordinator Enterprise Backups and Help Desk

-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of Werth, Dave
Sent: Wednesday, October 07, 2009 6:50 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] Hardware upgrade

Costs depend on the size of your installation, of course, and I didn't
include any of that information.  We currently back up about 1.5 TB for
a full backup (once a week) from 16 different servers, but 70% of that
comes from one specific server.  Based on potential expansions, we could
be in the 3 TB range by the end of 2010.  Our tape library is a
Spectra Logic 10K with 2 AIT-4 drives.  They can do the full backup in
about 18 hours currently (but for some reason the cloning takes about 42
hours).
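The throughput implied by those numbers is worth working out: 1.5 TB in about 18 hours, and the same data cloned in about 42 hours. The ~24 MB/s native AIT-4 rate below is an assumed spec figure, not something stated in this thread.

```python
# Rough rates implied by the message above: a 1.5 TB full backup in ~18 h,
# and cloning of the same data in ~42 h, on a library with 2 AIT-4 drives
# (native rate assumed ~24 MB/s each). Decimal MB (10**6 bytes) throughout.

MB_PER_TB = 1_000_000
full_mb = 1.5 * MB_PER_TB

backup_rate = full_mb / (18 * 3600)   # MB/s during the weekly full
clone_rate = full_mb / (42 * 3600)    # MB/s during cloning

print(f"backup: {backup_rate:.1f} MB/s, clone: {clone_rate:.1f} MB/s")
# The backup runs at about one drive's worth of native throughput, and the
# clone runs well below even that - which suggests the 42-hour clone is
# limited by something other than raw drive speed.
```

That comes out to roughly 23 MB/s for the backup and under 10 MB/s for the clone.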

They've been pushing me for about 3 years to go to a disk-to-disk backup,
and I've resisted so far.  If it's truly going to cost us in the six
figures, it's probably still a no-go.  We would want to hold about 5
weeks' worth of data on the VTL if we go that way.

Thanks, Dave.

Dave Werth
Garmin AT, Inc.
Salem, Oregon
-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of Browning, David
Sent: Wednesday, October 07, 2009 4:20 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker]

I know every time we looked at it, the costs proved too much.

A lot depends on what you want to accomplish - speeds are improved, and
restores are quicker - IF the data is still on disk.  If you have to
restore from tape, depending on your implementation, you might have to
restore it to disk first and then restore it to the destination - a
longer restore.

In our case, the amount of disk needed for 30 days was just way too
much - more than a small six-figure amount.  Add in de-dup costs (licenses
and such), and it could reach a major six-figure amount bordering on seven
figures.

Maybe one day we will find a funding source, or a critical need will
arise to justify the costs.

David M. Browning Jr.
IT Project Coordinator Enterprise Backups and Help Desk


-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of Werth, Dave
Sent: Wednesday, October 07, 2009 6:14 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker]

Folks,

We are working on next year's budget.  I'm trying to figure out what
sort of upgrade to our current hardware we want to do.

Currently we are disk to tape to tape (clone).  We want to switch to a
disk to disk to tape setup to speed up backups and restores.

My thinking is along the lines of a VTL with de-duplication.

Does anyone have any comments on the subject?

Thanks, Dave

Dave Werth
Garmin AT, Inc.
Salem, Oregon


--------------------------------------------------------------------------
This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient. If you are not the intended
recipient, please be aware that any disclosure, copying, distribution or
use of this e-mail or any attachment is prohibited. If you have received
this e-mail in error, please contact the sender and delete all copies.
Thank you for your cooperation.

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

