Subject: [Networker]
From: "Nelson, Allan" <an AT CEH.AC DOT UK>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 8 Oct 2009 15:06:08 +0100
Hi Dave
In case it's of any use here's our experience.

We trialled a single Data Domain (DD) system back in 2007 and loved it.
We saved up ;-))

In April we installed 4 DD systems (varying sizes), one at each of our sites, 
and have been using them in anger ever since.
We don't use VTL; we keep it simple and just NFS mount the DD space.
3 of the DDs replicate over the WAN to the 4th at our HQ, and the 4th
replicates its 'local' data back to our site, so we have 2 copies of data - 1
off-site. We ditched the tape libraries at 3 of the sites, and HQ clones to tape
once a month for our 'nice warm fuzzy feeling - just in case' tape backups.

We connect the storage nodes directly to the DDs (private IPs), bond 3x 1Gb
NICs and turn on Jumbo Frames (that's the thing that made the most
difference).
We do a mix of client and NDMP backups.
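
On the bonding/Jumbo Frames side, on a Linux storage node it boils down to
something like this (interface names and addresses are examples, and the
bonding mode has to suit your switches - treat it as a sketch, not a recipe):

  # /etc/modprobe.conf - load the bonding driver
  alias bond0 bonding
  options bond0 mode=balance-alb miimon=100

  # enslave the 3x 1Gb NICs and turn on jumbo frames
  ifconfig bond0 192.168.100.10 netmask 255.255.255.0 up
  ifenslave bond0 eth2 eth3 eth4
  ifconfig bond0 mtu 9000

Remember the DD interface (and anything else in the path) needs the matching
MTU, otherwise jumbo frames will do you more harm than good.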

Really happy with the DD - looking at stats from about July for our site, I see
we'd sent 290TB of data to it, which it stored in 19TB (that's over 90%
'compression'). We keep roughly 3 months' backup data on the system. Your
mileage may vary of course - we found the hardest thing was sizing the boxes in
the first place.
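
For anyone who wants the arithmetic behind that 'over 90%':

  290 TB sent / 19 TB stored  =  ~15:1 de-dupe ratio
  1 - (19 / 290)              =  ~0.93, i.e. roughly 93% reduction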

Good luck with whatever you choose, but I just thought I'd say I'm well pleased
with the DD de-dupe solution we have.  The replication has been a godsend.

Cheers... Allan.

 

On Wed, Oct 7, 2009 at 7:14 PM, Werth, Dave <dave.werth AT garmin DOT com> 
wrote:

> Folks,
>
> We are working on next year's budget.  I'm trying to figure out what sort
> of upgrade to our current hardware we want to do.
>
> Currently we are disk to tape to tape (clone).  We want to switch to a disk
> to disk to tape setup to speed up backups and restores.
>
> My thinking is along the lines of a VTL with de-duplication.
>
> Does anyone have any comments on the subject?
>
> Thanks, Dave
>
> Dave Werth
> Garmin AT, Inc.
> Salem, Oregon
>
>

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER


