Re: [ADSM-L] Data Deduplication
2007-08-31 16:57:31
Good point. They mainly get that 10-20% with compression. (They use
compression after they've de-duped.) They're at different levels of
granularity, so it still works.
---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies
-----Original Message-----
From: Dave Mussulman [mailto:mussulma AT uiuc DOT edu]
Sent: Friday, August 31, 2007 1:34 PM
To: Curtis Preston
Cc: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Data Deduplication
On Thu, Aug 30, 2007 at 03:09:09AM -0400, Curtis Preston wrote:
> Unlike a de-dupe VTL that can be used with TSM, de-dupe backup software
> would replace TSM (or NBU, NW, etc.) where it's used. De-dupe backup
> software takes TSM's progressive incremental much farther, only backing
> up new blocks/fragments/pieces of data that have never been seen by the
> backup server. This makes de-dupe backup software really great at
> backing up remote offices.
We had Avamar out a few years ago pitching their solution, and we liked
everything about it except the price. (And now that they're a part of
EMC, I don't expect that price to drop much... *smirk*) But since we're
talking about software, there's an aspect of de-dupe that I don't think
has been explicitly mentioned yet. Avamar said their software got
10-20% reduction on a backup of a stock Windows XP installation. A
single system, say it's the first one you added to your backup group.
That's not two users with the same email attachments saved, or identical
files across two systems - that's hashing content within a single OS
install (I presume common data in DLLs and such). So if you back up two
identical stock XP installs, you get 20% reduction on the first one and
100% on the second and beyond. Scale that up to hundreds of systems, and
that's an incredible cost savings. Suddenly backing up entire systems
doesn't seem so inefficient anymore.
Dave
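
[Editor's illustration] The effect Dave describes can be sketched with a toy
content-addressed store: chunks are keyed by a hash, so a chunk the backup
server has already seen is never stored twice. The fixed 4 KB chunk size and
SHA-256 here are illustrative choices, not Avamar's actual algorithm (real
products typically use variable-size, content-defined chunking).

```python
import hashlib
import random

class DedupeStore:
    """Toy content-addressed chunk store: a chunk already seen by the
    backup server is referenced by hash, never stored a second time."""
    def __init__(self):
        self.chunks = {}  # SHA-256 digest -> chunk bytes

    def backup(self, data, chunk_size=4096):
        """Split data into fixed-size chunks and store only unseen ones.
        Returns (new_chunks, total_chunks) for this backup run."""
        new = total = 0
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            total += 1
            if digest not in self.chunks:
                self.chunks[digest] = chunk
                new += 1
        return new, total

random.seed(0)
image = random.randbytes(64 * 4096)  # stand-in for a stock OS install

store = DedupeStore()
first = store.backup(image)   # first system: every chunk is new
second = store.backup(image)  # identical second system: nothing new stored
print(first, second)  # (64, 64) (0, 64)
```

The first backup stores all 64 chunks; the identical second system stores
zero new chunks, which is the "100% on the second and beyond" behavior
scaled down to one toy image.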