Subject: [Networker] de-duplication
From: brerrabbit <networker-forum AT BACKUPCENTRAL DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Thu, 6 Nov 2008 09:07:07 -0500
Francis Swasey wrote:
> On 11/5/08 12:42 AM, tkimball wrote:
> 
> Data Domain also plays in this space -- we have used their equipment for 
> several years now.  The one gotcha with dedup is that you must write 
> your own scripts if you want to implement staging.  Because of the 
> de-dup process, moving a 1TB saveset off the AFTD to another medium will 
> not (or at least shouldn't) free up 1TB of space on the de-dup 
> appliance.  I've asked EMC if their Avamar-based de-dup product has 
> solved that issue -- no answer yet.

Frank, the answer to your question about Avamar is "no", because other than 
replicating to another Avamar unit, there is no way to move or copy the data 
once it lands on the Avamar server.  The entire architecture is designed to 
put it on disk, replicate it to a separate unit over IP, and leave it alone 
until it expires.  Before EMC bought them, I know of at least two approaches 
that Avamar (the company) was trying to float to address the need to get the 
backed-up data onto other media, but both are deprecated or unavailable as of 
the current version (4.x).
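
On Francis's staging point: the homegrown scripts people end up writing 
usually just wrap NetWorker's mminfo and nsrstage commands, querying the 
media database for save sets sitting on the dedup-backed AFTD and then 
migrating them to a tape pool.  A rough sketch follows; the pool names and 
age cutoff are invented for illustration, and the exact mminfo query syntax 
and nsrstage options should be checked against your NetWorker release.  Keep 
in mind that even after nsrstage removes the AFTD copy, the dedup appliance 
only reclaims the blocks no other save set references, and typically only 
after its own cleaning/garbage-collection run.

#!/usr/bin/env python3
# Rough staging helper (hypothetical sketch): find old save sets in the AFTD
# pool and migrate them to tape with nsrstage.  Pool names, the age cutoff,
# and the precise mminfo/nsrstage options are assumptions -- verify them
# against your NetWorker release before relying on anything like this.
import subprocess
from datetime import date, timedelta

AFTD_POOL = "DedupDisk"    # hypothetical pool backed by the dedup AFTD
TAPE_POOL = "OffsiteTape"  # hypothetical destination pool
MIN_AGE_DAYS = 14          # only touch save sets older than this

def candidate_ssids():
    """Ask mminfo for save set IDs in the AFTD pool older than the cutoff."""
    cutoff = (date.today() - timedelta(days=MIN_AGE_DAYS)).strftime("%m/%d/%Y")
    query = f"pool={AFTD_POOL},savetime<{cutoff}"
    out = subprocess.run(["mminfo", "-q", query, "-r", "ssid"],
                         capture_output=True, text=True, check=True).stdout
    # the first line of default mminfo output is a column header
    return sorted({line.strip() for line in out.splitlines()[1:] if line.strip()})

def stage(ssids):
    """Migrate the save sets; staging removes the AFTD copy once the new
    copy is made, but space on the dedup appliance itself only comes back
    when its cleaning run drops blocks no other save set still references."""
    if ssids:
        subprocess.run(["nsrstage", "-b", TAPE_POOL, "-m", "-S", *ssids],
                       check=True)

if __name__ == "__main__":
    ssids = candidate_ssids()
    print(f"staging {len(ssids)} save sets from {AFTD_POOL} to {TAPE_POOL}")
    stage(ssids)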

HTH
--brerabbit
