Re: [Veritas-bu] Data Domain Question
2011-08-18 07:52:44
If you are currently dumping SQL to disk, more than likely you are
compressing your data, and your dedup rate will suffer because of that
compression. When we did our POC, I gave my DBAs specific instructions
to send the backups uncompressed, so the Data Domain would catch the
actual duplicate blocks. To my surprise, it caught duplicate blocks in
the first backup alone, and it only got better from there. The key is
whether or not you are compressing.
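To see why compression hurts so much, here is a toy sketch in Python (my own made-up "database pages" and simple fixed 4 KiB chunking, not anything DataDomain-specific; real appliances chunk more cleverly, but the effect is the same):

```python
import hashlib
import zlib

BLOCK = 4096  # pretend the appliance dedups on fixed 4 KiB chunks

def page(i, tag="v1"):
    # a compressible 4 KiB "database page" of repetitive text
    row = f"page {i:04d} tag {tag} filler filler filler\n"
    return (row * (BLOCK // len(row) + 1))[:BLOCK].encode()

def chunk_hashes(data):
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

# Two nightly backups: only page 10 changed between them.
backup1 = b"".join(page(i) for i in range(256))
backup2 = b"".join(page(i, "v2") if i == 10 else page(i) for i in range(256))

# Uncompressed: 255 of 256 pages hash identically -> near-perfect dedup.
raw_shared = chunk_hashes(backup1) & chunk_hashes(backup2)

# Compressed first: the one changed page rewrites and shifts the compressed
# byte stream, so chunks no longer line up and far fewer (often none) match.
comp_shared = (chunk_hashes(zlib.compress(backup1))
               & chunk_hashes(zlib.compress(backup2)))

print(len(raw_shared), len(comp_shared))
```

Same one-block change either way; only the compression step decides whether the appliance can see the duplicates.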
Some DBAs, even after you ask them to turn compression off, will assume
you couldn't really mean it; it runs counterintuitive to how they have
been taught. Just tell them the backups will be big in the beginning,
but the payoff comes after a week or two.
From: mitch808 <nbu-forum AT backupcentral DOT com>
To: VERITAS-BU AT MAILMAN.ENG.AUBURN DOT EDU
Date: 08/17/2011 05:43 PM
Subject: [Veritas-bu] Data Domain Question
Sent by: veritas-bu-bounces AT mailman.eng.auburn DOT edu
I've got a customer with 30 TB of SQL data and a 30% daily change rate;
at best he gets a 4.5:1 reduction. But I've also got another with a 4%
daily change rate that gets closer to 20:1.
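Those two ratios are roughly what a back-of-envelope model predicts. A minimal sketch (my own simplification: the first full is stored whole, each later full adds only its changed fraction as new data, and the appliance's local compression is ignored):

```python
def dedup_ratio(n_backups, change_rate):
    """Naive dedup-ratio model: first full stored whole; each later
    full contributes only its changed fraction as new segments.
    Ignores local compression, which adds on top of these numbers."""
    logical = n_backups                           # fulls kept, in units of one full
    physical = 1 + (n_backups - 1) * change_rate  # unique data actually stored
    return logical / physical

# 30 retained fulls at a 30% vs. a 4% daily change rate
print(round(dedup_ratio(30, 0.30), 1))  # -> 3.1, same ballpark as 4.5:1
print(round(dedup_ratio(30, 0.04), 1))  # -> 13.9, same ballpark as 20:1
```

The gap between the model and the observed 4.5:1 / 20:1 is plausibly the local compression the model leaves out.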
It just depends... though I don't agree with the post above that dedupe
appliances can't dedupe SQL well, as my second example shows that isn't
the case.
+----------------------------------------------------------------------
|This was sent by mnabors AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------
_______________________________________________
Veritas-bu maillist - Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu