Test drive of a DataDomain DD690

Sunhillow

Hi

We are going to test a DD690 library. As our existing VTLs emulate an IBM 3584, we are planning to install the new one as a tape library, not as devclass FILE.

According to DataDomain the only possible emulation is a STK L180, which allows a maximum of 10 drives and 174 slots.

Can anyone confirm this? If it is true, I guess it would be better to install a FILE library.
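For reference, this is roughly how the two variants would look on the TSM server - just a sketch, all names, device files, sizes and the mount path are placeholders, not taken from our setup:

/* Variant 1: DD690 presented as a SCSI library (STK L180 emulation) */
DEFINE LIBRARY DDLIB LIBTYPE=SCSI
DEFINE PATH TSMSRV DDLIB SRCTYPE=SERVER DESTTYPE=LIBRARY DEVICE=/dev/smc0
DEFINE DRIVE DDLIB DRIVE01
DEFINE PATH TSMSRV DRIVE01 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=DDLIB DEVICE=/dev/rmt1
/* DEVTYPE depends on which drive type the box emulates */
DEFINE DEVCLASS DDVTL LIBRARY=DDLIB DEVTYPE=LTO

/* Variant 2: FILE device class on an NFS share exported by the DD690 */
DEFINE DEVCLASS DDFILE DEVTYPE=FILE MOUNTLIMIT=20 MAXCAPACITY=50G DIRECTORY=/datadomain/tsm
DEFINE STGPOOL DDFILEPOOL DDFILE MAXSCRATCH=500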

And another question - for the 3584 we don't have an atldd driver. Is it needed for the StorageTek?

... pestering you with questions :rolleyes:

Thank you guys!
 
If so, that's garbage! You would have to set up a different devclass each time you exceeded that slot count or number of drives.

Not too familiar with the DD690 - is it a lower-end device?
 
Ohh OK, that makes more sense. Not too familiar with the STK L180. We are in the market now for something more than the HP VLS 9000; we have looked at DD products but haven't committed to anything.

Will you be using dedupe?
 
Yes, we will use dedup. Actually the main reason for this test is to find out what dedup factor we will get.

The promises given by manufacturers vary widely, but no one wants to guarantee a certain value :tongue:

Deduplicating VTLs are much more expensive than "stupid" ones - so maybe it will be better to use a conventional library that can spin down or even power off inactive disks.
 
You should see deduplication rates of about 7-10x, but it depends on many factors. I hope it works well :).
 
Hi

Our test has now been running for 3 weeks, and the results are mixed.
All pools which contain full backups (Lotus Notes with attachments not extracted, databases) or something similar (CBMR, Win2k8 Server Backup) deduplicate very well.

For Oracle backups we originally had several tables per DB multiplexed into one data stream, which gave a reduction factor of < 1:3. Now, after correcting this and with the second weekly full backup, it is > 1:7 and will surely go up further until the end of our test period.
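In RMAN terms this kind of multiplexing is what FILESPERSET / MAXOPENFILES control, so the change that addresses it looks roughly like the sketch below - the channel definition and the tdpo.opt path are just examples, not our exact script:

run {
  # one datafile per backup set and no multiplexing within a set,
  # so the data stream stays repeatable for the dedup engine
  allocate channel c1 device type sbt
    parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'
    maxopenfiles 1;
  backup database filesperset 1;
  release channel c1;
}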

Unfortunately a big part of our filesystem data consists of pre-compressed formats like jpg, mpg etc., which make life hard for any dedup algorithm. So we will have to evaluate whether a split strategy is possible - a small dedup library complemented by a big "stupid" one.
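If we go for the split, the idea would be to steer the pre-compressed formats to the "stupid" pool with normal include/exclude binding - a sketch with made-up pool, devclass and management class names:

/* server side: one primary pool on the DD, one on the conventional library */
DEFINE STGPOOL DEDUPPOOL DDFILE MAXSCRATCH=200
DEFINE STGPOOL PLAINPOOL LTOCLASS MAXSCRATCH=1000

and in the client option file (the list is evaluated bottom-up, so the specific lines win):

include  /.../*       DEDUP_MC
include  /.../*.jpg   PLAIN_MC
include  /.../*.mpg   PLAIN_MC

where DEDUP_MC and PLAIN_MC are management classes whose backup copy groups point to the two pools.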
 
I am very interested in your test with DD. We are about to embark on evaluating DD and IBM ProtecTIER appliances. I have some additional questions. How much data are you testing with, i.e. what retentions?

What is your split of unstructured (file level) vs structured (DB, Exchange, Lotus) data?

You said with no multiplexing you get a 7:1 reduction in data? Is this just deduped, or does it include compression? If so, 7:1 on Oracle DBs is not great. What is your full backup frequency on these?

We are looking at segmenting our data similarly for our tests, so any additional info would be great. Has DD or EMC stepped in with best practices for TSM?
 
I have gotten performance guarantees from the Sepaton guys, but no one else. Were you able to nail anyone down on that?
 
Performance and deduplication rates are different things for some vendors. Nobody can guarantee deduplication rates.
 
Right, I meant to clarify there: the guarantees I've gotten are throughput/performance related only. You can't guarantee dedup rates; there are just too many variables.
 
Hi

Later in the test we got dedup rates of 1:12 with Oracle. Additional compression was 1:1.2. There are ~10 full backups retained.
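Just to put those numbers together (round figures for illustration, not our measured capacity): 1:12 dedup times 1:1.2 compression gives an overall reduction of roughly 1:14.4, so 10 retained fulls of, say, 1 TB each would end up as about 10 / 14.4 ≈ 0.7 TB on the box.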

We have ca. 65-70% unstructured data containing a lot of multimedia formats (jpg, mpg, tif, avi etc). Plus there are many old and very old files which are stored only once in TSM --> dedup cannot be very high.

The guys from DD were very helpful in searching for optimisations (Oracle backup), and they really know what they are talking about :up:
 