rowl
ADSM.ORG Senior Member
I have heard some comments from local TSM folks that TSM 6 dedup is only usable on pools up to 5-6 TB in size. While I didn't get a lot of details, it sounded like they ended up CPU-bound.
I am curious if anyone here has had positive (or negative) experiences with TSM deduplication and large storage pools. We are looking at the possibility of replacing deduplicating VTLs with large disk pools. It would be far less expensive, and less complicated, than the VTL route if TSM deduplication is usable at a large scale.
To give you an idea of how large "large" is, we have some hosts with occupancy numbers in TSM in the 80-100 TB range, and nearly 4 PB of total occupancy in our TSM backup environment. On average we move 60-80 TB of backups a day to TSM.
The server platform we are considering is the Sun x4540: 12 cores, 32 GB RAM, and 48 1.5 TB drives behind a ZFS file system. With this platform each server adds more CPU, RAM, and capacity to the environment, so the hope is this will help scale up the CPU/memory/storage bandwidth needed for deduplication.
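For anyone wanting to sanity-check the node count, here is a back-of-envelope sizing sketch. The raw capacity per node (48 × 1.5 TB) and the ~4 PB total occupancy come from the numbers above; the usable-capacity fraction and the dedup ratios are purely hypothetical assumptions, not measured figures:

```python
# Rough sizing for the proposed x4540 nodes.
# ASSUMPTIONS (not from the post): usable fraction after ZFS/RAID
# overhead, and the dedup ratios iterated below.

RAW_TB_PER_NODE = 48 * 1.5        # 48 drives x 1.5 TB = 72 TB raw
USABLE_FRACTION = 0.75            # assumed ZFS/RAID overhead
TOTAL_OCCUPANCY_TB = 4000         # ~4 PB of TSM occupancy

usable_per_node = RAW_TB_PER_NODE * USABLE_FRACTION  # 54 TB usable

for dedup_ratio in (1, 2, 4):     # hypothetical dedup ratios
    stored_tb = TOTAL_OCCUPANCY_TB / dedup_ratio
    nodes = -(-stored_tb // usable_per_node)  # ceiling division
    print(f"{dedup_ratio}:1 dedup -> {stored_tb:.0f} TB stored, "
          f"{nodes:.0f} nodes")
```

Even under those optimistic assumptions it takes dozens of nodes to hold 4 PB, which is why the per-node CPU cost of dedup matters so much here.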
Thanks,
-Rowl