Lars-Owe
ADSM.ORG Member
Hi!
Our Exchange servers are the biggest node in our backup system, with about 389TB of data. Every weekend we see a combined ingest of about 25TB from four servers. Would this type of data respond well to deduplication and/or compression? I understand that it's a bit like asking how long a piece of string is, but roughly, what could be a reasonable expectation?
My reasoning goes something like this. We cannot afford 2x400TB of extra disk space to move these backups from tape to disk, but perhaps it would be feasible to buy a smaller amount of disk, if possible, and offset that cost against the savings from x TB worth of licenses and, say, two more tape drives which we would otherwise have to purchase.
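To make the trade-off concrete, here is a back-of-envelope sizing sketch using the figures from the post. The reduction ratios are purely illustrative assumptions (real Exchange dedup results vary a lot), just to show how much disk each ratio would imply:

```python
# Back-of-envelope sizing: how much disk would ~389 TB of Exchange
# backup data need at various assumed dedup/compression ratios?
# The ratios below are assumptions for illustration, not measured values.

TOTAL_TB = 389         # current stored backup data (from the post)
WEEKLY_INGEST_TB = 25  # combined weekend ingest from the four servers

def disk_needed(total_tb, reduction_ratio):
    """Disk required after data reduction at the given ratio (e.g. 2.0 = 2:1)."""
    return total_tb / reduction_ratio

for ratio in (1.5, 2.0, 3.0, 4.0):
    print(f"{ratio}:1 reduction -> ~{disk_needed(TOTAL_TB, ratio):.0f} TB of disk")
```

Even a modest 2:1 reduction would roughly halve the disk purchase, which is the kind of number you could weigh against the license and tape-drive savings.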