rowl
ADSM.ORG Senior Member
I can't seem to find any information on how to plan out the underlying disk capacity for a directory container storage pool. For example, say we have 500TB of source data and testing shows we can achieve 80% reduction with dedup and compression.
At first pass it would appear we need 100TB to store this data, and we don't want to run this more than 80% full, so take that to 125TB. Then there is expired data in containers, new containers being created, and so on, which generates additional overhead we need to account for. I have always been told that for tape or disk volumes one should use a ~2x multiplier to account for the worst case where every volume is 60% full (before reclaim).
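To make the arithmetic concrete, here's a quick sketch of the sizing above. The reduction and fill figures come from the numbers in this post; the overhead allowance at the end is purely an illustrative placeholder, not a vendor recommendation:

```python
# Illustrative container-pool sizing sketch (assumed numbers, not a vendor formula).
source_tb = 500.0   # front-end source data
reduction = 0.80    # measured dedup + compression savings
max_fill = 0.80     # don't run the pool more than 80% full
overhead = 1.20     # placeholder allowance for expired/new container churn

stored_tb = source_tb * (1 - reduction)   # ~100 TB after data reduction
raw_tb = stored_tb / max_fill             # ~125 TB at the 80% fill ceiling
planned_tb = raw_tb * overhead            # ~150 TB with the assumed churn allowance

print(f"stored: {stored_tb:.0f} TB, raw: {raw_tb:.0f} TB, planned: {planned_tb:.0f} TB")
```

The open question is what that last multiplier should actually be for directory container pools, since they reclaim space internally rather than via volume reclamation.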
Is there any good rule of thumb for this sort of capacity planning?
Thanks,
-Rowl