Subject: Re: [ADSM-L] Seeking wisdom on dedupe..filepool file size client compression and reclaims
From: "Allen S. Rout" <asr AT UFL DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 31 Aug 2009 09:52:59 -0400
>> On Sun, 30 Aug 2009 08:34:47 +0200, Stefan Folkerts <stefan.folkerts AT ITAA DOT NL> said:


> Interesting ideas and a simulator would be fun for this purpose.
> You could be right and your example does make sense in a way but
> still..  I do wonder if it works out in the real world.

> Let's say you have normal data that expires (user files etc) and
> large databases, some you keep for many months and sometimes even
> years.

I understand the case you're making, and I agree that the size of your
files has an impact.  I'm suggesting that the impact isn't huge, and
that it evens out in a reasonably short timeframe.

Eventually, whatever the volume size, you wind up with a library full
of volumes more or less randomly distributed between 0% and 50%
reclaimable.  If you're keeping up with reclamation, that means you're
_in_ a steady state, so you're _doing_ the same amount of work per
unit time.
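
Since Stefan mentioned a simulator, here's a rough sketch of the kind of
toy model I have in mind.  Everything in it is a made-up knob for
illustration (the daily write volume, file sizes, retention range, and the
50% reclaim threshold); it's not a model of how TSM actually tracks
volumes, just a way to watch the steady state emerge:

import random

THRESHOLD = 0.5            # reclaim a volume once >= 50% of it has expired
DAILY_WRITE_GB = 500.0     # new backup data arriving per day
RETENTION_DAYS = (30, 365) # each file expires somewhere in this range
DAYS = 1500
WARMUP = 500               # ignore the initial fill-up when averaging

def simulate(volume_size_gb, seed=42):
    rng = random.Random(seed)
    volumes = []            # sealed volumes: each a list of (expire_day, gb)
    filling = []            # the volume currently being filled
    filling_used = 0.0
    moved_gb = 0.0          # data copied by reclamation after warm-up

    def write(gb, expire_day):
        # Append a file to the filling volume, sealing volumes as they fill.
        nonlocal filling, filling_used
        remaining = gb
        while remaining > 0:
            chunk = min(remaining, volume_size_gb - filling_used)
            filling.append((expire_day, chunk))
            filling_used += chunk
            remaining -= chunk
            if filling_used >= volume_size_gb:
                volumes.append(filling)
                filling, filling_used = [], 0.0

    for day in range(DAYS):
        # 1. New backups arrive; every file gets a random retention.
        written = 0.0
        while written < DAILY_WRITE_GB:
            gb = rng.uniform(0.1, 20.0)
            write(gb, day + rng.randint(*RETENTION_DAYS))
            written += gb

        # 2. Reclaim any sealed volume past the threshold: copy its
        #    still-active files forward and free the volume.
        n_before = len(volumes)
        keep = []
        for vol in volumes[:n_before]:
            active = [(e, gb) for (e, gb) in vol if e > day]
            active_gb = sum(gb for _, gb in active)
            if active_gb <= (1.0 - THRESHOLD) * volume_size_gb:
                if day >= WARMUP:
                    moved_gb += active_gb
                for expire_day, gb in active:
                    write(gb, expire_day)
            else:
                keep.append(vol)
        volumes = keep + volumes[n_before:]  # keep volumes sealed in step 2

    return moved_gb / (DAYS - WARMUP)

for size in (100, 400, 1600):
    print("%5d GB volumes -> ~%.0f GB/day moved by reclamation"
          % (size, simulate(size)))

Run it for a few volume sizes and compare the GB/day moved by reclamation
once the library settles; whether those figures come out close is exactly
the question a simulator like this is meant to answer.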


So when I say "To a first approximation, it's irrelevant", focus on
the "first approximation" bit; yes, there are variations here, but
don't sweat them too much.

It's certainly possible to back yourself into corners with very large
or very small volumes.



- Allen S. Rout
