Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG
2011-10-06 14:43:22
On Thu, Oct 6, 2011 at 1:04 PM, Timothy J Massey <tmassey AT obscorp DOT com> wrote:
>>> Personally, I feel that compression has no place in backups. Back
>>> when we were highly limited in capacity by terrible analog devices
>>> (i.e. tape!) I used it from necessity. Now, I just throw bigger
>>> hard drives at it and am thankful. :)
>>
>> No, it makes perfect sense for backuppc where the point is to keep
>> as much history as possible online in a given space.
> No, the point of backup is to be able to *restore* as much historical
> data as possible. Keeping the data is not the important part.
> Restoring it is. Anything that is between storing data and
> *restoring* that data is in the way of that job. Obviously, there
> *are* things that have to go between it: a filesystem to store the
> data, for example. But if I can avoid something in between storing my
> data and using my data, I absolutely will. Compression falls in that
> area.
My experience is that failures are more likely in the parts underneath
storing the data than in the compression process. Admittedly, that goes
all the way back to storing zip files on floppies vs. large uncompressed
text files, and media reliability has improved a bit.
>> If you have trouble with compression, just throw a faster CPU at it.
>> Just anecdotally, I saw 95% compression recently on a system where
>> someone requested including their web content directory and forgot
>> to mention the 40GB of log files that happened to be there.
> That's all well and good. My issue is *NOT* performance. Or
> capacity, for that matter. I'm not saying that there is no value to
> compression. I'm saying that my objective for a backup server is
> FIRST to be as simple and reliable as possible, and THEN only to have
> other features. Features that detract from that first requirement
> are considered skeptically.
Media fails. Things that reduce the media necessary to hold a given
amount of data reduce the chances of failure. The CPU and RAM can fail
too, but if those go you are fried whether you were compressing or not.
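To put rough numbers on that, here is a back-of-the-envelope sketch with
assumed figures (the URE rate, pool size, and compression ratio below are
placeholders, not anything measured in this thread): if a drive's
unrecoverable read error rate is on the order of one bad bit per 1e15
bits read, the chance of hitting one during a restore scales with how
many bits you have to read back, so 2:1 compression roughly halves the
exposure.

  import math

  def p_any_error(bits, per_bit_error=1e-15):
      # Probability of at least one unrecoverable error while reading
      # `bits` bits: 1 - (1 - p)**bits, computed in a stable way.
      return -math.expm1(bits * math.log1p(-per_bit_error))

  uncompressed_bits = 500e9 * 8            # assume a 500 GB pool
  compressed_bits = uncompressed_bits / 2  # assume 2:1 compression

  print("uncompressed: %.3f%%" % (100 * p_any_error(uncompressed_bits)))
  print("compressed:   %.3f%%" % (100 * p_any_error(compressed_bits)))

The exact numbers don't matter; the point is just that fewer bits stored
means fewer bits that can go bad.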
> This entire thread is a *PERFECT* example of why I have my reasons.
> I have avoided an entire category of failure simply by throwing more
> disk at it (or by having a smaller "window" of backups). Seeing as I
> have, at a minimum, 4 months of data (with varying gaps between the
> backups) within the backup server itself, and archive data in
> long-term storage every three months, I have what I (and my clients)
> feel to be enough data. Extra capacity would have no value. Extra
> reliability *always* has value.
YMMV, of course. With compressible data you increase both capacity and reliability by compressing before storage. There's no magical difference between the reliability of 'cat' vs 'zcat'. Either one could fail.
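If you want to convince yourself that the compressed path restores the
same bytes, the check is cheap. A minimal sketch, using plain gzip just
to illustrate the point (BackupPC's cpool files use their own zlib-based
format, so this is not a cpool checker):

  import gzip, hashlib

  def md5(data):
      return hashlib.md5(data).hexdigest()

  original = open("/etc/hosts", "rb").read()   # any file you care about
  stored = gzip.compress(original)             # the "compressed pool" copy
  restored = gzip.decompress(stored)           # the zcat-equivalent step

  assert md5(restored) == md5(original), "restore does not match original"
  print("restore verified:", md5(original))

Either step can fail, which is why you verify restores rather than trust
the storage path, compressed or not.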
--
Les Mikesell lesmikesell AT gmail DOT com