On Thu, Oct 6, 2011 at 5:42 PM, Holger Parplies <wbppc AT parplies DOT de>
wrote:
>
>> [...]
>> With compressible data you increase both capacity and reliability by
>> compressing before storage. There's no magical difference between the
>> reliability of 'cat' vs 'zcat'. Either one could fail.
>
> the problem, I believe, is not 'cat' or 'zcat' failing, it's a *media* error,
> as you pointed out, rendering a complete compressed file unusable instead of
> only the erroneous bytes/sectors. Yes, there are compression algorithms that
> are able to recover after an error, but I don't think BackupPC uses any of
> these.
>
> Sure, the common case might be losing a complete disk rather than having a few
> bytes altered, but in that case, you can either recover from the remaining
> disks (presuming you have some form of redundancy), or you lose your complete
> pool, whether or not compressed.
I like RAID1, where you can recover from any single surviving disk.
> While you might reduce the chances of failure with compression, you increase
> the impact of failure.
Maybe, maybe not. You might find something usable if you scrape some
plain text, or maybe even part of a tar file, off a disk past a media
error (which is pretty hard to do anyway), but most other file types
won't have much chance of working.
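For anyone who wants to see the difference concretely, here is a quick sketch
(assuming Python's standard gzip module, which uses the same deflate format as
zcat): flip one byte in a plain file and you lose one byte; flip one byte in
the compressed copy and decompression of everything after the error fails.

```python
import gzip
import zlib

# Some repetitive "backup" data, the kind that compresses well.
original = b"line of plain text\n" * 1000
compressed = bytearray(gzip.compress(original))

# Simulate a small media error: flip one byte in the middle of each copy.
damaged_plain = bytearray(original)
damaged_plain[len(damaged_plain) // 2] ^= 0xFF
compressed[len(compressed) // 2] ^= 0xFF

# The damaged plain file differs from the original by exactly one byte;
# everything else is still readable.
print(sum(a != b for a, b in zip(original, damaged_plain)))  # -> 1

# The damaged gzip stream, however, cannot be decompressed past the error:
# either the deflate structure is broken or the CRC check fails.
try:
    gzip.decompress(bytes(compressed))
except (gzip.BadGzipFile, zlib.error, EOFError) as e:
    print("decompression failed:", e)
```

That's the impact-of-failure trade-off in a nutshell: the compressed pool
fails as a unit, the plain pool degrades byte by byte.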
--
Les Mikesell
lesmikesell AT gmail DOT com
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/