Subject: Re: [BackupPC-users] Best FS for BackupPC
From: "Michael Stowe" <mstowe AT chicago.us.mensa DOT org>
To: "Holger Parplies" <wbppc AT parplies DOT de>
Date: Wed, 25 May 2011 13:46:30 -0500
> So I certainly don't disagree with your results, but I do partly disagree
> with your reasoning and interpretations.

Err, actually, you don't ... or perhaps more accurately, I don't disagree
with any of the points you make, so rather than agree with everything you
said individually, I'll skip ahead.

> If that is the case, it is certainly problematic. What I also dislike is
> that 'reiserfsck --rebuild-tree' leaves your FS in an unusable state
> until it has completed - let's hope it does complete. All other 'fsck'
> programs I can remember having used seem to operate in an "incremental"
> way - fixing problems without causing new ones (except maybe trivial
> "wrong count" type inconsistencies), so they can [mostly] be interrupted
> without making the situation worse than it was.

While trying to figure out why reiserfs had gone corrupt, I tested a
scenario in which a reiserfs image, backed up via BackupPC (without
compression) onto a reiserfs pool volume, was interpreted as part of the
host filesystem by --rebuild-tree, which hopelessly mangled all data on
the disk.

Probably not exactly fair to reiserfs, but it does bother me that backing
up certain types of data could turn otherwise-recoverable corruption into
unrecoverable corruption.
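
If anyone wants to check whether their own pool is carrying that kind of
time bomb, a scan like the one below would do.  This is a rough sketch of
my own devising, not anything BackupPC ships: the pool path and the fixed
64 KiB superblock offset are assumptions, and it only catches an image
whose partition starts at byte zero of the file.

    #!/usr/bin/env python
    # Rough sketch: flag uncompressed pool files that carry a reiserfs
    # v3 superblock magic -- the kind of file data that reiserfsck
    # --rebuild-tree can mistake for the host filesystem's own tree.
    import os

    MAGICS = (b"ReIsErFs", b"ReIsEr2Fs", b"ReIsEr3Fs")
    MAGIC_OFFSET = 65536 + 52   # superblock sits 64 KiB in; magic at +52

    def has_reiserfs_magic(path):
        try:
            with open(path, "rb") as f:
                f.seek(MAGIC_OFFSET)
                return f.read(16).startswith(MAGICS)
        except OSError:
            return False

    for root, dirs, files in os.walk("/var/lib/backuppc/pool"):
        for name in files:
            path = os.path.join(root, name)
            if has_reiserfs_magic(path):
                print("possible embedded reiserfs image:", path)

Any hit means running --rebuild-tree on that volume is a gamble.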

> What is your understanding of "unstressed"?

Without pushing its limits -- depending on the fs, these can be in
different places.  None of the file systems melted down when simply
subjected to high amounts of I/O.  (Well, zfs did, but that's different.)

> The speculation is, that you didn't test the situations that xfs or jfs
> might have problems with (and reiserfs might handle perfectly).

Which is reasonable enough, and I'm open to finding out if there are any.

> Certainly true. But all I can see here are different data points from
> different people's *experience*. You're unlikely to experience running
> *dozens* of FAT/Win3.1 file systems for 20 years, and if you do, it might
> well be a robust choice *for your usage pattern*. That doesn't mean it
> will work equally well with different usage patterns, or that if you
> suddenly do encounter corruption, a different FS wouldn't be more easily
> recoverable.

I'm really suggesting that the experience of somebody who has run a file
system for a period of time without (for example) a power failure is
likely to contribute little to answering the question of how stable a
file system is during a power failure.

The testing I did has a natural bias toward the scenarios I wanted to
gather data on.  My specific question was: for stability and speed while
running BackupPC on software RAID, were there distinctions between
filesystems?

In this regard, reiserfs failed miserably.  Perhaps unfairly, part of the
reason I tested the way I did was problems I'd experienced with reiserfs
in the past.  So unless there's a really compelling reason TO use reiserfs
that somehow overrides the corruption issue, I (for one) am pretty
satisfied in ruling it out.
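
For the power-loss side of "stability," the technique reduces to a simple
probe -- the sketch below is an illustration of the idea, not my actual
harness, and the mount point is a placeholder: append fsync'd, checksummed
records to a file on the filesystem under test, cut power externally
mid-run, and count how many records survive intact after reboot.

    #!/usr/bin/env python
    # Minimal crash-consistency probe (illustrative only).
    # Run in "write" mode, pull the plug, then run in "verify" mode.
    import os, struct, sys, zlib

    PATH = "/mnt/test/probe.log"        # placeholder mount point
    MODE = sys.argv[1] if len(sys.argv) > 1 else "write"

    def write_forever():
        with open(PATH, "ab") as f:
            seq = 0
            while True:
                payload = os.urandom(256)
                rec = struct.pack("<IQ", zlib.crc32(payload), seq) + payload
                f.write(rec)
                f.flush()
                os.fsync(f.fileno())    # durable before it counts as written
                seq += 1

    def verify():
        ok = 0
        with open(PATH, "rb") as f:
            while True:
                hdr = f.read(12)
                if len(hdr) < 12:
                    break               # a torn tail is normal after a cut
                crc, _seq = struct.unpack("<IQ", hdr)
                payload = f.read(256)
                if len(payload) < 256 or zlib.crc32(payload) != crc:
                    break               # first bad record; stop counting
                ok += 1
        print(ok, "records intact")

    verify() if MODE == "verify" else write_forever()

Every record fsync'd before the cut should verify afterward; a bad
checksum anywhere other than the torn tail record means some layer -- fs,
RAID, or drive cache -- lied about durability.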

> This is a good example of how hardware may corrupt your FS (or prevent
> corruption that would occur with different hardware). If you are truly
> interested in testing *the file systems*, you should not introduce the
> extra complexity of RAID 6. You were probably more interested in testing
> *how the file systems would operate in your hardware environment*. That
> is a difference.

Quite so, and I also made the implicit assumption that what the fs sits on
doesn't really matter, which may or may not be the case.

> More or less. You'll have different timestamps in log files, a random
> difference in timing (length of the file in progress) ... I'm just
> wondering what exactly you are comparing. "pool" means $TopDir or
> $TopDir/{c,}pool or $TopDir/pc?

I actually used FUSE to do a straight compare; since the test box was
quiescent (I excluded any files that were not), there was a 100% match in
most cases.
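
In case "straight compare" sounds vague: strip away the FUSE plumbing and
it amounts to checksumming every file in both trees by relative path,
roughly as below.  The tree paths are whatever you have mounted; this is
a sketch of the comparison, not the exact setup I used.

    #!/usr/bin/env python
    # Compare two directory trees file-by-file via content checksums.
    import hashlib, os, sys

    def digest_tree(top):
        sums = {}
        for root, dirs, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                h = hashlib.md5()
                with open(path, "rb") as f:
                    for block in iter(lambda: f.read(1 << 20), b""):
                        h.update(block)
                sums[os.path.relpath(path, top)] = h.hexdigest()
        return sums

    a, b = digest_tree(sys.argv[1]), digest_tree(sys.argv[2])
    for rel in sorted(set(a) | set(b)):
        if a.get(rel) != b.get(rel):
            print("MISMATCH:", rel)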

> What your test doesn't catch is long term stability. In the absence of
> power failures, will your FS operate well over many years? I've heard
> (rumours, not real data points) that reiserfs will operate smoothly up to
> the point where accumulated internal inconsistency (presumably due to
> bugs) exceeds a certain amount, and then it will destroy just about all
> of your file system. That might even match my observation - I don't
> remember whether there was a power failure involved or not. I have no
> long-term first-hand experience with xfs (or jfs). Does anyone else?

I've run BackupPC on jfs for a few years now, and it has proven to be
rock-solid.  I've also run xfs (though not under BackupPC), and it has
been similarly trouble-free.

As file systems go, I can recommend jfs, which you can mark down as a
single anecdotal data point.

> Regards,
> Holger




_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/