For the record, supposedly FreeBSD can read XFS: http://people.freebsd.org/~rodrigc/xfs/ I'd keep my Linux installation running, though. danno -- Dan Pritts, Sr. Systems Engineer Internet2 office: +1
Unfortunately, filesystems tend to be OS-specific. Even when support for one is ported to a different OS, it is rarely a first-choice filesystem there. EXT2/3 is usable on Windows and FreeBSD, but one would never
Author: Matthias Meyer <matthias.meyer AT gmx DOT li>
Date: Wed, 11 Mar 2009 22:29:51 +0100
Dear all, How scalable is BackupPC? Where are the limits, and what can produce performance bottlenecks? I've heard that hardlinks can be a problem if there are millions of them. Is that true? I c
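For anyone unfamiliar with how the pooling works: BackupPC stores each unique file once and links every backup's copy to it as a hard link, so "millions of hardlinks" really means millions of extra directory entries pointing at shared inodes. A minimal, self-contained sketch (temporary directory, illustrative names only):

```shell
# Demo: two names, one inode, no extra data stored.
d=$(mktemp -d)
echo "same content" > "$d/pool-file"
ln "$d/pool-file" "$d/backup-copy"       # hard link: second name for the same inode
stat -c %h "$d/pool-file"                # link count of the inode: prints 2
find "$d" -type f -links +1 | wc -l      # both names have link count > 1: prints 2
rm -r "$d"
```

The data blocks exist once; only the link count and directory entries grow, which is why the pressure lands on the filesystem's metadata handling rather than on disk space.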
Author: Mike Dresser <mdresser_l AT windsormachine DOT com>
Date: Thu, 12 Mar 2009 12:12:48 -0400
The file system can become... interesting to fix or back up when you get a few million hard links, especially if you're using XFS. There _appear_ to be some bugs in Debian etch's xfs tools, last time
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Thu, 12 Mar 2009 15:50:48 -0500
But note that normal operation of BackupPC is fairly efficient, doing name lookups within a reasonable tree structure even in the common pool. Only the nightly cleanup has to walk the whole directory tree.
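The "reasonable tree structure" works by fanning pooled files out across hex-digit subdirectories derived from a content digest, so any one directory stays small and lookups stay cheap. A sketch of the layout (the pool root path is hypothetical, and BackupPC's real digest is a partial-file MD5 — plain md5sum stands in here purely to illustrate the path scheme):

```shell
# Illustrative only: map a content hash to a shallow 3-level pool path.
pool=/var/lib/backuppc/cpool                      # hypothetical pool root
h=$(printf 'hello\n' | md5sum | cut -c1-32)       # stand-in for BackupPC's digest
d1=$(echo "$h" | cut -c1)
d2=$(echo "$h" | cut -c2)
d3=$(echo "$h" | cut -c3)
echo "$pool/$d1/$d2/$d3/$h"
# -> /var/lib/backuppc/cpool/b/1/9/b1946ac92492d2347c6235b4d2611184
```

Finding a pooled file is therefore a three-hop descent by hash digit, not a scan of one giant directory; only an exhaustive walk (the nightly cleanup) has to touch everything.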
Author: Chris Robertson <crobertson AT gci DOT net>
Date: Thu, 12 Mar 2009 13:34:36 -0800
Hopefully my original message didn't come across as negative of either XFS or BackupPC. Due to how well BackupPC and XFS handled the load I threw at it initially, I expanded the retention policy of m
Author: Mike Dresser <mdresser_l AT windsormachine DOT com>
Date: Fri, 13 Mar 2009 11:36:43 -0400
We keep backups for 36 monthlies, 4 weeklies, and 8 dailies, and haven't seen much degradation in performance from that. Do you run an xfs_fsr on the filesystem now and then to keep fragmentation down?
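For anyone who hasn't used it: xfs_fsr is the XFS online defragmenter, and xfs_db can report the current fragmentation factor read-only before you bother running it. A sketch of the usual invocation (device and mount point are examples, not from this thread):

```shell
# Read-only check of extent fragmentation on the pool device:
xfs_db -r -c frag /dev/sdb1

# Defragment the mounted filesystem, verbosely, for at most 2 hours:
xfs_fsr -v -t 7200 /var/lib/backuppc

# Example cron entry to do this weekly (Sunday 03:00):
# 0 3 * * 0 /usr/sbin/xfs_fsr -t 7200 /var/lib/backuppc
```

Since xfs_fsr works file by file while the filesystem is mounted, a bounded weekly run is a common low-risk way to keep a long-lived pool from degrading.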
Has anyone else noticed that all the posted pool information has 4369 directories listed for the pool? Mine does as well. Cody
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Fri, 13 Mar 2009 11:53:58 -0500
There's a tree structure designed to limit the number of entries in any single directory, and that is just the top level. Unix file systems are notoriously bad at scaling the number of directory entries.
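This tree structure presumably also answers Cody's question about every posted pool status showing exactly 4369 directories: a three-level, 16-way hex fan-out plus the pool root gives exactly that count regardless of what's stored in it.

```shell
# Pool root + 16 first-level + 256 second-level + 4096 third-level dirs:
echo $(( 1 + 16 + 16*16 + 16*16*16 ))   # prints 4369
```

So 4369 is a fixed property of the pool layout, not a coincidence across installations.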
Author: Mike Dresser <mdresser_l AT windsormachine DOT com>
Date: Fri, 13 Mar 2009 13:23:46 -0400
$Conf{FullPeriod} = 6.97; $Conf{FullKeepCnt} = [4,0,36]; 4 weekly, 0 bi-weekly, 36 quad-weekly (close enough to monthly for most purposes). The default config.pl has a blurb just above the config setting
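A back-of-envelope check of what that retention spans, assuming the documented exponential FullKeepCnt semantics (slot i keeps that many fulls spaced at FullPeriod * 2^i, so slots here are roughly weekly, bi-weekly, and quad-weekly):

```shell
# Approximate age of the oldest retained full for FullKeepCnt = [4,0,36]:
awk 'BEGIN {
  p = 6.97                               # FullPeriod in days
  span = 4*p + 0*(2*p) + 36*(4*p)        # weekly + bi-weekly + quad-weekly slots
  printf "%.0f days (~%.1f years)\n", span, span/365
}'
# prints: 1032 days (~2.8 years)
```

That is, this config keeps fulls going back almost three years while holding only 40 of them, which is what makes the "close enough to monthly" tiering attractive.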
Author: Matthias Meyer <matthias.meyer AT gmx DOT li>
Date: Fri, 13 Mar 2009 19:56:30 +0100
I back up 10 PCs (Linux & Windows) over the LAN and 8 PCs (Windows) over the Internet. Pool is 182.54GB comprising 1008528 files and 4369 directories (as of 3/13 02:29). Pool hashing gives 28891 repeated files
Author: Chris Robertson <crobertson AT gci DOT net>
Date: Fri, 13 Mar 2009 18:54:37 -0800
In the interest of understanding the differences in our setups, and why you are not having trouble, I have a few questions: How many hosts do you back up? What does df -i show for the mount point? Did you use an
Author: Mike Dresser <mdresser_l AT windsormachine DOT com>
Date: Sat, 14 Mar 2009 21:48:21 -0400
About 30 are active, 12 are sporadic (laptops, etc). Total that gets written out to off-site backup is about 300GB of data a day, compressed. /dev/sdb1 6.4G 19M 6.3G 1% No. No. Default options for De
Author: Chris Robertson <crobertson AT gci DOT net>
Date: Tue, 17 Mar 2009 11:25:12 -0800
Yeah, I had really good performance when I was running around 20M inodes. But the more I look, the less I think that's related to my problem: 100 TB of 1 MB files (~100 M files): http://oss.sgi.com/a
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Tue, 17 Mar 2009 14:53:17 -0500
Any chance of a more drastic change? That load looks like it would be a great test for something that can run ZFS... -- Les Mikesell lesmikesell AT gmail DOT com
Author: Chris Robertson <crobertson AT gci DOT net>
Date: Tue, 17 Mar 2009 13:38:06 -0800
Why not? :o) As long as it can continue to access the XFS volume I have (for the month until my current backups would "time out"), I'm game. Chris
Author: Les Mikesell <lesmikesell AT gmail DOT com>
Date: Tue, 17 Mar 2009 17:17:01 -0500
I don't think that's possible - at least not in a way that I'd trust. You'd need to run Solaris, OpenSolaris, or FreeBSD for ZFS, and I don't think they do XFS natively. There'd be some chance of running