Re: gnutar in configure
2004-03-02 16:12:26
On Tue, 2 Mar 2004 at 3:48pm, Jonathan Dill wrote:
> On another note, maybe things have changed, but I once found that gnutar
> incremental backups sucked performance-wise; they would make machines
> pretty much unusable during estimates and dumps. Normally, this would
> not matter, but you're talking about a University, with eccentric grad
> students working at 3am and such who complain about these things. I
> have migrated most things to the XFS filesystem and use xfsdump on
> Linux and IRIX, a process I started when XFS went Open Source (around
> Red Hat 7.0) and I got tired of waiting for the problems with dump for
> ext2fs to get sorted out. Machines are still very usable with xfsdump
> and software compression running in the background, and the dumps
> finish faster than gnutar dumps did. xfsdump estimates are very fast,
> comparatively speaking.
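(For the archives: if I have the syntax right, the setup Jonathan
describes is just a dumptype along these lines, with Amanda running
xfsdump itself when it's built with xfsdump support:

    define dumptype comp-client-xfsdump {
        comment "xfsdump with client-side software compression"
        program "DUMP"
        compress client fast
    }

The name and details here are guesses from memory; check the example
amanda.conf.)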
XFS and xfsdump are indeed very nice. But filesystems like this:
[jlb@$HOST jlb]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
.
.
.
$SERVER0:/data        535G  518G   18G  97% /data
$SERVER1:/moredata    1.8T  1.2T  621G  66% /moredata
$SERVER2:/emfd        2.0T  779G  1.3T  39% /emfd
make tar rather necessary (those are all XFS on Linux servers, BTW):
with Amanda, only gnutar can dump a subdirectory rather than a whole
filesystem, so it's the only way to split volumes that size into
tape-sized chunks.
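Splitting those up is just a matter of pointing gnutar DLEs at
subdirectories in the disklist, something like this (the directory
names are made up, and comp-user-tar is whatever tar-based dumptype
you use):

    $SERVER0  /data/group1  comp-user-tar
    $SERVER0  /data/group2  comp-user-tar
    $SERVER0  /data/group3  comp-user-tar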
For the record, estimates on those servers go *very* fast (<5 min). I
*do* have one server with a 1T XFS filesystem where one particular
directory takes a *long* time to estimate (~90 minutes). But I'm
pretty sure that's due to an inordinately large number of tiny files
and subdirectories in there (about which I'm beating up the user).
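If you want to see how bad a directory like that is before blaming the
estimate, something like

    find /path/to/suspect/dir -type f | wc -l

(path is a placeholder, obviously) gives you the file count; the
estimate has to walk every one of those inodes.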
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University