Amanda-Users

Re: dump faster then tar ?

Subject: Re: dump faster then tar ?
From: Jon LaBadie <jon AT jgcomp DOT com>
To: Mailing List Amanda User <amanda-users AT amanda DOT org>
Date: Wed, 11 Jul 2007 16:35:37 -0400
On Wed, Jul 11, 2007 at 02:54:24PM -0400, FM wrote:
> Hello,
> one of my partitions has more than 100 GB of HTML files inside a folder.
> I am using tar because we want to exclude some folders next to the huge
> one.  Tar is taking hours to stat all the files inside the big folder.
> 
> Could it be faster if I create a new partition for this big folder and
> then use dump to back up the partition?

Tar is known to be slow as, well ... slow as tar when dealing with
directories containing many, many entries.  Dump would normally be
expected to be faster than tar, with the major limitation that
incrementals can only be done on whole file systems, not directory trees.
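For reference, the exclusions that push you toward tar are normally
spelled out in the dumptype rather than on the command line.  A rough
sketch, with made-up dumptype and directory names (the exact syntax can
vary a bit between Amanda releases):

    define dumptype html-tar {
        comp-user-tar                   # or whatever GNU tar dumptype you already use
        # Patterns are relative to the top of the DLE and passed to GNU tar.
        exclude file "./old-site"
        exclude file append "./tmp"
    }

Dump has no equivalent; it takes the whole file system or nothing.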

Tar's slowness is accentuated during Amanda's estimate phase, when it
may make 2 or 3 additional passes over the directory tree.  If you
can live with less accurate estimates, and the amount the DLE changes
from day to day is fairly consistent, consider the calcsize or server
estimate features, which avoid running tar (or dump) during the
estimate phase.  They are WAY, WAY faster.
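
As a rough illustration only, using the made-up dumptype from above,
the estimate method is selected per dumptype in amanda.conf (again, the
exact spelling of the option may differ between releases):

    define dumptype html-tar-fast {
        html-tar                  # the GNU tar dumptype sketched above
        # calcsize: quick size-only scan on the client; much faster than a
        #           full tar dry run, but less accurate.
        # server:   the server guesses from earlier runs; no client scan at all.
        estimate calcsize
    }

Swap "calcsize" for "server" if history-based guesses are accurate
enough for your DLEs.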

-- 
Jon H. LaBadie                  jon AT jgcomp DOT com
 JG Computing
 4455 Province Line Road        (609) 252-0159
 Princeton, NJ  08540-4322      (609) 683-7220 (fax)
