Thanks for the reply,
I'm already using calcsize :)
From time to time I also have [data timeout]
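For anyone following the thread: calcsize is enabled per dumptype in amanda.conf. A rough sketch (the dumptype name is illustrative; comp-user-tar is one of the sample dumptypes shipped with Amanda):

```
# amanda.conf -- sketch of a dumptype that uses calcsize estimates
define dumptype big-html-dir {
  comp-user-tar          # inherit a standard GNU tar based dumptype
  estimate calcsize      # fast size scan instead of a dry-run tar pass
}
```

With "estimate server", Amanda instead guesses from previous runs' history, which is even cheaper but less accurate on the first cycles.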
Jon LaBadie wrote:
> On Wed, Jul 11, 2007 at 02:54:24PM -0400, FM wrote:
>
>> Hello,
>> one of my partitions has more than 100 GB of HTML files inside one
>> folder. I am using tar because we want to exclude some folders next to
>> the huge one. Tar is taking hours to stat all the files inside the big
>> folder.
>>
>> Could it be faster if I create a new partition for this big folder and
>> then use dump to back up the partition?
>>
>
> Tar is known to be slow as, well ... , slow as tar, when dealing with
> directories containing many, many entries. Dump would normally be
> expected to be faster than tar, with the major limitation that
> incrementals can only be done on whole file systems, not directory
> trees.
>
> Tar's slowness is accentuated during Amanda's estimate phase, when it
> may make 2 or 3 additional passes over the directory tree. If you
> can live with reduced accuracy in your estimates, and the DLE doesn't
> change drastically from day to day, consider the calcsize or server
> estimate features, which avoid running tar (or dump) during the
> estimate phase. They are WAY, WAY faster.
>
>
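On the exclusion side, GNU tar DLEs can carry their excludes directly in the dumptype, so the sibling folders never enter the archive. A sketch, assuming Amanda's standard exclude syntax; the dumptype name and patterns are illustrative, and patterns are relative to the DLE's top directory:

```
# amanda.conf -- sketch of a GNU tar dumptype with exclusions
define dumptype html-tree {
  user-tar                  # standard GNU tar dumptype from the samples
  exclude "./cache"         # pattern relative to the DLE directory
  exclude append "./tmp"    # 'append' adds a pattern instead of replacing
  estimate calcsize         # avoid the extra tar passes during estimates
}
```

Alternatively, "exclude list" points at a file of patterns kept inside the DLE, which is handier when the exclusions change often.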