I believe that Richard is correct. Because of ADSM's file-level granularity it is
especially vulnerable in these sorts of situations.
I remember once seeing a directory filled with thousands of trace files, only
24MB in total size, which took 24 hours to back up with ADSM.
Yes, I did have the txnbytelimit, txngroupmax and movebatch parameters all
set correctly.
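(For reference, those are the real ADSM option names; the first is a client
option and the other two are server options. The values below are just
examples, not a recommendation:

    * dsm.opt / dsm.sys (client)
    TXNBYTELIMIT   25600

    * dsmserv.opt (server)
    TXNGROUPMAX    256
    MOVEBATCHSIZE  1000

Even with those tuned, the per-file overhead of walking that one huge
directory dominated.)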
I did exactly what Richard indicated. I excluded the files.
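For anyone wanting to do the same, the entries went into the client's
include-exclude list, roughly like this (the path is made up; if your client
level supports EXCLUDE.DIR, use it as well, since a plain EXCLUDE still makes
the client walk the directory):

    EXCLUDE      /app/traces/.../*
    EXCLUDE.DIR  /app/traces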
A backup system such as Legato would move much faster since it treats the
volume as an image, but then you lose all the granularity.
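Richard's real fix, below, is to fan the contents out into subdirectories.
Just as a rough sketch of what that reorganization could look like (the paths
and the bucketing scheme are hypothetical, and this is a one-off Python
illustration, not anything ADSM provides):

    import os
    import shutil

    SRC = "/app/traces"          # hypothetical flat directory with thousands of files
    DST = "/app/traces_sorted"   # hypothetical target tree

    for name in os.listdir(SRC):
        src_path = os.path.join(SRC, name)
        if not os.path.isfile(src_path):
            continue
        # Bucket on the first two characters of the file name so that
        # no single directory ends up holding thousands of entries.
        bucket = name[:2].lower() or "misc"
        bucket_dir = os.path.join(DST, bucket)
        os.makedirs(bucket_dir, exist_ok=True)
        shutil.move(src_path, os.path.join(bucket_dir, name))

That keeps every directory small enough that traversal, by ADSM or anything
else, stays cheap.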
Nathan
-----Original Message-----
From: Richard Sims [SMTP:rbs AT BU DOT EDU]
Sent: Tuesday, March 09, 1999 7:56 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Slow backup part 2
>The large volume has a deep directory tree with 700-800K files. Many
>directories with more than 1000 files. The ADSM client, whether doing an
>incremental of the whole drive, or an individual directory, runs abysmally
>slow. A selective backup of a directory runs equally slow.
> Operations on the other volumes in this baby run fine.
That's a classic performance problem for *anything* traversing those
directories, including ADSM. Ordinary directories, in any operating system,
are inefficient data structures (basically being flat files, sequentially
searched), which hinder access to the data within them. The more entries in a
directory, the worse the performance. That's why a subdirectory regimen has
to be observed, to balance performance.
I'd recommend mandating that the owner of those directories rearrange the
contents to use subdirectories - not just for ADSM, but for everything in that
system. In the interim, you can Exclude the problem directories from regular
ADSM backups and perform backups only when data changes in there, or on a
special schedule.
Richard Sims, BU