Subject: Re: Include/exclude processing question -Reply
From: Dennis Taylor <TAYLORDC AT GUNET.GEORGETOWN DOT EDU>
Date: Tue, 24 Dec 1996 07:25:04 -0500
If this hasn't been mentioned, a possible workaround on Unix systems is
to create virtual mount points.  Here are some examples (snipped from
our configuration files).

in dsm.sys:

virtualmountpoint /dumps/prod

in dsm.opt:

domain /dumps/prod

And, though perhaps somewhat redundant since the domain statement
already limits the scope, our include/exclude list looks like:

exclude *
include /dumps/prod/*
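
As a rough illustration of how a list like that is typically evaluated
(this is a sketch only, not ADSM's actual matcher; it assumes the usual
"last matching rule wins" behavior, i.e. the list is read bottom-up and
the first rule that matches decides), in Python terms:

# Illustration only -- not ADSM's real matcher.
from fnmatch import fnmatch

RULES = [                      # in options-file order
    ("exclude", "*"),
    ("include", "/dumps/prod/*"),
]

def is_included(path):
    for action, pattern in reversed(RULES):   # read bottom-up
        if fnmatch(path, pattern):
            return action == "include"
    return True                # no rule matched: back it up

# fnmatch's "*" also matches "/", so "exclude *" covers everything:
print(is_included("/dumps/prod/export.dmp"))   # True
print(is_included("/etc/passwd"))              # False

Note that fnmatch's "*" crosses "/" boundaries, which is why the single
"exclude *" line covers the whole tree; ADSM's own wildcard semantics
differ in detail.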

An explicit domain seems to limit the scope of the scan.  The last time
I checked, judging by how quickly the backup completed, this works for
us.

Dennis Taylor
Georgetown University Medical Center
taylordc AT gunet.georgetown DOT edu

>>> "Wayne T. Smith" <wts AT MAIL.CAPS.MAINE DOT EDU> 12/23/96 03:41pm >>>
Francisco Reyes wrote, in part:
> How does ADSM perform the scans? ...

I can't answer this, but whatever algorithm it uses, we're unhappy with
it!

We have a couple of systems where we exclude entire subtrees with very
simple include/exclude statements, yet an incremental backup spends an
inordinate amount of time rummaging around subdirectories where it
cannot possibly have anything useful to do.  For example, one subtree
with 250,000 files in it caused 90+ minutes of client CPU crunching.

So there are at least two usability problems here: (1) client CPU usage
and backup duration are inexcusably large because ADSM rummages around
where it has no business, and (2) COMMTIMEOUT and/or IDLETIMEOUT must
be set ridiculously high because the server and client are ignorant of
what's going on.

One would think that having the ADSM client ask itself a simple
internal question such as "Is everything in this subdirectory
excluded?" would perform better than simply applying every rule to
every file.
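
To make the idea concrete, here is a minimal sketch (not ADSM code; the
exclude pattern and paths are hypothetical, and a real implementation
would also have to verify that no include rule could match anything
beneath a pruned directory):

import os
from fnmatch import fnmatch

# Hypothetical exclude list; "/big/subtree" stands in for the
# 250,000-file directory described above.
EXCLUDES = ["/big/subtree/*"]

def subtree_fully_excluded(dirpath):
    # Crude test: does some exclude pattern swallow everything under
    # dirpath?  (fnmatch's "*" matches "/" too, so one trailing "*"
    # covers the whole subtree.)  A real client must also confirm
    # that no include rule could rescue a file inside it.
    return any(fnmatch(dirpath + "/x", pat) for pat in EXCLUDES)

def candidates(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune subtrees that cannot contribute anything to the
        # backup -- the question the client should be asking itself.
        dirnames[:] = [d for d in dirnames
                       if not subtree_fully_excluded(
                               os.path.join(dirpath, d))]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not any(fnmatch(path, pat) for pat in EXCLUDES):
                yield path     # candidate for incremental backup

With pruning in place, the walk never descends into /big/subtree at
all, so its 250,000 files are never even examined.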

Holiday cheers,
wayne

Wayne T. Smith               mailto:wts AT maine.maine DOT edu
Systems Group -- CAPS        University of Maine System