Subject: Re: small holdingdisk, large FS
From: Toni Mueller <amanda-users AT oeko DOT net>
To: amanda-users AT amanda DOT org
Date: Sat, 11 Jan 2003 22:21:11 +0100

Hi,

On Wed, Apr 17, 2002 at 10:01:11AM +0200, Johannes Niess wrote:
> Jon LaBadie <jon AT jgcomp DOT com> writes:
> > On Tue, Apr 16, 2002 at 10:24:17PM -0500, Dan Debertin wrote:
> > > One of my client hosts has a drive that is larger than my holding disk
> > > -- drive is 18G, holdingdisk is only 4G. No, I can't swap them, and
> > > no, I'd rather not buy a bigger disk right now.
> > > 
> > > I would have thought that Amanda would dump the client to the
> > > holdingdisk in 1G chunks (isn't that what the "chunksize" directive is
> > > for?), as it does the other clients, and then gradually flush those to
> > > tape, in order to keep the drive streaming.

That's what I learned today, too (after getting a bigger & faster
tape drive) :-(
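
For what it's worth, here is roughly what my holdingdisk definition
looks like (directory and sizes are from my setup, so adjust to taste):

    holdingdisk hd1 {
        directory "/var/spool/amanda"   # where dump images land
        use 4096 Mb                     # all the space I have there
        chunksize 1024 Mb               # split images into 1G pieces
    }

So the chunking itself works fine; it just doesn't kick in when the
whole image won't fit on the holding disk.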

> > > But it's not; it uses the holdingdisk for the other clients, and
> > > dumps straight to tape with the large one.

The same happens even if the disk(s) are all local.

> > I don't believe so.  I think the entire fs is dumped to the holding disk
> > before anything gets sent to tape.  If the holding disk is not large
> > enough to hold the fs, it must go directly to tape.

There should be a way around this; otherwise the holding disk idea is
only half as powerful as it could be. A modern tape drive eats data
faster than a reasonable disk can deliver it straight from the file
system (in my case: a DLT drive, and a 36G disk which I arranged to be
idle during my tests).
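
To put rough numbers on it (illustrative, not careful measurements):

    DLT drive, native:           ~5 MB/s sustained
    one busy disk under dump:    ~2-3 MB/s on a good day
    --> the taper starves, the drive stops and repositions

Without the holding disk in between, the drive falls out of streaming
and spends its time shoe-shining instead of writing.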

> > The chunksize parameter is generally used to work around the single
> > file size limit that some os's have.

That's "underused. How hard would it be to fix that?

> On the source level maybe an insane increase to the buffer
> between the two taper processes could help. That leads to heavy RAM
> and OS swap space usage on the tape server.

That I don't understand. Can you please point to a specific place
in the source where I should be looking? Currently I'm using the
Debian package of 2.4.3, but could (and would) tweak the source
if that would be helpful.
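
(From skimming the docs, perhaps the "tapebufs" parameter in amanda.conf
is what is meant here? If I read it right, taper allocates that many
32 KB shared-memory buffers between its reader and writer processes,
20 by default:

    tapebufs 20    # default; more buffers = more shared memory

That is still a far cry from buffering a whole gigabyte, though.)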

> Or you could switch from dump to tar and backup subdirectories that
> each are less than 4 GB.

Tar is even worse, since it hammers the disk no end just to figure out
the prospective backup size, and then goes over it all again to collect
the data, -and- is way, way slower than dump. I don't know whether all
the platforms to be supported have "du", but du is much faster than tar
and could (?) be good enough for producing the estimate. When I made
backups without Amanda, the speed difference was about a factor of 10...
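
For comparison, this is the kind of thing I mean (untested sketch, and
/home is just an example path):

    # du-based estimate: one pass over the metadata only
    du -sk /home

    # what a tar-based estimate effectively does: drag all data through
    tar cf - /home | wc -c

The first touches little more than inodes and directories; the second
reads every file once before the real backup even starts.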


TIA!

Best,
--Toni++

