Subject: Re: Running amdump leads to high CPU load on Linux server
From: Brian Cuttler <brian AT wadsworth DOT org>
To: Brian Cuttler <brian AT wadsworth DOT org>
Date: Tue, 25 Nov 2003 16:03:03 -0500 (EST)

Speaking of disk I/O...

VMS (yes, I always resort to that) has a parameter, settable at both
the system level and the user level, that determines the minimum
number of clusters/blocks allocated to a file whenever it requests
more disk space [unused blocks are returned to the free list in the
master index when the file is closed].  The result is fewer file
growths, done in a more orderly fashion, which hopefully prevents
file fragmentation and speeds up file growth and I/O.  [If I said
"reducing window turns" would it mean anything to anyone?]

Is there an equivalent in Unix?  (I'm less familiar with Unix
ufs/efs/xfs file structure, growth, and fragmentation than I am with
VMS.)

Would it help to use those features on the server when receiving
the dump files from the clients?

Would it also help when the output device is disk rather than tape?
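
For concreteness, here's a rough sketch of the sort of thing I mean,
assuming Unix offers something like posix_fallocate(3), which I
gather POSIX defines.  The file name and size are made up, and
consider it untested -- just an illustration of reserving the space
in one large request instead of growing the file piecemeal:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s file bytes\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_WRONLY | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* Reserve all the space up front so the filesystem can try
         * to grab one big extent.  Unlike VMS, nothing is returned
         * to the free list on close -- you'd have to ftruncate()
         * the file back down yourself. */
        int err = posix_fallocate(fd, 0, (off_t)atoll(argv[2]));
        if (err != 0) {
            /* Returns an errno value directly, not -1 + errno. */
            fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
            return 1;
        }
        close(fd);
        return 0;
    }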



> I'd expect I/O contention to force lower CPU usage...
> 
> Instead, I'd look at the inparallel parameter, or at using the
> spindle id number in the disklist (optional 4th field), to force
> single-threading at least on the server/client, if not on all
> clients.
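> 
> For example (hostnames and dumptypes invented here), giving two DLEs
> that share a physical drive the same spindle number keeps Amanda
> from dumping them at the same time:
> 
>     host1  /home  comp-user-tar  1
>     host1  /var   comp-user-tar  1
>     host1  /usr   comp-user-tar  2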
> 
> 
> 
> > On Mon, Nov 24, 2003 at 10:48:43PM -0500, Eric Siegerman wrote:
> > > On Sun, Nov 23, 2003 at 07:46:32PM -0500, Kurt Raschke wrote:
> > > > ...when amdump runs, the load spikes to between 4.00 and
> > > > 6.00, and the system becomes nearly unresponsive for the duration of
> > > > the backup.  The server is backing up several local partitions, and
> > > > also two partitions on remote servers.
> > > 
> > > Are you short of RAM?  If the system's paging heavily, that'd
> > > make it crawl too.
> > 
> > No, the box has plenty of ram.
> > 
> >  
> > > > I've tried starting amdump
> > > > with nice and setting it to a low priority, but when gtar and gzip are
> > > > started by amanda, the priority setting is somehow lost.
> > > 
> > > Not surprising.  Recall that Amanda runs client/server even when
> > > backing up the server's own DLEs.  The client-side processes are
> > > descendants of [x]inetd, not of amdump, and so don't inherit the
> > > latter's "nice" level.
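> > > 
> > > (If you really wanted the client side niced, one approach --
> > > untested here, and assuming xinetd rather than plain inetd -- is
> > > xinetd's per-service "nice" attribute; everything amandad spawns,
> > > gtar and gzip included, would then inherit that priority.  Paths
> > > are invented:
> > > 
> > >     service amanda
> > >     {
> > >         socket_type = dgram
> > >         protocol    = udp
> > >         wait        = yes
> > >         user        = amanda
> > >         server      = /usr/local/libexec/amandad
> > >         nice        = 19
> > >     }
> > > )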
> > 
> > I realized that about a second after I hit send.  However, the more
> > I look at it, the more I doubt that renicing tar and gzip will
> > help--the box seems to have hard-drive issues.  I suspect there may be
> > a problem with the 3ware RAID card in there, or possibly the driver.
> > 
> >  
> > > > The server
> > > > isn't even trying to back up multiple partitions in parallel,
> > > 
> > > By this do you mean, "only one DLE at a time"; or "only one DLE
> > > *from the server* at a time, along with remote backups in
> > > parallel"?  If the latter, well, of course there's some amount of
> > > server-side work even for the remote DLEs.  Is the compression
> > > for the remote DLEs client- or server-side?  If the latter,
> > > change "some amount" to "a lot" in the previous sentence :-)
> > > 
> > 
> > Well, compression is client-side, and not every DLE is compressed, but
> > as I recall from the runs the past few nights, the server is usually
> > backing up one of the local DLEs as well as one of the remote ones at the
> > same time.  I suppose that if it's trying to store the incoming data
> > from the remote client to the HD at the same time it's trying to back
> > up a local DLE, that could cause contention for the disk array.  I'll
> > try moving the holding disk to another drive (not part of the array on
> > the 3ware card) and see if that improves things.
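> > 
> > Something like this holdingdisk stanza in amanda.conf is what I
> > have in mind, pointed at the separate drive (the path is invented):
> > 
> >     holdingdisk hd1 {
> >         directory "/holding/amanda"  # single disk, off the 3ware array
> >         use -100 Mb                  # use everything but 100 MB
> >     }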
> > 
> > -Kurt
>