Subject: Re: upgrade to 2.5.1p1 running extremely slow
From: Jaz Singh <jazee AT tds DOT net>
To: Steffan <svigano AT boothcreek DOT com>, amanda-users AT amanda DOT org
Date: Tue, 07 Nov 2006 11:36:09 -0600
Steffan wrote:
> So,
> Last night was the first run of my upgraded Amanda server.  In fact,
> this morning is too... since it's still running and is only about 1/4
> of the way done. (it used to take about 4-5 hours on average... this
> one has been running for 10 so far)    The server CPU is pegged with
> tar and gzip processes (mostly tar), even though I'm only backing up
> other machines via dump.   Did Amanda always use tar when writing
> dumps to tape and I'm just now noticing it?  The only change to the
> config files after the upgrade was the addition of "tape_splitsize 20
> Gb" to the global section of amanda.conf.
>
> Here's the disklist entry for the currently dumping remote host:
>> bertha   /dev/aacd0s2g   comp-user
>
> Here are the relevant amanda.conf entries:
>
>> define dumptype global {
>>     comment "Global definitions"
>>     index yes
>>     tape_splitsize 20 Gb
>> }
>
>> define dumptype comp-user {
>>     global
>>     comment "Non-root partitions on reasonably fast machines"
>>     compress client fast
>>     priority medium
>> }
>
>
> Debug logs don't show anything out of the ordinary.  I'm currently
> using GNU tar 1.13.25 and FreeBSD 4.7 on that system.  Another thing
> I find strange is that the holding disk is not being used by the
> remote hosts, even though the initial amcheck showed no complaints
> about the holding disk being full or about permissions.  Should I have
> rebuilt the config files from scratch instead of reusing my existing
> ones?  Anything else I might be overlooking?
>
> Thanks
>
>
Depending on the file system, a 20 GB split file may take a very long
time to generate.  Splitting tries to build each split file in
split_diskbuffer and then dumps that file to tape.

I have found that it doesn't seem to work quite right.  My experience is
that the split file does get created, but if the backup source is larger
than tape_splitsize, the disk buffer is ignored and the split goes
straight to tape.  On my system (JFS), creating the file gets
progressively slower: the first 500 MB is written in a few seconds, the
second takes a little longer, and by the time the file reaches 5 GB it
takes several minutes to add another 500 MB.  A 30 GB file took two
hours to create!

I have switched to a 500 MB tape_splitsize and it works well.
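
In amanda.conf terms, that means something along these lines in the
global dumptype.  The split_diskbuffer path below is only an example --
point it at whatever scratch area has enough free space, and check the
amanda.conf man page for your version to confirm where the parameter
is accepted:

    define dumptype global {
        comment "Global definitions"
        index yes
        tape_splitsize 500 Mb
        split_diskbuffer "/var/amanda/split"    # example path, adjust to your setup
    }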

I suggest watching your split_diskbuffer to see if that is the problem. 
If the file takes forever to grow, that is where the time is being consumed.
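
If it helps, here is a rough sketch of the sort of thing I mean: a tiny
Python script that logs the size of the chunk file once a minute.  The
path is a placeholder -- point it at the file Amanda creates under your
split_diskbuffer directory.

    #!/usr/bin/env python
    # Rough sketch: log how fast the split buffer file grows.
    import os
    import time

    PATH = "/var/amanda/split/chunk"    # placeholder path

    while True:
        try:
            mb = os.path.getsize(PATH) / (1024.0 * 1024.0)
            print("%s  %.0f MB" % (time.strftime("%H:%M:%S"), mb))
        except OSError:
            print("%s  file not there yet" % time.strftime("%H:%M:%S"))
        time.sleep(60)

Even just running ls -lh on the file by hand every few minutes will tell
you the same thing.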

