Amanda-Users

Subject: Re: Several large partitions, no spool
From: "Stefan G. Weichinger" <monitor AT oops.co DOT at>
To: amanda-users AT amanda DOT org
Date: Wed, 1 Dec 2004 02:07:02 +0100
Hi, Brian,

on Tuesday, 30 November 2004 at 19:24 you wrote to amanda-users:

>> It's Ok, no problem, I just wanted to be sure that we talk about the
>> same thing and not about any print-spooler or something. You know,
>> using the same terms helps ;-)

BC> Yes, common terminology helps. There was this one time in SF... but that
BC> is an almost wholly unrelated digression.

SF .. ? ;-) Science-Fiction ? SourceForge ?

Just a joke ...

>> As far as I can see from your report this has been the second run of this
>> configuration as the planner added 13 out of 14 DLEs as "new disk".

BC> Yes. After the very first run following (reinstalling the physically
BC> failed) jukebox (note: do not let anyone load tapes with the paper
BC> instructions), which covered only the root partition, I added the
BC> /usr5/* directories and the /usr1 partition. The idea being that /usr1
BC> and root, being "relatively small", would use dump, and the directories
BC> on the RAID-based partition would use tar.

Give us the output of "df -h" to see what the partitions are.

>> So it is very likely that not all of your level 0 backups that have to
>> be done first for new DLEs will fit on your tapes.

BC> Given a large enough holding area I'd expect that any DD

DLE? ;-)

BC> that hit EOT would retry. This is how it operates on my Solaris
BC> 9/jukebox/LTO amanda server. The problem here, I believe, is the
BC> inability to restart the DLE from tar on down; and, with no
BC> holding area, there is no file to attempt to DD to the next tape
BC> volume.

As this mail is pretty big, for now I'll just say: use a holdingdisk.

>> I understand that you can't run this config every day as it seems to
>> have run for full 3 days this time.

BC> After the initial run I added /usr5/dumps (which is on the same RAID
BC> partition) and run time dramatically improved.

/usr5/dumps is a holdingdisk in your amanda.conf?

BC> ** This is also misleading, as the failed level 0s from the previous run
BC>    should have re-run at level 0, yet many of them ran at level 1.

BC> This is a "second" problem, a result of the first but a completely
BC> different part of the logic.

phew ...

>> Please show me your amanda.conf also so I can see your tapetype (seems
>> to be 160000 Mb "long") and dumptypes.

BC> The initial run had a tapetype of

BC> # Quantum sdlt 320, I don't know filemark, mostly its the
BC> # length that is important anyway.
BC> define tapetype SDLT {
BC>     comment "QUANTUM SDLT320"
BC>     length 160 mbytes
BC>     filemark 100 kbytes         # don't know a better value
BC>     speed 100 kbytes            # ditto
BC> }

BC> Which is incorrect for the SDLT 320; I've increased it to

BC>     length 160000 mbytes

BC> which I'd thought (mistakenly ? ) was correct for the drive.

AFAIK you may also specify this as

160 Gb

in your config. But OK, doesn't matter that much.
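
Corrected, the whole definition would then look roughly like this
(untested on my side; filemark and speed are still guesses, and the
tapetype utility shipped with AMANDA can measure real values for you):

define tapetype SDLT {
    comment "QUANTUM SDLT320"
    length 160 gbytes       # or 160000 mbytes if your version rejects "gbytes"
    filemark 100 kbytes     # still a guess, measure if it matters
    speed 100 kbytes        # ditto
}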

>> I am not sure right now if your AMANDA-version supports the parameter
>> "taperalgo" yet, also I am not sure if it helps you when you don't
>> have any holdingdisk.

BC> There is a dumporder parameter in amanda.conf which I believe was
BC> built on the current template when I installed amanda on this server.

BC> I do not know if dumporder is utilized when scheduling the clients or
BC> when scheduling taper.

See this "I am not sure", "I do not know" ?

Get a holdingdisk. Even if it is pretty small compared to your whole
data. It will help, believe me.

(Maybe define "chunksize" pretty small for a start, if needed).
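
For illustration only (the path matches the /usr5/dumps you mentioned,
but the sizes are made up, adjust them to your box), a minimal
holdingdisk block in amanda.conf could look like:

holdingdisk hd1 {
    comment "holding area on /usr5"
    directory "/usr5/dumps"   # must exist and be writable by the amanda user
    use 50 Gb                 # how much space AMANDA may claim there
    chunksize 1 Gb            # split dumps into 1 Gb chunks
}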

>> From your second posting I now see that you now have got a
>> holdingdisk, which will help you A LOT if it is of any reasonable
>> size. This will buffer things and enable AMANDA to retry things.

BC> Yes, I have yet to hear back from the dept contact though. I don't know
BC> if I'm going to be able to keep a holding area on /usr5. It is more than
BC> likely to interfere with user processing or be unavailable when I need
BC> it for amanda. I really should have another spindle, ideally as large
BC> as the total usage of the top 2 users on /usr5 - however that is about
BC> 300 Gig.

BC> Also, having a holding area on the same "partition" as the file structure
BC> being saved has got to be a questionable move, RAID-based or not.

Getting the backups right is top-priority.
Getting them fast is secondary, at least in the beginning.

How big is this holdingdisk now?

> top 2 users ?

Getting this on the same partition just influences your time, not the
result, AFAIK.

>> Run "amadmin samar disklist" and have a close look how AMANDA
>> interprets your whole config.

BC> Cool, never ran that before. I've included the head of the output
BC> below. It looks to be using dump vs. tar where I'd intended it to.

Ok then, just a check.

>> From your report it seems to be clear that gnutar is run, but I don't
>> know if you know and want that.

BC> The idea was to run gnutar (vendor-specific tars were never encouraged,
BC> from what I recall of other discussions, true)

yes ...

BC> for the /usr5 directories
BC> since I have no tape that will support a DLE anywhere near the size of
BC> this partition (0.8 TBytes).

You get more flexibility, yes. Add exclusions, if possible ....
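
For example (the dumptype name and the exclude-file name here are just
placeholders), a gnutar dumptype with an exclude list could look like:

define dumptype usr5-tar {
    comment "gnutar with exclusions for the /usr5 DLEs"
    program "GNUTAR"
    exclude list ".amanda.excludes"   # read relative to the top of each DLE
    index yes
}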

>> Gene is right with pointing at the missing exclusion, you also see the
>> active exclusions in the output of the mentioned command.

BC> This should be a non-issue though, since I am using dump on /root ?

If /root is a separate partition ...

I tend to use one "program" for the whole config as it is easier to
configure (and wrap your head around).

Or, to ask it the other way round: what are the advantages of using DUMP
for /root in this case? Are there any?

>> BTW, also have a look at your "columnspec"-parameter to pretty up your
>> reports.

BC> Yes, I'll take a second pass at that.
BC> I should install the more recent version of amanda, since I saw
BC> that it now supports larger "units" of measurement.

;-) Yes, this helps sometimes, but it also means "GOING BETA" ...
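
Until then, a columnspec line like the following in amanda.conf (the
column names are the standard ones, the widths are just an example)
widens the fields that tend to overflow:

columnspec "HostName=0:12,Disk=1:18,OrigKB=1:7,OutKB=1:7"

The first number in each pair is the space printed before the column,
the second is the column width.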

best regards,
Stefan

Stefan G. Weichinger
mailto:monitor AT oops.co DOT at




