Amanda-Users

Subject: Re: restore over multiple tape
From: Franck GANACHAUD <franck.ganachaud AT altran DOT com>
To: Chris Hoogendyk <hoogendyk AT bio.umass DOT edu>
Date: Tue, 19 May 2009 16:52:30 +0200
I didn't specify these.
The original diskset is 250GB, to be saved to 35GB DLT4 tapes using an 8-slot autoloader. I'm going to split this diskset by going down one level in the filesystem, which will give me a little less than 150 disksets.

I just have to think about a mechanism to update the diskset list automatically.
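(With roughly 150 disksets out of 250GB, each entry averages well under 2GB, so each one should fit comfortably on a single 35GB tape. For illustration only -- the top-level path, client name, dumptype and disklist location below are assumptions, not taken from this thread -- such a mechanism could be a small script that rebuilds the disklist from the first-level subdirectories:

  #!/bin/sh
  # Rough sketch: regenerate the Amanda disklist from the first-level
  # subdirectories of the big filesystem. All names here are placeholders.
  TOP=/bigdisk                             # top of the 250GB filesystem
  HOST=client.example.com                  # Amanda client name
  DUMPTYPE=comp-user-tar                   # a dumptype defined in amanda.conf
  DISKLIST=/etc/amanda/DailySet1/disklist  # disklist for this config

  for d in "$TOP"/*/ ; do
      [ -d "$d" ] && printf '%s %s %s\n' "$HOST" "${d%/}" "$DUMPTYPE"
  done > "$DISKLIST".new && mv "$DISKLIST".new "$DISKLIST"
)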

Franck

Chris Hoogendyk wrote:
Splitting up your DLE is what I would recommend for a variety of reasons. I'm working with someone now who is configuring the backup of a 3TB RAID array to LTO4. Initially they were saying that it needed to be one large DLE. Finally, they agreed to break it up into a bunch of DLEs. They now have it running fairly smoothly. How you do that depends on how the data on the array is organized. But the end result is that it smooths out a lot of things. The backup load is distributed more evenly over the dump cycle, and your recoveries are easier, among other things.
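(As an illustration of that kind of split -- the host name, paths and dumptype here are made up, not from this thread -- the disklist goes from one entry covering the whole array to one entry per top-level directory, each of which Amanda then plans, dumps and restores independently:

  # before: one huge DLE covering the whole array
  raid.example.com  /data            comp-user-tar
  # after: one DLE per top-level directory
  raid.example.com  /data/projects   comp-user-tar
  raid.example.com  /data/archive    comp-user-tar
  raid.example.com  /data/home       comp-user-tar
)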

Also, I don't recall if you said what kind of tapes or how large the DLE is. All of that comes into play as you design how your backups are configured.


---------------

Chris Hoogendyk

-
  O__  ---- Systems Administrator
 c/ /'_ --- Biology & Geology Departments
(*) \(*) -- 140 Morrill Science Center
~~~~~~~~~~ - University of Massachusetts, Amherst
<hoogendyk AT bio.umass DOT edu>

---------------
Erdös 4




Franck GANACHAUD wrote:
Do you think I should split the big diskset into as many small disksets as possible, so that Amanda can restore as fast as possible without having to scan a complete library?

Franck

Dustin J. Mitchell wrote:
On Mon, May 18, 2009 at 11:41 AM, Franck GANACHAUD
<franck.ganachaud AT altran DOT com> wrote:
The underlying question is: does Amanda have to restore the five tapes, concatenate the dump, and run tar xzf over the result?

Yes -- it does exactly that.  There's talk of how to fix that using
the Application API, but it's still a long way off.  Beyond the sheer
volume of work to implement it, I think the major challenge will be
handling and storing the metadata describing the location of a
particular tar file without using so much overhead as to render it
unusable.
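(As a rough sketch of that restore-concatenate-extract sequence -- the config name, tape device, host and disk below are placeholders, it assumes one piece of the split dump per tape and a gzip-compressed GNU tar dump, and in practice amrecover or amfetchdump drive this for you:

  # load each tape in turn, pull this dump's piece off it, and append it
  for slot in 1 2 3 4 5; do
      amtape DailySet1 slot $slot
      amrestore -p /dev/nst0 client.example.com /data >> /tmp/data.dump
  done
  # the concatenated image is an ordinary compressed tar archive
  tar xzf /tmp/data.dump
)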


