On 2009-06-30 19:52, Chris Hoogendyk wrote:
> So, the question is: if Amanda has more than one holding disk
> (partition), and they differ in size, will Amanda know when the smaller
> one is inadequate for a particular DLE and explicitly choose the larger
> one? Also, if I have specified spindle numbers in my disklist, so that
> Amanda will avoid doing parallel dumps from the same spindle, is there
> any way of informing Amanda of the spindle numbers for the holding disks
> (partitions) and taking that into account in the planning?
No problem at all.

Make a config entry for each holdingdisk. For each holdingdisk you can
specify the particulars of that disk, such as the path to the top-level
holding directory and the amount of free space that should be left
unused. Make sure to specify a chunksize that will fit easily on the
smallest disk; I make my chunksize 1 GB.
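A minimal sketch of two such entries in amanda.conf (the names, paths,
and sizes below are just examples, not a recommendation):

```
# amanda.conf -- one "holdingdisk" section per holding area
holdingdisk hd1 {
    comment "small, fast disk"
    directory "/holding1"        # top-level holding directory
    use -500 Mb                  # use everything except 500 MB
    chunksize 1 Gb               # split dump images into 1 GB chunks
}
holdingdisk hd2 {
    comment "larger disk"
    directory "/holding2"
    use 200 Gb                   # use at most 200 GB
    chunksize 1 Gb               # same chunksize everywhere
}
```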
Amanda will spread the data over all the holdingdisks, avoiding any
problem when one DLE would not fit in a single area. As a side benefit
you'll also get some improvement in throughput, because the taper
reading finished dump images will compete less with the dumpers
writing to disk.
Keeping them as separate filesystems instead of one large logical
volume also makes it easier to add, remove, or swap disks in the
future. And a disk error on one of the disks in a RAID-0 LVM is much
worse than on independent filesystems.
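For completeness, the spindle numbers mentioned in the question are
the optional fourth field of a disklist entry; DLEs sharing a spindle
number on the same host are not dumped in parallel. A hypothetical
example (hostname and dumptype are made up):

```
# disklist:  hostname  diskname  dumptype  [spindle]
client1  /home  comp-user-tar  1
client1  /var   comp-user-tar  1   # same spindle as /home: not dumped in parallel
client1  /opt   comp-user-tar  2
```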
--
Paul Bijnens, Xplanation Technology Services Tel +32 16 397.525
Interleuvenlaan 86, B-3001 Leuven, BELGIUM Fax +32 16 397.552
***********************************************************************
* I think I've got the hang of it now: exit, ^D, ^C, ^\, ^Z, ^Q, ^^, *
* quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye, ~., *
* stop, end, ^]c, +++ ATH, disconnect, halt, abort, hangup, KJOB, *
* ^X^X, :D::D, kill -9 1, kill -1 $$, shutdown, init 0, Alt-F4, *
* Alt-f-e, Ctrl-Alt-Del, Alt-SysRq-reisub, Stop-A, AltGr-NumLock, ... *
* ... "Are you sure?" ... YES ... Phew ... I'm out *
***********************************************************************