Amanda-Users

Subject: Re: ideal amanda.conf configuration
From: Paul Bijnens <paul.bijnens AT xplanation DOT com>
To: Sergio A Lima Jr <sergioajr AT ig.com DOT br>
Date: Mon, 26 Jul 2004 10:19:34 +0200
Sergio A Lima Jr wrote:

I'm having difficulties creating and managing backups with the Amanda server.

My problem: I haven't found an ideal configuration for running backups.

My scenario:

        1. The backup server machine runs GNU/Linux kernel 2.4 with an ext3
file system.  The tape device is a Sony SDT-2000E, with 2 GB
uncompressed tapes (single slot).

        2. Backups must run 3 times a week, and the backup routine covers
3 servers.

        3. The total volume of the files is approximately 22 GB.


Unless I misunderstood, you seem to have a set of conflicting
requirements here.
You cannot back up 22 GByte to 2 GByte tapes running only 3 times a week.

Your tape capacity is 2 GB, and you need to back up 22 GB.
That means that you need 11 tapes to do a full backup (uncompressed).
Let's assume the compression ratio is 50% (optimistic); that still
leaves 11 GB, so you need 6 tapes to do the full backup.
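
For reference, Amanda learns the tape capacity from a tapetype
definition in amanda.conf.  A minimal sketch for this drive could
look like the following; the name is made up, and the length,
filemark and speed values are placeholders that you should measure
yourself with the tapetype utility shipped in the Amanda sources:

  define tapetype SONY-SDT2000 {
      comment "Sony SDT-2000E, 2 GB native -- placeholder values, measure your own"
      length 1900 mbytes
      filemark 100 kbytes
      speed 200 kbytes
  }
  tapetype SONY-SDT2000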

Amanda tries to spread the full backups over a "dumpcycle".
Each run, a few of the disklist entries (DLEs) get a full backup;
the rest get an incremental dump.

That means we need at least 6 tapes in each dumpcycle, even in the
best case where no data changes and incremental dumps are minimal.
If data does change, then the incremental backups need space on the
tapes too, so you need even more tapes in each cycle.  Let's say 10
tapes fit.

The total number of tapes (the tapecycle) is best double or more
the number of tapes used in each dumpcycle.

We spread the 10 tapes over 2 weeks, using one tape each working
day (5 working days x 2 weeks = 10 runs):

  dumpcycle 14 days
  runspercycle 10
  runtapes 1
  tapecycle 20 tapes

If you really really insist on running only 3 times each week,
then you could try this (3 runs a week over a 3-week cycle gives
9 runs per cycle):

  dumpcycle 21 days
  runspercycle 9
  runtapes 1
  tapecycle 18 tapes

I already indicated this is very optimistic, as it assumes the data
does not change very much during the cycle.  If you have less
compressible data, or larger incrementals, you could set:

  dumpcycle 21 days
  runspercycle 15
  runtapes 1
  tapecycle 30 tapes

Increasing the dumpcycle results in fewer full backups each run,
but probably means that the average size of the incrementals also
increases.

If you do have more tapes, just enlarge the tapecycle value.
The idea is to keep the dumpcycle as small as possible, and
the tapecycle as large as possible.
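
For example, if you happened to have 40 tapes, you could keep the
first schedule above unchanged and simply retain more history:

  dumpcycle 14 days
  runspercycle 10
  runtapes 1
  tapecycle 40 tapes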

There is probably another problem.  Each disklist entry (DLE) needs
to fit completely on one tape.  Your tape capacity is 2 GByte.
That means that, after compression, the full backup of a DLE
should be less than 2 GByte.  The same tape should also have
enough space to hold all the other incrementals of that run!

Given that those 22 GByte are divided over 3 hosts, you probably
need to use GNUTAR plus its include/exclude mechanisms to split
larger partitions up into smaller DLEs.
Have a look at the last example in the $SRCS/example/disklist file;
a sketch of the idea follows.
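
As a sketch of that idea (the host and directory names here are
hypothetical, and the exact include/exclude syntax depends on your
Amanda version, so do check that example file), a large /home could
be split over two DLEs:

  server1  /home-a-l   /home {
      comp-user-tar
      include "./[a-l]*"
  }
  server1  /home-rest  /home {
      comp-user-tar
      exclude "./[a-l]*"
  }

Each DLE then gets its own full backup, small enough to fit on one
tape, and Amanda can schedule the two on different days.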

One final remark:  it takes a few cycles before Amanda can
spread the backups more or less optimally over the dumpcycle.
This settles faster if you add DLEs a few at a time each run.
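
For instance (hypothetical hosts again), the first run's disklist
could hold only one host's entries, and you add the others before
later runs:

  # first run: one host only
  server1  /usr/local  comp-user-tar
  # before the second run, add:
  # server2  /var       comp-user-tar
  # before the third run, add:
  # server3  /data      comp-user-tar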


--
Paul Bijnens, Xplanation                            Tel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUM    Fax  +32 16 397.512
http://www.xplanation.com/          email:  Paul.Bijnens AT xplanation DOT com
***********************************************************************
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, F6, *
* quit,  ZZ, :q, :q!,  M-Z, ^X^C,  logoff, logout, close, bye,  /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* kill -9 1,  Alt-F4,  Ctrl-Alt-Del,  AltGr-NumLock,  Stop-A,  ...    *
* ...  "Are you sure?"  ...   YES   ...   Phew ...   I'm out          *
***********************************************************************


