Subject: Problem configuring full dump
From: Gertjan van Oosten <gertjan AT West DOT NL>
To: amanda-users AT amanda DOT org
Date: Tue, 20 Apr 2004 13:16:05 +0200
Hello,

I'm having a problem configuring Amanda-2.4.4p2 (on sun-sparc-solaris8)
to always do a full dump of a filesystem.  The filesystem in question is
over 100 GByte, and no matter what I try, when Amanda runs, the planner
asks the client for a size estimate for a level 0 dump of this
filesystem (good), and then for a size estimate for a level 1 dump as
well (not good).  The level 1 estimate takes too long, so Amanda aborts
with the (in)famous:

FAILURE AND STRANGE DUMP SUMMARY: 
  datahost    /data lev 0 FAILED [Estimate timeout from datahost]


In my disklist I have:

  datahost /data always-full


Excerpt from my amanda.conf:

  inparallel 4
  dumporder "Ssss"
  netusage  10000 Kbps

  dumpcycle 1 day
  runspercycle 5
  tapecycle 5 tapes

  bumpsize 20 Mb
  bumpdays 1
  bumpmult 4

  etimeout 300
  dtimeout 1800
  ctimeout 30

  tapebufs 30

  runtapes 1
  tpchanger "chg-zd-mtx"
  tapedev "/dev/rmt/0bn"
  rawtapedev "/dev/null"
  changerfile "/opt/amanda-2.4.4p2/etc/amanda/tapehost/changer"
  changerdev "/dev/scsi/changer/c3t0d0"

  maxdumpsize -1

  tapetype HP-LTO-2
  define tapetype HP-LTO-2 {
      comment "HP LTO-2 Ultrium (hardware compression off)"
      length 201216 mbytes
      filemark 0 kbytes
      speed 23663 kps
  }

  define dumptype global {
      comment "Global definitions"
      index yes
  }

  define dumptype always-full {
      global
      comment "Full dump of this filesystem always"
      compress none
      priority high
      dumpcycle 0
      strategy noinc
  }


What I want is simple: Amanda should do a level 0 dump of that
filesystem, and not try to find out how large a level 1 dump of this
filesystem would be (it wastes a *HUGE* amount of time, almost an hour).
I want a level 0 or nothing at all.  Is that possible, and if so, how?
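One thing I've been thinking of trying (untested, and I'm not even sure
2.4.4p2 accepts all of these parameters) is to make the dumptype even more
explicit about never wanting incrementals, something like:

  define dumptype always-full {
      global
      comment "Full dump of this filesystem always"
      compress none
      priority high
      dumpcycle 0
      strategy noinc
      skip-incr yes      # skip this disk when only an incremental would be due
      # estimate server  # only if this Amanda version knows the 'estimate' parameter
  }

but I'd rather hear from someone who knows whether the planner can be told
to skip the level 1 estimate altogether.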

By the way, even though the above specifies:

  etimeout 300      # number of seconds per filesystem for estimates.
  dtimeout 1800     # number of idle seconds before a dump is aborted.

the estimate is not aborted after 300 seconds but only after 1800
seconds.  That doesn't seem right, does it?
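(I suppose I could work around it by simply giving the estimates more
time; if I read the amanda.conf man page correctly, a negative etimeout is
taken as the total number of seconds for all filesystems on a client
rather than per filesystem, so something like

  etimeout 3600     # up to an hour per filesystem for estimates
  # or, as a per-client total instead:
  etimeout -7200    # total estimate budget for all filesystems on one client

should at least let the level 1 estimate finish.  But that only hides the
real problem.)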

Furthermore, even after the planner decides to timeout and cut the
connection, the amandad/sendsize/ufsdump processes keep on running on
the client (until they're done, in fact, but they have nowhere to send
the data to).  Shouldn't they be terminated as well?

Kind regards,
-- 
-- Gertjan van Oosten, gertjan AT West DOT NL, West Consulting B.V., +31 15 2191 600
