Subject: Amanda not backing up Veritas file systems correctly
From: Jason.Brooks AT windriver DOT com
To: amanda-users AT amanda DOT org
Date: Thu, 27 May 2004 13:03:28 -0700
Hello,

I have been using amanda for nearly two years now.  Most things seem pretty
easy to figure out...  At least if you like source code and cscope...  :)

I am currently running amanda-2.4.4p2 on both the server and the client; the
server is a Red Hat 7.x system.

The client is a recently added Solaris 9 server running ClearCase.  Its /,
/bybee1, and /var filesystems use ufs; the /bybee[2..10] filesystems use
vxfs.  Amanda backs them up fine, but if I mount their snapshots separately
and then try to back up the snapshots, they are mistaken for ufs
filesystems, so ufsdump is run and dies.

Ok, more information:

The filesystems I am testing against are:
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d13s2   51423232   29078 48182027     1%    /bybee9
/dev/dsk/c1t0d18s2   51423232   29078 48182020     1%    /bybee9SnapVol

/bybee9 is simply a vxfs filesystem mounted at /bybee9
/bybee9SnapVol is a snapshot of /bybee9 mounted at /bybee9SnapVol
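
For the record, the snapshot mount was done roughly like this (from memory,
so treat the exact options as approximate; vxfs snapshots are mounted with
the snapof option):

        mount -F vxfs -o snapof=/bybee9 /dev/dsk/c1t0d18s2 /bybee9SnapVol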

I am running planner directly on the server:
        planner Daily2 bybee /bybee9SnapVol
        planner Daily2 bybee /dev/dsk/c1t0d18s2 

If I run "planner Daily2 bybee /bybee9", I get a successful run of vxdump
(as seen in /tmp/amanda/sendsize).

However, if I run planner against either /bybee9SnapVol or its device file,
sendsize tries to run ufsdump instead.  The Solaris "fstyp" program reports
that the device files contain vxfs filesystems (see the check below).  Where
is amanda going wrong (or am I, for that matter)?
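
Paraphrasing my terminal session (not a verbatim capture):

        # fstyp /dev/rdsk/c1t0d18s2
        vxfs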

*******************************************
* disklist entries I created for my tests *
*******************************************

bybee /bybee9SnapVol        nocomp-root 3
bybee /bybee9               nocomp-root 3
bybee /dev/dsk/c1t0d18s2    nocomp-root 3
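
In case it matters, nocomp-root is an ordinary DUMP dumptype; paraphrased
from our amanda.conf, it looks roughly like this:

        define dumptype nocomp-root {
            program "DUMP"
            compress none
            priority high
        }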

******************************
* failed sendsize debug file *
******************************

sendsize: debug 1 pid 8725 ruid 11600 euid 11600: start at Thu May 27
12:18:00 2004
sendsize: version 2.4.4p2
sendsize[8728]: time 0.004: calculating for amname '/bybee9SnapVol', dirname
'/bybee9SnapVol', spindle 3
sendsize[8728]: time 0.005: getting size via dump for /bybee9SnapVol level 0
sendsize[8725]: time 0.005: waiting for any estimate child: 1 running
sendsize[8728]: time 0.009: calculating for device '/bybee9SnapVol' with ''
sendsize[8728]: time 0.009: running "/usr/sbin/ufsdump 0Ssf 1048576 -
/bybee9SnapVol"
sendsize[8728]: time 0.010: running /usr/local/libexec/killpgrp-2.4.4p2
sendsize[8728]: time 0.021: Unable to create temporary directory in any of
the directories listed below:
sendsize[8728]: time 0.022:     /tmp/
sendsize[8728]: time 0.023:     /var/tmp/
sendsize[8728]: time 0.023:     /
sendsize[8728]: time 0.024: Please correct this problem and rerun the
program.
sendsize[8728]: time 0.024:   DUMP: `/bybee9SnapVol' is not on a locally
mounted filesystem
sendsize[8728]: time 0.025:   DUMP: The ENTIRE dump is aborted.
sendsize[8728]: time 0.025: .....
sendsize[8728]: estimate time for /bybee9SnapVol level 0: 0.016
sendsize[8728]: no size line match in /usr/sbin/ufsdump output for
"/bybee9SnapVol"
sendsize[8728]: .....
sendsize[8728]: estimate size for /bybee9SnapVol level 0: -1 KB
sendsize[8728]: time 0.026: asking killpgrp to terminate
sendsize[8728]: time 1.019: done with amname '/bybee9SnapVol', dirname
'/bybee9SnapVol', spindle 3
sendsize[8725]: time 1.020: child 8728 terminated normally
sendsize: time 1.020: pid 8725 finish time Thu May 27 12:18:01 2004
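
One hunch, from poking around sendsize.c with cscope: the estimate code
seems to pick between ufsdump and vxdump based on the filesystem type it
finds for the disk in /etc/vfstab, not on what the kernel (or fstyp) says.
Since I mount the snapshot by hand, it has no vfstab entry, which might
explain the fallback to ufsdump.  If that reading is right, an entry along
these lines (untested) might be what it wants:

        /dev/dsk/c1t0d18s2  /dev/rdsk/c1t0d18s2  /bybee9SnapVol  vxfs  -  no  -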


 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jason Brooks ~ (503) 641-3440 x1861
      Direct ~ (503) 924-1861
Email to: jason.brooks AT windriver DOT com
Twiki: http://twiki.wrs.com/do/view/Main/JasonBrooks

Senior Systems Administration Analyst 
Wind River Systems
8905 SW Nimbus ~ Suite 255      
Beaverton, Or 97008
