Hello list,
My current bacula director does seem to be able to connect to its clients, but for some reason the backups are much smaller than expected.
Terminated Jobs:
 JobId  Level    Files     Bytes  Status   Finished         Name
====================================================================
    43  Incr         0         0  OK       27-Mar-13 03:05  cloud.mydomain.com
    44  Incr         0         0  OK       27-Mar-13 03:05  beta.mydomain.com
    45  Incr         0         0  OK       27-Mar-13 03:05  mail.mydomain.com
    46  Incr         0         0  OK       27-Mar-13 03:05  chef.mydomain.com
    47  Full         1   539.4 K  OK       27-Mar-13 03:10  mydomain_BackupCatalog
    48  Incr         0         0  OK       28-Mar-13 03:05  cloud.mydomain.com
    49  Incr         0         0  OK       28-Mar-13 03:05  beta.mydomain.com
    50  Incr         0         0  OK       28-Mar-13 03:05  mail.mydomain.com
    51  Incr         0         0  OK       28-Mar-13 03:05  chef.mydomain.com
    52  Full         1   554.5 K  OK       28-Mar-13 03:10  mydomain_BackupCatalog
Taking my mail server as an example:
Terminated Jobs:
 JobId  Level    Files     Bytes  Status   Finished         Name
======================================================================
    14  Full       197   20.36 M  OK       22-Mar-13 02:10  mail.mydomain.com
    18  Incr         0         0  OK       22-Mar-13 23:05  mail.mydomain.com
    21  Full       197   20.36 M  OK       23-Mar-13 07:59  mail.mydomain.com
    22  Full       197   20.36 M  OK       23-Mar-13 08:01  mail.mydomain.com
    25  Incr         0         0  OK       23-Mar-13 23:05  mail.mydomain.com
    30  Diff         0         0  OK       24-Mar-13 23:05  mail.mydomain.com
    35  Incr         0         0  OK       25-Mar-13 23:05  mail.mydomain.com
    40  Incr         0         0  OK       26-Mar-13 03:05  mail.mydomain.com
    45  Incr         0         0  OK       27-Mar-13 03:05  mail.mydomain.com
    50  Incr         0         0  OK       28-Mar-13 03:05  mail.mydomain.com
Even full backups of my mail server, which should be in the GIGABYTES, are only on the order of 20 MB or so in the current backup rotation.
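In case it's useful, I'm planning to run bconsole's estimate command to see exactly which files a Full of this job would select (syntax taken from the manual; I haven't run it yet, so the exact form may need adjusting):

```
*estimate job=mail.mydomain.com level=Full listing
```

As I understand it, the "listing" keyword prints every file the FileSet matches, which should show right away whether only a small directory tree is being picked up.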
From my bacula-dir.conf here's my director:
Director {                             # define myself
  Name = storage.mydomain.com
  DIRport = 9101                       # where we listen for UA connections
  QueryFile = "/etc/bacula/query.sql"
  WorkingDirectory = "/var/spool/bacula"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 1
  Password = "secret"                  # Console password
  Messages = Daemon
}
Mail job:
Job {
  Name = "mail.mydomain.com"
  Type = Backup
  Client = mail.mydomain.com
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = "Default"
  Write Bootstrap = "/var/spool/bacula/%c.bsr"
}
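I realize I didn't paste my FileSet; "Full Set" is the name that comes from the sample bacula-dir.conf. If I remember right, the stock sample only includes Bacula's own sbin directory, roughly like this (paraphrased from memory of the shipped sample, so details may differ):

```
FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /usr/sbin          # the stock sample backs up only this directory
  }
  Exclude {
    File = /var/spool/bacula
    File = /tmp
    File = /proc
  }
}
```

A FileSet like that would be consistent with fulls of only ~200 files / 20 MB, so that's one of the things I'll be double-checking.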
Schedule:
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 03:05
  Run = Differential 2nd-5th sun at 03:05
  Run = Incremental mon-sat at 03:05
}
Client:
# Client (File Services) to backup
Client {
  Name = mail.mydomain.com
  Address = mail.mydomain.com
  FDPort = 9102
  Catalog = JokefireCatalog
  Password = "secret"          # password for
  File Retention = 30d         # 30 days
  Job Retention = 30d          # 30 days
  AutoPrune = yes              # Prune expired Jobs/Files
}
Storage:
# Definition of file storage device
Storage {
  Name = File
  # Do not use "localhost" here
  Address = storage.mydomain.com     # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "secret"
  Device = FileStorage
  Media Type = File
}
My default pool:
Pool {
  Name = "Default"
  Pool Type = Backup
  Recycle = yes                  # Bacula can automatically recycle Volumes
  AutoPrune = yes                # Prune expired volumes
  Volume Retention = 14 days     # 2 week retention
  Maximum Volume Bytes = 5G      # Limit Volume size to something reasonable
  Maximum Volumes = 2000         # Limit number of Volumes in Pool
  Volume Use Duration = 7d       # Use a new tape every week
  Recycle Oldest Volume = yes    # Recycle the oldest volume
  LabelFormat = "jf-backup-tape-"
}
This is what "st storage" (status storage) looks like:
*st storage
Automatically selected Storage: File
Connecting to Storage daemon File at storage.mydomain.com:9103

storage.mydomain.com Version: 5.2.13 (19 February 2013) x86_64-unknown-linux-gnu redhat
Daemon started 20-Mar-13 00:38. Jobs: run=52, running=0.
Heap: heap=249,856 smbytes=98,943 max_bytes=169,182 bufs=80 max_bufs=99
Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0
Running Jobs:
No Jobs running.
====
Jobs waiting to reserve a drive:
====
Terminated Jobs:
JobId Level Files Bytes Status Finished Name
===================================================================
    43  Incr         0         0  OK       27-Mar-13 03:05  cloud.mydomain.com
    44  Incr         0         0  OK       27-Mar-13 03:05  beta.mydomain.com
    45  Incr         0         0  OK       27-Mar-13 03:05  mail.mydomain.com
    46  Incr         0         0  OK       27-Mar-13 03:05  chef.mydomain.com
    47  Full         1   539.5 K  OK       27-Mar-13 03:10  mydomain_BackupCatalog
    48  Incr         0         0  OK       28-Mar-13 03:05  cloud.mydomain.com
    49  Incr         0         0  OK       28-Mar-13 03:05  beta.mydomain.com
    50  Incr         0         0  OK       28-Mar-13 03:05  mail.mydomain.com
    51  Incr         0         0  OK       28-Mar-13 03:05  chef.mydomain.com
    52  Full         1   554.6 K  OK       28-Mar-13 03:10  mydomain_BackupCatalog
====
Device status:
Device "FileStorage" (/backup/tapes) is not open.
==
====
Used Volume status:
====
====
And volumes are being created... just not receiving quite the expected amount of data:
*list volume
Automatically selected Catalog: JokefireCatalog
Using Catalog "JokefireCatalog"
Pool: Default
+---------+---------------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten |
+---------+---------------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| 1 | jf-backup-tape-0001 | Used | 1 | 296,534,028 | 0 | 1,209,600 | 1 | 0 | 0 | File | 2013-03-27 03:10:03 |
| 2 | jf-backup-tape-0002 | Append | 1 | 557,482 | 0 | 1,209,600 | 1 | 0 | 0 | File | 2013-03-28 03:10:05 |
+---------+---------------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
I'm using my mail server here as an example, but certainly ALL the clients should be backing up far more data than they are. Does anyone have any tips they can share on how I can troubleshoot this strange phenomenon?
Thanks!
Tim
--
GPG me!!
gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B