Hi folks,
I'm wondering if the following issue is a bug or feature:
We're using separate incremental and full pools for each client
backed up by bacula (5.2.13). Each pool corresponds to a directory in
the file system containing the volumes (one job per volume), so we
have a structure like this:
online_backup_<client_name>/incr
online_backup_<client_name>/full
and so on, for each client. Each storage device has its own
media type assigned, called
File_<client_name>_incr
and
File_<client_name>_full
so the storage definition in bacula-sd.conf looks like this:
######################################################################
Device {
Name = FileStorage_<client_name>_full
Media Type = File_<client_name>_full
Archive Device = /mnt/msa/online_backup_<client_name>/full/
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
}
Device {
Name = FileStorage_<client_name>_incr
Media Type = File_<client_name>_incr
Archive Device = /mnt/msa/online_backup_<client_name>/incr/
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
}
######################################################################
The matching pool and storage definitions look like this:
######################################################################
Storage {
Name = FileStorage_<client_name>_full
Address = <fd-server> # N.B. use a fully qualified name here
SDPort = 9103
Password = "xxxxx"
Device = FileStorage_<client_name>_full
Media Type = File_<client_name>_full
Maximum Concurrent Jobs = 2
}
Storage {
Name = FileStorage_<client_name>_incr
Address = <fd-server> # N.B. use a fully qualified name here
SDPort = 9103
Password = "xxxxxx"
Device = FileStorage_<client_name>_incr
Media Type = File_<client_name>_incr
Maximum Concurrent Jobs = 2
}
Pool {
Name = Online_<client_name>_full
Pool Type = Backup
Storage = FileStorage_<client_name>_full
Recycle = yes
AutoPrune = yes # Prune expired volumes
Volume Retention = 60 days # two months
Purge Oldest Volume = yes
Recycle Oldest Volume = yes
Maximum Volumes = 3
Maximum Volume Jobs = 1
Action On Purge = Truncate
Label Format = "${JobName}-${Level}"
Next Pool = "Offline"
}
Pool {
Name = Online_<client_name>_incr
Pool Type = Backup
Storage = FileStorage_<client_name>_incr
Recycle = yes
AutoPrune = yes # Prune expired volumes
Volume Retention = 20 days # about three weeks
Purge Oldest Volume = yes
Recycle Oldest Volume = yes
Maximum Volumes = 7
Maximum Volume Jobs = 1
Action On Purge = Truncate
Label Format = "${JobName}-${Level}"
}
######################################################################
So far, so good (keep two full backups and a handful of incrementals
in a weekly schedule).
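For reference, the restore attempt itself is nothing fancy. What I run
in bconsole is roughly the following sketch (the client name is a
placeholder for our real FD name; "select current" picks the most
recent full plus incrementals):

```shell
# Sketch of the bconsole restore session that triggers the error.
# <client_name>-fd is a placeholder; adjust to your own client.
bconsole <<'EOF'
restore client=<client_name>-fd select current all done
yes
EOF
```

It is when this job tries to switch from reading the full volume to the
incremental volume that the media-type mismatch below appears.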
Now when I try to restore a "most recent" backup from within bconsole
(say a full and the most recent incremental), bacula gets all confused
about media types and which storages to use to read the volumes:
######################################################################
08-May 12:34 <fd-server>-dir JobId 68311: Start Restore Job
RestoreFiles.2013-05-08_12.34.22_02
08-May 12:34 <fd-server>-dir JobId 68311: Using Device
"FileStorage_<client_name>_full" to read.
08-May 12:34 <fd-server>-sd JobId 68311: acquire.c:120 Changing read
device. Want Media Type="File_<client_name>_incr" have="File_<client_name>_full"
device="FileStorage_<client_name>_full"
(/mnt/msa/online_backup_<client_name>/full/)
08-May 12:34 <fd-server>-sd JobId 68311: Fatal error: acquire.c:175 No
suitable device found to read Volume
"<client_name>.2013-02-18_16.10.00_14-Incremental"
08-May 12:34 <fd-server>-fd JobId 68311: Fatal error: job.c:2395 Bad
response to Read Data command. Wanted 3000 OK data
, got 3000 error
08-May 12:34 <fd-server>-dir JobId 68311: Error: Bacula <fd-server>-dir
5.2.13 (19Jan13):
Build OS: x86_64-unknown-linux-gnu redhat
JobId: 68311
Job: RestoreFiles.2013-05-08_12.34.22_02
Restore Client: <fd-server>-fd
Start time: 08-May-2013 12:34:22
End time: 08-May-2013 12:34:22
Files Expected: 112,497
Files Restored: 0
Bytes Restored: 0
Rate: 0.0 KB/s
FD Errors: 1
FD termination status: Error
SD termination status: Error
Termination: *** Restore Error ***
08-May 12:34 <fd-server>-dir JobId 68311: Begin pruning Jobs older than
6 months .
08-May 12:34 <fd-server>-dir JobId 68311: Pruned 31 Jobs for client
<fd-server>-fd from catalog.
08-May 12:34 <fd-server>-dir JobId 68311: Begin pruning Files.
08-May 12:34 <fd-server>-dir JobId 68311: No Files found to prune.
08-May 12:34 <fd-server>-dir JobId 68311: End auto prune.
######################################################################
If I don't manually assign the storage device in the restore job, an
arbitrary one seems to get selected and things go even "wronger" (tm).
I can still restore the data using bls and bextract on the on-disk
volumes, but it'd be great if somebody could shed some light on how to
handle this situation.
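For completeness, the manual workaround looks roughly like this (config
path, volume name, device name, and target directory are examples taken
from the setup above, not literal values):

```shell
# List the contents of an on-disk volume with bls, reading it via the
# SD device definition it belongs to ...
bls -c /etc/bacula/bacula-sd.conf \
    -V <client_name>.2013-02-18_16.10.00_14-Incremental \
    FileStorage_<client_name>_incr

# ... and then extract the files with bextract into a scratch directory.
bextract -c /etc/bacula/bacula-sd.conf \
    -V <client_name>.2013-02-18_16.10.00_14-Incremental \
    FileStorage_<client_name>_incr /tmp/restore
```

That works fine, which is why I suspect the volumes themselves are OK
and the problem is purely in the director's device selection.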
All the best & thanks in advance,
Uwe
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users