Hi.
I now have weekday incremental backups to a tape library working well.
Then at the end of the week the incremental backups are consolidated into full
backups via the VirtualFull feature.
All of this is happening using the one auto-changer tape library with two
drives in it.
Now I am trying to create off-site tapes by using the Copy jobs feature.
My goal is to keep the Incremental and VirtualFull backups on-site at all
times, so that I can do ad hoc restores when users delete files they didn't
mean to, while still having off-site copy tapes that would only be needed in
the event of a catastrophic disaster in which my current backups are destroyed.
I'm using copies because I never want to have to bring the off-site tapes back
on-site just to do those ad hoc restores.
The copies are failing with the following message:
Fatal error: Read storage "TL2000" same as write storage.
The VirtualFull backup was able to cope with using the same auto-changer tape
library and utilised both drives (one to read and one to write).
I was hoping that the Copy job would do the same, but it seems that it can't
(or won't).
Is there a way to override or fool the Copy job into using the same
algorithm as the VirtualFull does when it runs?
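For what it's worth, the workaround I imagined would be to define a second
Storage resource that points at the same autochanger under a different name,
so that the Copy job's read and write storage names differ. The TL2000-copy
name below is just a placeholder, and I haven't tested whether 3.0.2 would
accept this (or whether it would still notice both names resolve to the same
device):

```
# Hypothetical second Storage resource for the same library (untested).
# The Name differs from TL2000, but it points at the same SD device, so
# the director might no longer complain that read and write storage match.
Storage{
  Name = TL2000-copy
  Address = vc.ddihealth.com
  Password = ""
  Device = TL2000
  Media Type = LTO
  Autochanger = yes
  Maximum Concurrent Jobs = 2
}
```

The FullPool could then keep Storage = TL2000 while the CopyPool uses
Storage = TL2000-copy, so the Copy job reads under one name and writes under
the other.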
Also related, though not as important as the question above: is there a way
to delay the execution of 'Selection Type = SQLQuery' so that it runs when
the job is able to start, rather than when it is scheduled?
I initially had the VirtualFull job run on Friday at 22:10 with a priority of
11 and the Copy job to run at 22:20 with a priority of 13.
The Copy job uses SQL to select the most recent successful full backup for each
job.
At 22:10 the VirtualFull backups kick off and queue up.
At 22:20 the Copy job starts and runs its SQL query; but instead of selecting
the VirtualFull backups that are about to happen (the very ones it will end
up waiting for), it selects the VirtualFull backups from the week before.
It then queues waiting for all the higher priority VirtualFull backups to
complete.
My solution to this is to schedule the Copy job to run on Sunday instead,
giving the VirtualFull backups plenty of time to complete.
But it would be better if it could start as soon as the VirtualFull backups
were complete, and then run its SQL query to select them.
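The only other idea I had was to restrict the query itself to recent jobs, so
that it can at least never pick up last week's VirtualFulls. Something like
the following (untested; the date arithmetic assumes MySQL syntax, and it
still wouldn't wait for jobs that haven't finished when the query runs):

```sql
-- Same selection as in the OffsiteBackup job below, but only
-- considering Full backups that started within the last day.
-- Untested sketch; NOW() - INTERVAL 1 DAY assumes MySQL.
SELECT MAX(Job.JobId)
FROM Job, Pool
WHERE Job.Level = 'F'
  AND Job.Type = 'B'
  AND Job.JobStatus = 'T'
  AND Pool.Name = 'FullPool'
  AND Job.PoolId = Pool.PoolId
  AND Job.StartTime > NOW() - INTERVAL 1 DAY
GROUP BY Job.Name
ORDER BY Job.JobId;
```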
I'm still on Bacula 3.0.2 as supplied by Debian.
My director configuration (warts and all) as I have it at the moment follows
(minus passwords).
# Bacula Director Configuration file
#
# For Bacula release 3.0.2 (18 July 2009) -- debian squeeze/sid
Director{
Name = vc-dir
DIRport = 9101
QueryFile = "/etc/bacula/scripts/query.sql"
WorkingDirectory = "/var/lib/bacula"
PidDirectory = "/var/run/bacula"
# Maximum Concurrent Jobs must be at least 2 for VirtualFull backups to
# work, since we need a reader and a writer.
Maximum Concurrent Jobs = 2
Password = ""
Messages = Daemon
}
#=======================================================================
# Jobs
# Default values to be included in jobs below.
JobDefs{
Name = "DefaultJob"
Type = Backup
Level = Incremental
Client = vc-fd
FileSet = "LinuxSet"
Schedule = "DailyBackupSchedule"
Messages = Standard
Pool = Default
# If an incremental backup gets upgraded to a Full backup, then send its
# output to FullPool.
Full Backup Pool = FullPool
Priority = 10
# Bootstrap file will be named after the job.
Write Bootstrap = "/var/lib/bacula/%n.bsr"
# 'Accurate = yes' will detect files that have been moved but still have
# old time stamps.
# Doing this check will use up a lot more memory on the client but is
# necessary for a permanent incremental strategy to work.
Accurate = yes
}
#
# Backup jobs for each client.
#
Job{
Name = "BackupBuildatron"
JobDefs = "DefaultJob"
Client = buildatron-fd
FileSet = WindowsSetC
}
Job{
Name = "BackupDavros"
JobDefs = "DefaultJob"
Client = davros-fd
FileSet = WindowsSetCDHI
}
Job{
Name = "BackupDc1"
JobDefs = "DefaultJob"
Client = dc1-fd
FileSet = WindowsSetC
}
Job{
Name = "BackupDc2"
JobDefs = "DefaultJob"
Client = dc2-fd
FileSet = WindowsSetC
}
Job{
Name = "BackupFreddy"
JobDefs = "DefaultJob"
Client = freddy-fd
FileSet = WindowsSetCtoD
}
Job{
Name = "BackupMail"
JobDefs = "DefaultJob"
Client = mail-fd
}
Job{
Name = "BackupShadow"
JobDefs = "DefaultJob"
Client = shadow-fd
FileSet = WindowsSetCtoE
}
Job{
Name = "BackupSpirateam"
JobDefs = "DefaultJob"
Client = spirateam-fd
FileSet = WindowsSetC
}
Job{
Name = "BackupVc"
JobDefs = "DefaultJob"
Client = vc-fd
}
Job{
Name = "BackupWiki"
JobDefs = "DefaultJob"
Client = wiki-fd
}
Job{
Name = "BackupWikiHcn"
JobDefs = "DefaultJob"
Client = wiki-hcn-fd
}
# Backup the catalog database
Job{
Name = "CatalogBackup"
JobDefs = "DefaultJob"
Level = Full
FileSet = "Catalog"
Schedule = "CatalogBackupSchedule"
# This creates an ASCII copy of the catalog.
# WARNING!!! Passing the password via the command line is insecure.
# See comments in make_catalog_backup for details.
# Arguments to make_catalog_backup are:
#   make_catalog_backup <database-name> <user-name> <password> <host>
RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup bacula bacula"
# This deletes the copy of the catalog.
RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
# Run after client backups are complete (since this has a lower priority,
# i.e. a higher number).
Priority = 12
}
# Copy the full backups over to tapes for off-site storage.
Job{
Name = "OffsiteBackup"
Type = Copy
Level = Full
# Client and FileSet are unused for Copy jobs but still need to be defined.
Client = vc-fd
FileSet = "LinuxSet"
Schedule = "OffsiteBackupSchedule"
Messages = Standard
# Uses the 'Next Pool' definition from FullPool for where to write the
# copies to.
Pool = FullPool
# Use SQL to select the most recent (successful) Full backup for each job
# written to the FullPool pool.
Selection Type = SQLQuery
Selection Pattern = "SELECT MAX(Job.JobId) FROM Job, Pool WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' and Pool.Name = 'FullPool' and Job.PoolId = Pool.PoolId GROUP BY Job.Name ORDER BY Job.JobId;"
Allow Duplicate Jobs = yes
Allow Higher Duplicates = no
# Run after the catalog backup has been done.
Priority = 13
}
# Standard Restore template, to be changed by Console program.
# Only one such job is needed for all Jobs/Clients/Storage ...
Job{
Name = "RestoreFiles"
Type = Restore
Client = vc-fd
FileSet = "LinuxSet"
Pool = Default
Messages = Standard
}
#=======================================================================
# Schedules
# Incremental backups on the week days.
# Spool the incremental backups to disk to prevent tape shoe-shine.
# Consolidate the incremental backups into a full backup on Friday.
# Set the priority of the full backups to happen after incremental backups
# are complete but before the catalog backup happens.
Schedule{
Name = "DailyBackupSchedule"
Run = Level=Incremental SpoolData=yes mon-fri at 22:05
Run = Level=VirtualFull Priority=11 fri at 22:10
}
# Backup the catalog.
Schedule{
Name = "CatalogBackupSchedule"
Run = mon-fri at 22:15
}
# Create offsite tapes.
Schedule{
Name = "OffsiteBackupSchedule"
Run = sun at 22:20
}
#=======================================================================
# Clients
Client {
Name = buildatron-fd
Address = buildatron.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = davros-fd
Address = davros.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = dc1-fd
Address = dc1.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = dc2-fd
Address = dc2.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = freddy-fd
Address = freddy.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = mail-fd
Address = mail.dmz.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = shadow-fd
Address = shadow.hcn.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client {
Name = spirateam-fd
Address = spirateam.dmz.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client{
Name = vc-fd
Address = localhost
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client{
Name = wiki-fd
Address = wiki.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
Client{
Name = wiki-hcn-fd
Address = wiki.hcn.ddihealth.com
Catalog = MyCatalog
Password = ""
File Retention = 3 months
Job Retention = 3 months
}
#=======================================================================
# Storage
Storage{
Name = TL2000
Address = vc.ddihealth.com
Password = ""
Device = TL2000
Media Type = LTO
Autochanger = yes
# Allow two jobs to use this tape library so that we can utilise both drives.
Maximum Concurrent Jobs = 2
}
#=======================================================================
# FileSets
# List of files to be backed up on Linux servers.
FileSet{
Name = "LinuxSet"
Include {
Options {
# Also back up ACLs.
aclsupport = yes
# Report if the file changes while being backed up.
checkfilechanges = yes
# Only back up certain Linux filesystems (avoids Samba, NFS, iso9660,
# proc, sysfs, etc.)
# ext2 covers ext2, ext3, and ext4.
fstype = ext2
fstype = xfs
# Do not update access time of files.
noatime = yes
# Cross over mount point boundaries.
onefs = no
signature = MD5
# Exclude all the patterns listed below.
exclude = yes
Regex = "^.*/lost\+found/.*$"
Regex = "^.*/Maildir/shared-folders/.*/(cur|new|tmp)/.*$"
RegexFile = "^.*/core$"
RegexFile = "^.*/courierimapkeywords/.*$"
RegexFile = "^.*/Maildir/tmp/.*$"
RegexFile = "^.*/Maildir/.*/tmp/.*$"
Wild = "/var/cache/apt-cacher-ng/*"
Wild = "/var/spool/quarantine/*"
Wild = "/var/spool/squid*/*"
WildFile = "*~"
WildFile = "*.bak"
WildFile = "*.dpkg-dist"
WildFile = "*.dpkg-old"
WildFile = "*.old"
WildFile = "*.tmp"
WildFile = "*.swp"
WildFile = "/tmp/*"
WildFile = "/var/backups/*"
WildFile = "/var/cache/apt/*"
WildFile = "/var/cache/apt-cacher/*"
WildFile = "/var/cache/bind/*"
WildFile = "/var/home/*/Maildir/.Trash/*"
WildFile = "/var/lib/apt/lists/*"
WildFile = "/var/lib/nagios*/*"
WildFile = "/var/lib/twiki/working/*"
WildFile = "/var/local/maildirshared/*"
WildFile = "/var/lock/*"
WildFile = "/var/log/*"
WildFile = "/var/run/*.pid"
WildFile = "/var/spool/exim*/input/*"
WildFile = "/var/spool/exim*/msglog/*"
WildFile = "/var/spool/havp/*"
WildFile = "/var/spool/MIMEDefang/*"
WildFile = "/var/spool/mqueue*/*"
WildFile = "/var/tmp/*"
}
File = /
}
# Files and directories to exclude.
Exclude {
File = /dev
File = /lib/init/rw
File = /proc
File = /sys
File = /var/lib/bacula
File = /.journal
File = /.fsck
}
}
# Backup C: on Windows.
FileSet{
Name = "WindowsSetC"
Include {
@/etc/bacula/fileset-windows-exclude.conf
File = "C:/"
}
}
# Backup C: and D: on Windows.
FileSet{
Name = "WindowsSetCtoD"
Include {
@/etc/bacula/fileset-windows-exclude.conf
File = "C:/"
File = "D:/"
}
}
# Backup C:, D: and E: on Windows.
FileSet{
Name = "WindowsSetCtoE"
Include {
@/etc/bacula/fileset-windows-exclude.conf
File = "C:/"
File = "D:/"
File = "E:/"
}
}
# Backup C:, D:, H: and I: on Windows.
FileSet{
Name = "WindowsSetCDHI"
Include {
@/etc/bacula/fileset-windows-exclude.conf
File = "C:/"
File = "D:/"
File = "H:/"
File = "I:/"
}
}
# Backup the catalog.
FileSet{
Name = "Catalog"
Include {
Options {
signature = MD5
}
File = /var/lib/bacula/bacula.sql
}
}
#=======================================================================
# Generic catalog service
Catalog{
Name = MyCatalog
dbname = "bacula"; dbuser = "bacula"; dbpassword = ""
}
#=======================================================================
# Messages
# Send most everything to email address and to the console.
Messages{
Name = Standard
mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
mail = root@localhost = all, !skipped
operator = root@localhost = mount
console = all, !skipped, !saved
append = "/var/lib/bacula/log" = all, !skipped
catalog = all
}
# Message delivery for daemon messages (no job).
Messages{
Name = Daemon
mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
mail = root@localhost = all, !skipped
console = all, !skipped, !saved
append = "/var/lib/bacula/log" = all, !skipped
}
#=======================================================================
# Pools
# Default pool definition used by incremental backups.
# We wish to be able to restore files for any day for at least 2 weeks, so
# set the retention to 13 days.
Pool{
Name = Default
Volume Retention = 13 days
Pool Type = Backup
# Automatically prune and recycle volumes.
AutoPrune = yes
Recycle = yes
# Do not use tapes whose labels start with CLN, since they are cleaning tapes.
Cleaning Prefix = "CLN"
# There are only 22 usable tapes in the library after adding a cleaning tape.
# Commented out for now since I'm not sure if I need to restrict this or not.
# Maximum Volumes = 22
Storage = TL2000
# Get tapes from the scratch pool and return them to the scratch pool when
# they are purged.
Scratch Pool = Scratch
Recycle Pool = Scratch
# The location where the VirtualFull backups will be written to.
Next Pool = FullPool
}
# Pool used by Full and VirtualFull backups.
# We only need at least the last 2 weeks, so set the retention to 13 days.
Pool{
Name = FullPool
Volume Retention = 13 days
Pool Type = Backup
# Automatically prune and recycle volumes.
AutoPrune = yes
Recycle = yes
# Do not use tapes whose labels start with CLN, since they are cleaning tapes.
Cleaning Prefix = "CLN"
# There are only 22 usable tapes in the library after adding a cleaning tape.
# Commented out for now since I'm not sure if I need to restrict this or not.
# Maximum Volumes = 22
Storage = TL2000
# Get tapes from the scratch pool and return them to the scratch pool when
# they are purged.
Scratch Pool = Scratch
Recycle Pool = Scratch
# The location where the copies go for offsite backups.
Next Pool = CopyPool
}
# Pool used by Copy jobs for offsite tapes.
# These only need to be valid for a week before being eligible to be
# overwritten.
Pool{
Name = CopyPool
Volume Retention = 6 days
Pool Type = Backup
# Automatically prune and recycle volumes.
AutoPrune = yes
Recycle = yes
# Do not use tapes whose labels start with CLN, since they are cleaning tapes.
Cleaning Prefix = "CLN"
# There are only 22 usable tapes in the library after adding a cleaning tape.
# Commented out for now since I'm not sure if I need to restrict this or not.
# Maximum Volumes = 22
Storage = TL2000
# Get tapes from the scratch pool and return them to the scratch pool when
# they are purged.
Scratch Pool = Scratch
Recycle Pool = Scratch
}
# Scratch pool definition
Pool{
Name = Scratch
Pool Type = Backup
Recycle Pool = Scratch
}
#=======================================================================
# Restricted console used by tray-monitor to get the status of the director
Console{
Name = vc-mon
Password = ""
CommandACL = status, .status
}
Thanks,
----------
Jim Barber
DDI Health
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users