Forgive the long post, but I have a situation here that requires some help...
My current setup...
-- NetApp filer containing ~35TB of data to be backed up to tape
-- Quantum i40 w/ 2 LTO-5 drives
-- Quantum SL3 w/ 1 LTO-5 drive
-- HP DL-380 G5 with 16GB RAM and 3 x spool HDD
-- CentOS 6.5 with kernel 2.6.32-431.23.3.el6.x86_64
-- Bacula 7.0.5
-- All data going to tape comes via NFS from the filer, with each tape drive having a dedicated spool spindle
-- Full jobs run once per week, on Wednesdays
-- Incremental jobs run daily
-- Tapes leave daily to Iron Mountain, offsite for 1 week
My current issues...
-- Tape auto-recycling is not happening consistently/reliably
-- Tapes now need to be offsite for longer periods of time.
Given the amount of hand-holding I have had to do with Bacula over the last couple of years, I am convinced that my configs are not very helpful or accurate...
So my question is this...
What would be the best setup for pools/retention/schedules to accomplish:
-- Manage volumes/pools between both autochanger devices
Having the same media type for both autochanger devices will let you manage volumes in both autochangers. The update slots command will update the device information for the tapes.
Having the same media type between the two changers actually
produced a scenario wherein bacula became confused with slot numbers
and pool inventory. If memory serves, it was Kern who suggested I
use LTOa and LTOb as media types between the changers to avoid this
contention.
The update slots [scan] command (since version 5.0.0, if I am not wrong) now updates the slot number, the InChanger flag, and the StorageId used by the volume. This way you can have the same media type between autochangers. Indeed, if you need to use a volume interchangeably between both tape libraries, you should use the same media type. This kind of configuration helps if one of your autochangers has problems and you need to redirect all your backups to the one that is working fine.
Unfortunately, it is not currently possible to configure pools or jobs to use more than one storage for backups (it would be a good idea :)). So you will need to configure one of your autochangers to be used by a specific job or pool. However, you can set the Storage to be used by a job/pool in your Schedule resource. I particularly like this approach because it is very flexible when you need to change the storage for a specific job/pool.
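Concretely, Ana's two suggestions might look like the fragment below. The storage names come from eddie's config later in the thread; the pool and schedule names are placeholders, so treat this as a hedged sketch rather than a tested setup:

```
# In bconsole, after a tape swap, rescan each changer:
update slots scan storage=Quantumi40
update slots scan storage=Quantumsl3

# Overriding the Storage per run in a Schedule resource, so a single
# set of pools can be pointed at whichever changer is healthy:
Schedule {
  Name = "Daily"
  Run = Level=Incremental Storage=Quantumi40 Pool=IncPool tue-sun at 18:00
}
```

The Run directive accepts a Storage= override, which is what makes the "pick the storage in the schedule" approach work without duplicating jobs.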
If, however, I can manage a single set of pools/volumes/schedules across both changers, that would be ideal...
Would this be a single bacula-sd with 2 storage resources, or 2
individual bacula-sd with a single storage resource each?
My current storage device setup...
Autochanger {
  Name = Quantumi40
  Device = Drive0, Drive1
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/quantumi40
}
Autochanger {
  Name = Quantumsl3
  Device = Drive2
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/quantumsl3
}
Device {
  Name = Drive{0|1|2}
  Drive Index = {0|1}
  Media Type = LTO5{a|b}
  Device Type = tape
  Archive Device = /dev/tape{0|1|2}
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  LabelMedia = yes;
  Maximum Changer Wait = 9000
  Maximum Concurrent Jobs = 1
  Spool Directory = /spool/{0|1|2}
  Maximum Spool Size = 100gb
  Minimum Block Size = 1048576
  Maximum Block Size = 1048576
  Hardware End of Medium = yes
  Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
}
-- Tapes offsite daily, to return 31 days later for
recycling
-- Tapes written daily/weekly via incremental and/or diff
jobs
-- Tapes written weekly/monthly via Full jobs
I would have 3 pools: DailyPool, WeeklyPool, MonthlyPool (I am supposing you have different retention periods for these. You just specified a 31-day retention for daily backups?).
To phrase it another way, I need to have a full set of the data
offsite for 31 days at a time. Where "full set" is a full week's
worth of tapes, sent out daily after the Incremental/Full jobs run.
I suspect switching to a monthly Full, with weekly Diffs and Daily
Incremental cycle would be my way forward from here...
I think this setup would require a total of 6 pools, 3 for each
autochanger...
I think if you use the same media type and tell Bacula which storage to use in the Schedule resource, you could have just 3 pools: daily, weekly and monthly.
And if you need to keep a "full week's set of tapes", then your retention period should be more than 31 days to accomplish this, because the tape used on Mondays would be needed again not in 31 days but in 38 days. I do not do it this way, since we take a full weekly backup and do not need the previous differential tapes. I use a 31-day volume retention for the daily diff backups.
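Ana's arithmetic can be sketched as follows. This is just a back-of-the-envelope illustration of why the first tape of a weekly set needs more than the 31-day offsite window, nothing Bacula-specific:

```shell
# If a full weekly set must stay offsite for 31 days, the set only
# rotates as a unit: Monday's tape cannot be reused until the whole
# week it belongs to has cycled through, i.e. a week plus the window.
offsite_days=31
week_len=7
needed=$((offsite_days + week_len))
echo "Monday's tape is needed again in ${needed} days, not ${offsite_days}"
```

So a 31-day Volume Retention on a tape written Monday would allow Bacula to recycle it a week too early; the retention has to cover the full 38-day round trip.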
Best regards,
Ana
Here's my current thinking...
Pool {
  Name = {$i40|$sl3}-{$full|$diff|$inc}Pool
  Pool Type = Backup
  Recycle = yes
  Recycle Oldest Volume = yes
  Auto Prune = yes
  ScratchPool = Scratch
  RecyclePool = Scratch
  Volume Retention = {31|14|7} days
}
Which would make the Schedule(s) something like... Unless there's a
way to have a unified schedule/pools for both changers...
Schedule {
  Name = "i40-Daily"
  Run = Level=Incremental Pool=i40-IncPool tue-sun at 18:00
  Run = Level=Full Pool=i40-FullPool 1st mon at 18:00
  Run = Level=Differential Pool=i40-DiffPool 2nd mon at 18:00
  Run = Level=Differential Pool=i40-DiffPool 3rd mon at 18:00
  Run = Level=Differential Pool=i40-DiffPool 4th mon at 18:00
  Run = Level=Differential Pool=i40-DiffPool 5th mon at 18:00
}
Schedule {
  Name = "sl3-Daily"
  Run = Level=Incremental Pool=sl3-IncPool sat-thu at 03:00
  Run = Level=Full Pool=sl3-FullPool 1st fri at 03:00
  Run = Level=Differential Pool=sl3-DiffPool 2nd fri at 03:00
  Run = Level=Differential Pool=sl3-DiffPool 3rd fri at 03:00
  Run = Level=Differential Pool=sl3-DiffPool 4th fri at 03:00
  Run = Level=Differential Pool=sl3-DiffPool 5th fri at 03:00
}
Pool {
  Name = DailyPool
  ...
  Volume Retention = 31 days
  ...
}
-- Automating the 'update slots' and 'volume status' changes when tapes leave daily
I have an admin job that runs every day immediately before the first backup job. And this admin job just runs an update slots from bconsole.
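An admin job along the lines Ana describes might be sketched as below. All the names and the schedule are assumptions (she did not post her config), so adjust to taste:

```
Job {
  Name = "UpdateSlotsAdmin"        # hypothetical name
  Type = Admin
  Client = backupsrv-fd            # Admin jobs still need Client/FileSet
  FileSet = "EmptyFileSet"
  Schedule = "BeforeFirstBackup"   # e.g. daily at 17:30, before the 18:00 runs
  Storage = Quantumi40
  Pool = Default
  Messages = Standard
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Command = "sh -c 'echo update slots storage=Quantumi40 | bconsole'"
  }
}
```

With a second RunScript line (or a second admin job) the same trick covers the SL3 changer, so the daily tape swap needs no manual bconsole session at all.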
-- Restore validation tests performed quarterly
Do you want this to happen automatically? I do not understand your need here... But if this is the case, you can do it with some scripting and bls/bextract.
Automating the restore testing would be an epic win for me... I will
dig into bls/bextract as that option never occurred to me.
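A quarterly restore test with bls/bextract could start from something like the commands below. The volume name, device path, and directories are placeholders; on a real system you would drive this from cron and compare checksums rather than eyeballing diff output:

```
# List the files recorded on a given volume (read-only, safe to run):
bls -j -V WeeklyFull-0001 /dev/tape0

# Extract the volume's contents into a scratch area...
bextract -V WeeklyFull-0001 /dev/tape0 /restore-test

# ...then compare against the live data on the NFS-mounted filer:
diff -r /restore-test/nfs/filer/project /nfs/filer/project
```

Both tools read the tape directly through the SD's device definition, so they exercise the media itself, not just the catalog, which is exactly what a restore validation should prove.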
I have a 30-day retention for daily diff backups, a 40-day retention for weekly full backups, and a 5-year retention for monthly full backups. Please let me know if you have a similar case.
My requirements...
retention: 31 days for fulls, 14 days for weekly diffs, and 7 days for daily incrementals.
1. tapes written daily, multiple jobs, single client
- Full recoverable set of data every week.
2. multiple tape libraries, lto-5
- three tape drives, 56 slots between autochangers
3. tapes leave daily after successful jobs
4. every 3 months a restore test is performed from the full set,
ideally automatically
5. minimal daily involvement from my team, other than swapping tapes
in the changer mail slots.
All the best,
--eddie
Let me know if you need further information, and thanks in advance for reading and any help!
--eddie