Bacula-users

Re: [Bacula-users] Quantum Superloader 3

From: bwellsnc <bwellsnc AT gmail DOT com>
To: John Drescher <drescherjm AT gmail DOT com>
Date: Wed, 9 Jun 2010 20:51:14 -0400
Here is what I have set up for my confs. I keep my conf files in a conf.d directory, and I added this to my bacula-dir.conf file:

 @|"sh -c 'for f in /etc/bacula/conf.d/*.conf ; do echo @${f} ; done'"

I then set up a conf file just for the tape pool and tape storage:

# Tape pool definition
Pool {
  Name = TapeCopy
  Pool Type = Copy
  AutoPrune = yes
  Recycle = yes
  Storage = LTO4Tape
  Maximum Volumes = 1
  LabelFormat = "Tape-"
}

Storage {
  Name = LTO4Tape
  Address = mystorage-sd
  Password = "password"
  Device = DLT-S4
  Media Type = DLT-S4
}

Below is my client .conf file for a single server.  I have several of these: one file per client keeps everything clean, and I can work on just one file and know where everything lives.  Each file defines two jobs: a backup job that writes to a file storage volume, and a copy job that copies the full backup to my tape drive.

Schedule {
  Name = "servername"
  Run = Level=Full Pool=servername-full sun at 2:00
  Run = Level=Differential Pool=servername-diff mon-sat at 2:00
  Run = Level=Incremental Pool=servername-inc hourly at 0:05
  Run = Level=Incremental Pool=servername-inc hourly at 0:35
}

Client {
  Name = servername-fd
  Maximum Concurrent Jobs = 10
  Address = servername
  FDPort = 9102
  Catalog = MyCatalog
  Password = "l3tm3in"      # password for FileDaemon
  File Retention = 7 days             # prune File records after a week
  Job Retention = 2 months            # prune Job records after two months
  AutoPrune = yes                     # Prune expired Jobs/Files
}

Pool {
  Name = servername
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 6 days           # prune volumes after six days
  Maximum Volume Bytes = 5G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
  Next Pool = TapeCopy
  LabelFormat = "servername-"
  Storage = servername
}

Pool {
  Name = servername-full
  Pool Type = Backup
  Recycle = yes           # automatically recycle Volumes
  AutoPrune = yes         # Prune expired volumes
  Next Pool = TapeCopy
  Volume Retention = 6 months
  Maximum Volume Bytes = 5G
  Storage = servername
  Label Format = "servername-full-"
  Maximum Volumes = 100
}

Pool {
  Name = servername-inc
  Pool Type = Backup
  Recycle = yes           # automatically recycle Volumes
  AutoPrune = yes         # Prune expired volumes
  Volume Retention = 1 month
  Maximum Volume Bytes = 1G
  Storage = servername
  Label Format = "servername-inc-"
  Maximum Volumes = 100
}

Pool {
  Name = servername-diff
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 14 days
  Maximum Volume Bytes = 5G
  Storage = servername
  Label Format = "servername-diff-"
  Maximum Volumes = 100
}

Job {
  Name = "servername-fd"                    # Change this
  Type = Backup
  Maximum Concurrent Jobs = 10
  Client = servername-fd                    # Change this
  FileSet = "dnsdata_servername"            # Change this
  Schedule = "servername"
  Messages = Standard
  Storage = servername
  Pool = servername
  Full Backup Pool = servername-full
  Incremental Backup Pool = servername-inc
  Differential Backup Pool = servername-diff
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

FileSet {
  Name = "dnsdata_servername"
  Include {
    Options {
      compression = GZIP
    }
    File = /home/
    File = /etc/
    File = /root/
    File = /var/named/
  }
}

Job {
  Name = "servername-copy"
  Type = Copy
  Level = Full
  Client = servername-fd
  FileSet = "dnsdata_servername"
  Messages = Standard
  Pool = servername
  Storage = LTO4Tape
  Full Backup Pool = servername-full
  Maximum Concurrent Jobs = 10
  Selection Type = SQLQuery
  Selection Pattern = "SELECT MAX(Job.JobId) FROM Job, Pool WHERE Job.Level = 'F' and Job.Type = 'B' and Job.JobStatus = 'T' and Pool.Name = 'servername-full' and Job.PoolId = Pool.PoolId GROUP BY Job.Name ORDER BY Job.JobId;"
}

Storage {
  Name = servername
  Maximum Concurrent Jobs = 10
  Address = mystorage-sd                # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "password"
  Device = servername
  Media Type = File
}

Here is the storage daemon entry (bacula-sd.conf) for my tape drive:

Autochanger {
  Name = Autochanger
  Device = DLT-S4
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg3
}

Device {
  Name = DLT-S4
  Drive Index = 0
  Media Type = DLT-S4
  Archive Device = /dev/st0
  AutomaticMount = yes                # when device opened, read it
  LabelMedia = yes
  AlwaysOpen = yes
  Autoselect = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  # Enable the Alert command only if you have the mtx package loaded
  Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
}
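One detail in that Device block worth noting: the Alert Command pipes grep through cat. My reading (an assumption about the intent, not something stated in the config) is that this keeps the command's exit status at 0 even when tapeinfo reports no TapeAlert lines, because a shell pipeline returns the status of its last command:

```shell
# grep alone exits non-zero when it finds no TapeAlert lines:
printf 'no alerts here\n' | grep TapeAlert && rc=0 || rc=$?
echo "grep alone: exit $rc"           # exit 1 (no match)

# Piping through cat makes the pipeline's status that of cat, always 0,
# so an empty alert list is not reported as a failure:
printf 'no alerts here\n' | grep TapeAlert | cat
echo "with | cat: exit $?"            # exit 0
```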


I mirrored another setup I saw online that was reported to work with the Quantum Superloader 3.

On Wed, Jun 9, 2010 at 5:00 PM, John Drescher <drescherjm AT gmail DOT com> wrote:
2010/6/9 bwellsnc <bwellsnc AT gmail DOT com>:
> The loader is set to Random.  It looks to me more like an issue with mtx
> and the mtx-changer script.  Like I said, it will write for client1-job1,
> but when it goes to client2-job1 it moves the tape to slot 1 and then
> won't bring it back.  I want the tape to keep filling until it's full.
> If it's not full and I remove it, then it goes to the next tape.
>

Are all clients using the same pool?

John

_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users