Subject: [Bacula-users] Migration and data destruction
From: Greg Golin <greg.golin AT etouchpoint DOT com>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 30 Mar 2010 18:32:38 -0700
Hello,

We would like to use Bacula for archiving data. The problem we are trying to solve is how to prevent Bacula from recycling a volume when a migration job fails. The scenario we're concerned about is as follows:

Auto-recycling is on:
1. TestBackupJob runs
2. TestArchiveJob runs and fails (this is the migration job)
3. During subsequent TestBackupJob runs, Bacula recycles the volume because its retention period has expired, and we lose the un-migrated data

So far we've come up with the following scheme to prevent the aforementioned from happening:

Auto-recycling is off:
1. TestBackupJob runs
2. TestArchiveJob runs and then executes an external script that selects every volume whose jobs are all in Migration status, purges those volumes, and deletes them
3. During the next TestBackupJob run, Bacula creates a new volume and uses it

Another approach we've been discussing is to run an external command that sets the volume retention period to effectively infinite when a migration job fails.
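A minimal sketch of that second approach, assuming it is wired into TestArchiveJob through a RunScript block with RunsWhen = After, RunsOnSuccess = no, and RunsOnFailure = yes (the script name FreezeVolumes.sh, the pool query, and the 100-year retention value are hypothetical; the bconsole "update volume" keywords are standard):

#!/bin/bash
# FreezeVolumes.sh (hypothetical sketch): when TestArchiveJob fails, pin
# every volume in TestBackupPool so pruning cannot recycle it before a retry.
mysqlbin='/usr/bin/mysql'
bconsolebin='/usr/sbin/bconsole'

# Assumption: same catalog credentials as DeleteMigratedVol.sh below.
volumes=$($mysqlbin -u bacula -pedited bacula --skip-column-names <<EOQ
SELECT m.VolumeName
FROM Media m
JOIN Pool p ON p.PoolId = m.PoolId
WHERE p.Name = 'TestBackupPool';
EOQ
)

for v in $volumes; do
        # "update volume" accepts volretention non-interactively; 100 years
        # is effectively infinite for this purpose.
        echo "update volume=$v volretention=100years" | $bconsolebin
done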

We are wondering whether this is the best way to ensure we won't lose data if a migration job fails.

Here is how our test system is set up:

** Bacula config **
Pool {
  Name = TestBackupPool
  Storage = File
  Pool Type = Backup
  Recycle = no
  AutoPrune = yes
  Volume Use Duration = 60 seconds
  LabelFormat = TestBackupVol
  Next Pool = TestArchivePool
  Migration Time = 60 seconds
}

Pool {
  Name = TestArchivePool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Storage = Archive
  LabelFormat = ArchiveVol
}
Job {
  Name = TestBackupJob
  Type = Backup
  Level = Full
  Client = testclient-fd
  FileSet = TestBackupFileset
  Schedule = TestBackupSchedule
  Storage = File
  Pool = TestBackupPool
  Messages = NoEmail
  Maximum Concurrent Jobs = 10
}

Job {
  Name = TestArchiveJob
  Type = Migrate
  Level = Full
  Client = testclient-fd
  FileSet = TestBackupFileset
  Schedule = TestArchiveSchedule
  Storage = Archive
  Pool = TestBackupPool
  Messages = NoEmail
  Selection Type = PoolTime
  Maximum Concurrent Jobs = 10
  Priority = 8
  RunAfterJob = "/bin/bash /etc/bacula/scripts/DeleteMigratedVol.sh"
}

# TestBackupSchedule runs TestBackupJob every 60 seconds.
# TestArchiveSchedule runs TestArchiveJob every 120 seconds.
# (A cron-based approximation is sketched after this config block.)

** End bacula config **
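
Since Bacula's Schedule resource only works at minute granularity, test schedules this tight are usually approximated; one sketch is to drive the two jobs from cron instead (assuming bconsole at /usr/sbin/bconsole with its default config; the actual test rig may differ):

# Hypothetical crontab approximating the test schedules above:
* * * * *   echo "run job=TestBackupJob yes"  | /usr/sbin/bconsole
*/2 * * * * echo "run job=TestArchiveJob yes" | /usr/sbin/bconsole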

Contents of DeleteMigratedVol.sh:
#!/bin/bash
mysqlbin='/usr/bin/mysql'
username='bacula'
database='bacula'
password='edited'
dboptions='--skip-column-names'
bconsolebin='/usr/sbin/bconsole'
voldisklocation='/opt/bacula/backup'

volumeNames=$($mysqlbin -u $username -p$password $database $dboptions <<EOQ
-- Select volumes on which every job has been migrated (Job.Type = 'M');
-- a volume holding even one un-migrated job is excluded by the subquery.
SELECT DISTINCT m.VolumeName
FROM Media m
JOIN JobMedia jm
    ON jm.MediaId = m.MediaId
JOIN Job j
    ON j.JobId = jm.JobId
WHERE j.Type = 'M'
AND m.VolumeName NOT IN (
    SELECT m2.VolumeName
    FROM Media m2
    JOIN JobMedia jm2
        ON jm2.MediaId = m2.MediaId
    JOIN Job j2
        ON j2.JobId = jm2.JobId
    WHERE j2.Type != 'M'
);
EOQ
)

for i in $volumeNames; do
        # Purge the volume's catalog records, delete the volume from the
        # catalog, then remove the backing file from disk.
        $bconsolebin <<EOF
purge volume=$i yes
delete volume=$i yes
quit
EOF
        /bin/rm -fv "$voldisklocation/$i"
done
** End /etc/bacula/scripts/DeleteMigratedVol.sh **
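
After a cycle, the effect can be checked with standard console commands (pool names from the config above):

# Confirm purged/deleted volumes are gone and the archive copies exist:
echo "list volumes pool=TestBackupPool"  | /usr/sbin/bconsole
echo "list volumes pool=TestArchivePool" | /usr/sbin/bconsole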

Thank you,
Greg