Hi there,
Thanks for replying.
I had thought of doing it with a bash script like this, yes, but it seems (according to http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html) that Bacula is capable of doing what I want. I would prefer to use Bacula for this task, so that backups are centrally managed within Bacula and volumes are cleaned and recycled automatically according to my pool settings.
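For context, the copy is driven by a Copy-type job along these lines (simplified; the resource names here are illustrative, not my exact config):

```
Job {
  Name = "CopySVNFull"
  Type = Copy
  Selection Type = PoolUncopiedJobs   # copy every job that has not yet been copied
  Pool = SVN_Full                     # source pool; its "Next Pool" directive selects the copy pool
  Client = FileServer1-fd
  FileSet = "SVN"
  Messages = Standard
}
```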
I feel like I'm nearly there - i.e. I am able to copy the data to the other system and volumes are being created. However, I think the only indication of the cause of my problem is the following lines, taken from the copy job output I posted in my original email:

14-Jan 23:13 FileServer1-sd JobId 3348: User defined maximum volume capacity 1,073,741,824 exceeded on device "SVN_Full_Copy" (/mnt/mac_backup/Bacula/SVN/Full).
14-Jan 23:13 FileServer1-sd JobId 3348: End of medium on Volume "SVN_Full_Copy_0281" Bytes=1,073,729,232 Blocks=16,646 at 14-Jan-2012 23:13.
14-Jan 23:13 FileServer1-dir JobId 3348: There are no more Jobs associated with Volume "SVN_Full_Copy_0280". Marking it purged.
14-Jan 23:13 FileServer1-dir JobId 3348: All records pruned from Volume "SVN_Full_Copy_0280"; marking it "Purged"
14-Jan 23:13 FileServer1-dir JobId 3348: Recycled volume "SVN_Full_Copy_0280"
14-Jan 23:13 FileServer1-sd JobId 3348: Recycled volume "SVN_Full_Copy_0280" on device "SVN_Full_Copy" (/mnt/mac_backup/Bacula/SVN/Full), all previous data lost.
14-Jan 23:13 FileServer1-sd JobId 3348: New volume "SVN_Full_Copy_0280" mounted on device "SVN_Full_Copy" (/mnt/mac_backup/Bacula/SVN/Full) at 14-Jan-2012 23:13.
The way I read the above output is that my volumes on my iMac (which were created on the first run of the copy job) are possibly 'expiring' and thus being overwritten. So the next time the copy job runs, Bacula realises that it doesn't have a copy of the jobs which were purged from the copied volumes, and queues them again for copying.
Any ideas why this would be happening? As far as I am aware from my settings, the volumes should not be expiring yet... Note: I could be wrong, though, and I may have misread the job logs.
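For reference, the recycling behaviour in the log is governed by the retention directives of the copy pool; mine is defined roughly like this (simplified, and the pool name and retention value here are illustrative):

```
Pool {
  Name = SVN_Full_Copy
  Pool Type = Backup
  Maximum Volume Bytes = 1073741824   # matches the "maximum volume capacity" message in the log
  Volume Retention = 30 days          # volumes should only become recycling candidates after this
  AutoPrune = yes                     # pruning is what produces the "All records pruned" messages
  Recycle = yes
}
```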
Many thanks for your help.
Kind regards,
Joe Nyland
Why not just write a shell script to copy the archive volumes and either run it as a Type=Admin job or from cron? That's what I do. Have it set up so each client has its own pool of "use-once" volumes with the date-time in the name. Simple 'touch' and 'find -newer' commands figure out what to copy when.
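A minimal sketch of that touch/find scheme (paths and names are examples, not the poster's actual script):

```shell
#!/bin/sh
# copy_new_volumes SRC DST STAMP
# Copies any files under SRC newer than the STAMP file into DST,
# then refreshes STAMP so the next run only picks up newer volumes.
copy_new_volumes() {
    src=$1 dst=$2 stamp=$3
    # First run: no stamp yet, so back-date it to copy everything.
    [ -f "$stamp" ] || touch -t 197001010000 "$stamp"
    find "$src" -type f -newer "$stamp" -exec cp -p {} "$dst" \; \
        && touch "$stamp"
}
```

You would then call it from cron or an Admin job, e.g. `copy_new_volumes /var/bacula/archive /mnt/copy /var/bacula/.lastcopy` (paths are examples).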