Hello,
I've currently got a fileset which is around 1.5 TB in size. I have set up Bacula to back up this fileset by performing incremental backups every night (SharedData_Inc), differential backups on the 2nd-5th Sundays (SharedData_Diff), and a full backup on the first Sunday of the month (SharedData_Full). Each of these pools is set to back up to its own storage device resource, under its respective name.
The server on which this data resides is called FileServer1. This is the client resource from bacula-dir.conf for FileServer1:

Client {
  Name = FileServer1-fd
  Address = FileServer1
  FDPort = 9102
  Catalog = MyCatalog
  Password =
  File Retention = 3 months
  Job Retention = 3 months
  AutoPrune = yes
  Maximum Concurrent Jobs = 2
}

My problem is that I am using Bacula to back up the SharedData fileset to an external 2 TB RAID array, so I am somewhat limited in how much room my backups can take up. As the data within the fileset is relatively static, I would simply like to keep an up-to-date copy of the data on the external array; I don't really need any sort of archive for this particular data. However, there are other backups on the same client - for example the full system backups, SVN repos, etc. - which I would like to retain some backup history for.
So... I believed that I had overcome this by setting up my pools like this:
Pool {
  Name = SharedData_Full
  Pool Type = Backup
  Volume Retention = 3 weeks
  Recycle = yes
  AutoPrune = yes
  LabelFormat = SharedData_Full_
  Maximum Volume Bytes = 1G
  Storage = "SharedData_Full"
}
Pool {
  Name = SharedData_Inc
  Pool Type = Backup
  Volume Retention = 6 days
  Recycle = yes
  AutoPrune = yes
  LabelFormat = SharedData_Inc_
  Maximum Volume Bytes = 1G
  Storage = "SharedData_Inc"
}
Pool {
  Name = SharedData_Diff
  Pool Type = Backup
  Volume Retention = 3 weeks
  Recycle = yes
  AutoPrune = yes
  LabelFormat = SharedData_Diff_
  Maximum Volume Bytes = 1G
  Storage = "SharedData_Diff"
}

And for my other backups, for which I do want a backup history (I do a full backup every week for my clients, so there's no diff pool):
Pool {
  Name = FileServer1_Full
  Pool Type = Backup
  Volume Retention = 1 months
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FileServer1_Full_
  Maximum Volume Bytes = 1G
  Storage = "FileServer1_Full"
  Next Pool = "FileServer1_Full_Copy"
}
Pool {
  Name = FileServer1_Inc
  Pool Type = Backup
  Volume Retention = 1 months
  Recycle = yes
  AutoPrune = yes
  LabelFormat = FileServer1_Inc_
  Maximum Volume Bytes = 1G
  Storage = "FileServer1_Inc"
  Next Pool = "FileServer1_Inc_Copy"
}
However, the more I re-read the Bacula manual and the list archive, the more concerned I am that my SharedData pool settings may not give the behaviour I require. I am specifying File and Job retention of 3 months in my client resource, but a much shorter Volume retention, to satisfy my space limitation. Does this mean that, even though my volume retention is under one month for full backups, the File/Job retention would stop the volumes from being overwritten, and thus fill up my backup array?
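To put numbers on my space worry, here's a quick back-of-the-envelope sketch. The 1.5 TB fileset size and the 1G volume cap are from my config above; everything else is my own rough assumption, not measured figures:

```python
# Rough worst-case sizing for the SharedData_Full pool described above.
# Assumption (mine): a full backup is roughly the whole 1.5 TB fileset,
# and diffs/incrementals are small because the data is mostly static.

GB_PER_TB = 1024                  # work in GB throughout

full_size_gb = 1.5 * GB_PER_TB    # one full backup of the 1.5 TB fileset
volume_cap_gb = 1                 # Maximum Volume Bytes = 1G

volumes_per_full = full_size_gb / volume_cap_gb
print(f"Volumes needed per full backup: {volumes_per_full:.0f}")

# With Volume Retention = 3 weeks and a full on the first Sunday of each
# month (4-5 weeks apart), the previous full's volumes should age out
# before the next full runs. But if something stops them being recycled
# (which is exactly my File/Job retention question), two fulls (~3 TB)
# would have to coexist on the 2 TB array.
```

So even one extra retained full is enough to overflow the array, which is why I'd like to be sure the volumes really do get recycled.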
Would anyone be able to say whether they think the way I have set this up is correct, or offer any suggestions on how I could improve it?
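One alternative I've been considering, from my reading of the Pool resource documentation, is to cap the SharedData pools by volume count rather than relying on retention alone - something along these lines (the 1600 figure is just an illustrative cap I picked to fit the array, not a tested value):

```
Pool {
  Name = SharedData_Full
  Pool Type = Backup
  Maximum Volumes = 1600          # hard upper bound: 1600 x 1G volumes
  Recycle Oldest Volume = yes     # reuse the oldest volume once the cap is hit
  Recycle = yes
  AutoPrune = yes
  LabelFormat = SharedData_Full_
  Maximum Volume Bytes = 1G
  Storage = "SharedData_Full"
}
```

I haven't tried this yet, so I'd welcome comments on whether it would behave as I hope.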
I would appreciate any input anyone has on this.
Thanks,
Joe Nyland
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users