Subject: Re: [Bacula-users] Pool per client
From: Pablo Marques <pmarques AT miamilinux DOT net>
To: Gavin McCullagh <gavin.mccullagh AT gcd DOT ie>
Date: Wed, 13 Apr 2011 13:17:00 -0400 (EDT)
>> But I would still have the problem that I need a device tied up backing
>> up each client.  The problem I am facing is that I need to back up lots of
>> slow clients, and I need to come up with something so I can back them all
>> up at the _same_ time on one or maybe a few devices, and still have a
>> Pool per client.

> I'm not clear if you're trying to avoid lots of physical devices or lots of
> bacula storage device definitions.  You could create one Device {} entry
> per client in the bacula-sd.conf.  These each correspond to a different
> directory on some filesystem.  You then run each backup to its own file
> Device -- these can all happen concurrently.

> You should then be able to migrate each one in turn to tape.

> Or maybe I've missed something?
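
The suggestion above amounts to one entry per client in bacula-sd.conf, along
these lines (the name and path here are only illustrative, not from my setup):

Device {
  Name = client42-file                 # hypothetical per-client device
  Media Type = File
  Archive Device = /backup/client42    # one directory per client on the same filesystem
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}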

I would try to avoid changing the device definitions in bacula-sd.conf every 
time I add or delete a client, as this could happen very often.

If I back up, let's say, 500 clients at night, the ideal would be to back them 
all up to the same device at the same time: if one client stalls or loses its 
connection, the others can continue without problems.
If I tie up one device per client and a client has problems, that device 
becomes unusable until the client finishes or the job times out.

I guess I could modify bacula-sd.conf and add/remove a file device per client 
as needed, but I am not sure whether I can "reload" bacula-sd.conf without 
interrupting running backups.

When I add a client, I have a "template" with all of the per-client 
definitions that I need. I replace $CLIENT_NAME, $IP_ADDRESS and $PORT, 
generate a new file, then do a "reload" in bconsole, and the client is ready 
to go (a sketch of that substitution step follows the template below).
In my application the clients decide their own backup schedule and run it 
from their own bconsole; each client can only run or restore its own backups.

=====================================================================================
 # We need $CLIENT_NAME $IP_ADDRESS $PORT
Client {
  Name = $CLIENT_NAME-fd
  Address = $IP_ADDRESS
  FDPort = $PORT
  Catalog = MyCatalog
  Password = "xccxcc"          # password for FileDaemon
  File Retention = 5 years
  Job Retention = 15 years
  AutoPrune = yes                     # Prune expired Jobs/Files
}
Console {
  Name = $CLIENT_NAME
  Password = "$CLIENT_NAMEpassword"
  JobACL = "$CLIENT_NAME-fd-data","$CLIENT_NAME-restore"   # must match the Job names defined below
  ClientACL = $CLIENT_NAME-fd
  StorageACL = CHANGER
  ScheduleACL = *all*
  PoolACL = $CLIENT_NAME
  FileSetACL = "$CLIENT_NAME-set"
  CatalogACL = MyCatalog
  WhereACL = *all*
  CommandACL = run, restore
}

Job {
  Name = "$CLIENT_NAME-base-fd-data"
  JobDefs = "jobbaculadefs"
  Client = $CLIENT_NAME-fd
  FileSet = "$CLIENT_NAME-set"
  Pool = $CLIENT_NAME
  Level = Base
  SpoolData = yes
  Maximum Concurrent Jobs = 1000
  Max Run Sched Time = 86400
}

Job {
  Name = "$CLIENT_NAME-fd-data"
  JobDefs = "jobbaculadefs"
  Client = $CLIENT_NAME-fd
  FileSet = "$CLIENT_NAME-set"
  Pool = $CLIENT_NAME
  Base = $CLIENT_NAME-base-fd-data
  Accurate = yes
  SpoolData = yes
  Maximum Concurrent Jobs = 1000
  Max Run Sched Time = 86400
}

Job {
  Name = "$CLIENT_NAME-restore"
  Type = Restore
  Client = $CLIENT_NAME-fd
  FileSet="$CLIENT_NAME-set"
  Storage = CHANGER
  Pool = $CLIENT_NAME
  Messages = Standard
  Where = /
}

Pool {
   Name = $CLIENT_NAME
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 15 years
   Recycle Oldest Volume = yes
   Recycle Pool = Scratch
}
# We need $CLIENT_NAME 
FileSet {
  Name = "$CLIENT_NAME-set"
  Include {
    Options {
      signature = MD5
      compression = GZIP
      Sparse = yes
    }
    @/etc/bacula/clients-configs/$CLIENT_NAME-filelist
  }
}
=======================================================================
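
For what it's worth, the substitution step I mentioned above is nothing more 
than string replacement plus a Director reload. A minimal sketch in Python 
(the paths, file names and example client are hypothetical, not taken from my 
actual scripts):

#!/usr/bin/env python3
# Hypothetical helper: fill in the template above for a new client and ask
# the Director to re-read its configuration via bconsole's "reload" command.
import subprocess

TEMPLATE = "/etc/bacula/clients-configs/client-template.conf"  # assumed location
OUTPUT_DIR = "/etc/bacula/clients-configs"                     # assumed location

def add_client(client_name, ip_address, port):
    with open(TEMPLATE) as f:
        conf = f.read()
    # Substitute the three placeholders used throughout the template.
    conf = (conf.replace("$CLIENT_NAME", client_name)
                .replace("$IP_ADDRESS", ip_address)
                .replace("$PORT", str(port)))
    with open(f"{OUTPUT_DIR}/{client_name}.conf", "w") as f:
        f.write(conf)
    # The generated file is assumed to be pulled into bacula-dir.conf with an
    # @ include; "reload" then makes the Director pick it up without a restart.
    subprocess.run(["bconsole"], input="reload\nquit\n", text=True, check=True)

if __name__ == "__main__":
    add_client("client42", "192.0.2.10", 9102)

After that, the client's own restricted console can start its job with 
something like "run job=client42-fd-data", which together with restore is all 
the CommandACL above allows.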

Hope this clarifies my setup. 

Pablo
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users
