On 11/10/2010 04:56 PM, Phil Stracchino wrote:
> On 11/10/10 09:32, Igor Zinovik wrote:
>> Hello.
>>
>> I'm deploying Bacula in our network. Before it goes into production I
>> need to work out one problem for myself: how should I manage pools in
>> my setup?
>>
>> I have an ordinary Linux box running CentOS and Bacula 5.0.3. I'm going
>> to store all my backups on an NFS share mounted from a NetApp NAS.
>>
>> The Bacula documentation always says that pools are very good for
>> managing tapes, but what about disks? Should I bother defining several
>> pools for disk storage, e.g. should I create a pool for each client so
>> that Bacula writes all data belonging to a particular client into a
>> separate pool, and thus into a separate volume? That would give me
>> something like this:
>> client1 -> pool1 -> client1-vol
>> client2 -> pool2 -> client2-vol
>
> This would be a very bad idea, because it will effectively mean that you
> can only ever have one job running at a time per storage device. Any
> storage device can only have one volume at a time mounted, and if each
> client has its own individual pool and can use volumes only from that
> pool, then every job has to wait for its turn to own the storage device
> to mount a volume it's allowed to write to.
Not exactly true: if you create a separate Device resource (and a matching
Storage resource) for each client, you can run whatever jobs you want
concurrently, whether in different pools or not.
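A minimal sketch of that per-client layout, assuming an SD host reachable
at backup.example.com and per-client directories under the NFS mount (all
resource names, paths, and the password below are made-up examples, not
taken from anyone's actual configuration):

```conf
# bacula-sd.conf -- one File device per client; each device needs
# its own unique Media Type so the Director can match volumes to it
Device {
  Name = FileClient1
  Media Type = FileClient1
  Archive Device = /mnt/netapp/bacula/client1   # example path on the NFS share
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

# bacula-dir.conf -- a matching Storage resource per device
Storage {
  Name = File-Client1
  Address = backup.example.com        # example SD address
  SDPort = 9103
  Password = "sd-password"            # placeholder
  Device = FileClient1
  Media Type = FileClient1
}
```

With one such Device/Storage pair (plus its own pool) per client, jobs for
different clients mount different volumes on different devices, so they no
longer queue behind one another for a single storage device.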
>
>> Or maybe I should not bother with pools in my disk setup at all? I have
>> a rather big NFS share, about 2 terabytes. The NetApp NAS protects my
>> copies with RAID-DP (a modified RAID 6 that protects against double
>> disk failures). Maybe I should just use one `Default' pool and not
>> worry about pool management.
>
> This would mean that all volumes have the same retention. Whether this
> is a problem for you depends on how you choose to use and manage your
> volumes.
>
>> Or maybe it is better to create separate pools for full, incremental
>> and differential backups?
>
> This is the way I do it. I use separate Full, Differential and
> Incremental pools, each with its own volume retention time. Volumes are
> automatically created and autolabelled by date as needed, with a volume
> use duration window to make sure each volume is used for only one set of
> backups. Purged volumes are recycled into the scratch pool, and an
> admin job goes through the scratch pool once a week, finds all of the
> purged volumes, and deletes them both from the catalog and from disk.
> Full backups additionally get copied to tape after completion, by a
> separate backup job that runs after all full backups have completed.
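
Phil's scheme above could be sketched roughly like this (the retention and
use-duration values and label formats are illustrative examples, not his
actual settings):

```conf
# bacula-dir.conf -- one pool per backup level, each recycling
# its purged volumes into the special Scratch pool
Pool {
  Name = Full
  Pool Type = Backup
  Volume Retention = 6 months         # example retention
  Volume Use Duration = 23 hours      # one volume per nightly run
  Label Format = "Full-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  AutoPrune = yes
  Recycle = yes
  Recycle Pool = Scratch              # purged volumes move here
}

Pool {
  Name = Differential
  Pool Type = Backup
  Volume Retention = 1 month          # example retention
  Volume Use Duration = 23 hours
  Label Format = "Diff-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  AutoPrune = yes
  Recycle = yes
  Recycle Pool = Scratch
}

Pool {
  Name = Incremental
  Pool Type = Backup
  Volume Retention = 1 week           # example retention
  Volume Use Duration = 23 hours
  Label Format = "Inc-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  AutoPrune = yes
  Recycle = yes
  Recycle Pool = Scratch
}

Pool {
  Name = Scratch                      # Bacula treats this pool name specially
  Pool Type = Backup
}
```

The weekly cleanup would be a site-specific script run from an Admin job:
it lists purged volumes sitting in Scratch, issues `delete volume=... yes`
through bconsole for each, and then removes the corresponding file from
the NFS share.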
>
>
--
Bruno Friedmann (irc:tigerfoot)
Ioda-Net Sàrl www.ioda-net.ch
openSUSE Member
User www.ioda.net/r/osu
Blog www.ioda.net/r/blog
fsfe fellowship www.fsfe.org
GPG KEY : D5C9B751C4653227
vcard : http://it.ioda-net.ch/ioda-net.vcf
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users