Subject: Re: [Bacula-users] RFC on how best to share pools, storage devices etc.
From: Gavin McCullagh <gavin.mccullagh AT gcd DOT ie>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 5 Apr 2011 11:45:28 +0100
Hi,

On Fri, 01 Apr 2011, Gavin McCullagh wrote:

> When I was starting out, I came across a post somewhere on the lists that
> said it was a good idea with disk volumes to create a separate storage
> device for each client as it would avoid concurrency issues, etc.
> 
> I went a little further with this and created multiple pools (full, inc,
> diff) for each client, a fileset for each client, a schedule for each
> client, etc.  

.....

> I have a bunch of laptops to back up now and I'm thinking maybe I should
> try to be more disciplined for these and create a single storage device,
> pool set, jobdefs, schedule and (default) fileset shared by all of them.
> This would let me delegate the creation of new jobs more easily, as the
> config junior staff have to deal with would be smaller.  One downside is
> that I would only see the total sizes and mtimes of the shared volumes,
> not per-client figures.  Are there any other disadvantages?  Is this a
> good idea, or should I just keep going as I have been?  Should I try to
> do the same on our servers to reduce the config?
> 
> Many thanks in advance for any suggestions,

So I guess nobody has an opinion on whether it's better to create lots of
storage devices (directories), pools, etc. for each client, or just to
share one set between them?
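
To make the question concrete, here is roughly what I have in mind for the
shared setup in bacula-dir.conf.  The resource names (LaptopDefaults,
Laptop-Full, File-Laptops, etc.) and the retention values are only
placeholders, not a tested config:

# One storage device shared by all laptops.  Raise Maximum Concurrent Jobs
# on the matching Device resource in bacula-sd.conf if several laptops
# are likely to run at once.
Storage {
  Name = File-Laptops
  Address = backup.example.com
  Password = "secret"
  Device = LaptopFileStorage
  Media Type = File
}

# One pool per level, shared by every laptop client.
Pool {
  Name = Laptop-Full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 months
}
# Laptop-Diff and Laptop-Inc defined the same way, with shorter retention.

# Common defaults, so adding a laptop only needs a Client resource and a
# short Job resource.
JobDefs {
  Name = LaptopDefaults
  Type = Backup
  Level = Incremental
  FileSet = "Laptop Default"
  Schedule = "LaptopCycle"
  Storage = File-Laptops
  Messages = Standard
  Pool = Laptop-Inc
  Full Backup Pool = Laptop-Full
  Differential Backup Pool = Laptop-Diff
  Incremental Backup Pool = Laptop-Inc
}

Job {
  Name = "backup-laptop01"
  Client = laptop01-fd
  JobDefs = LaptopDefaults
}

The idea is that a new laptop is then just a Client resource plus a
three-line Job, which is easy to hand off to junior staff; the obvious
trade-off is that all the laptops' data ends up mixed on the same set of
volumes.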

Gavin



