A hundred and twenty jobs makes me wince. Why not simplify? A couple of ideas, and feel free to knock them down:

a. I HATE NFS, so I'm not mentioning it as a solution.
b. rsync all the /etc's into a central location and back that up. This would make for a much faster backup because the rsyncs can run in parallel. It is also simpler. Restoring becomes a two-step process, though, and I can see where that could be a problem time-wise if you restore big chunks.
c. Use multiple Baculas, say 6, each running 20 jobs. Faster, and maybe not simpler.
d. I'm not going to mention BitTorrent because I've not tried it.
I like 'b'
Mehma

On Sat, Feb 27, 2010 at 11:27 AM, Kevin Keane <subscription AT kkeane DOT com> wrote:
> We don't backup whole servers, there's no point. So, yes, 120 systems
> may seem like a lot, but for a lot of those, it will only be /etc,
> /opt, /root and perhaps the crontabs.
Ah, that makes sense. I am using a similar minimal backup for some of my remote web servers (there I am also backing up web sites, home directories, etc.). One additional thing I back up: I run "rpm -qa" into a file (as a Client Run Before Job) and back up that file. That way, I know which packages I need to reinstall during disaster recovery.
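In case it helps anyone, the wiring looks roughly like this. Client Run Before Job is the standard Bacula directive; the job name, output path, and the exact command line are placeholders, and the output file must land inside a directory your FileSet already covers:

```
# Hypothetical Job resource fragment -- names and paths are examples only.
Job {
  Name = "web01-minimal"
  # Write the package manifest into a directory the FileSet backs up,
  # so it travels with the rest of the minimal backup.
  Client Run Before Job = "/bin/sh -c 'rpm -qa | sort > /etc/installed-packages.txt'"
}
```

Sorting the list also makes it easy to diff manifests between backups when you are trying to work out what changed on a box.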
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users