Subject: Re: [BackupPC-users] Per-PC pools
From: <backuppc AT kosowsky DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Wed, 13 Mar 2013 13:46:21 -0400
Holger Parplies wrote at about 15:51:19 +0100 on Wednesday, March 13, 2013:
 > Hi,
 > 
 > backuppc AT kosowsky DOT org wrote on 2013-03-13 10:11:17 -0400 [Re: 
 > [BackupPC-users] Per-PC pools]:
 > > Stephen Joyce wrote at about 07:52:11 -0400 on Wednesday, March 13, 2013:
 > >  > I'm in a situation where I find myself desiring per-pc pools.[1]
 > >  > [...]
 > >  
 > > I read what you write and come to a different conclusion...
 > 
 > I'm not saying I don't, but there's one thing you don't mention:
 > 
 > > [...]
 > > 2. Second, why go to the trouble of rewriting deeply embedded code to
 > >    separate pools within a single BackupPC daemon process rather than
 > >    just running separate instances of BackupPC? Once the pools are
 > >    separate, there are truly de minimis savings to run one vs. multiple
 > >    BackupPC instances.
 > 
 > True, but the one thing you *don't* get with independent daemon processes is
 > coordinated scheduling. There are good reasons to limit concurrent backups.
 > With independent storage units that may be less important, but there *can* be
 > other reasons for wanting a global decision process (i.e. which backup should
 > be run first, how many concurrently ...). Depending on your infrastructure,
 > that could either mean using separate servers, modifying the code as you
 > suggested, pooling the money and keeping a single instance, or just running
 > several instances because you don't need coordinated scheduling. In fact, you
 > might even need coordination per group rather than globally, which would be
 > much easier with independent instances.
 > 
 > Another point that springs to mind is that all might benefit from a common
 > pool on a faster storage system (more spindles, faster RAID level). While
 > total backup time over all systems might be the same, each individual backup
 > might complete faster. You might just get more out of your money by pooling
 > resources.
 > 

I agree with your comment on scheduling, though that would be a
manageable issue unless he were supporting large numbers of
independent users/pools. Given his use case, faculty with separate
filesystems, I assumed that the overall number of distinct instances
would be relatively small. For a small number of instances, one can
simply reduce the maximum concurrent backups per instance and/or
adjust the blackout periods. Since the rate-limiting step on most
modern systems is file access rather than computational power, and
since the OP was talking about separate filesystems (presumably on
separately purchased hardware), the number of concurrent processes is
probably not even an issue. This seems far less risky than mucking
with the code, especially since the notion of pool location is
scattered throughout the code...
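
As a sketch of what I mean, the throttling could live entirely in each
instance's config.pl. The parameter names below are the standard
BackupPC ones ($Conf{MaxBackups}, $Conf{MaxUserBackups},
$Conf{BlackoutPeriods}); the values are purely illustrative:

```perl
# Per-instance config.pl fragment (illustrative values).

# Limit this instance to 2 simultaneous scheduled backups, so that
# several instances running side by side don't saturate the server.
$Conf{MaxBackups} = 2;

# Also cap user-requested backups that may run alongside scheduled ones.
$Conf{MaxUserBackups} = 2;

# Give each instance a different blackout window to stagger the load;
# here, automatic backups for this instance run only outside
# 7:00am-7:30pm on weekdays.
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 7.0,
        hourEnd   => 19.5,
        weekDays  => [1, 2, 3, 4, 5],   # Monday-Friday
    },
];
```

With a different blackout window per instance, you get a crude form of
the coordinated scheduling Holger mentions without touching the code.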

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/