Subject: Re: [BackupPC-users] xxShareName = /cygwin ??
From: Holger Parplies <wbppc AT parplies DOT de>
To: hansbkk AT gmail DOT com
Date: Fri, 9 Sep 2011 02:48:44 +0200

Hi,

hansbkk AT gmail DOT com wrote on 2011-09-09 01:54:57 +0700 [[BackupPC-users] 
xxShareName = /cygwin ??]:
> Our users have a variety of storage media, from ordinary flash drives
> and SD cards to eSATA or FireWire HDDs, and even some swappable
> "internal" HDs. Much of this data is as important as, or sometimes
> even more important than, the data on the fixed drives.
> 
> Just as the notebook users are only intermittently attached to the
> LAN, these various drives are only occasionally attached to the
> notebooks.
> 
> Obviously it's a management challenge to set policies that will
> maximise the security of this data, but my question here is
> specifically about setting up config.pl so as to avoid having to
> create and maintain customized hostname.pl files.
> 
> I've tried to create an RsyncShareName = /cygwin  - note NOT
> specifying a drive letter, the idea being that if a given user has
> their F drive inserted one day and their H another, BackupPC will just
> grab whatever's there and mounted.

no, that will not work. Simple reason: your backup history will contain the
files backed up on one day, and the next day, when the drive isn't connected,
they will appear to have been deleted (or changed to what now happens to be
connected under the same path). Inevitably, the backup from the day the disk
*is* connected will end up being an incremental and will thus expire, whether
or not you have more recent backups of the data. Even full backups can expire while
older backups are still kept if you use an exponential scheme. So besides you
having to look through *all backups* of *all hosts* to find one particular
disk, and then deciding *manually* which is the newest copy, the newest copy
may not even exist any longer, leaving only older data. What good is a backup
scheme with such characteristics? It's definitely an idea nobody has come up
with yet, but that alone doesn't mean it's good.
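
For illustration, here is what a typical exponential expiry setup in
config.pl looks like (a sketch; the numbers are the common example values
from the documentation, not a recommendation):

    # Keep 4 fulls at 1 x $Conf{FullPeriod}, then 2 at 2x, then 3 at
    # 4x.  As new fulls arrive, fulls in the denser tiers expire even
    # though much older fulls in the sparser tiers are still kept --
    # so the one backup that happened to catch your removable disk
    # can be among the first to go.
    $Conf{FullKeepCnt} = [4, 2, 3];
    $Conf{FullPeriod}  = 6.97;   # roughly weekly full backups
    $Conf{IncrKeepCnt} = 6;      # incrementals expire much sooner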

A further reason: within your backups, you are reproducing the chaos of
*not* having solved the management challenge.

Structure.

From the recent thread (to which I *will* reply, like it or not, though not
today) I get the impression that that is not your strong point. Structure.
And I in no way mean to insult you there. I'm just pointing out what I notice,
and you'll know whether it applies or is way off. We all have our strengths
and weaknesses, and that's fine.

As Arnold pointed out, you will first need to solve the social problem. *Then*
you will need to create *a virtual host* for each disk (meaning a BackupPC
hosts file entry with a name representing this disk) - because the disk is the
unit you back up. You will somehow need to fill in $Conf{ClientNameAlias} with
something usable for the backup (maybe it will be static for the disk, because
you have set up the policy that this disk is *always* backed up via host X).
You will need to set up a PingCmd for this host which finds out whether the
disk is currently accessible. Or perhaps you will require backups to be
initiated manually (also makes sense not to start a backup when the user needs
to leave for the airport *with the disk* in 5 minutes, right?). Finally, you
will need to set things up for the disk contents to be backed up in the same
way, regardless of how the disk is accessed in the individual backup (last
time it was drive F:, today it's drive H: ... no good, unless you can hide
that difference from BackupPC). This is sort of implementing DHCP for disks
instead of hosts. (Disks could even register themselves with <something>,
which you could then query to find the disk. Autorun has to be good for
*something*.)
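
To make that concrete, here is a minimal sketch of what one such per-disk
virtual host could look like. The host name, user and the probe-disk helper
are made up purely for illustration; only the $Conf{} parameters themselves
are standard BackupPC:

    # BackupPC hosts file -- one virtual host *per disk*:
    #   host             dhcp  user
    #   disk-projects01  0     jsmith

    # Per-host config (pc/disk-projects01.pl or equivalent).
    # Policy: this disk is always backed up via jsmith's notebook.
    $Conf{ClientNameAlias} = 'jsmith-laptop';

    # "Ping" here means: probe whether this particular disk is
    # attached right now.  probe-disk is a hypothetical client-side
    # script that scans the drive letters for a marker file
    # identifying disk projects01 and fails if it is absent, so the
    # backup is skipped while the disk is unplugged.
    $Conf{PingCmd} = '/usr/bin/ssh -q -x -l backup $host probe-disk projects01';

    # Export the disk through an rsyncd module on the client whose
    # path is updated (by the user or an autorun script) to wherever
    # the disk is currently mounted.  BackupPC then always sees the
    # same share name, whether the disk is F: today or H: tomorrow.
    $Conf{XferMethod}     = 'rsyncd';
    $Conf{RsyncShareName} = ['projects01'];

If you go the manually-initiated route instead, $Conf{FullPeriod} = -1
disables automatic backups for the host while still allowing backups
requested through the CGI interface.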

It's complicated, but it should be doable. I can't tell you whether BackupPC
is the best tool to use. But, *first of all*, you need a concept. *Then*, you
can implement the concept. "Let's just see which data happens to end up in the
backups" is only going to do just that: capture random data. If you are in any
way serious about the data being important, that is not going to be good
enough by far.

> Downside is that we'd also be backing up data from optical media that
> happened to be in the DVD drive at the time, but that's a price we're
> willing to pay, perhaps handled with strategic exceptional excludes if
> it proves worth the headaches.

With a concept, you avoid that for free. And, no, going down the wrong
path and patching up a huge leak in your dam with a band-aid to cut down the
water flow by 0.1% is not worth the headaches. You will still drown.

> Is there a way to accomplish this? Even if it's a kludge workaround...

You shouldn't settle for kludges before even thinking about the problem.

> Or is this a truly idiotic idea that should indeed be prevented by design?

The *goal* of backing up *any* valuable data is not idiotic at all. Your
current approach towards it is (since you're putting it that way).

As for Tim Fletcher's idea of backing up Windoze with tar: err, no.

Hope that helps.

Regards,
Holger

P.S.: Thank you for an interesting question. The goal is good. As Arnold
      pointed out, not everything can (or should) be solved by technical
      means alone, though I believe they can be a great aid in this case.
