BackupPC-users

Subject: Re: [BackupPC-users] BackupPC-users Digest, Vol 60, Issue 1
From: akeem abayomi <akeem077 AT yahoo DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Thu, 7 Apr 2011 04:15:03 -0700 (PDT)
Hi All,

How can I upgrade my system from BackupPC 3.1.0 to version 3.2.0?
 
Jimoh Akeem. A    |Network Engineer| Network Data Systems Limited
                                  |290a  Ajose Adeogun Str. Victoria Island.Lagos|
                                  |Tel:01-7740581, 01-2701286, 08191164047, 08026098177| www.netdatangr.com




From: "backuppc-users-request AT lists.sourceforge DOT net" <backuppc-users-request AT lists.sourceforge DOT net>
To: backuppc-users AT lists.sourceforge DOT net
Sent: Wed, April 6, 2011 4:07:53 PM
Subject: BackupPC-users Digest, Vol 60, Issue 1

Send BackupPC-users mailing list submissions to
    backuppc-users AT lists.sourceforge DOT net

To subscribe or unsubscribe via the World Wide Web, visit
    https://lists.sourceforge.net/lists/listinfo/backuppc-users
or, via email, send a message with subject or body 'help' to
    backuppc-users-request AT lists.sourceforge DOT net

You can reach the person managing the list at
    backuppc-users-owner AT lists.sourceforge DOT net

When replying, please edit your Subject line so it is more specific
than "Re: Contents of BackupPC-users digest..."


Today's Topics:

  1. Daylight Saving change - 1 day backups were off (Boniforti Flavio)
  2. Re: Daylight Saving change - 1 day backups were off
      (Tyler J. Wagner)
  3. Re: Daylight Saving change - 1 day backups were off
      (Boniforti Flavio)
  4. Re: Daylight Saving change - 1 day backups were off
      (Tyler J. Wagner)
  5. Restrict machine to do full backups Friday night and
      incremental on weekdays? (Scott)
  6. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Bowie Bailey)
  7. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Timothy J Massey)
  8. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Jeffrey J. Kosowsky)
  9. Re: Auth failed on module cDrive (Tom Brown)
  10. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Bowie Bailey)
  11.  [newb] ssh rsync with restricted permissions (yilam)
  12. --exclude (backuppc-users AT whitleymott DOT net)
  13. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Timothy J Massey)
  14. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Timothy J Massey)
  15. High Repeated Data Transfer Volumes During Incremental Backup
      (nhoeller AT sinet DOT ca)
  16. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Timothy J Massey)
  17. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (Jeffrey J. Kosowsky)
  18. DeltaCopy Windows Server Enterprise (Mark Maciolek)
  19. Re: High Repeated Data Transfer Volumes During Incremental
      Backup (Jeffrey J. Kosowsky)
  20. BackupPC_dump hangs with: .: size doesn't match (12288 vs
      17592185913344) (John Rouillard)
  21. Re: High Repeated Data Transfer Volumes During Incremental
      Backup (John Rouillard)
  22. Re: DeltaCopy Windows Server Enterprise (Timothy J Massey)
  23. Re: Encrypted archives (Adam Monsen)
  24. Re: Restrict machine to do full backups Friday night and
      incremental on weekdays? (hansbkk AT gmail DOT com)
  25. Re: High Repeated Data Transfer Volumes During Incremental
      Backup (nhoeller AT sinet DOT ca)
  26. Re: High Repeated Data Transfer Volumes During Incremental
      Backup (nhoeller AT sinet DOT ca)
  27. Re: Daylight Saving change - 1 day backups were off
      (Boniforti Flavio)
  28. Re: High Repeated Data Transfer Volumes During Incremental
      Backup (John Rouillard)
  29. Re: Daylight Saving change - 1 day backups were off
      (Tyler J. Wagner)
  30. Re: Daylight Saving change - 1 day backups were off
      (Jeffrey J. Kosowsky)
  31. hey (Edgars Āboliņš)
  32. Keeping 1 month of files and number of full backups (Scott)
  33. Re: Keeping 1 month of files and number of full backups
      (Matthias Meyer)
  34. Viewing detail of a backup in progress? (Scott)
  35. excluding files (Scott)
  36. backing up to NAS over NFS (Peter Lavender)
  37. More on backing up to NFS (Peter Lavender)
  38.  Another BackupPC Fuse filesystem (Saturn2888)
  39. Re: Another BackupPC Fuse filesystem (Doug Lytle)
  40. bare metal restore? (Neal Becker)
  41. Re: More on backing up to NFS (Peter Lavender)
  42.  Another BackupPC Fuse filesystem (Saturn2888)
  43.  Another BackupPC Fuse filesystem (Saturn2888)
  44. Re: Another BackupPC Fuse filesystem (Doug Lytle)
  45. Re: bare metal restore? (Carl Wilhelm Soderstrom)
  46. Re: bare metal restore? (Neal Becker)
  47. Re: bare metal restore? (Carl Wilhelm Soderstrom)
  48. Re: bare metal restore? (Tyler J. Wagner)
  49. Re: Viewing detail of a backup in progress?
      (Carl Wilhelm Soderstrom)
  50. Re: Viewing detail of a backup in progress? (Tyler J. Wagner)
  51. Re: excluding files (Bowie Bailey)
  52. Re: BackupPC_dump hangs with: .: size doesn't match (12288 vs
      17592185913344) (Holger Parplies)
  53. Re: Viewing detail of a backup in progress? (Holger Parplies)
  54. Re: excluding files (Holger Parplies)
  55. Re: Viewing detail of a backup in progress?
      (Carl Wilhelm Soderstrom)
  56. Re: Keeping 1 month of files and number of full backups
      (Holger Parplies)
  57. Re: Auth failed on module cDrive (Holger Parplies)
  58. Re: bare metal restore? (Pedro M. S. Oliveira)
  59. Re: bare metal restore? (Matthias Meyer)
  60. Re: Viewing detail of a backup in progress? (Matthias Meyer)
  61. Change archive directory on a per-host basis? (Jake Wilson)
  62. Re: Change archive directory on a per-host basis? (Michael Stowe)
  63. Re: Change archive directory on a per-host basis? (Jake Wilson)
  64. Re: Change archive directory on a per-host basis? (Adam Goryachev)
  65. Re: Change archive directory on a per-host basis? (Adam Goryachev)
  66. Re: BackupPC_dump hangs with: .: size doesn't match (12288 vs
      17592185913344) (Jeffrey J. Kosowsky)
  67. Re: Change archive directory on a per-host basis?
      (Jeffrey J. Kosowsky)
  68. Re: Change archive directory on a per-host basis?
      (Jeffrey J. Kosowsky)
  69. Re: Change archive directory on a per-host basis? (Jake Wilson)
  70. Re: Change archive directory on a per-host basis?
      (Jeffrey J. Kosowsky)
  71. Empty directories for backups (Lee A. Connell)
  72. Re: Empty directories for backups (Jeffrey J. Kosowsky)
  73. Re: bare metal restore? (Neal Becker)
  74. Re: bare metal restore? (Tyler J. Wagner)
  75. Re: bare metal restore? (Carl Wilhelm Soderstrom)
  76.  [newb] ssh rsync with restricted permissions (yilam)
  77. Re: [newb] ssh rsync with restricted permissions (Steve)
  78. Managing connections to the administration interface
      (Grégoire COUTANT)
  79. Re: Managing connections to the administration interface
      (Carl Wilhelm Soderstrom)
  80. Re: Managing connections to the administration interface
      (Bowie Bailey)
  81. Question about $Conf{DumpPostUserCmd} (Mark Wass)
  82. Re: Question about $Conf{DumpPostUserCmd} (Mark Wass)
  83.  Another BackupPC Fuse filesystem (Saturn2888)
  84. Making errors in log stand out (Sorin Srbu)
  85. Re: Making errors in log stand out (Tyler J. Wagner)
  86. Re: Making errors in log stand out (Sorin Srbu)
  87. Re: Making errors in log stand out (Tyler J. Wagner)
  88. Re: Making errors in log stand out (Sorin Srbu)
  89. Re: Making errors in log stand out (Sorin Srbu)
  90. Re: Another BackupPC Fuse filesystem (Carl Wilhelm Soderstrom)
  91. Re: Making errors in log stand out (Sorin Srbu)
  92. Re: Managing connections to the administration interface
      (Grégoire COUTANT)
  93. Re: Making errors in log stand out (Tyler J. Wagner)
  94. Re: Making errors in log stand out (Sorin Srbu)
  95. Re: Making errors in log stand out (Bowie Bailey)
  96. Re: Making errors in log stand out (Sorin Srbu)
  97.  Another BackupPC Fuse filesystem (Saturn2888)


----------------------------------------------------------------------

Message: 1
Date: Wed, 30 Mar 2011 11:40:12 +0200
From: "Boniforti Flavio" <flavio AT piramide DOT ch>
Subject: [BackupPC-users] Daylight Saving change - 1 day backups were
    off
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <[email protected]>
Content-Type: text/plain;    charset="us-ascii"

Hello everybody.

Something strange happened and my backups didn't work one night.

Here's the situation:

I got my backups scheduled at 21:00 every night.

Here (Switzerland) the Daylight Saving Time change happened on Sunday
morning, 27.03.2011.

So, my backups worked ok on 26.03.2011, starting at 21:00.
Sunday evening the backups still worked ok, but started at 22:00.
I guess this is ok, considering standard IncrPeriod value of 0.97.
Now, the strange part is that last night every backup stated "Nothing to
do"!
I can see that backups started at 21:00, which is not yet 97% of 24
hours, right?

Why has this happened?
Is there any way to correct this behaviour?

Thanks in advance and kind regards,
Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: flavio AT piramide DOT ch



------------------------------

Message: 2
Date: Wed, 30 Mar 2011 12:09:09 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Daylight Saving change - 1 day backups
    were off
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1301483349.3658.70.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Wed, 2011-03-30 at 11:40 +0200, Boniforti Flavio wrote:
> Hello everybody.
>
> Something strange happened and my backups didn't work one night.
>
> Here's the situation:
>
> I got my backups scheduled at 21:00 every night.
>
> Here (Switzerland) the Daylight Saving Time change happened on Sunday
> morning, 27.03.2011.
>
> So, my backups worked ok on 26.03.2011, starting at 21:00.
> Sunday evening the backups still worked ok, but started at 22:00.
> I guess this is ok, considering standard IncrPeriod value of 0.97.

I see the same. All my backups executed an hour later, which is what I'd
expect given IncrPeriod. Also, the log shows BackupPC is aware of the
timezone change (it expected to skip an hour):

2011-03-27 00:00:00 Next wakeup is 2011-03-27 02:00:00

> Now, the strange part is that last night every backup stated "Nothing to
> do"!
> I can see that backups started at 21:00, which is not yet 97% of 24
> hours, right?

I do not have this problem on any of my 4 BackupPC servers.

I cannot see why this problem would start for you 3 days later. Whatever it
is, I don't see how it has anything to do with the switch to summer time.

Regards,
Tyler

--
"Should one decide to implement violence as a solution, it should be
applied without hesitation or relent until the issue is resolved. It is
to no one's benefit, save your adversaries, to half-beat someone."
  -- Henry Clyde Hatcher IV




------------------------------

Message: 3
Date: Wed, 30 Mar 2011 13:46:57 +0200
From: "Boniforti Flavio" <flavio AT piramide DOT ch>
Subject: Re: [BackupPC-users] Daylight Saving change - 1 day backups
    were off
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <[email protected]>
Content-Type: text/plain;    charset="us-ascii"

Hello Tyler and thanks for taking time for this issue...

> > So, my backups worked ok on 26.03.2011, starting at 21:00.
> > Sunday evening the backups still worked ok, but started at 22:00.
> > I guess this is ok, considering standard IncrPeriod value of 0.97.
>
> I see the same. All my backups executed an hour later, which
> is what I'd expect given IncrPeriod. Also, the log shows
> BackupPC is aware of the timezone change (it expected to skip
> an hour):
>
> 2011-03-27 00:00:00 Next wakeup is 2011-03-27 02:00:00

Where may I check this for the past days?

> > Now, the strange part is that last night every backup
> stated "Nothing
> > to do"!
> > I can see that backups started at 21:00, which is not yet 97% of 24
> > hours, right?
>
> I do not have this problem on any of my 4 BackupPC servers.
>
> I cannot see why it would start for you 3 days later.
> Whatever this is, I don't see how it has anything to do with
> the switch to summer time.

Well, I put it in relation to summer time, because nothing else happened
during that weekend.
What else could it be?

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: flavio AT piramide DOT ch



------------------------------

Message: 4
Date: Wed, 30 Mar 2011 13:41:23 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Daylight Saving change - 1 day backups
    were off
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1301488883.3658.82.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Wed, 2011-03-30 at 13:46 +0200, Boniforti Flavio wrote:
> > 2011-03-27 00:00:00 Next wakeup is 2011-03-27 02:00:00
>
> Where may I check this for the past days?

See "Old Logs" on the left-side menu.

> Well, I put it in relation tu summer time, because nothing else during
> that weekend.
> What else could it be?

If the problem started last night (showing "Nothing to do"), why do you
assume it has anything to do with the weekend at all? Last night was
Tuesday. Did Monday's backups run?

Regards,
Tyler

--
"If we confuse dissent with disloyalty -- if we deny the right of the
individual to be wrong, unpopular, eccentric or unorthodox -- if we
deny the essence of racial equality then hundreds of millions in Asia and
Africa who are shopping about for a new allegiance will conclude that we
are concerned to defend a myth and our present privileged status. Every
act that denies or limits the freedom of the individual in this country
costs us the ... confidence of men and women who aspire to that freedom
and independence of which we speak and for which our ancestors fought."
  -- Edward R. Murrow




------------------------------

Message: 5
Date: Wed, 30 Mar 2011 10:16:23 -0400
From: Scott <coolcoder AT gmail DOT com>
Subject: [BackupPC-users] Restrict machine to do full backups Friday
    night    and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <AANLkTinRuz6_AXDvqvKBScYHQVvZpJnmGXkpX-MwFKav AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

Full backups from one machine look like they are going to take > 12 hours,
so a night time full backup is not going to work - for this one machine I
need it to happen starting Friday night so it has all weekend to finish
(poor connectivity).  All the other machines can stay on the normal
default schedule.  Is this possible/how? Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 6
Date: Wed, 30 Mar 2011 10:52:21 -0400
From: Bowie Bailey <Bowie_Bailey AT BUC DOT com>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D9343A5.7090801 AT BUC DOT com>
Content-Type: text/plain; charset=ISO-8859-1

On 3/30/2011 10:16 AM, Scott wrote:
> Full backups from one machine look like they are going to take > 12
> hours, so a night time full backup is not going to work - for this one
> machine I need it to happen starting Friday night so it has all
> weekend to finish (poor connectivity).    All the other machines can
> stay on the normal default schedule.  Is this possible/how? Thanks!

Two possibilities here:

1) Start the backup manually the first Friday night.  After this, the
normal backup scheduling will continue starting the backup at
approximately the same time each week.  If it shifts too much, then run
another manual backup to get it back on schedule.

2) Disable scheduled backups for this machine and run them from cron
instead.  For example:

# Machine1 backups (3:15am) -- Full on Saturday, Inc other days
15 3 * * 6 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1 machine1 backuppc 1
15 3 * * 0-5 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1 machine1 backuppc 0

--
Bowie



------------------------------

Message: 7
Date: Wed, 30 Mar 2011 11:55:15 -0400
From: Timothy J Massey <tmassey AT obscorp DOT com>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <OF9CF511B5.669C3B48-ON85257863.0056AE0A-85257863.00579897 AT obscorp DOT com>
   
Content-Type: text/plain; charset="us-ascii"

Bowie Bailey <Bowie_Bailey AT BUC DOT com> wrote on 03/30/2011 10:52:21 AM:

> On 3/30/2011 10:16 AM, Scott wrote:
> > Full backups from one machine look like they are going to take > 12
> > hours, so a night time full backup is not going to work - for this one
> > machine I need it to happen starting Friday night so it has all
> > weekend to finish (poor connectivity).    All the other machines can
> > stay on the normal default schedule.  Is this possible/how? Thanks!
>
> Two possibilities here:
>
> 1) Start the backup manually the first Friday night.  After this, the
> normal backup scheduling will continue starting the backup at
> approximately the same time each week.  If it shifts too much, then run
> another manual backup to get it back on schedule.

This actually works reasonably well.  If the impact of occasionally running
the fulls on the wrong day isn't too great and you keep an eye on it once a
week, this works well enough.

Also, don't forget that future fulls are shorter than the first full if
you use rsync/rsyncd.  So if the first one is taking 12 hours, the
subsequent ones will take less.

Finally, is a 12 hour backup really that bad for your environment?  Can it
run from 6 P.M. to 6 A.M., for example?

In any case, if you absolutely have to make sure you run them on a certain
day...

> 2) Disable scheduled backups for this machine and run them from cron
> instead.  For example:
>
> # Machine1 backups (3:15am) -- Full on Saturday, Inc other days
> 15 3 * * 6 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
> machine1 backuppc 1
> 15 3 * * 0-5 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
> machine1 backuppc 0

I do not recommend *disabling* scheduled backups.  Instead, modify the
schedule: set the full backup period to something like 7.97 days and use the
cron entries described above.  That way, if something goes wrong with the
cron jobs, BackupPC will still initiate a backup.  Yes, it will do this a
day late, but at least you're getting a backup--and maybe the slowdown (or
whatever you're trying to avoid by running it on a certain day) will let you
know that there's a problem!  :)
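In config terms, that safety net might look like this (a sketch only: the
host name and file path are made up, but the $Conf options are standard
BackupPC settings):

```perl
# Hypothetical host override, e.g. pc/machine1.pl, assuming cron fires the
# real Friday-night full as in the crontab example above.
$Conf{FullPeriod} = 7.97;   # just under 8 days: BackupPC schedules a full
                            # itself only if the cron-driven one was missed
$Conf{IncrPeriod} = 0.97;   # keep the default daily incrementals
```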

The other thing to keep in mind is that, while it's not tremendously great
for the performance of a server to be doing a backup, most decent servers
can handle a backup running right in the middle of the day with only a
little drop in performance.  I've done that more than once when a backup
had a problem for whatever reason and I didn't want to wait to get a backup
in.

In short, it's probably a good idea to make sure you really *have* to have
things exactly the way you want them, rather than just letting BackupPC take
care of scheduling and occasionally readjusting (i.e. starting a manual full
on Friday if they get out of sync).

Timothy J. Massey


Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmassey AT obscorp DOT com

22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 8
Date: Wed, 30 Mar 2011 16:11:05 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Timothy J Massey wrote at about 11:55:15 -0400 on Wednesday, March 30, 2011:
> Bowie Bailey <Bowie_Bailey AT BUC DOT com> wrote on 03/30/2011 10:52:21 AM:
>
> > On 3/30/2011 10:16 AM, Scott wrote:
> > > Full backups from one machine look like they are going to take > 12
> > > hours, so a night time full backup is not going to work - for this one
> > > machine I need it to happen starting Friday night so it has all
> > > weekend to finish (poor connectivity).    All the other machines can
> > > stay on the normal default schedule.  Is this possible/how? Thanks!
> >
> > Two possibilities here:
> >
> > 1) Start the backup manually the first Friday night.  After this, the
> > normal backup scheduling will continue starting the backup at
> > approximately the same time each week.  If it shifts too much, then run
> > another manual backup to get it back on schedule.
>
> This actually works reasonably well.  If the impact of running the fulls
> on the wrong day occasionally isn't too great and you keep an eye once a
> week, this works sufficiently.
>
> Also, don't forget that future fulls are shorter than the first full if
> you use rsync/rsyncd.  So if the first one is taking 12 hours, the
> subsequent ones will take less.
>
> Finally, is a 12 hour backup really that bad for your environment?  Can it
> run from 6 P.M. to 6 A.M., for example?
>
> In any case, if you absolutely have to make sure you run them on a certain
> day...
>
> > 2) Disable scheduled backups for this machine and run them from cron
> > instead.  For example:
> >
> > # Machine1 backups (3:15am) -- Full on Saturday, Inc other days
> > 15 3 * * 6 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
> > machine1 backuppc 1
> > 15 3 * * 0-5 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
> > machine1 backuppc 0
>
> I do not recommend *disabling* scheduled backups.  But modify it:  set the
> full backup age to something like 7.97 days and use the cron entries
> described above.  That way, if something goes wrong with the cron jobs,
> BackupPC will still initiate a backup.  Yes, it will do this a day late,
> but at least you're getting a backup--and maybe the slowdown (or whatever
> you're trying to avoid by running it on a certain day) will let you know
> that there's a problem!  :)
>

Wouldn't a better/more robust solution be to define the blackout
period for that machine to exclude everything except the weekend
-- or everything but Friday night, if you just want a single Friday
night backup?

Just use a host-specific config file.
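A host-specific override along those lines might look like this (a sketch
only: the hour boundaries and file name are assumptions; in
$Conf{BlackoutPeriods}, weekDays uses 0 = Sunday through 6 = Saturday):

```perl
# Hypothetical pc/machine1.pl: black out all of Monday-Thursday and most of
# Friday, so backups can only start from Friday evening through the weekend.
$Conf{BlackoutPeriods} = [
    { hourBegin => 0, hourEnd => 23.99, weekDays => [1, 2, 3, 4] }, # Mon-Thu
    { hourBegin => 0, hourEnd => 20,    weekDays => [5] },          # Fri to 8pm
];
```

Remember that blackout only takes effect once the host has answered pings
for $Conf{BlackoutGoodCnt} consecutive wakeups.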



------------------------------

Message: 9
Date: Wed, 30 Mar 2011 16:13:44 -0400
From: "Tom Brown" <tbrown AT riverbendhose DOT com>
Subject: Re: [BackupPC-users] Auth failed on module cDrive
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <E3D13017E1A8404B95D1F45744633F1E@ITGUY>
Content-Type: text/plain;    charset="iso-8859-1"

It appears I've solved the problem.

I am surprised it took this long for someone to respond to my inquiry (thank
you Fresel Michal). But, the problem may be unique to this particular
software stack: W7, Symantec Endpoint, BackupPC 2.1.2 and Cygwin rsync
2.6.8_0.

Here's what I've found so far.

1. Rsync Password

If the rsync password is set in the rsync section of the "General per-PC
configuration settings" section of /conf/config.pl, it is NOT overridden by
the individual PC config.pl files, and authentication fails for module
cDrive.

With BackupPC 2.1.2, the global rsync password IS the password used for each
PC, in spite of the per-PC config files.

I tried commenting out $Conf{RsyncPasswd} in the "General per-PC
configuration settings" section of /conf/config.pl to allow password control
by the unique PC config.pl files, but nothing changed.

If I've missed something about handling global and per-PC passwords in
/conf/config.pl and /pc/pc_name/config.pl, please let me know.

2. Windows Firewall/Symantec Endpoint

Symantec Endpoint doesn't successfully manage Windows Firewall, although it
is supposed to. Windows Firewall must explicitly allow inbound and outbound
rsyncd traffic, so you need to create new rules for rsyncd in Windows
Firewall itself; setting the rules in Symantec Endpoint is ineffective.

You could alternatively open port 873 to all traffic, but I didn't try this
method.

Pinging (ICMP) must also be allowed in Windows Firewall. W7 appears to allow
pings if you declare your network private or trusted, in which case you
don't need to create new firewall rules for pings.

Tom --

________________________________________
From: Fresel Michal - hi competence e.U. [mailto:m.fresel AT hi-competence DOT eu]
Sent: Tuesday, March 29, 2011 17:46
To: tbrown AT riverbendhose DOT com; General list for user discussion,questions and
support
Subject: Re: [BackupPC-users] Auth failed on module cDrive

hi Tom,

Regarding your "Permission denied (13)" problem:
please recheck the permissions on the mentioned folders.
They should grant at least read permission to "System".
This is always an issue when users create their own directories, belonging
only to them, because they think this is more secure :)
In such cases it's actually the opposite: even your antivirus app is not
able to access those dirs :)
Also, please try using a long alphanumeric password (e.g.
password4myserverATwork) instead of a short cryptic one like ":-~)",
so you prevent issues with special characters like :.,;"'`

And for your command-line tests, try a password like "passwort" :)
and remember to change it after your tests :)))

Greetings
Mike

Am 21.03.2011 um 22:08 schrieb Tom Brown:


Server: x226.rbhs.lan running BackupPC 2.1.2 (old I know)
Client: W7 SP1 running cygwin-rsyncd-2.6.8_0

BackupPC reports "auth failed on module cDrive". The rsyncd.log on the W7
client reports "connect from x226.rbhs.lan; password mismatch".

1. I've double, triple and quadruple checked the password in rsyncd.secrets
and /mnt/backuppc/pc/daved-hp/config.pl.
2. I've removed the password from the backuppc account user login, the
secrets file and the client's config.pl file on the server and get the same
errors.
3. "strict modes = false" is set in the rsyncd.conf on the client.
4. The W7 client account "backuppc" is listed in the Backup Operators group
on the client. The client account login pwd is the same as the
rsyncd.secrets pwd.
5. The C: drive on the client is not shared; it shouldn't need to be because
of #4 above. When I do share it as "cDrive" with client account "backupp"
having full control of the share, the result is the same.
6. Rsync is running as a service on the W7 client.
7. The firewall allows traffic from/to the backuppc server. When I shut off
the firewall, backuppc and rsyncd report the same error condition. The
firewall is actually controlled by Symantec Endpoint 11.

When I rsync files from the command line on the backuppc server using
"rsync -av backuppc@daved-hp::cDrive .", I get a series of permission denied
errors.

receiving file list ...
rsync: opendir "ygdrive/c/Windows/system32/c:/Documents and Settings" (in
cDrive) failed: Permission denied (13)
rsync: opendir "ygdrive/c/Windows/system32/c:/ProgramData/Application Data"
(in cDrive) failed: Permission denied (13)
rsync: opendir "ygdrive/c/Windows/system32/c:/ProgramData/Desktop" (in
cDrive) failed: Permission denied (13)
rsync: opendir "ygdrive/c/Windows/system32/c:/ProgramData/Documents" (in
cDrive) failed: Permission denied (13)
rsync: opendir "ygdrive/c/Windows/system32/c:/ProgramData/Favorites" (in
cDrive) failed: Permission denied (13)
rsync: opendir "ygdrive/c/Windows/system32/c:/ProgramData/Start Menu" (in
cDrive) failed: Permission denied (13)
rsync: opendir "ygdrive/c/Windows/system32/c:/ProgramData/Templates" (in
cDrive) failed: Permission denied (13)
...etc...

TIA for any troubleshooting ideas.

Tom --
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/






------------------------------

Message: 10
Date: Wed, 30 Mar 2011 16:49:32 -0400
From: Bowie Bailey <Bowie_Bailey AT BUC DOT com>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D93975C.80200 AT BUC DOT com>
Content-Type: text/plain; charset=ISO-8859-1

On 3/30/2011 4:11 PM, Jeffrey J. Kosowsky wrote:
> Timothy J Massey wrote at about 11:55:15 -0400 on Wednesday, March 30, 2011:
>  > Bowie Bailey <Bowie_Bailey AT BUC DOT com> wrote on 03/30/2011 10:52:21 AM:
>  >
>  > > On 3/30/2011 10:16 AM, Scott wrote:
>  > > > Full backups from one machine look like they are going to take > 12
>  > > > hours, so a night time full backup is not going to work - for this one
>  > > > machine I need it to happen starting Friday night so it has all
>  > > > weekend to finish (poor connectivity).    All the other machines can
>  > > > stay on the normal default schedule.  Is this possible/how? Thanks!
>  > >
>  > > Two possibilities here:
>  > >
>  > > 1) Start the backup manually the first Friday night.  After this, the
>  > > normal backup scheduling will continue starting the backup at
>  > > approximately the same time each week.  If it shifts too much, then run
>  > > another manual backup to get it back on schedule.
>  >
>  > This actually works reasonably well.  If the impact of running the fulls
>  > on the wrong day occasionally isn't too great and you keep an eye once a
>  > week, this works sufficiently.
>  >
>  > Also, don't forget that future fulls are shorter than the first full if
>  > you use rsync/rsyncd.  So if the first one is taking 12 hours, the
>  > subsequent ones will take less.
>  >
>  > Finally, is a 12 hour backup really that bad for your environment?  Can it
>  > run from 6 P.M. to 6 A.M., for example?
>  >
>  > In any case, if you absolutely have to make sure you run them on a certain
>  > day...
>  >
>  > > 2) Disable scheduled backups for this machine and run them from cron
>  > > instead.  For example:
>  > >
>  > > # Machine1 backups (3:15am) -- Full on Saturday, Inc other days
>  > > 15 3 * * 6 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
>  > > machine1 backuppc 1
>  > > 15 3 * * 0-5 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
>  > > machine1 backuppc 0
>  >
>  > I do not recommend *disabling* scheduled backups.  But modify it:  set the
>  > full backup age to something like 7.97 days and use the cron entries
>  > described above.  That way, if something goes wrong with the cron jobs,
>  > BackupPC will still initiate a backup.  Yes, it will do this a day late,
>  > but at least you're getting a backup--and maybe the slowdown (or whatever
>  > you're trying to avoid by running it on a certain day) will let you know
>  > that there's a problem!  :)
>  >
>
> Wouldn't a better/more robust solution be to define the blackout
> period for that machine to exclude everything except for the weekend
> -- or everything but Friday night if you just want a single Friday
> night backup.
>
> Just use a host-specific config file

That is basically my first suggestion above.

It all depends on exactly how much tolerance you have for variations in
the schedule.  If you want no variation at all, then you use cron.  If
you can deal with the backup time moving around a bit, then you set your
blackout periods and manually start the first full backup at the time
you want it.
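Combining that with the suggestion earlier in this thread of setting the full backup age to just under 8 days, a host-specific override might look like the following sketch (values are illustrative, not a tested recommendation):

```perl
# Sketch of a host-specific override (illustrative values): let cron
# start the weekly full, but keep FullPeriod just under 8 days so
# BackupPC will still queue a full itself if the cron job ever fails.
$Conf{FullPeriod} = 7.97;   # slightly less than 8 days
$Conf{IncrPeriod} = 0.97;   # slightly less than 1 day
```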

--
Bowie



------------------------------

Message: 11
Date: Wed, 30 Mar 2011 14:45:57 -0700
From: yilam <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  [newb] ssh rsync with restricted
    permissions
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1301521557.m2f.352239 AT www.backupcentral DOT com>

Well, I tried your setup (need I say I am new to backuppc?) with the following on the client:

* /etc/sudoers:
Cmnd_Alias      BACKUP = /usr/bin/rsync --server --daemon *
buclient          my-host = NOPASSWD: BACKUP

* ~buclient/.ssh/authorized_keys2
no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding,command="sudo /usr/bin/rsync --server --daemon --config=/etc/rsyncd.conf ." ssh-rsa AAAAB....

* /etc/rsyncd.conf
uid = root
pid file = /var/lib/buclient/run/rsyncd.pid
use chroot = no
read only = true
transfer logging = true
log format = %h %o %f %l %b
syslog facility = local5
log file = /var/lib/buclient/log/rsyncd.log
[fullbackup]
        path = /var/log/exim4
        comment = backup

From the server (backuppc machine), I can do the following:

/usr/bin/rsync -v -a -e "/usr/bin/ssh -v -q -x -2 -l buclient -i /var/lib/backuppc/.ssh/id_rsa" [email protected]::fullbackup /tmp/TEST

However, I have not found the correct $Conf{RsyncClientCmd} to use for backuppc to work. The following value
$Conf{RsyncClientCmd} = '$sshPath -q -x -l buclient -i /var/lib/backuppc/.ssh/id_rsa.backuppc_casiopei $host $rsyncPath $argList+';

Gives me (using /usr/share/backuppc/bin/BackupPC_dump -v -f 192.168.1.1):
[...]
full backup started for directory fullbackup
started full dump, share=fullbackup
Error connecting to rsync daemon at 192.168.1.1:22: unexpected response SSH-2.0-OpenSSH_5.1p1 Debian-5

Got fatal error during xfer (unexpected response SSH-2.0-OpenSSH_5.1p1 Debian-5
)
[...]

And on the client, I have, in /var/log/auth.log:
Mar 30 23:35:22 my-host sshd[1389]: Bad protocol version identification '@RSYNCD: 28' from 192.168.1.22

Any ideas on how to get this to work? (BTW, server is Debian/Squeeze, client is Debian/Lenny.)

Thank you

tom

+----------------------------------------------------------------------
|This was sent by sneaky56 AT gmx DOT net via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------





------------------------------

Message: 12
Date: Wed, 30 Mar 2011 23:24:22 -0500
From: backuppc-users AT whitleymott DOT net
Subject: [BackupPC-users] --exclude
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <201103310424.p2V4OMsF016724 AT okra.fo4 DOT net>

using --exclude in $Conf{RsyncArgs} is recommended in config.pl (backuppc 3.1.0-9ubuntu2), however it only seems to work for full backups and gets ignored for incrementals.  is there a fix?

even more convenient than --exclude is --exclude-from: this allows me to maintain an "exclude file" i use both for backuppc and other uses of rsync, so i would prefer it to work as expected.
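For what it's worth, BackupPC also has its own per-share exclude setting, $Conf{BackupFilesExclude}, which is intended to apply to both full and incremental backups; a sketch (share name and paths are illustrative):

```perl
# Sketch: per-share excludes via BackupPC's own config (illustrative
# share name and paths). BackupPC turns these into --exclude arguments
# for the transfer, for fulls and incrementals alike.
$Conf{BackupFilesExclude} = {
    '/' => ['/proc', '/sys', '/tmp'],
};
```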



------------------------------

Message: 13
Date: Thu, 31 Mar 2011 10:03:58 -0400
From: Timothy J Massey <tmassey AT obscorp DOT com>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <OF70A2E9C8.F2D556E9-ON85257864.004CD9F0-85257864.004D6817 AT obscorp DOT com>
   
Content-Type: text/plain; charset="us-ascii"

"Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org> wrote on 03/30/2011 04:11:05 PM:

> Wouldn't a better/more robust solution be to define the blackout
> period for that machine to exclude everything except for the weekend
> -- or everything but Friday night if you just want a single Friday
> night backup.
>
> Just use a host-specific config file

I do not believe that will work:  unless something's changed in 3.2,
you can't have separate blackout periods for incrementals and fulls.
Therefore, your incrementals won't run!  :)

That would actually be a somewhat nice feature, but it's really just a
hack to allow people to force-schedule BackupPC.  You can achieve the same
thing via cron jobs if you really really want to.  Except for archives,
which aren't schedulable under BackupPC at *all* (grrr), I've found that
simply letting BackupPC manage itself works fine.  It either self-adjusts
(because it just runs out of time and a backup might be skipped for a
day), or I manually adjust it by starting a full backup on a different
day.

Timothy J. Massey


Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmassey AT obscorp DOT com

22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 14
Date: Thu, 31 Mar 2011 10:11:01 -0400
From: Timothy J Massey <tmassey AT obscorp DOT com>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <OF3AE7B289.7F21F07A-ON85257864.004D47CD-85257864.004E0D4A AT obscorp DOT com>
   
Content-Type: text/plain; charset="us-ascii"

Bowie Bailey <Bowie_Bailey AT BUC DOT com> wrote on 03/30/2011 04:49:32 PM:

> On 3/30/2011 4:11 PM, Jeffrey J. Kosowsky wrote:
> > Timothy J Massey wrote at about 11:55:15 -0400 on Wednesday, March 30, 2011:
> >  > Bowie Bailey <Bowie_Bailey AT BUC DOT com> wrote on 03/30/2011 10:52:21 AM:
> >  >
> >  > > On 3/30/2011 10:16 AM, Scott wrote:
> >  > > > Full backups from one machine look like they are going to take > 12
> >  > > > hours, so a night time full backup is not going to work - for this one
> >  > > > machine I need it to happen starting Friday night so it has all
> >  > > > weekend to finish (poor connectivity).  All the other machines can
> >  > > > stay on the normal default schedule.  Is this possible/how? Thanks!
> >  > >
> >  > > Two possibilities here:
> >  > >
> >  > > 1) Start the backup manually the first Friday night.  After this, the
> >  > > normal backup scheduling will continue starting the backup at
> >  > > approximately the same time each week.  If it shifts too much, then run
> >  > > another manual backup to get it back on schedule.
> >  >
> >  > This actually works reasonably well.  If the impact of running the fulls
> >  > on the wrong day occasionally isn't too great and you keep an eye once a
> >  > week, this works sufficiently.
> >  >
> >  > Also, don't forget that future fulls are shorter than the first full if
> >  > you use rsync/rsyncd.  So if the first one is taking 12 hours, the
> >  > subsequent ones will take less.
> >  >
> >  > Finally, is a 12 hour backup really that bad for your environment?  Can it
> >  > run from 6 P.M. to 6 A.M., for example?
> >  >
> >  > In any case, if you absolutely have to make sure you run them on a certain
> >  > day...
> >  >
> >  > > 2) Disable scheduled backups for this machine and run them from cron
> >  > > instead.  For example:
> >  > >
> >  > > # Machine1 backups (3:15am) -- Full on Saturday, Inc other days
> >  > > 15 3 * * 6 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
> >  > > machine1 backuppc 1
> >  > > 15 3 * * 0-5 /usr/local/BackupPC/bin/BackupPC_serverMesg backup machine1
> >  > > machine1 backuppc 0
> >  >
> >  > I do not recommend *disabling* scheduled backups.  But modify it:  set the
> >  > full backup age to something like 7.97 days and use the cron entries
> >  > described above.  That way, if something goes wrong with the cron jobs,
> >  > BackupPC will still initiate a backup.  Yes, it will do this a day late,
> >  > but at least you're getting a backup--and maybe the slowdown (or whatever
> >  > you're trying to avoid by running it on a certain day) will let you know
> >  > that there's a problem!  :)
> >  >
> >
> > Wouldn't a better/more robust solution be to define the blackout
> > period for that machine to exclude everything except for the weekend
> > -- or everything but Friday night if you just want a single Friday
> > night backup.
> >
> > Just use a host-specific config file
>
> That is basically my first suggestion above.

Except that the blackout periods will prevent the *incrementals* from
running, as I mentioned in another e-mail.

> It all depends on exactly how much tolerance you have for variations in
> the schedule.  If you want no variation at all, then you use cron.  If
> you can deal with the backup time moving around a bit, then you set your
> blackout periods and manually start the first full backup at the time
> you want it.

On that subject:  I have found that, for backup servers handling multiple
hosts, it is better to adjust the blackout periods to leave a relatively
narrow open window.  I usually only give it two or three times to start,
so I set a blackout period from, say, 3.5 to 1.5 or 4.5 to 1.5 (to allow
it to start at 2 or 3; or at 2, 3, or 4, respectively).  That way, there
is less opportunity for the backups to interfere with each other.
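As a sketch (values illustrative, matching the 3.5-to-1.5 window described above), a host-specific blackout override might look like:

```perl
# Sketch of a host-specific blackout window (illustrative values):
# block backups from 03:30 until 01:30 the next day, leaving only
# roughly 01:30-03:30 open for backup starts, every day of the week.
$Conf{BlackoutPeriods} = [
    {
        hourBegin =>  3.5,                    # 03:30
        hourEnd   =>  1.5,                    # 01:30 next day (wraps past midnight)
        weekDays  => [0, 1, 2, 3, 4, 5, 6],
    },
];
```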

My reasons for doing this, rather than letting BackupPC handle it by
controlling the number of simultaneous jobs, size of the window, etc.,
are twofold:  first, some servers are very large and somewhat
underpowered, and I only want them to start early in the all-night
window (otherwise they'll be running until noon and the users will
complain);  second, sometimes I have one or two important servers and a
bunch of less important ones, and I want to make sure that those servers
have specific windows allocated just for them.

I'd still rather use BackupPC for scheduling than a cron entry:  I can
easily give it more than one opportunity to run, and if I do, say, a
manual full backup at some other time it works that into the schedule
without a hitch.

Timothy J. Massey


Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmassey AT obscorp DOT com

22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 15
Date: Thu, 31 Mar 2011 09:59:43 -0400
From: nhoeller AT sinet DOT ca
Subject: [BackupPC-users] High Repeated Data Transfer Volumes During
    Incremental Backup
To: BackupPC-users AT lists.sourceforge DOT net
Message-ID:
    <OF148C3215.3A3C7886-ON85257864.00464B50-85257864.004CE0FF AT sinet DOT ca>
Content-Type: text/plain; charset="us-ascii"

I am running backuppc 3.1.0-4 on a plug computer (ARM processor) with the
Perl rsync fix for ARM processors.  On March 24, backuppc did an
incremental backup that picked up two 350MB files which had been uploaded
to my web server. On March 25, backuppc did a full backup and indicated
that the files were 'same' - data transfer was at its normal low levels.
Subsequent incremental backups show no activity for these two files.

I moved the two files to another directory on my web server sometime on
March 28.  The next incremental backuppc run early on March 29th showed a
backup of the two files in the new location (flagged as 'pool') followed
by a 'delete' of the files in the old location.  My backup bandwidth
jumped by over 700MB.

The incremental backup early on March 30th showed the same results: 'pool'
for the files in the new location, 'delete' for the files in the old
location, 700MB of backup bandwidth.  I ran a full backup later on March
30th.  This time, backuppc flagged the files as 'same' in the new
directory and nothing reported for the deleted files.

Two questions:
* Why did the March 29th incremental backup not recognize that the files
were already in the backup pool?
* Why did the March 30th incremental backup not recognize that the files
had been backed up from the new location?

I recall seeing this problem before when I excluded a bunch of files from
backup and then removed the exclusion - repeated full data transfer on
incremental backups until a full backup had been done.
      Thanks, Norbert

-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 16
Date: Thu, 31 Mar 2011 10:16:09 -0400
From: Timothy J Massey <tmassey AT obscorp DOT com>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <OF6CCB0108.3FCB12FD-ON85257864.004DF6A7-85257864.004E85A8 AT obscorp DOT com>
   
Content-Type: text/plain; charset="us-ascii"

Timothy J Massey <tmassey AT obscorp DOT com> wrote on 03/31/2011 10:03:58 AM:

> "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org> wrote on 03/30/2011 04:11:05 PM:
>
> > Wouldn't a better/more robust solution be to define the blackout
> > period for that machine to exclude everything except for the weekend
> > -- or everything but Friday night if you just want a single Friday
> > night backup.
> >
> > Just use a host-specific config file
>
> I do not believe that will work:  because unless something's changed
> in 3.2, you can't have separate blackout periods for incrementals
> and fulls.  Therefore, your incrementals won't run!  :)
>
> That would actually be a somewhat nice feature, but it's really just
> a hack to allow people to force-schedule BackupPC.  You can achieve
> the same thing via cron jobs if you really really want to.  Except
> for archives, which aren't schedulable under BackupPC at *all*
> (grrr), I've found that simply letting BackupPC manage itself works
> fine.  It either self-adjusts (because it just runs out of time and
> a backup might be skipped for a day), or I manually adjust it by
> starting a full backup on a different day.

Having thought about this more fully, the ability to spread my fulls
around within the framework of BackupPC would be useful. It would be
handy to have my big (and time-consuming) server's fulls on the weekend
only and let the smaller hosts figure out on their own when during the
week to do the fulls.

So I guess I *am* asking for this as a feature:  a completely optional
FullBlackoutPeriod that is logically OR'ed with the normal BlackoutPeriod.

I think that, paired with a very tight BlackoutPeriod, this would give
*everyone* what they want:  the ability to have a very flexibly scheduled
backup process (as it is now), as well as "I want it to run exactly
*then* and *only* then."

Timothy J. Massey


Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmassey AT obscorp DOT com

22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 17
Date: Thu, 31 Mar 2011 10:30:10 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Timothy J Massey wrote at about 10:03:58 -0400 on Thursday, March 31, 2011:
> "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org> wrote on 03/30/2011 04:11:05 PM:
>
> > Wouldn't a better/more robust solution be to define the blackout
> > period for that machine to exclude everything except for the weekend
> > -- or everything but Friday night if you just want a single Friday
> > night backup.
> >
> > Just use a host-specific config file
>
> I do not believe that will work:  because unless something's changed in
> 3.2, you can't have separate blackout periods for incrementals and fulls.
> Therefore, your incrementals won't run!  :)

Ahhhh my mistake -- I missed the part about still wanting to do
incrementals...

> That would actually be a somewhat nice feature, but it's really just a
> hack to allow people to force-schedule BackupPC.  You can achieve the same
> thing via cron jobs if you really really want to.  Except for archives,
> which aren't schedulable under BackupPC at *all* (grrr), I've found that
> simply letting BackupPC manage itself works fine.  It either self-adjusts
> (because it just runs out of time and a backup might be skipped for a
> day), or I manually adjust it by starting a full backup on a different
> day.
>
> Timothy J. Massey

That being said, I wonder whether BackupPC should incorporate a
cron-like forced scheduling option alongside the existing, more
adaptive algorithm.

It seems like there are enough people who have one reason or another to
want cron-like scheduling, and it seems kludgey, and in a sense
deprecated, to have to disable normal BackupPC scheduling and maintain
a separately tracked cron table to schedule backups. Using 'cron' will
never feel ideal to me so long as you have to basically keep the
scheduling information outside of the BackupPC config files and merge
it in with all your other cron jobs. Since BackupPC already wakes up
every hour anyway, and since that would seem to be sufficient
granularity for nearly all use cases, it shouldn't be hard to build it
into BackupPC.
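For reference, those hourly wakeups are driven by $Conf{WakeupSchedule}; the stock setting is along these lines (a sketch of the default, from memory):

```perl
# Sketch: the hours (0-23) at which the BackupPC server wakes up to
# queue backups. The first entry is special: BackupPC_nightly (pool
# cleanup) runs at the first wakeup listed.
$Conf{WakeupSchedule} = [1..23, 0];
```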

Personally, though, I don't have any need for cron-like scheduling, but
I am becoming convinced that others might benefit from such a
capability.




------------------------------

Message: 18
Date: Thu, 31 Mar 2011 10:19:56 -0400
From: Mark Maciolek <maciolek AT unh DOT edu>
Subject: [BackupPC-users] DeltaCopy Windows Server Enterprise
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <4D948D8C.3080705 AT unh DOT edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

hi,

I have installed DeltaCopy on a Windows Server Enterprise on two
different servers. On one it works just fine and using NMAP from another
system I see port 873 as open.

On the other server the service is running and I see rsync in the Task
Manager process window but NMAP does not show port 873 as open. Even if
I shut the firewall off on this server port 873 still does not show as open.

Any clue on what to look for next?

Mark
--
Mark Maciolek
Network Administrator
Morse Hall 339
862-3050
mark.maciolek AT unh DOT edu
https://www.sr.unh.edu



------------------------------

Message: 19
Date: Thu, 31 Mar 2011 10:58:09 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] High Repeated Data Transfer Volumes
    During    Incremental Backup
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

nhoeller AT sinet DOT ca wrote at about 09:59:43 -0400 on Thursday, March 31, 2011:
> I am running backuppc 3.1.0-4 on a plug computer (ARM processor) with the
> Perl rsync fix for ARM processors.  On March 24, backuppc did an
> incremental backup that picked up two 350MB files which had been uploaded
> to my web server. On March,25, backuppc did a full backup and indicated
> that the files were 'same' - data transfer was its normally low levels.
> Subsequent incremental backups show no activity for these two files.

Having myself spent a few weeks getting it all working on a plug computer,
did you make sure to correct *both* bugs that I found and posted to
the archive?

Specifically, first, there was an error in Digest::MD5 in Debian Lenny
for ARM which required updating to perl 5.10.1 (I had to install it from
'testing'). This caused the pool file md5sum names to be wrong (though
consistently wrong, so if you only use BackupPC on an ARM under perl <
5.10.1, then you won't notice any problems until you either upgrade perl
or move to another architecture).

Second, there was an error in libfile-rsyncp-perl that caused rsync
checksums to be wrong due to an error in how the Adler32 checksum is
calculated on an arm processor. This required me to manually patch and
recompile the library. I believe this error will cause problems even
if you stay just on an ARM processor.




------------------------------

Message: 20
Date: Thu, 31 Mar 2011 15:20:23 +0000
From: John Rouillard <rouilj-backuppc AT renesys DOT com>
Subject: [BackupPC-users] BackupPC_dump hangs with: .: size doesn't
    match (12288 vs 17592185913344)
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110331152023.GF27218 AT renesys DOT com>
Content-Type: text/plain; charset=us-ascii

Hi all:

I am trying to back up one of my systems and it has been hanging and
exiting on SIGALRM for the past 6 days. The last working backup was
on 2011-03-22 @ 22:00:33.

If I run:

  sudo -u backup /tools/BackupPC/bin/BackupPC_dump -f -vv host

I get a bunch of output (the share being backed up is /etc on a CentOS
5.5 box) which ends with:

  attribSet: dir=f%2fetc exists
  attribSet(dir=f%2fetc, file=zshrc, size=640, placeholder=1)
  Starting file 0 (.), blkCnt=134217728, blkSize=131072, remainder=0
  .: size doesn't match (12288 vs 17592185913344)

and that's the end of that. I have had similar hanging issues before,
but usually scheduling a full backup or removing a prior backup or two
in the chain will let things work again. However, I would like to
actually get this fixed this time around, as it seems to be occurring
more often recently (on different backuppc servers and against
different hosts).

If I dump the root attrib file (where /etc starts) for either last
successful or the current (partial) failing backup I see:

  '/etc' => {
    'uid' => 0,
    'mtime' => 1300766985,
    'mode' => 16877,
    'size' => 12288,
    'sizeDiv4GB' => 0,
    'type' => 5,
    'gid' => 0,
    'sizeMod4GB' => 12288
  },

so there's the 12288. Multiplying the blkCnt by the blkSize is
17592186044416 so a little larger. ls -lasd on /etc on the host being
backed up shows:

  16 drwxr-xr-x 97 root root 12288 Mar 31 14:54 /etc
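One way to account for the odd 17592185913344 figure: if the receiving side reconstructs the file length from the checksum header as (blkCnt - 1) * blkSize + remainder (an assumption about the rsync protocol's size encoding, not confirmed from the source here), the numbers in the dump output above reproduce it exactly:

```perl
# Sketch: reconstructing the reported size from the BackupPC_dump
# output above, assuming a (count - 1) * blockLength + remainder
# size encoding in the rsync checksum header.
use strict;
use warnings;

my $blkCnt    = 134217728;   # from "Starting file 0 (.)" line
my $blkSize   = 131072;
my $remainder = 0;

my $size = ($blkCnt - 1) * $blkSize + $remainder;
print "$size\n";   # 17592185913344 -- the value in the error message
```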

I am using

  $Conf{RsyncClientCmd} = '$sshPath -q -x -l backup -o
    ServerAliveInterval=30 $host sudo /usr/bin/strace -o
    /var/tmp/strace.rsync.$$ -f -tt -T -s 64 $rsyncPath $argList+';

as the remote command and the last few lines of strace show:

  19368 14:59:38.204943 write(1,
    "\0\10\0\0\2\0\0\0\236\7\0\0\377\377\377\377"...,
    1580) = 1580 <0.000011>
  19368 14:59:38.204993 select(2, NULL, [1], [1], {60, 0}) = 1 (out [1],
    left {60, 0}) <0.000009>
  19368 14:59:38.205032 write(1, "\4\0\0\7\377\377\377\377", 8) = 8
    <0.000011>
  19368 14:59:38.205069 select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
    <59.994504>
  19368 15:00:38.199634 select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
    <59.994597>

So does anybody have any ideas what is happening here and how to solve
it? I am guessing something is being interpreted weirdly in the rsync
data stream?

The rsync client reports:

  rsync version 2.6.9 protocol version 29 Copyright (C)
  1996-2006 by Andrew Tridgell, Wayne Davison, and others.
  <http://rsync.samba.org/>
  Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
                inplace, IPv6, 64-bit system inums, 64-bit internal inums

with uname -a reporting:

  Linux host 2.6.18-194.17.1.el5 #1 SMP Wed Sep 29 12:50:31 EDT 2010
    x86_64 x86_64 x86_64 GNU/Linux

--
                -- rouilj

John Rouillard      System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



------------------------------

Message: 21
Date: Thu, 31 Mar 2011 15:47:30 +0000
From: John Rouillard <rouilj-backuppc AT renesys DOT com>
Subject: Re: [BackupPC-users] High Repeated Data Transfer Volumes
    During Incremental Backup
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110331154730.GG27218 AT renesys DOT com>
Content-Type: text/plain; charset=us-ascii

On Thu, Mar 31, 2011 at 09:59:43AM -0400, nhoeller AT sinet DOT ca wrote:
> I am running backuppc 3.1.0-4 on a plug computer (ARM processor) with the
> Perl rsync fix for ARM processors.  On March 24, backuppc did an
> incremental backup that picked up two 350MB files which had been uploaded
> to my web server. On March,25, backuppc did a full backup and indicated
> that the files were 'same' - data transfer was its normally low levels.
> Subsequent incremental backups show no activity for these two files.
>
> I moved the two files to another directory on my web server sometime on
> March 28.  The next incremental backuppc run early on March 29th showed a
> backup of the two files in the new location (flagged as 'pool') followed
> by a 'delete' of the files in the old location.  My backup bandwidth
> jumped by over 700MB.
>
> The incremental backup early on March 30th showed the same results: 'pool'
> for the files in the new location, 'delete' for the files in the old
> location, 700MB of backup bandwidth.  I ran a full backup later on March
> 30th.  This time, backuppc flagged the files as 'same' in the new
> directory and nothing reported for the deleted files.
>
> Two questions:
> * Why did the March 29th incremental backup not recognize that the files
> were already in the backup pool?
> * Why did the March 30th incremental backup not recognize that the files
> had been backed up from the new location?

What level were the two incremental backups? If both the March 29 and
March 30 backups were at the same level, they were using the last
backup at a higher (e.g. full) level as their baseline. With respect
to that baseline, what you saw was correct. The files were moved, so
they were transferred across into the new location (accounting for the
700 MB bandwidth increase) and marked deleted from the old location.

(Note that I think full backups use the last complete backup of any
level as their reference backup since the setting to ignore times
means every file will be compared and nothing will be skipped.)

Transfer decisions are based on the file names under the pc
directory. If the file doesn't exist in the comparison tree (which is
taken from the previous higher level backup for incrementals IIRC) it
is transferred. Different names/paths result in the file being
transferred again.

Pooling decisions are based on the checksums of the files that were
transferred. Newly transferred files are checksummed and compared to
files in the pool. So after the transfer occurred pooling should have
happened and those newly transferred files would have been hardlinked
into the pooled file.

Files that exist in the comparison/reference tree and that are the
same as the files on the remote system (i.e. there were no delta
transfers for the file by rsync) aren't touched and are shown as same.

The only way the March 30th backup wouldn't have transferred the
files was if the March 29th backup was a level 1 incremental and the
March 30th was a level 2 incremental. In that case the March 30th
incremental's reference tree would have been from the March 29 backup,
which already had the files in the new (moved) location, and it would
have been able to determine that the files were identical.

--
                -- rouilj

John Rouillard      System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



------------------------------

Message: 22
Date: Thu, 31 Mar 2011 12:13:44 -0400
From: Timothy J Massey <tmassey AT obscorp DOT com>
Subject: Re: [BackupPC-users] DeltaCopy Windows Server Enterprise
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <OF5E47323C.914C1433-ON85257864.0058F50F-85257864.0059498D AT obscorp DOT com>
   
Content-Type: text/plain; charset="us-ascii"

Mark Maciolek <maciolek AT unh DOT edu> wrote on 03/31/2011 10:19:56 AM:

> I have installed DeltaCopy on a Windows Server Enterprise on two
> different servers. On one it works just fine and using NMAP from another
> system I see port 873 as open.
>
> On the other server the service is running and I see rsync in the Task
> Manager process window but NMAP does not show port 873 as open. Even if
> I shut the firewall off on this server port 873 still does not show as open.
>
> Any clue on what to look for next?

I have seen this myself on Windows Server 2008 R2 (but with standard
rsync, not packaged in DeltaCopy).  Are you starting it from Scheduled
Tasks (or whatever it's called)?  I have found that when I run it from the
command line, it works, but from Tasks it does not.  I suspect that it's
some sort of UAC-style lack of permissions to talk to the network.

I have yet to resolve it on the server that is doing this:  for now, I'm
simply running it from the command line and leaving the session logged
in...

Timothy J. Massey


Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmassey AT obscorp DOT com

22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 23
Date: Thu, 31 Mar 2011 17:44:46 +0000 (UTC)
From: Adam Monsen <haircut AT gmail DOT com>
Subject: Re: [BackupPC-users] Encrypted archives
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <loom.20110331T193828-84 AT post.gmane DOT org>
Content-Type: text/plain; charset=us-ascii

Josh Marshall writes:
> On Sat, 15 Jan 2005 01:09, Craig Barratt wrote:
> > Shouldn't the last two lines be reversed, so that encryption is
> > run after compression?
>
> Very true. My fault. This is what it should be:
>
> my $cmd = "$tarCreate -t -h $host -n $bkupNum -s $share . ";
> $cmd  .= "| $compPath " if ( $compPath ne "cat" && $compPath ne "" );
> $cmd  .= "| encrypt_stream_command ";

I used this method to "ccencrypt" files during archive. Quite handy.

In BackupPC_archiveHost, I just added a couple of lines after $compPath
is added to $cmd, just like the above example...

$cmd    .= "| /usr/bin/ccencrypt --keyfile /root/backuppc_key";
$fileExt .= ".cpt";

Everything seemed to work. There were a few "Got unknown type 8" errors,
probably because tar didn't like some socket files, but tar seemed to
complete. Then, at the very end:

  exiting after signal ALRM
  Archive failed: aborted by signal=ALRM

So I'm not sure if I should really trust these archives. :)




------------------------------

Message: 24
Date: Fri, 1 Apr 2011 11:34:22 +0700
From: hansbkk AT gmail DOT com
Subject: Re: [BackupPC-users] Restrict machine to do full backups
    Friday night and incremental on weekdays?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <BANLkTimJOO6n3h1hj8xDkFTgrN6O6orCkA AT mail.gmail DOT com>
Content-Type: text/plain; charset=ISO-8859-1

Forgive me if I'm out of line, but I wanted to let you know that your
HTML email is very hard to read; IMO it is better to just use plain
text on open lists...



------------------------------

Message: 25
Date: Fri, 1 Apr 2011 10:10:30 -0400
From: nhoeller AT sinet DOT ca
Subject: Re: [BackupPC-users] High Repeated Data Transfer Volumes
    During Incremental Backup
To: BackupPC-users AT lists.sourceforge DOT net
Message-ID:
    <OF1A535A42.7A049F3B-ON85257865.004C5D50-85257865.004DDDC6 AT sinet DOT ca>
Content-Type: text/plain; charset="us-ascii"

On 2011-03-31 14:58  John Rouillard wrote:

> Having myself spent a few weeks getting it all working on a plug
> computer, did you make sure to correct *both* bugs that I found and
> posted to the archive?

John, I reviewed the two issues at
http://www.adsm.org/lists/html/BackupPC-users/2011-01/msg00149.html and
http://www.adsm.org/lists/html/BackupPC-users/2011-02/msg00074.html.  I am
not planning to move off ARM, so the second is of lesser concern.  The
first one sounds similar to a problem I ran into where the checksums were
not being calculated properly leading to unnecessary transfer of files.
However, I do not think this problem is relevant in my case since the full
backup on March 30th did not transfer any data for the two 350MB files, so
the rsync checksums must have matched.
        Thanks, Norbert
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 26
Date: Fri, 1 Apr 2011 10:59:17 -0400
From: nhoeller AT sinet DOT ca
Subject: Re: [BackupPC-users] High Repeated Data Transfer Volumes
    During Incremental Backup
To: BackupPC-users AT lists.sourceforge DOT net
Message-ID:
    <OF07C56130.A49F2FE9-ON85257865.004B54FA-85257865.00525517 AT sinet DOT ca>
Content-Type: text/plain; charset="us-ascii"

On 2011-03-31 15:47 John Rouillard wrote:

> The only way the March 30'th backup wouldn't have transferred the
> files was if the march 29th backup was a level 1 incremental and the
> march 30'th was a level 2 incremental. In that case the march 30'th
> incrementals reference tree would have been from the march 29 backup
> which already had the files in the new (moved) location and it would
> have been able to determine that the files were identical.

> Transfer decisions are based on the file names under the pc
> directory. If the file doesn't exist in the comparison tree (which is
> taken from the previous higher level backup for incrementals IIRC) it
> is transferred. Different names/path result in the file being
> transferred again.

> Pooling decisions are based on the checksums of the files that were
> transferred. Newly transferred files are checksummed and compared to
> files in the pool. So after the transfer occurred pooling should have
> happened and those newly transferred files would have been hardlinked
> into the pooled file.

John, I am having trouble reconciling what I see with your description. As
far as I know, I do only one level of incremental backups with a full
backup once a week.  I see where backuppc is checking date stamps of the
incrementals against the last full backup.  However, the backup data
transfer volumes suggest that the entire file is only transferred once. My
typical daily Internet bandwidth is around 300-500MB.  The two 350MB files
were uploaded March 20th.  The incremental backup on the next day bumped
my Internet usage to 1,480MB.  The next day was also an incremental backup
but my Internet usage was only 480MB.

The incremental backup on March 29th bumped up my Internet usage to 990MB.
Even if backuppc decided it had to download the entire files because they
were in a different path, I would have expected that the incremental
backup on March 30th would have noticed that the files were already in the
pool.  However, the Internet usage on March 30th was over 700MB when I
checked early in the morning.  The full backup later that day 'got it
right' and only backed up 40MB.

I run a bunch of MediaWiki sites, all of which used the same code base but
each site installed the code in its own directory structure.  My
recollection is that backuppc only physically transferred one set of code
files.  The additional sites did not result in the same files being
transferred again, even though they were in different paths.

My suspicion is that backuppc gets confused if files were backed
up/excluded/unexcluded or backed up/moved.  I will need to test out
various scenarios with tracing enabled, but won't get a chance for a while.
        Thanks, Norbert



-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 27
Date: Fri, 1 Apr 2011 17:30:40 +0200
From: "Boniforti Flavio" <flavio AT piramide DOT ch>
Subject: Re: [BackupPC-users] Daylight Saving change - 1 day backups
    were off
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <[email protected]>
Content-Type: text/plain;    charset="us-ascii"

Hi there... back again ;-)

> On Wed, 2011-03-30 at 13:46 +0200, Boniforti Flavio wrote:
> > > 2011-03-27 00:00:00 Next wakeup is 2011-03-27 02:00:00
> >
> > Where may I check this for the past days?
>
> See "Old Logs" on the left-side menu.

OK, I see:

2011-03-25 21:00:00 Next wakeup is 2011-03-26 21:00:00
2011-03-26 21:00:01 Next wakeup is 2011-03-27 22:00:00

Which is ok, because my wakeup is scheduled at 21:00.

Then I have:

2011-03-27 22:00:00 Next wakeup is 2011-03-28 21:00:00

And if I go further...

2011-03-28 21:00:01 Running 8 BackupPC_nightly jobs from 0..15 (out of
0..15)
2011-03-28 21:00:01 Running BackupPC_nightly -m 0 31 (pid=31367)
2011-03-28 21:00:01 Running BackupPC_nightly 32 63 (pid=31368)
2011-03-28 21:00:01 Running BackupPC_nightly 64 95 (pid=31369)
2011-03-28 21:00:01 Running BackupPC_nightly 96 127 (pid=31370)
2011-03-28 21:00:01 Running BackupPC_nightly 128 159 (pid=31371)
2011-03-28 21:00:01 Running BackupPC_nightly 160 191 (pid=31372)
2011-03-28 21:00:01 Running BackupPC_nightly 192 223 (pid=31373)
2011-03-28 21:00:01 Running BackupPC_nightly 224 255 (pid=31374)
2011-03-28 21:00:01 Next wakeup is 2011-03-29 21:00:00
2011-03-28 21:00:03 Started full backup on spitex (pid=31375,
share=dati)
2011-03-28 21:00:04 Started incr backup on pedrazzi-figli (pid=31376,
share=Dati)
2011-03-28 21:00:12 Backup failed on pedrazzi-figli (permission denied)
2011-03-28 21:00:12 Backup failed on spitex (inet connect: Connessione
rifiutata [connection refused])
2011-03-28 21:05:08 Finished  admin5  (BackupPC_nightly 160 191)
2011-03-28 21:05:08 Finished  admin2  (BackupPC_nightly 64 95)
2011-03-28 21:05:09 BackupPC_nightly now running BackupPC_sendEmail
2011-03-28 21:05:10 Finished  admin1  (BackupPC_nightly 32 63)
2011-03-28 21:05:10 Finished  admin4  (BackupPC_nightly 128 159)
2011-03-28 21:05:11 Finished  admin  (BackupPC_nightly -m 0 31)
2011-03-28 21:05:12 Finished  admin6  (BackupPC_nightly 192 223)
2011-03-28 21:05:13 Finished  admin7  (BackupPC_nightly 224 255)
2011-03-28 21:05:13 Finished  admin3  (BackupPC_nightly 96 127)
2011-03-28 21:05:13 Pool nightly clean removed 0 files of size 0.00GB
2011-03-28 21:05:13 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0
max links), 1 directories
2011-03-28 21:05:13 Cpool nightly clean removed 397 files of size 3.27GB
2011-03-28 21:05:13 Cpool is 469.67GB, 626542 files (90 repeated, 13 max
chain, 7671 max links), 4369 directories
2011-03-29 21:00:00 24hr disk usage: 54% max, 54% recent, 0 skipped
hosts
2011-03-29 21:00:00 Removing /var/lib/backuppc/log/LOG.29.z
2011-03-29 21:00:00 Aging LOG files, LOG -> LOG.0 -> LOG.1 -> ... ->
LOG.29

Thus, *what* was the problem that my backups did not run on this last
time?
I double-checked and my IncrPeriod is 0.97

Thanks for helping me understand...

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: flavio AT piramide DOT ch



------------------------------

Message: 28
Date: Fri, 1 Apr 2011 15:36:18 +0000
From: John Rouillard <rouilj-backuppc AT renesys DOT com>
Subject: Re: [BackupPC-users] High Repeated Data Transfer Volumes
    During Incremental Backup
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110401153618.GI27218 AT renesys DOT com>
Content-Type: text/plain; charset=us-ascii

On Fri, Apr 01, 2011 at 10:59:17AM -0400, nhoeller AT sinet DOT ca wrote:
> On 2011-03-31 15:47 John Rouillard wrote:
> > The only way the March 30'th backup wouldn't have transferred the
> > files was if the march 29th backup was a level 1 incremental and the
> > march 30'th was a level 2 incremental. In that case the march 30'th
> > incrementals reference tree would have been from the march 29 backup
> > which already had the files in the new (moved) location and it would
> > have been able to determine that the files were identical.
>
> > Transfer decisions are based on the file names under the pc
> > directory. If the file doesn't exist in the comparison tree (which is
> > taken from the previous higher level backup for incrementals IIRC) it
> > is transferred. Different names/path result in the file being
> > transferred again.
>
> > Pooling decisions are based on the checksums of the files that were
> > transferred. Newly transferred files are checksummed and compared to
> > files in the pool. So after the transfer occurred pooling should have
> > happened and those newly transferred files would have been hardlinked
> > into the pooled file.
>
> John, I am having trouble reconciling what I see with your description. As
> far as I know, I do only one level of incremental backups with a full
> backup once a week.

Gotcha. IIUC then every incremental will transfer any file that
is not in the full backup.

> I see where backuppc is checking date stamps of the
> incrementals against the last full backup.  However, the backup data
> transfer volumes suggest that the entire file is only transferred once. My
> typical daily Internet bandwidth is around 300-500MB.  The two 350MB files
> were uploaded March 20th.  The incremental backup on the next day bumped
> my Internet usage to 1,480MB.  The next day was also an incremental backup
> but my Internet usage was only 480MB.

Hmm, from your original description:

  On March 24, backuppc did an incremental backup that picked up two
  350MB files which had been uploaded to my web server. On March 25,
  backuppc did a full backup and indicated that the files were 'same'

I thought the timeline was:

  file uploaded
  incremental (high bw) (24 march??)
  full (lower bw because the 24 march incremental tree is the reference tree)
  incremental(s) (low bw because the full tree is the reference tree)
  files moved
  incremental (high bw: uses the 24 march full for reference) (28 march)
  incremental (high bw: uses the 24 march full for reference) (29 march)
  full backup (lower bw: uses 29 march (last) incremental for
              reference) (29 march)
  incrementals (low bw: use the 29 march full as the reference tree,
                which has the files in the proper (new) location)

> The incremental backup on March 29th bumped up my Internet usage to 990MB.
> Even if backuppc decided it had to download the entire files because they
> were in a different path, I would have expected that the incremental
> backup on March 30th would have noticed that the files were already in the
> pool.

The pool has no path information in it. Only the reference tree
does. Your reference tree for *both* incrementals after the move were
the full that you ran on the 25th which did not have the files in the
new moved location.

Only after a transfer is done is the pool checked for identical
files. Identical files are then hard linked to save disk space. This
is an entirely different mechanism with no impact on transferred data;
it only affects stored data.

> However, the Internet usage on March 30th was over 700MB when I
> checked early in the morning.

Yup because it was using the prior full (24 march) as the reference
tree and the files were in their original (un-moved) location in that
tree.

> The full backup later that day 'got it right' and only backed up 40MB.

Right, because the full used the last (march 30) incremental as its
reference tree and the files had been moved in the March 30
incremental.

> I run a bunch of MediaWiki sites, all of which used the same code base but
> each site installed the code in its own directory structure.  My
> recollection is that backuppc only physically transferred one set of code
> files.  The additional sites did not result in the same files being
> transferred again, even though they were in different paths.

I claim all the copies of the code in each location were copied over
on the first backup. Pooling would turn all those copies into links
into a single file in the pool, but they would still have been
transferred.

We have a lot of subversion checkouts in our backup sets. When
somebody checks out a new tree, I see the files being transferred
(usually adding an hour or two to the backup). Later I can also see
that the transferred files were linked to existing copies in the pool,
but I still get the large transfers when the developers create a new
checkout. The transfers only go away if:

* the files are backed up by a full
* the level of the current backup is higher than a backup that has
  the files. (e.g. if I have the files in a level 2 backup, the level
  3 backup won't transfer the files, but a subsequent level 1 backup
  will).
* I play games and move files around under backuppc to create a
  reference backup with the files in the correct locations (not
  recommended, use at your own risk, YMMV, danger here be dragons).

> My suspicion is that backuppc gets confused if files were backed
> up/excluded/unexcluded or backed up/moved.  I will need to test out
> various scenarios with tracing enabled, but won't get a chance for a while.

I think excludes behave the same as though the file just wasn't there,
but I am not positive about that.

If you find out otherwise, I would be interested in seeing your
proof. What I expressed above has explained all the file-transfer
triggers I have seen to date.

--
                -- rouilj

John Rouillard      System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



------------------------------

Message: 29
Date: Fri, 01 Apr 2011 17:27:07 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Daylight Saving change - 1 day backups
    were off
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1301675227.2745.30.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Fri, 2011-04-01 at 17:30 +0200, Boniforti Flavio wrote:
> Thus, *what* was the problem that my backups did not run on this last
> time?
> I double-checked and my IncrPeriod is 0.97

From your logs, it appears the backups did run on 2011-03-28. You're
saying they did not run on 2011-03-29, right? Do the logs show anything
useful there?

You may be right in thinking it's related to timezone, but I find it odd
that it would happen two days later.

What I can say is this: I used to run my backups with a single wakeup
daily, in order to try to force all the backups to start at the same
time. In the long run, I had too many problems where one backup would
fail for one reason or another, and it wouldn't try again until the next
day. Now I use wakeups every hour, and use BlackoutPeriods to encourage
the backups to run at certain times.
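For reference, Tyler's hourly-wakeup-plus-blackout approach corresponds roughly to config.pl settings like these (the hours shown are illustrative, not a recommendation):

```perl
# Wake up every hour so a failed backup is retried soon after...
$Conf{WakeupSchedule} = [0 .. 23];

# ...but discourage backups during working hours (Mon-Fri, 07:00-19:30),
# so they effectively run in the evening.
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 7,
        hourEnd   => 19.5,
        weekDays  => [1, 2, 3, 4, 5],
    },
];
```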

Regards,
Tyler

--
"In a time of universal deceit, telling the truth is a revolutionary act."
  -- George Orwell




------------------------------

Message: 30
Date: Fri, 01 Apr 2011 13:03:53 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Daylight Saving change - 1 day backups
    were off
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Boniforti Flavio wrote at about 17:30:40 +0200 on Friday, April 1, 2011:
> Hi there... back again ;-)
> OK, I see:
>
> 2011-03-25 21:00:00 Next wakeup is 2011-03-26 21:00:00
> 2011-03-26 21:00:01 Next wakeup is 2011-03-27 22:00:00
>
> Which is ok, because my wakeup is scheduled at 21:00.
>
> Then I have:
>
> 2011-03-27 22:00:00 Next wakeup is 2011-03-28 21:00:00
>
> And if I go further...
>
> 2011-03-28 21:00:01 Running 8 BackupPC_nightly jobs from 0..15 (out of
> 0..15)
> 2011-03-28 21:00:01 Running BackupPC_nightly -m 0 31 (pid=31367)
> 2011-03-28 21:00:01 Running BackupPC_nightly 32 63 (pid=31368)
> 2011-03-28 21:00:01 Running BackupPC_nightly 64 95 (pid=31369)
> 2011-03-28 21:00:01 Running BackupPC_nightly 96 127 (pid=31370)
> 2011-03-28 21:00:01 Running BackupPC_nightly 128 159 (pid=31371)
> 2011-03-28 21:00:01 Running BackupPC_nightly 160 191 (pid=31372)
> 2011-03-28 21:00:01 Running BackupPC_nightly 192 223 (pid=31373)
> 2011-03-28 21:00:01 Running BackupPC_nightly 224 255 (pid=31374)
> 2011-03-28 21:00:01 Next wakeup is 2011-03-29 21:00:00
> 2011-03-28 21:00:03 Started full backup on spitex (pid=31375,
> share=dati)
> 2011-03-28 21:00:04 Started incr backup on pedrazzi-figli (pid=31376,
> share=Dati)
> 2011-03-28 21:00:12 Backup failed on pedrazzi-figli (permission denied)
> 2011-03-28 21:00:12 Backup failed on spitex (inet connect: Connessione
> rifiutata [connection refused])
> 2011-03-28 21:05:08 Finished  admin5  (BackupPC_nightly 160 191)
> 2011-03-28 21:05:08 Finished  admin2  (BackupPC_nightly 64 95)
> 2011-03-28 21:05:09 BackupPC_nightly now running BackupPC_sendEmail
> 2011-03-28 21:05:10 Finished  admin1  (BackupPC_nightly 32 63)
> 2011-03-28 21:05:10 Finished  admin4  (BackupPC_nightly 128 159)
> 2011-03-28 21:05:11 Finished  admin  (BackupPC_nightly -m 0 31)
> 2011-03-28 21:05:12 Finished  admin6  (BackupPC_nightly 192 223)
> 2011-03-28 21:05:13 Finished  admin7  (BackupPC_nightly 224 255)
> 2011-03-28 21:05:13 Finished  admin3  (BackupPC_nightly 96 127)
> 2011-03-28 21:05:13 Pool nightly clean removed 0 files of size 0.00GB
> 2011-03-28 21:05:13 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0
> max links), 1 directories
> 2011-03-28 21:05:13 Cpool nightly clean removed 397 files of size 3.27GB
> 2011-03-28 21:05:13 Cpool is 469.67GB, 626542 files (90 repeated, 13 max
> chain, 7671 max links), 4369 directories
> 2011-03-29 21:00:00 24hr disk usage: 54% max, 54% recent, 0 skipped
> hosts
> 2011-03-29 21:00:00 Removing /var/lib/backuppc/log/LOG.29.z
> 2011-03-29 21:00:00 Aging LOG files, LOG -> LOG.0 -> LOG.1 -> ... ->
> LOG.29
>
> Thus, *what* was the problem that my backups did not run on this last
> time?
> I double-checked and my IncrPeriod is 0.97
>
> Thanks for helping me understand...

I'm not seeing an apples-to-apples comparison. On 3/28, backups didn't
start until 21:00:03, but for 3/29 you only show the log up to 21:00:00.
Also, is BackupPC_nightly not running either? I see it on 3/28 but not
on 3/29, though again I don't know what happened on 3/29 after 21:00:00
or on 3/28 before 21:00:01. Similarly, where does the LOG aging happen
on 3/28?

So basically you are potentially showing non-comparable timeslots
(after 21:00:01 on 3/28 and before 21:00:01 on 3/29), so it is hard to
know what changed beyond the fact that your backup did not run.



------------------------------


Message: 32
Date: Sat, 2 Apr 2011 14:57:28 -0400
From: Scott <coolcoder AT gmail DOT com>
Subject: [BackupPC-users] Keeping 1 month of files and number of full
    backups
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <AANLkTimuuDiEU=R9jpD14AUJyUgVtSdt4CxqSox1bswD AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

I want to be able to restore a file for users up to one month in the past.

What is the difference, and which is best:

To do a full backup every 2 weeks, keeping 2 full backups,
and incrementals every day, or

Do a full backup every 1 month and incrementals every day?
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 33
Date: Sun, 03 Apr 2011 20:06:31 +0200
From: Matthias Meyer <matthias.meyer AT gmx DOT li>
Subject: Re: [BackupPC-users] Keeping 1 month of files and number of
    full    backups
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <inacvn$6nr$1 AT dough.gmane DOT org>
Content-Type: text/plain; charset="ISO-8859-1"

Scott wrote:

> I want to be able to restore a file for users up to one month in the past.
>
> What is the difference / what is the best -
>
> To do a full backup every 2 weeks, keeping 2 full backups,
> and incrementals every day , or
>
> Do a full backup every 1 month and incrementals every day?

Within the BackupPC web GUI there is no difference between incremental
and full backups. Each backup number always displays all files belonging
to that backup, regardless of whether they were stored during that
incremental, a previous incremental, or a previous full.
If you are using rsync, the only differences between full and
incremental are:
- an incremental only scans files modified after the last backup.
  It is therefore a lot faster than a full backup.
- a full backup scans all files, including files that were extracted
  from an archive after the last backup but carry an extracted timestamp
  older than the last backup.
  A full is therefore much slower, but it really does catch all "new" files.

I do a full every week and incrementals every day. I keep the incrementals
for 14 days, the weekly fulls for 8 weeks and the monthly fulls for a year.
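A retention scheme like that maps onto config.pl roughly as follows (a sketch; the FullKeepCnt exponents depend on how strictly "monthly" is interpreted, since each list position keeps fulls at double the previous interval):

```perl
$Conf{FullPeriod}  = 6.97;    # a full roughly once a week
$Conf{IncrPeriod}  = 0.97;    # an incremental roughly once a day
$Conf{IncrKeepCnt} = 14;      # keep incrementals for about 14 days
# Keep 8 weekly fulls, none at the 2-week interval, then 12 more at
# every 4th full (~monthly), covering about a year.
$Conf{FullKeepCnt} = [8, 0, 12];
```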

br
Matthias
--
Don't Panic




------------------------------

Message: 34
Date: Sun, 3 Apr 2011 17:56:49 -0400
From: Scott <coolcoder AT gmail DOT com>
Subject: [BackupPC-users] Viewing detail of a backup in progress?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <BANLkTimD1apxy4xixAieJCS6ePwp783ZZQ AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

Is it possible to view details of a backup in progress? For example, it
would be great to see which file it is currently backing up and how
many/how big the backed-up files are so far.  Totals would be nice, like
100 files totaling 200MB backed up out of 500 files totaling 2GB.

I have a slow backup and I have no idea how far along it is.
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 35
Date: Sun, 3 Apr 2011 17:59:12 -0400
From: Scott <coolcoder AT gmail DOT com>
Subject: [BackupPC-users] excluding files
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <BANLkTimPw6Ujf76ihO8igfneBMeYhDDWGg AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

I tried excluding files and it does not seem to be working:

In the web interface I added an entry for:  *.MPG

Yet in the backup pc folder I am seeing .MPG files!

Do I need to type anything more than just  "*.MPG" ?  (windows client)
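For what it's worth, expressed directly in config.pl an exclude like that would look roughly as below (the '*' share key and the lowercase variant are assumptions; with rsync the patterns are case-sensitive, which is a common reason .mpg files slip through):

```perl
$Conf{BackupFilesExclude} = {
    # '*' applies the list to every share; list both spellings because
    # pattern matching is case-sensitive with some transfer methods.
    '*' => ['*.MPG', '*.mpg'],
};
```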
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 36
Date: Mon, 04 Apr 2011 14:35:16 +1000
From: Peter Lavender <plaven AT internode.on DOT net>
Subject: [BackupPC-users] backing up to NAS over NFS
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1301891716.2205.10.camel@owl>
Content-Type: text/plain; charset="utf-8"

Hi Everyone,

Googling around implies this should be possible, however I'm not sure
where to go next.

My setup is as follows:

Linux Server ( old desktop PC ) with a NFS mounted filesystem pointing
to the NAS which has all the storage space.

The server itself doesn't have much free disk space, so I want to have
everything on the NAS where I can.

The server is debian, and I installed backuppc using apt, info about the
server from the web page:

The servers PID is 15721, on host rabbit, version 3.1.0, started at 4/4
13:49.

The NAS is a Thecus N4200, it has a 4disk RAID.

I have created backuppc on the "root" of the NAS.  I have set NFS
permissions that are effectively a root_squash, i.e. guest maps to the
root user.  I have also created a user backuppc with a userid of 1117
and created the same userid on the server; this maps both accounts more
cleanly.

With the default settings I've basically tried to kick off a full backup
of "localhost" and constantly see this error:

2011-04-04 14:00:01 localhost: test hardlink
between /mnt/nas/backuppc/pc/localhost and /mnt/nas/backuppc/cpool
failed

and this is the logfile for localhost:
2011-04-04 14:00:01 Can't create a test hardlink between a file
in /mnt/nas/backuppc/pc/localhost and /mnt/nas/backuppc/cpool.  Either
these are different file systems, or this file system doesn't support
hardlinks, or these directories don't exist, or there is a permissions
problem, or the file system is out of inodes or full.  Use df, df -i,
and ls -ld to check each of these possibilities. Quitting..
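That startup check can be reproduced by hand. A minimal sketch (run here in a temp directory so it is self-contained; on the real server substitute /mnt/nas/backuppc/cpool and /mnt/nas/backuppc/pc/localhost for the two directories):

```shell
# Recreate BackupPC's hardlink test: make a file in one directory and
# try to hardlink it from the other. Both must be on one filesystem.
top=$(mktemp -d)            # stands in for the BackupPC top directory
mkdir "$top/cpool" "$top/pc"
touch "$top/cpool/testfile"
if ln "$top/cpool/testfile" "$top/pc/testlink" 2>/dev/null; then
    link_status="hardlinks OK"
else
    link_status="hardlink failed"
fi
rm -rf "$top"
echo "$link_status"
```

If the link fails on the real paths, running `df` on both directories will usually show they are on different mounts.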

The NAS itself is rather limiting in what you can and can't do: it's
all done via their web interface, and you can't really get to things
like the filesystem. I guess it's a "we know you don't know what you
are doing" sort of setup.

Given this, does it mean that I can't use the NAS to back up over NFS?

Many thanks,

Peter.
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 37
Date: Mon, 04 Apr 2011 17:29:41 +1000
From: Peter Lavender <plaven AT internode.on DOT net>
Subject: [BackupPC-users] More on backing up to NFS
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1301902181.2205.19.camel@owl>
Content-Type: text/plain; charset="utf-8"

Hi Everyone,

I came across this thread regarding the NFS and hardlinking:

http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/nfs-mount-for-backup-storage-73200/

In that, one of the posters said he moved his mount point
to /var/lib/backuppc

When I tried this myself, the web interface failed with "Unable to
contact server".

I even tried copying the directory structure into the new mounted
filesystem, but still no joy.

Am I getting close?

Thanks

Peter.
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 38
Date: Mon, 04 Apr 2011 03:43:08 -0700
From: Saturn2888 <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1301913788.m2f.352461 AT www.backupcentral DOT com>

I've been wanting to use this for ages. My main concern is that I don't even know where to start to get it working. I have a feeling I'm doing something wildly wrong. From /root, running as root, I get this:

# ./backuppcfs.pl
Can't locate Fuse.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.10.1 /usr/local/share/perl/5.10.1 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.10 /usr/share/perl/5.10 /usr/local/lib/site_perl .) at ./backuppcfs.pl line 29.
BEGIN failed--compilation aborted at ./backuppcfs.pl line 29.

I wish there were some instructions on using this properly, like a short readme or something.

+----------------------------------------------------------------------
|This was sent by Saturn2888 AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------





------------------------------

Message: 39
Date: Mon, 04 Apr 2011 07:33:54 -0400
From: Doug Lytle <support AT drdos DOT info>
Subject: Re: [BackupPC-users] Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D99ACA2.8020205 AT drdos DOT info>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Saturn2888 wrote:
> Can't locate Fuse.pm in @INC

Looks like you're missing the perl-Fuse library
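A quick way to confirm that diagnosis before installing anything (libfuse-perl is the Debian/Ubuntu package name; other distros may call it perl-Fuse):

```shell
# Check whether the Fuse Perl bindings are loadable; print a hint if not.
if perl -MFuse -e 1 2>/dev/null; then
    fuse_status="Fuse.pm present"
else
    fuse_status="Fuse.pm missing - try: apt-get install libfuse-perl (or: cpan Fuse)"
fi
echo "$fuse_status"
```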

Doug

--

Ben Franklin quote:

"Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety."




------------------------------

Message: 40
Date: Mon, 04 Apr 2011 07:40:34 -0400
From: Neal Becker <ndbecker2 AT gmail DOT com>
Subject: [BackupPC-users] bare metal restore?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <incani$hoq$1 AT dough.gmane DOT org>
Content-Type: text/plain; charset="ISO-8859-1"

Are there instructions for using backuppc for bare metal restore?




------------------------------

Message: 41
Date: Mon, 04 Apr 2011 21:41:55 +1000
From: Peter Lavender <plaven AT internode.on DOT net>
Subject: Re: [BackupPC-users] More on backing up to NFS
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1301917315.2205.23.camel@owl>
Content-Type: text/plain; charset="utf-8"

OK, so I found the right thing to do here.

http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/nfs-mount-for-backup-storage-73200/

Specifically this paragraph:

Hard links only work within the same filesystem because they are just
additional names pointing to the same inode. The cpool and pc
directories must live under the same mount point.
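The same-filesystem requirement is easy to check: the device number reported by stat must match for both directories. A self-contained sketch (two temp directories stand in for the real cpool and pc paths):

```shell
# Compare device numbers; hardlinks are only possible when they match.
a=$(mktemp -d); b=$(mktemp -d)
dev_a=$(stat -c %d "$a")
dev_b=$(stat -c %d "$b")
if [ "$dev_a" = "$dev_b" ]; then
    echo "same filesystem - hardlinks possible"
else
    echo "different filesystems - hardlinks will fail"
fi
rmdir "$a" "$b"
```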


So as I outline here:

http://perlwannabe.typepad.com/blog/2011/04/backuppc-to-nas-over-nfs.html

I just created the mountpoint and then copied the directories over.

It works in that I can now start backups; however, it's failing with
"tar exited with error 512". More googling to be done.

But it's a good start..



On Mon, 2011-04-04 at 17:29 +1000, Peter Lavender wrote:

> Hi Everyone,
>
> I came across this thread regarding the NFS and hardlinking:
>
> http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/nfs-mount-for-backup-storage-73200/
>
> In that, one of the posters said he moved his mount point
> to /var/lib/backuppc
>
> When I tried this myself, the web interface failed with "Unable to
> contact server".
>
> I even tried copying the directory structure into the new mounted
> filesystem, but still no joy.
>
> Am I getting close?
>
> Thanks
>
> Peter.
>
> ------------------------------------------------------------------------------
> Create and publish websites with WebMatrix
> Use the most popular FREE web apps or write code yourself;
> WebMatrix provides all the features you need to develop and
> publish your website. http://p.sf.net/sfu/ms-webmatrix-sf
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users AT lists.sourceforge DOT net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/


-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 42
Date: Mon, 04 Apr 2011 04:42:47 -0700
From: Saturn2888 <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1301917367.m2f.352468 AT www.backupcentral DOT com>

How would I get the perl-Fuse library? Is it possible to do it through CPAN? I'm using BackupPC 3.2.0 on Ubuntu 10.10.

+----------------------------------------------------------------------
|This was sent by Saturn2888 AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------





------------------------------

Message: 43
Date: Mon, 04 Apr 2011 04:51:48 -0700
From: Saturn2888 <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1301917908.m2f.352471 AT www.backupcentral DOT com>

I figured it out, but now I have even more errors. I installed libfuse-perl via Aptitude, ran the command again, and got even more errors. It seems like either I'm not defining the variables correctly, or the file isn't.

~# ./backuppcfs.pl -f /mnt/backuppc/
syntax error at ./backuppcfs.pl line 33, near "day my "
Global symbol "$TTL_HOST" requires explicit package name at ./backuppcfs.pl line 33.
syntax error at ./backuppcfs.pl line 36, near "0;"
Global symbol "$DEFGID" requires explicit package name at ./backuppcfs.pl line 37.
Global symbol "$CORLINKS" requires explicit package name at ./backuppcfs.pl line 38.
Global symbol "$MAXCACHE" requires explicit package name at ./backuppcfs.pl line 39.
Global symbol "$MAXCACHE" requires explicit package name at ./backuppcfs.pl line 143.
syntax error at ./backuppcfs.pl line 190, near "subentries
  my "
Global symbol "$sub" requires explicit package name at ./backuppcfs.pl line 190.
Global symbol "$attr" requires explicit package name at ./backuppcfs.pl line 190.
Global symbol "$sdata" requires explicit package name at ./backuppcfs.pl line 190.
Global symbol "$attr" requires explicit package name at ./backuppcfs.pl line 191.
Global symbol "$attr" requires explicit package name at ./backuppcfs.pl line 192.
Global symbol "$sub" requires explicit package name at ./backuppcfs.pl line 193.
Global symbol "$attr" requires explicit package name at ./backuppcfs.pl line 194.
Global symbol "$ent" requires explicit package name at ./backuppcfs.pl line 195.
Global symbol "$ent" requires explicit package name at ./backuppcfs.pl line 197.
syntax error at ./backuppcfs.pl line 199, near ") {"
./backuppcfs.pl has too many errors.






------------------------------

Message: 44
Date: Mon, 04 Apr 2011 08:00:45 -0400
From: Doug Lytle <support AT drdos DOT info>
Subject: Re: [BackupPC-users] Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D99B2ED.5000101 AT drdos DOT info>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Saturn2888 wrote:
> I figured it out, but now I have even more errors. I installed libfuse-perl from Aptitude and ran the command again and got even more errors. Seems like either I or the file isn't defining the variables correctly.
>   

I've never tried the Fuse module, but I normally use CPAN to install
modules.

Doug


--

Ben Franklin quote:

"Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety."




------------------------------

Message: 45
Date: Mon, 4 Apr 2011 08:34:16 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404083416.A26777 AT real-time DOT com>
Content-Type: text/plain; charset=us-ascii

On 04/04 07:40 , Neal Becker wrote:
> Are there instructions for using backuppc for bare metal restore?

Probably somewhere. It's fairly straightforward, though.

Boot the bare-metal machine with Knoppix (or your choice of rescue disks).
Partition and format the drives.
Mount the partitions in the arrangement you want. (you'll have to make some
directories in order to have mount points).

Set up a listening netcat process piping to tar. It will look something like:
netcat -l -p 8888|tar -xpv -C /path/to/mounted/empty/filesystems

On the BackupPC server, become the backuppc user and
(presuming it's a Debian box) run '/usr/share/backuppc/bin/BackupPC_tarCreate
-n <backup number> -h <hostname> -s <sharename> <path to files to be
restored> | netcat <bare-metal machine> 8888'

The 'backup number' can be '-1' for the most recent backup.

An example of the BackupPC_tarCreate command might be:
/usr/share/backuppc/bin/BackupPC_tarCreate -n -1 -h target.example.com -s /
/ | netcat target.example.com 8888

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 46
Date: Mon, 04 Apr 2011 09:51:05 -0400
From: Neal Becker <ndbecker2 AT gmail DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <incic9$1s8$1 AT dough.gmane DOT org>
Content-Type: text/plain; charset="ISO-8859-1"

Carl Wilhelm Soderstrom wrote:

> On 04/04 07:40 , Neal Becker wrote:
>> Are there instructions for using backuppc for bare metal restore?
>
> Probably somewhere. It's fairly straightforward tho.
>
> Boot the bare-metal machine with Knoppix (or your choice of rescue disks).
> Partition and format the drives.
> Mount the partitions in the arrangement you want. (you'll have to make some
> directories in order to have mount points).
>
> Set up a listening netcat process to pipe to tar. will look something like:
> netcat -l -p 8888|tar -xpv -C /path/to/mounted/empty/filesystems
>
> on the BackupPC server, become the backuppc user
> (Presuming it's a Debian box) run '/usr/share/backuppc/bin/BackupPC_tarCreate
> -n <backup number> -h <hostname> -s <sharename> <path to files to be
> restored> | netcat <bare-metal machine> 8888'
>
> the 'backup number' can be '-1' for the most recent version.
>
> An example of the BackupPC_tarCreate command might be:
> /usr/share/backuppc/bin/BackupPC_tarCreate -n -1 -h target.example.com -s /
> / | netcat target.example.com 8888
>

Thanks.

Would there be a similar procedure using rsync?




------------------------------

Message: 47
Date: Mon, 4 Apr 2011 09:05:15 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404090515.C26777 AT real-time DOT com>
Content-Type: text/plain; charset=us-ascii

On 04/04 09:51 , Neal Becker wrote:
> Would there be a similar procedure using rsync?

Maybe. I've never tried it.

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 48
Date: Mon, 04 Apr 2011 15:29:17 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1301927357.2723.30.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Mon, 2011-04-04 at 09:51 -0400, Neal Becker wrote:
> Would there be a similar procedure using rsync?

I've done it using the GUI. Bring up the affected machine on a Live CD,
run sshd and install the BackupPC root key. Create a mounted filesystem
tree in /mnt/, and use the GUI to restore there.

Afterward:

mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
chroot /mnt
update-grub
grub-install /dev/sda
reboot

Regards,
Tyler

--
"No one can terrorize a whole nation, unless we are all his accomplices."
  -- Edward R. Murrow




------------------------------

Message: 49
Date: Mon, 4 Apr 2011 09:41:51 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] Viewing detail of a backup in progress?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404094151.D26777 AT real-time DOT com>
Content-Type: text/plain; charset=us-ascii

On 04/03 05:56 , Scott wrote:
> Is it possible/how to view details on a backup in progress - for example, it
> would be great to see what file it is backing up, how many/how big the
> backup is so far.  Totals would be nice, like 100 files totaling 200MB
> backed up out of 500 files totaling 2GB.
>
> I have a slow backup and I have no idea how far along it is.

Go to /var/lib/backuppc/pc/<hostname>
and type 'ls -lart' to see which files have changed most recently.
If XferLOG.z has been updated recently (which it should have, if your backup
has been transferring files), you can run this command to see what file it's
on.

/usr/share/backuppc/bin/BackupPC_zcat XferLOG.z |tail

Might be cool if the next BackupPC version could include the output of this
in the web interface somewhere.


(If you're lazy like me you'll have 'ls -lart --color' aliased to 'lart'
because you use it a lot).

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 50
Date: Mon, 04 Apr 2011 15:54:00 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Viewing detail of a backup in progress?
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1301928840.2723.33.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Mon, 2011-04-04 at 09:41 -0500, Carl Wilhelm Soderstrom wrote:
> /usr/share/backuppc/bin/BackupPC_zcat XferLOG.z |tail

Note that the log files, like XferLOG.z, are buffered. They may not
show files currently copying, if the log write buffer hasn't filled.

Regards,
Tyler

--
"It's not given to anyone to have no regrets; only to decide, through
the choices we make, which regrets we'll have."
  -- David Weber




------------------------------

Message: 51
Date: Mon, 04 Apr 2011 11:36:18 -0400
From: Bowie Bailey <Bowie_Bailey AT BUC DOT com>
Subject: Re: [BackupPC-users] excluding files
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D99E572.2080109 AT BUC DOT com>
Content-Type: text/plain; charset=ISO-8859-1

On 4/3/2011 5:59 PM, Scott wrote:
> I tried excluding files and does not seem to be working :
>
> In the web interface I added an entry for:  *.MPG
>
> Yet in the backup pc folder I am seeing .MPG files!
>
> Do I need to type anything more than just  "*.MPG" ?  (windows client)
>

You probably just put the entry in the wrong place.

First, you need to create a key.  This will either be identical to one
of your share names, or * to affect all shares.  Once you have added the
key, then you can add exclusions.

If you look at the exclusion section in the web interface, you should
see the share name or *, followed by a "Delete" button.  Then there will
be "Insert" and "Delete" buttons followed by your "*.MPG" exclusion.
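For reference, the structure the web interface is editing corresponds to a hash in the config file. A minimal sketch of the config.pl form (the '*' key covers all shares):

```perl
# The exclude list is a hash keyed by share name ('*' = all shares);
# each value is the list of patterns excluded for that share.
$Conf{BackupFilesExclude} = {
    '*' => [ '*.MPG' ],
};
```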

--
Bowie



------------------------------

Message: 52
Date: Mon, 4 Apr 2011 18:59:00 +0200
From: Holger Parplies <wbppc AT parplies DOT de>
Subject: Re: [BackupPC-users] BackupPC_dump hangs with: .: size
    doesn't    match (12288 vs 17592185913344)
To: John Rouillard <rouilj-backuppc AT renesys DOT com>
Cc: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404165900.GA8793 AT gratch.parplies DOT de>
Content-Type: text/plain; charset=us-ascii

Hi,

John Rouillard wrote on 2011-03-31 15:20:23 +0000 [[BackupPC-users] BackupPC_dump hangs with: .: size doesn't match (12288 vs 17592185913344)]:
> [...]
> I get a bunch of output (the share being backed up /etc on a centos
> 5.5. box) which ends with:
>
>  attribSet: dir=f%2fetc exists
>  attribSet(dir=f%2fetc, file=zshrc, size=640, placeholder=1)
>  Starting file 0 (.), blkCnt=134217728, blkSize=131072, remainder=0
>  .: size doesn't match (12288 vs 17592185913344)

at first glance, this would appear to be an indication of something I have
been suspecting for a long time: corruption - caused by whatever - in an
attrib file leading to the SIGALRM abort. If I remember correctly, someone
(presumably File::RsyncP) would ordinarily try to allocate space for the file
(though that doesn't seem to make sense, so I probably remember incorrectly)
and either gives up when that fails or refrains from trying in the first
place, because the amount is obviously insane.

The weird thing in this case is that we're seeing a directory. There is
absolutely no reason (unless I am missing something) to worry about the
*size* of a directory. The value is entirely file-system dependent and
not even necessarily an indication of the *current* number of entries in
the directory. In any case, you restore the contents of a directory by
restoring the files in it, and you (incrementally) back up a directory by
determining if any files have changed or been added. The *size* of a
directory will not help with that decision.

Then again, the problematic file (or attrib file entry) may or may not be the
last one reported (maybe it's the first one not reported?).

> [...] I have had similar hanging issues before
> but usully scheduling a full backup or removing a prior backup or two
> in the chain will let things work again. However I would like to
> actually get this fixed this time around as it seems to be occurring
> more often recently (on different backuppc servers and against
> different hosts).

I agree with you there. This is probably one of the most frustrating problems
to be encountered with BackupPC, because there is no obvious cause and nothing
obvious to correct (throwing away part of your backup history for no better
reason than "after that it works again" is somewhat unsatisfactory).

The reason not to investigate this matter any further so far seems to have
been that it is usually "solved" by removing the reference backup (I believe
simply running a full backup will encounter the same problem again), because
people tend to want to get their backups back up and running. There are two
things to think about here:

1.) Why does attrib file corruption cause the backup to hang? Is there no
    sane(r) way to deal with the situation?
2.) How does the attrib file get corrupted in the first place?

Presuming it *is* attrib file corruption. Could you please send me a copy of
the attrib file off-list?

> If I dump the root attrib file (where /etc starts) for either last
> successful or the current (partial) failing backup I see:
>
>  '/etc' => {
>    'uid' => 0,
>    'mtime' => 1300766985,
>    'mode' => 16877,
>    'size' => 12288,
>    'sizeDiv4GB' => 0,
>    'type' => 5,
>    'gid' => 0,
>    'sizeMod4GB' => 12288
>  },

I would expect the interesting part to be the '.' entry in the attrib file for
'/etc' (f%2fetc/attrib of the last successful backup, that is). And I would be
curious about how the attrib file was decoded, because I'd implement decoding
differently from how BackupPC does, though BackupPC's method does appear to be
well tested ;-).
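For anyone wanting to dump an attrib file with BackupPC's own module, a rough sketch follows. The lib path, the compress flag, and the exact read()/get() calls are assumptions from memory, so verify them against the BackupPC::Attrib shipped with your installation:

```perl
#!/usr/bin/perl
# Sketch only: dump a BackupPC attrib file using BackupPC's own module.
# The module path and the read()/get() usage are assumptions; verify
# against the BackupPC::Attrib shipped with your installation.
use strict;
use warnings;
use lib "/usr/share/backuppc/lib";      # Debian layout
use BackupPC::Attrib;
use Data::Dumper;

my $dir = shift @ARGV
    or die "usage: $0 <.../pc/host/nnn/f%2fetc>\n";

my $attrib = BackupPC::Attrib->new({ compress => 1 });
$attrib->read($dir)
    or die "can't read attrib file in $dir\n";
print Dumper($attrib->get());           # includes the '.' entry
```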

> [...] the last few lines of strace show:
>
> [...]
>  19368 15:00:38.199634 select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
>    <59.994597>

I believe this is the result of File::RsyncP having given up on the transfer
because of either a failing malloc() or a suppressed malloc(). I'll have to
find some time to check in more detail. I vaguely remember it was a rather
complicated matter, and there was never really enough evidence to support that
corrupted attrib files were really the cause. But I sure would like to get to
the bottom of this :-).

Regards,
Holger



------------------------------

Message: 53
Date: Mon, 4 Apr 2011 19:14:27 +0200
From: Holger Parplies <wbppc AT parplies DOT de>
Subject: Re: [BackupPC-users] Viewing detail of a backup in progress?
To: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Cc: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404171427.GB8793 AT gratch.parplies DOT de>
Content-Type: text/plain; charset=us-ascii

Hi,

Tyler J. Wagner wrote on 2011-04-04 15:54:00 +0100 [Re: [BackupPC-users] Viewing detail of a backup in progress?]:
> On Mon, 2011-04-04 at 09:41 -0500, Carl Wilhelm Soderstrom wrote:
> > /usr/share/backuppc/bin/BackupPC_zcat XferLOG.z |tail
>
> Note that the log files, like XferLOG.z, are buffered. They may not
> show files currently copying, if the log write buffer hasn't filled.

in particular, they are compressed, so the end of the file is in my experience
usually a considerable amount behind the file currently copying. This is also
the reason you can't simply "switch off buffering" for the log files
(compression needs reasonably sized chunks to operate on for efficient
results). It might make sense to think about (optionally) writing log files
uncompressed and compressing them after the backup has finished. Wanting to
follow backup progress seems to be a frequent enough requirement. Putting the
log files on a disk separate from the pool FS should probably be encouraged in
this case ;-).

Regards,
Holger



------------------------------

Message: 54
Date: Mon, 4 Apr 2011 19:34:16 +0200
From: Holger Parplies <wbppc AT parplies DOT de>
Subject: Re: [BackupPC-users] excluding files
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404173416.GC8793 AT gratch.parplies DOT de>
Content-Type: text/plain; charset=us-ascii

Hi,

Bowie Bailey wrote on 2011-04-04 11:36:18 -0400 [Re: [BackupPC-users] excluding files]:
> On 4/3/2011 5:59 PM, Scott wrote:
> > [...]
> > In the web interface I added an entry for:  *.MPG
> > [...]
>
> You probably just put the entry in the wrong place.
> [...]
> [...] followed by your "*.MPG" exclusion.

you *may* also need to think about your XferMethod, because excludes are,
in general, handled by the XferMethod and thus subject to its syntax
specifications. Your simple "*.MPG" should probably work for all methods -
presuming they see your file names in upper case - but it wouldn't hurt to
mention what XferMethod you are using if your problems persist after checking
your exclude definitions. You should then also attach a copy of your host
config file, preferably as an attachment of the original file and not cut&paste
from the web interface ;-).

All of that said, I personally dislike exclusions like "*.mpg", because there
is no way for the user to override them for single files that should be backed
up. Similarly, you only exclude things you expect, missing "music.mph" (typo)
or "music.mpg.bak" (copy) or "music.mp3" (different common naming scheme
(there is no such thing as an "extension", except, maybe, under Windoze, but
there shouldn't be even there ;-)).
If you can express your exclude as a directory ("My Music"?), you might need
to figure out space quoting issues, but you might be able to avoid the issues
above. Then again, all of that may not apply to your situation. Just something
to think about.

Regards,
Holger



------------------------------

Message: 55
Date: Mon, 4 Apr 2011 12:55:29 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] Viewing detail of a backup in progress?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404125529.I26777 AT real-time DOT com>
Content-Type: text/plain; charset=us-ascii

On 04/04 07:14 , Holger Parplies wrote:
> in particular, they are compressed, so the end of the file is in my experience
> usually a considerable amount behind the file currently copying. This is also
> the reason you can't simply "switch off buffering" for the log files
> (compression needs reasonably sized chunks to operate on for efficient
> results). It might make sense to think about (optionally) writing log files
> uncompressed and compressing them after the backup has finished. Wanting to
> follow backup progress seems to be a frequent enough requirement. Putting the
> log files on a disk separate from the pool FS should probably be encouraged in
> this case ;-).

These are all terribly good points.

Perhaps the current file can simply be stored in memory and presented via
the web interface? Is there a variable that already exists and can be read
by the web interface to present the current file being copied?

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 56
Date: Mon, 4 Apr 2011 21:01:44 +0200
From: Holger Parplies <wbppc AT parplies DOT de>
Subject: Re: [BackupPC-users] Keeping 1 month of files and number of
    full    backups
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404190144.GE8793 AT gratch.parplies DOT de>
Content-Type: text/plain; charset=us-ascii

Hi,

Matthias Meyer wrote on 2011-04-03 20:06:31 +0200 [Re: [BackupPC-users] Keeping 1 month of files and number of full backups]:
> Scott wrote:
>
> > I want to be able to restore a file for users up to one month in the past.
> >
> > What is the difference / what is the best -
> >
> > To do a full backup every 2 weeks, keeping 2 full backups,
> > and incrementals every day , or
> >
> > Do a full backup every 1 month and incrementals every day?
>
> Within the Web GUI of BackupPC there are no differences between
> incremental and full backups.

correct. Incremental backups appear "filled". You see every file with the
content it had at the time of the backup, regardless of where this content is
stored within BackupPC.

Concerning space requirements, the differences between full and incremental
backups are negligible.

> [...]
> If you are using rsync than the only difference between full and incremental
> are:
> - incremental only scan files which are created after the last backup.

Not true. *rsync* full *and* incremental backups will both transfer any files
they determine to have changed or been added, using a full file list for
comparison of the states on client and server. They also both "remove" deleted
files from the backup. The difference is that "incremental" backups only check
file attributes (particularly the timestamps) for determining changes (which
is sufficient except for extremely rare cases), whereas "full" backups check
file contents, i.e. they really read every single file on the client (and the
server, usually). This takes time and probably puts wear on the disks.
So, true:

>  Therefore it is a lot faster than a full backup.

> - full backup scan all files, also files which would be extracted from an
>  archive after the last backup but have an extracted timestamp older than
>  the last backup.
>  Therefore a full is much slower but get really all "new" files.

See above. The difference described is true for *non-rsync* backups. tar or
smb backups only have one timestamp as reference - that of the previous
backup, not the timestamp of every individual file - so incrementals can only
catch modifications (or creations) with timestamps later than the previous
backup. File deletions are not detectable by non-rsync incrementals, meaning
deleted files will continue to show up in your backups in the state they were
last in until you run a full backup.

Also note that full backups will, due to the full file comparison, correct
the probably extremely rare event of pool file corruption by making a new copy
with the correct content (past backups will, of course, not be corrected).
Someone please feel free to add a note on the effect of checksum caching
(turned off by default) ;-).


There is no one general answer to your question, "what is best". It depends on
your requirements. For probably all backup solutions, full backups are more
exact than incremental backups. Incrementals are a compromise for the sake of
practically being able to do backups at all (imagine the amount of tapes you
would require for full tape backups each day - and what small amount of that
data really changes).
BackupPC is designed for saving space used by redundant data, so you *could*
do daily full backups almost without penalty. Saving time is the motivation
here for copying the semantics of incremental backups. With *rsync*, the
difference in backup exactness is small enough to be a theoretical matter
only. For tar/smb, that is not true.
So it's really your decision. How much exactness do you need, how much can you
afford? For what you describe, you're unlikely to make a wrong decision.
Anything from daily full backups to one monthly full and daily incrementals is
likely to work for you. Just don't be surprised about the extra full backup
(any full backup an incremental backup depends on must be kept, so you will
almost always have 1 more full backup than you requested).
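As a rough illustration, the "one monthly full plus daily incrementals" variant above maps onto config.pl settings along these lines (values are examples, styled after BackupPC's slightly-under-the-round-number defaults):

```perl
# Example schedule: about one full per month, incrementals daily.
# Expect one extra retained full while incrementals still depend on it.
$Conf{FullPeriod}  = 29.97;   # days between full backups
$Conf{IncrPeriod}  = 0.97;    # days between incremental backups
$Conf{FullKeepCnt} = 1;       # requested number of fulls to keep
$Conf{IncrKeepCnt} = 30;      # about a month of incrementals
```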


One thing to keep in mind, though: if network links with low bandwidth are
involved, you will want to do rsync-type backups (rsync or rsyncd), and
frequent full backups will actually use *less* bandwidth than long FullPeriods
and frequent incremental backups.

Regards,
Holger



------------------------------

Message: 57
Date: Mon, 4 Apr 2011 21:29:19 +0200
From: Holger Parplies <wbppc AT parplies DOT de>
Subject: Re: [BackupPC-users] Auth failed on module cDrive
To: tbrown AT riverbendhose DOT com, "General list for user discussion,
    questions and support" <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110404192919.GF8793 AT gratch.parplies DOT de>
Content-Type: text/plain; charset=iso-8859-1

Hi,

Tom Brown wrote on 2011-03-30 16:13:44 -0400 [Re: [BackupPC-users] Auth failed on module cDrive]:
> It appears I've solved the problem.

well, then this reply might be redundant or wrong.

> I am surprised it took this long for someone to respond to my inquiry

Sorry, I used to read all mail on this mailing list, but nowadays I'm glad to
find the time to occasionally read some of it ...

> [...]
> If I've missed something about handling global and per PC passwords in
> /conf/config.pl and /pc/pc_name/config.pl, please let me know.

Err, what version of BackupPC are you using? The normal locations of
configuration files are $ConfDir/config.pl and $ConfDir/hostname.pl.
Depending on what you mean by "/pc/pc_name/config.pl", that directory might
actually be used for backwards compatibility, but wouldn't it still be
"hostname.pl" (sorry, can't check right now)? In any case, try using
/conf/pc_name.pl instead of /pc/pc_name/config.pl and see if that changes
anything. I'm guessing the password is not the only part of your pc config
files being ignored (though it might be the only thing you set in there).

> [...]
> BackupPC reports "auth failed on module cDrive". The rsyncd.log on the W7
> client reports "connect from x226.rbhs.lan; password mismatch".
> [...]
> When I rsync files from the command line on the backuppc server using
> rsync -av backuppc@daved-hp::cDrive ., I get a series of permission denied
> errors.

That means, on the command line, authentication works for the same
username/password pair?

> receiving file list ...
> rsync: opendir "ygdrive/c/Windows/system32/c:/Documents and Settings" (in
> cDrive) failed: Permission denied (13)

This is an indication of incorrect path names in rsyncd.conf, which might
also be responsible for your password problems. I don't use Windoze and
I don't use rsyncd on Windoze, but google and the BackupPC wiki [1] tell me
that you want something like

    secrets file = /etc/rsyncd.secrets

in rsyncd.conf, not something starting with c:/ in any case ...
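For comparison, a minimal rsyncd.conf module on the Windows side might look like the following. The module name matches the one in the thread, but the paths and option values here are illustrative; note the POSIX-style /cygdrive path rather than c:/:

```ini
use chroot = false
strict modes = false

[cDrive]
    path = /cygdrive/c
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
    read only = true
```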

Hope that helps.

Regards,
Holger

[1] http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=RsyncdWindowsClients



------------------------------

Message: 58
Date: Mon, 4 Apr 2011 23:04:33 +0100
From: "Pedro M. S. Oliveira" <pmsoliveira AT gmail DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <BANLkTikESsw0Sm7MgSFE_01E9br53gBjFw AT mail.gmail DOT com>
Content-Type: text/plain; charset="utf-8"

Hi, I've written about that on my blog some time ago; it's a little
how-to. Just search for backuppc on www.linux-geex.com.
Cheers,
Pedro
On Apr 4, 2011 3:36 PM, "Tyler J. Wagner" <tyler AT tolaris DOT com> wrote:
> On Mon, 2011-04-04 at 09:51 -0400, Neal Becker wrote:
>> Would there be a similar procedure using rsync?
>
> I've done it using the GUI. Bring up the affected machine on a Live CD,
> run sshd and install the BackupPC root key. Create a mounted filesystem
> tree in /mnt/, and use the GUI to restore there.
>
> Afterward:
>
> mount --rbind /dev /mnt/dev
> mount --rbind /proc /mnt/proc
> chroot /mnt
> update-grub
> grub-install /dev/sda
> reboot
>
> Regards,
> Tyler
>
> --
> "No one can terrorize a whole nation, unless we are all his accomplices."
> -- Edward R. Murrow
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 59
Date: Tue, 05 Apr 2011 00:25:12 +0200
From: Matthias Meyer <matthias.meyer AT gmx DOT li>
Subject: Re: [BackupPC-users] bare metal restore?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <indggq$2op$1 AT dough.gmane DOT org>
Content-Type: text/plain; charset="ISO-8859-1"

Neal Becker wrote:

> Carl Wilhelm Soderstrom wrote:
>
>> On 04/04 07:40 , Neal Becker wrote:
>>> Are there instructions for using backuppc for bare metal restore?
>>
>> Probably somewhere. It's fairly straightforward tho.
>>
>> Boot the bare-metal machine with Knoppix (or your choice of rescue
>> disks). Partition and format the drives.
>> Mount the partitions in the arrangement you want. (you'll have to make
>> some directories in order to have mount points).
>>
>> Set up a listening netcat process to pipe to tar. will look something
>> like: netcat -l -p 8888|tar -xpv -C /path/to/mounted/empty/filesystems
>>
>> on the BackupPC server, become the backuppc user
>> (Presuming it's a Debian box) run
>> '/usr/share/backuppc/bin/BackupPC_tarCreate -n <backup number> -h
>> <hostname> -s <sharename> <path to files to be restored> | netcat
>> <bare-metal machine> 8888'
>>
>> the 'backup number' can be '-1' for the most recent version.
>>
>> An example of the BackupPC_tarCreate command might be:
>> /usr/share/backuppc/bin/BackupPC_tarCreate -n -1 -h target.example.com -s
>> / / | netcat target.example.com 8888
>>
>
> Thanks.
>
> Would there be a similar procedure using rsync?
>
rsync wouldn't be a good solution in this scenario.
You don't have any data on the client, so rsync wouldn't find anything to
compare against.
Because of that, other solutions like tar are the smarter choice, since
they're faster.

br
Matthias
--
Don't Panic




------------------------------

Message: 60
Date: Tue, 05 Apr 2011 00:48:16 +0200
From: Matthias Meyer <matthias.meyer AT gmx DOT li>
Subject: Re: [BackupPC-users] Viewing detail of a backup in progress?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <indhs4$b8b$1 AT dough.gmane DOT org>
Content-Type: text/plain; charset="ISO-8859-1"

Carl Wilhelm Soderstrom wrote:

> On 04/04 07:14 , Holger Parplies wrote:
>> in particular, they are compressed, so the end of the file is in my
>> experience usually a considerable amount behind the file currently
>> copying. This is also the reason you can't simply "switch off buffering"
>> for the log files (compression needs reasonably sized chunks to operate
>> on for efficient results). It might make sense to think about
>> (optionally) writing log files uncompressed and compressing them after
>> the backup has finished. Wanting to follow backup progress seems to be a
>> frequent enough requirement. Putting the log files on a disk separate
>> from the pool FS should probably be encouraged in this case ;-).
>
> These are all terribly good points.
>
> Perhaps the current file can simply be stored in memory and presented via
> the web interface? Is there a variable that already exists and can be read
> by the web interface to present the current file being copied?
>
Not really, not yet. But these values are counted during the backup, and
BackupPC_dump gets them at the end of a backup:
    my @results = $xfer->run();
    $tarErrs      += $results[0];
    $nFilesExist  += $results[1];
    $sizeExist    += $results[2];
    $sizeExistComp += $results[3];
    $nFilesTotal  += $results[4];
    $sizeTotal    += $results[5];

Furthermore, BackupPC_dump uses signal handlers like:
$SIG{TTIN} = \&catch_signal;

So it should be no problem to add an additional signal handler
$SIG{IO} = \&write_status;

which would then collect the current transfer rates and write them into a
file.
But it could be a problem to write to that file if the signal arrives while
a write is already in progress.

Any Ideas?

br
Matthias
--
Don't Panic




------------------------------

Message: 61
Date: Mon, 4 Apr 2011 17:22:58 -0600
From: Jake Wilson <jake.wilson AT answeron DOT com>
Subject: [BackupPC-users] Change archive directory on a per-host
    basis?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <BANLkTi=8mYBeZjmi-QZJEszm6m3FeqFZnQ AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

On our server, the default archive directory where backups are stored is

/var/lib/backuppc/

So for a server called webserver, the backup paths look like this:

/var/lib/backuppc/pc/sonny/6

Is it possible to change this path?  We have two separate large raid drives
in our backup server.  For simplicity's sake, I'll call them /mount/storage1
and /mount/storage2.  We can't have one big raid drive because of our raid
controller size limitations, unfortunately.  So I have two big ones to
dedicate to backups.

Anyway, back to the question.  I can easily remount my raid drives or
symlink my raid drives to somewhere in the /var/lib/backuppc directory if I
need to, but how do I change the archive path for each client and tell it
specifically what path to save to?

Under the backuppc config, I see *$Conf{TopDir}* but that doesn't seem to
indicate that is where the archives are stored necessarily.  And it doesn't
seem to be adjustable on a per-host basis.

I would imagine that I'm simply just missing something here in the config.
Surely BackupPC is able to control where an archive is stored and I'm just
not seeing it.

Jake Wilson
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 62
Date: Mon, 4 Apr 2011 19:06:29 -0500
From: "Michael Stowe" <mstowe AT chicago.us.mensa DOT org>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <2e5df7e4ccb225a2820d63397011a6f4.squirrel AT webmail.baddomain DOT com>
Content-Type: text/plain;charset=iso-8859-1

> On our server, the default archive directory where backups are stored is
>
> /var/lib/backuppc/
>
> So for a server called webserver, the backup paths look like this:
>
> /var/lib/backuppc/pc/sonny/6
>
> Is it possible to change this path?  We have two separate large raid
> drives
> in our backup server.  For simplicity sake, I'll call them
> /mount/storage1and
> /mount/storage2.  We can't have one big raid drive because of our raid
> controller size limitations unfortunately.  So I have two big ones to
> dedicate to backups.
>
> Anyways, back to the question.  I can easily remount my raid drives or
> symlink my raid drives to somewhere in the /var/lib/backuppc directory if
> I
> need to, but how do I change the archive path for each client and tell it
> specifically what path to save to?
>
> Under the backuppc config, I see *$Conf{TopDir}* but that doesn't seem to
> indicate that is where the archives are stored necessarily.  And it
> doesn't
> seem to be adjustable on a per-host basis.
>
> I would imagine that I'm simply just missing something here in the config.
>  Surely BackupPC is able to control where an archive is stored and I'm
> just
> not seeing it.
>
> Jake Wilson

So...  you want to change the location where the common, pooled storage
goes, on a per-host basis?

No, you're not missing this setting.  It's not there.



------------------------------

Message: 63
Date: Mon, 4 Apr 2011 18:30:27 -0600
From: Jake Wilson <jake.wilson AT answeron DOT com>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: "mstowe AT chicago.us.mensa DOT org" <mstowe AT chicago.us.mensa DOT org>,
    "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <F4195C8D-BE6D-466C-A5A5-C99756FCCBAF AT answeron DOT com>
Content-Type: text/plain;    charset=us-ascii

Really?  You can't specify where archives are stored?  What if your drive isn't big enough to handle all the server backups you need to do?  This seems like a gigantic limitation unless I'm missing something...

I read about how BackupPC is an enterprise-ready backup solution but I seriously don't see how that's possible if you can't even tell it where to backup a pool...

Jake Wilson

On Apr 4, 2011, at 6:06 PM, "Michael Stowe" <mstowe AT chicago.us.mensa DOT org> wrote:

>> On our server, the default archive directory where backups are stored is
>>
>> /var/lib/backuppc/
>>
>> So for a server called webserver, the backup paths look like this:
>>
>> /var/lib/backuppc/pc/sonny/6
>>
>> Is it possible to change this path?  We have two separate large raid
>> drives
>> in our backup server.  For simplicity sake, I'll call them
>> /mount/storage1and
>> /mount/storage2.  We can't have one big raid drive because of our raid
>> controller size limitations unfortunately.  So I have two big ones to
>> dedicate to backups.
>>
>> Anyways, back to the question.  I can easily remount my raid drives or
>> symlink my raid drives to somewhere in the /var/lib/backuppc directory if
>> I
>> need to, but how do I change the archive path for each client and tell it
>> specifically what path to save to?
>>
>> Under the backuppc config, I see *$Conf{TopDir}* but that doesn't seem to
>> indicate that is where the archives are stored necessarily.  And it
>> doesn't
>> seem to be adjustable on a per-host basis.
>>
>> I would imagine that I'm simply just missing something here in the config.
>> Surely BackupPC is able to control where an archive is stored and I'm
>> just
>> not seeing it.
>>
>> Jake Wilson
>
> So...  you want to change the location where the common, pooled storage
> goes, on a per-host basis?
>
> No, you're not missing this setting.  It's not there.
>
> ------------------------------------------------------------------------------
> Xperia(TM) PLAY
> It's a major breakthrough. An authentic gaming
> smartphone on the nation's most reliable network.
> And it wants your games.
> http://p.sf.net/sfu/verizon-sfdev
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users AT lists.sourceforge DOT net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/



------------------------------

Message: 64
Date: Tue, 05 Apr 2011 10:32:44 +1000
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D9A632C.2030406 AT websitemanagers.com DOT au>
Content-Type: text/plain; charset="iso-8859-1"

On 5/04/2011 10:06 AM, Michael Stowe wrote:
>> We have two separate large raid drives in our backup server. For
>> simplicity sake, I'll call them /mount/storage1 and
>> /mount/storage2. We can't have one big raid drive because of our
>> raid controller size limitations unfortunately. So I have two big
>> ones to dedicate to backups.
>>
>> Anyways, back to the question. I can easily remount my raid drives
>> or symlink my raid drives to somewhere in the /var/lib/backuppc
>> directory if I need to, but how do I change the archive path for
>> each client and tell it specifically what path to save to?

Ummm, isn't that what LVM or MD are for? LVM will combine both your
RAID drives and present them as a single partition to the OS, and so can MD
(using RAID0 across the two big RAID drives)...

There was an interesting article in Linux Magazine recently which
focused on multiple levels of RAID and how that impacts performance
and reliability, and discussed the different options/trade-offs, which
you might like to refer to.

Regards,
Adam

-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 65
Date: Tue, 05 Apr 2011 10:48:21 +1000
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D9A66D5.1030807 AT websitemanagers.com DOT au>
Content-Type: text/plain; charset=ISO-8859-1

On 5/04/2011 10:30 AM, Jake Wilson wrote:
> Really?  You can't specify where archives are stored?  What if your drive isn't big enough to handle all the server backups you need to do?  This seems like a gigantic limitation unless I'm missing something...
>
> I read about how BackupPC is an enterprise-ready backup solution but I seriously don't see how that's possible if you can't even tell it where to backup a pool...
BackupPC is enterprise ready. It doesn't stop being so just because your
RAID card controllers are not enterprise ready, or because your mind doesn't
allow you to see past your own experience and consider that, when moving to
an enterprise system, there might be some new things to learn and consider.

BackupPC is just one component of a larger system; you will need all of
the pieces before you will get a solution. Just some of the pieces to
consider are:
1) Server hardware - multi CPU, lots of RAM, reliable, scalable, etc.
2) Disk sub-systems - capable of producing large amounts of storage
capacity, with a high level of availability, data reliability, and
performance
3) Operating System - capable of dealing properly with your hardware,
reliable, etc.
4) Staff - who understand how all the pieces fit together, how they
work, how to fix them when they break in weird and interesting ways,
especially during an emergency

If you have any other questions that you would like answered, I'm sure
somebody on this list would be willing to give up their valuable time to
give you some free advice, as long as you remember your manners, and
sometimes even when you forget them.

Regards,
Adam



------------------------------

Message: 66
Date: Mon, 04 Apr 2011 21:49:55 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] BackupPC_dump hangs with: .: size
    doesn't    match (12288 vs 17592185913344)
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Holger Parplies wrote at about 18:59:00 +0200 on Monday, April 4, 2011:
> Hi,
>
> John Rouillard wrote on 2011-03-31 15:20:23 +0000 [[BackupPC-users] BackupPC_dump hangs with: .: size doesn't match (12288 vs 17592185913344)]:
> > [...]
> > I get a bunch of output (the share being backed up /etc on a centos
> > 5.5. box) which ends with:
> >
> >  attribSet: dir=f%2fetc exists
> >  attribSet(dir=f%2fetc, file=zshrc, size=640, placeholder=1)
> >  Starting file 0 (.), blkCnt=134217728, blkSize=131072, remainder=0
> >  .: size doesn't match (12288 vs 17592185913344)
>
> at first glance, this would appear to be an indication of something I have
> been suspecting for a long time: corruption - caused by whatever - in an
> attrib file leading to the SIGALRM abort. If I remember correctly, someone
> (presumably File::RsyncP) would ordinarily try to allocate space for the file
> (though that doesn't seem to make sense, so I probably remember incorrectly)
> and either gives up when that fails or refrains from trying in the first
> place, because the amount is obviously insane.
>
> The weird thing in this case is that we're seeing a directory. There is
> absolutely no reason (unless I am missing something) to worry about the
> *size* of a directory. The value is absolutely file system dependant and
> not even necessarily an indication of the *current* amount of entries in
> the directory. In any case, you restore the contents of a directory by
> restoring the files in it, and you (incrementally) backup a directory by
> determining if any files have changed or been added. The *size* of a
> directory will not help with that decision.
>

I too don't understand why rsync would try to treat a directory '.'
like a file. I mean, block count and block size aren't meaningful when
rsyncing a directory (at least according to my understanding of the
rsync algorithm).
Could it possibly be a character encoding issue, where there is some
file (maybe with spaces, non-printable characters, or some
non-recognized font) that rsync ends up seeing as '.' rather than as
the true file name? For example, I know Windows and Linux have different
concepts of what characters are allowed in file names, and that cygwin
tries to do its best to make sense of the differences but doesn't always
succeed.

> Then again, the problematic file (or attrib file entry) may or may
> not be the last one reported (maybe it's the first one not
> reported?).

I would suggest some trial-and-error.
What if you move the directory somewhere else and try a new backup
(using say a new machine alias)?
Can you do a binary-search to narrow down the error?
In particular, is the problem only when doing an incremental? Does it
work if you force a full? Does it work if you make this the first
backup on a new machine alias?
>
> > [...] I have had similar hanging issues before
> > but usully scheduling a full backup or removing a prior backup or two
> > in the chain will let things work again. However I would like to
> > actually get this fixed this time around as it seems to be occurring
> > more often recently (on different backuppc servers and against
> > different hosts).
>
> I agree with you there. This is probably one of the most frustrating problems
> to be encountered with BackupPC, because there is no obvious cause and nothing
> obvious to correct (throwing away part of your backup history for no better
> reason than "after that it works again" is somewhat unsatisfactory).
>
> The reason not to investigate this matter any further so far seems to have
> been that it is usually "solved" by removing the reference backup (I believe
> simply running a full backup will encounter the same problem again), because
> people tend to want to get their backups back up and running. There are two
> things to think about here:
>
> 1.) Why does attrib file corruption cause the backup to hang? Is there no
>    sane(r) way to deal with the situation?
> 2.) How does the attrib file get corrupted in the first place?
>
> Presuming it *is* attrib file corruption. Could you please send me a copy of
> the attrib file off-list?
>
> > If I dump the root attrib file (where /etc starts) for either last
> > successful or the current (partial) failing backup I see:
> >
> >  '/etc' => {
> >    'uid' => 0,
> >    'mtime' => 1300766985,
> >    'mode' => 16877,
> >    'size' => 12288,
> >    'sizeDiv4GB' => 0,
> >    'type' => 5,
> >    'gid' => 0,
> >    'sizeMod4GB' => 12288
> >  },
>
> I would expect the interesting part to be the '.' entry in the attrib file for
> '/etc' (f%2fetc/attrib of the last successful backup, that is). And I would be
> curious about how the attrib file was decoded, because I'd implement decoding
> differently from how BackupPC does, though BackupPC's method does appear to be
> well tested ;-).
>
> > [...] the last few lines of strace show:
> >
> > [...]
> >  19368 15:00:38.199634 select(1, [0], [], NULL, {60, 0}) = 0 (Timeout)
> >    <59.994597>
>
> I believe this is the result of File::RsyncP having given up on the transfer
> because of either a failing malloc() or a suppressed malloc(). I'll have to
> find some time to check in more detail. I vaguely remember it was a rather
> complicated matter, and there was never really enough evidence to support that
> corrupted attrib files were really the cause. But I sure would like to get to
> the bottom of this :-).
>
> Regards,
> Holger
>
> ------------------------------------------------------------------------------
> Create and publish websites with WebMatrix
> Use the most popular FREE web apps or write code yourself;
> WebMatrix provides all the features you need to develop and
> publish your website. http://p.sf.net/sfu/ms-webmatrix-sf
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users AT lists.sourceforge DOT net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/



------------------------------

Message: 67
Date: Mon, 04 Apr 2011 22:01:40 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Jake Wilson wrote at about 17:22:58 -0600 on Monday, April 4, 2011:
> On our server, the default archive directory where backups are stored is
>
> /var/lib/backuppc/
>
> So for a server called webserver, the backup paths look like this:
>
> /var/lib/backuppc/pc/sonny/6
>
> Is it possible to change this path?  We have two separate large raid drives
> in our backup server.  For simplicity sake, I'll call them /mount/storage1and
> /mount/storage2.  We can't have one big raid drive because of our raid
> controller size limitations unfortunately.  So I have two big ones to
> dedicate to backups.
>
> Anyways, back to the question.  I can easily remount my raid drives or
> symlink my raid drives to somewhere in the /var/lib/backuppc directory if I
> need to, but how do I change the archive path for each client and tell it
> specifically what path to save to?
>
> Under the backuppc config, I see *$Conf{TopDir}* but that doesn't seem to
> indicate that is where the archives are stored necessarily.  And it doesn't
> seem to be adjustable on a per-host basis.
>
> I would imagine that I'm simply just missing something here in the config.
>  Surely BackupPC is able to control where an archive is stored and I'm just
> not seeing it.
>
> Jake Wilson

It is pretty explicitly documented that both the pool/cpool and the pc
backup hierarchy must be on the same filesystem (no symbolic links)
and both are fixed to lie in $TopDir. You can symlink to $TopDir
itself or change it at install time (at least in 3.1.0). Again,
backups cannot be split across filesystems, since BackupPC is built
around hard links, which in *nix are limited to a single file system.

If you need to split backups across multiple filesystems, then you
will need to run multiple instances of BackupPC, including multiple
(and potentially redundant) pools.



------------------------------

Message: 68
Date: Mon, 04 Apr 2011 22:04:07 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Jake Wilson wrote at about 18:30:27 -0600 on Monday, April 4, 2011:
> Really?  You can't specify where archives are stored?  What if your drive isn't big enough to handle all the server backups you need to do?  This seems like a gigantic limitation unless I'm missing something...
>
> I read about how BackupPC is an enterprise-ready backup solution but I seriously don't see how that's possible if you can't even tell it where to backup a pool...

BackupPC's claim to fame is its ability to de-duplicate repeated files
by hard-linking to a common pool. Since hard links can only occur on a
single file system, backups are limited to a single file system.

It's not that big a limitation if you use a modern filesystem (with
exabyte or more size potentials) plus/minus LVM to allow the
filesystem to extend across multiple physical disks.
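A quick shell sketch of why de-duplication ties the pool to one filesystem: a hard link is just a second name for the same inode, so the pool copy and the per-host copy cost the space of one file, and that link only works within a single filesystem (all paths below are temporary stand-ins):

```shell
# Hard links are second names for the same inode: no extra data is stored,
# which is how BackupPC's pool de-duplicates. They cannot cross filesystems;
# an ln across mount points fails with EXDEV ("Invalid cross-device link").
tmp=$(mktemp -d)
echo "pooled file contents" > "$tmp/pool_file"
ln "$tmp/pool_file" "$tmp/pc_copy"           # second name, same inode
stat -c %h "$tmp/pool_file"                  # link count: 2
stat -c %i "$tmp/pool_file" "$tmp/pc_copy"   # identical inode numbers
rm -r "$tmp"
```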



------------------------------

Message: 69
Date: Mon, 4 Apr 2011 20:53:48 -0600
From: Jake Wilson <jake.wilson AT answeron DOT com>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <BANLkTina200kV76L2sdBL-xcmSOeBPYRnQ AT mail.gmail DOT com>
Content-Type: text/plain; charset="iso-8859-1"

Thanks for all the helpful information.  I understand now that with the
hard links you can't span multiple file systems.  Makes perfect sense.

So are you guys suggesting that I set up all my raid disks (or get rid of
the raid completely and just use the disks) as one LVM volume?  What do I do
then?  Do I set up the LVM volume at /var/lib/backuppc (or set a symlink to
it) so that backuppc and all of its pools are on the LVM?  Do I have to
reinstall backuppc to make this work, or can I just move it?

Are there any pages or wikis or tutorials that discuss this type of setup?
I would imagine there would be as I'm sure I'm not the only one who has
dealt with this.

Jake Wilson


On Mon, Apr 4, 2011 at 8:04 PM, Jeffrey J. Kosowsky
<backuppc AT kosowsky DOT org>wrote:

> Jake Wilson wrote at about 18:30:27 -0600 on Monday, April 4, 2011:
>  > Really?  You can't specify where archives are stored?  What if your
> drive isn't big enough to handle all the server backups you need to do?
>  This seems like a gigantic limitation unless I'm missing something...
>  >
>  > I read about how BackupPC is an enterprise-ready backup solution but I
> seriously don't see how that's possible if you can't even tell it where to
> backup a pool...
>
> BackupPC's claim to fame is its ability to de-duplicate repeated files
> by hard-linking to a common pool. Since hard links can only occur on a
> single file system, backups are limited to a single file system.
>
> It's not that big a limitation if you use a modern filesystem (with
> exabyte or more size potentials) plus/minus LVM to allow the
> filesystem to extend across multiple physical disks.
>
>
> ------------------------------------------------------------------------------
> Xperia(TM) PLAY
> It's a major breakthrough. An authentic gaming
> smartphone on the nation's most reliable network.
> And it wants your games.
> http://p.sf.net/sfu/verizon-sfdev
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users AT lists.sourceforge DOT net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 70
Date: Tue, 05 Apr 2011 00:56:52 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Change archive directory on a per-host
    basis?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Jake Wilson wrote at about 20:53:48 -0600 on Monday, April 4, 2011:
> Thanks for all the helpful information.  I understand now with the hardlinks
> that you can't span multiple file systems.  Makes perfect sense.
>
> So are you guys suggesting that I setup all my raid disks (or get rid of the
> raid completely and just use the disks) as one LVM? 
I use LVM on top of RAID1. RAID1 gives me redundancy while LVM gives
me expandability. Also, LVM allows for 'snapshots', so that you can do a
filesystem-level archive of your backups without having to shut down
BackupPC.

> What do I do then?  Do
> I setup the LVM to be at /var/lib/backuppc (or set a symlink to it) so that
> backuppc and all of it's pools are on the LVM?
I use a symlink to /var/lib/backuppc.
> Do I have to reinstall
> backuppc to make this work or can I just move it?
You shouldn't need to reinstall if you symlink.
Note that moving (i.e. copying) a large existing BackupPC archive is
challenging due to the large number of hard links (which is why LVM
snapshots are helpful). However, if you haven't started backing up yet,
then just create the symlink and copy over the top-level directories (trash,
pool, cpool, pc).
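The setup described above can be sketched as follows. The LVM commands are shown as comments because they need root and real devices; /dev/md0, /dev/md1, and /mnt/bigpool are hypothetical names, not anything from this thread. The symlink step is demonstrated with temporary paths so the sketch is safe to run:

```shell
# Sketch of the advice above, under assumptions: /dev/md0 and /dev/md1 are
# hypothetical RAID1 arrays, and /mnt/bigpool is a hypothetical mount point.
# The LVM part needs root and real devices, so it is shown as comments:
#   pvcreate /dev/md0 /dev/md1
#   vgcreate backupvg /dev/md0 /dev/md1
#   lvcreate -l 100%FREE -n backuplv backupvg
#   mkfs.ext4 /dev/backupvg/backuplv
#   mount /dev/backupvg/backuplv /mnt/bigpool
# Then point /var/lib/backuppc at the big volume via a symlink.
# Demonstrated here with temp paths so the sketch is safe to run:
storage=$(mktemp -d)                        # stand-in for /mnt/bigpool
mkdir -p "$storage/trash" "$storage/pool" "$storage/cpool" "$storage/pc"
link=$(mktemp -u)                           # stand-in for /var/lib/backuppc
ln -s "$storage" "$link"
readlink "$link"
rm "$link"; rm -r "$storage"
```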


>
> Are there any pages or wikis or tutorials that discuss this type of setup?
>  I would imagine there would be as I'm sure I'm not the only one who has
> dealt with this.

Check the documentation and the Wiki - I'm sure this is all covered.



------------------------------

Message: 71
Date: Tue, 5 Apr 2011 09:02:22 -0400
From: "Lee A. Connell" <lee AT ammocomp DOT com>
Subject: [BackupPC-users] Empty directories for backups
To: <backuppc-users AT lists.sourceforge DOT net>
Message-ID:
    <FE67AF929478A34D9EDFF72B9E3FA5217D4C6A AT acspdc.ammocomp DOT com>
Content-Type: text/plain; charset="us-ascii"

I am having a strange issue on only one of my backed-up hosts. I have 12
consecutive days of reported good backups, but when I go to click on one
of the backup numbers it is blank. I checked the directory path
within /var/lib/backuppc/pc/host and the directory does not show there
either.  Some backup numbers show fine.  Why is this happening, especially
when BackupPC is logging that the backups are successful, reporting the
number of files, size, etc.?



It seems something is deleting those directories; would the nightly cleanup
be doing that?



Lee Connell - Network Engineer

Ammonoosuc Computer Services, Inc.

P: 603-444-3937

F: 603-444-2762



-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 72
Date: Tue, 05 Apr 2011 11:04:29 -0400
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
Subject: Re: [BackupPC-users] Empty directories for backups
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <[email protected]>
Content-Type: text/plain; charset=us-ascii

Lee A. Connell wrote at about 09:02:22 -0400 on Tuesday, April 5, 2011:
> I am having a strange issue on only one of my backed up hosts. I have 12
> consecutive days of reported good backups, but when I go to click on one
> of the backup numbers it is blank. I checked in the directory path
> within /var/lib/backuppc/pc/host  and the directory does not show there
> either.  Some backups numbers show fine. Why is this happening
> especially when backuppc pc is logging that the backups are successful,
> reporting the # of files and size etc...

A little more detail would be helpful.
Click where?
Which backups? (fulls?, incrementals?, some?, all?, oldest?, newest?)
Which directories? (the entire tree? part of it?)
What are the settings for the variables that control expiry?

It's really hard to be helpful if you don't give any of the relevant
details - we can't read minds...


>
> It seems something is deleting those directories, would nightly cleanup
> be doing that?

Nope - doesn't touch the pc tree... only cleans the pool.



------------------------------

Message: 73
Date: Tue, 05 Apr 2011 11:20:02 -0400
From: Neal Becker <ndbecker2 AT gmail DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <infbv3$nqa$1 AT dough.gmane DOT org>
Content-Type: text/plain; charset="ISO-8859-1"

Matthias Meyer wrote:

> Neal Becker wrote:
>
>> Carl Wilhelm Soderstrom wrote:
>>
>>> On 04/04 07:40 , Neal Becker wrote:
>>>> Are there instructions for using backuppc for bare metal restore?
>>>
>>> Probably somewhere. It's fairly straightforward tho.
>>>
>>> Boot the bare-metal machine with Knoppix (or your choice of rescue
>>> disks). Partition and format the drives.
>>> Mount the partitions in the arrangement you want. (you'll have to make
>>> some directories in order to have mount points).
>>>
>>> Set up a listening netcat process to pipe to tar. will look something
>>> like: netcat -l -p 8888|tar -xpv -C /path/to/mounted/empty/filesystems
>>>
>>> on the BackupPC server, become the backuppc user
>>> (Presuming it's a Debian box) run
>>> '/usr/share/backuppc/bin/BackupPC_tarCreate -n <backup number> -h
>>> <hostname> -s <sharename> <path to files to be restored> | netcat
>>> <bare-metal machine> 8888'
>>>
>>> the 'backup number' can be '-1' for the most recent version.
>>>
>>> An example of the BackupPC_tarCreate command might be:
>>> /usr/share/backuppc/bin/BackupPC_tarCreate -n -1 -h target.example.com -s
>>> / / | netcat target.example.com 8888
>>>
>>
>> Thanks.
>>
>> Would there be a similar procedure using rsync?
>>
> rsync wouldn't be a good solution in this szenario.
> You don't have any data on the client. So rsync wouldn't find anything to
> compare with.
> Because that - other solutions, like tar, are smarter because faster.
>
> br
> Matthias

Interesting.  I thought that rsync is no worse than using e.g., tar in the case
of nothing to compare to.  Do you think rsync is actually worse (slower)?




------------------------------

Message: 74
Date: Tue, 05 Apr 2011 16:45:58 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: "General list for user discussion, questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1302018358.3058.13.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Tue, 2011-04-05 at 11:20 -0400, Neal Becker wrote:
> Interesting.  I thought that rsync is no worse than using e.g., tar in the case
> of nothing to compare to.  Do you think rsync is actually worse (slower)?

It probably is slightly slower, especially at start. However, if there
is little in the target directory it will be reasonably fast.

The question is: for bare metal restores, do you care if it completes 5%
faster? Is that worth configuring tar methods if you already use rsync?

For me, the answer is no. Rsync restores, even of 200 GB hosts, have
been nearly as fast as disk and network limitations will allow.

Regards,
Tyler

--
"I respect you too much to respect your ridiculous ideas."
  -- Johann Hari




------------------------------

Message: 75
Date: Tue, 5 Apr 2011 12:24:20 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] bare metal restore?
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110405122420.B25800 AT real-time DOT com>
Content-Type: text/plain; charset=us-ascii

On 04/05 11:20 , Neal Becker wrote:
> Interesting.  I thought that rsync is no worse than using e.g., tar in the case
> of nothing to compare to.  Do you think rsync is actually worse (slower)?

I've seen cases where rsync was 2x-4x slower than tar, when there was no
data to compare to.

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 76
Date: Tue, 05 Apr 2011 12:48:51 -0700
From: yilam <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  [newb] ssh rsync with restricted
    permissions
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1302032931.m2f.352651 AT www.backupcentral DOT com>

Can really nobody help me out, or should I start a new subject?

Thanks

tom

+----------------------------------------------------------------------
|This was sent by sneaky56 AT gmx DOT net via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------





------------------------------

Message: 77
Date: Tue, 5 Apr 2011 16:02:29 -0400
From: Steve <leperas AT gmail DOT com>
Subject: Re: [BackupPC-users] [newb] ssh rsync with restricted
    permissions
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <BANLkTi=kiuctr-ofG5PfO_AWfiKwvre44w AT mail.gmail DOT com>
Content-Type: text/plain; charset=ISO-8859-1

I'm deliberately top-posting to ask: did you set up everything the
"standard" way and get it working?  If not, try that first and then
start changing things.  The above (below) suggestion may simply be
failing due to some other setup issue, not the security issue that
concerns you.  And I am not expert enough to diagnose much at all, and
certainly not a non-standard setup :)

> Can really nobody help me out, or should I start a new subject?

Uh, there were 6-7 suggestions/replies.  We're trying.

A.

On Wed, Mar 30, 2011 at 5:45 PM, yilam <backuppc-forum AT backupcentral DOT com> wrote:
> Well I tried your setup (need I say I am new to backuppc?) with on the client:
>
> * /etc/sudoers:
> Cmnd_Alias      BACKUP = /usr/bin/rsync --server --daemon *
> buclient        my-host = NOPASSWD: BACKUP
>
> * ~buclient/.ssh/authorized_keys2
> no-pty,no-agent-forwarding,no-X11-forwarding,no-port-forwarding,command="sudo /usr/bin/rsync --server --daemon --config=/etc/rsyncd.conf ." ssh-rsa AAAAB....
>
> * /etc/rsyncd.conf
> uid = root
> pid file = /var/lib/buclient/run/rsyncd.pid
> use chroot = no
> read _only_ = true
> transfer logging = true
> log format = %h %o %f %l %b
> syslog facility = local5
> log file = /var/lib/buclient/log/rsyncd.log
> [fullbackup]
>        path = /var/log/exim4
>        comment = backup
>
> >From the server (backuppc machine), I can do the following:
>
> /usr/bin/rsync -v -a -e "/usr/bin/ssh -v -q -x -2 -l buclient -i /var/lib/backuppc/.ssh/id_rsa" [email protected]::fullbackup /tmp/TEST
>
> However, I have not found the correct $RsyncClientCmd to use, for backuppc to work. The following value
> $Conf{RsyncClientCmd} = '$sshPath -q -x -l buclient -i /var/lib/backuppc/.ssh/id_rsa.backuppc_casiopei $host $rsyncPath $argList+';
>
> Gives me (using /usr/share/backuppc/bin/BackupPC_dump -v -f 192.168.1.1):
> [...]
> full backup started for directory fullbackup
> started full dump, share=fullbackup
> Error connecting to rsync daemon at 192.168.1.1:22: unexpected response SSH-2.0-OpenSSH_5.1p1 Debian-5
>
> Got fatal error during xfer (unexpected response SSH-2.0-OpenSSH_5.1p1 Debian-5
> )
> [...]
>
> And on the client, I have, in /var/log/auth.log:
> Mar 30 23:35:22 my-host sshd[1389]: Bad protocol version identification '@RSYNCD: 28' from 192.168.1.22
>
> Any ideas on how to get this to work (BTW, server is Debian/Squeeze, client is Debian/Lenny).
>
> Thank you
>
> tom
>
> +----------------------------------------------------------------------
> |This was sent by sneaky56 AT gmx DOT net via Backup Central.
> |Forward SPAM to abuse AT backupcentral DOT com.
> +----------------------------------------------------------------------
>
>
>
> ------------------------------------------------------------------------------
> Create and publish websites with WebMatrix
> Use the most popular FREE web apps or write code yourself;
> WebMatrix provides all the features you need to develop and
> publish your website. http://p.sf.net/sfu/ms-webmatrix-sf
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users AT lists.sourceforge DOT net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>



--
"It turns out there is considerable overlap between the smartest bears
and the dumbest tourists."
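
The paired errors quoted above ("unexpected response SSH-2.0-OpenSSH" on the server, "Bad protocol version identification '@RSYNCD: 28'" on the client) suggest BackupPC is speaking the rsyncd daemon protocol to the ssh port. A hedged sketch of the ssh-based configuration instead (BackupPC 3.x option names; the key path and share are taken from the quoted setup, and this is a sketch, not a verified fix):

```perl
# Sketch: use the ssh transport ('rsync') rather than the daemon
# protocol ('rsyncd'); $Conf{RsyncClientCmd} is only used with 'rsync'.
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = '/var/log/exim4';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l buclient'
                      . ' -i /var/lib/backuppc/.ssh/id_rsa $host $rsyncPath $argList+';
```

The forced command in authorized_keys2 would then likely also need to drop `--daemon` (plain `rsync --server` mode), since the server side no longer expects daemon-mode negotiation; that detail is an assumption worth testing.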



------------------------------

Message: 78
Date: Tue, 05 Apr 2011 22:22:28 +0200
From: Grégoire COUTANT <gregoire.coutant AT gmail DOT com>
Subject: [BackupPC-users] Managing connections to the administration
    interface
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D9B7A04.2070601 AT gmail DOT com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi all,

I cannot find anything in the documentation about restricting access to
the administration interface.
It would be useful for some users to be able to view backups without being
able to restore a server.

Currently I use apache authentication, but it's a bit unwieldy.

Is this possible ?

Thanks

Greg



------------------------------

Message: 79
Date: Tue, 5 Apr 2011 15:37:37 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] Managing connections to the
    administration    interface
To: gregoire.coutant AT gmail DOT com, "General list for user discussion,
    questions and support" <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <20110405153737.C25800 AT real-time DOT com>
Content-Type: text/plain; charset=iso-8859-1

On 04/05 10:22 , Grégoire COUTANT wrote:
> I can not find in the documentation the possibility of restricting
> access to the administration.
> It may be useful for some users to access the backup but without being
> able to restore a server.

in /etc/backuppc/hosts there are examples like this:

#farside    0      craig  jill,jeff    # <--- example static IP host entry
#larson    1      bill                  # <--- example DHCP host entry

In the first case, craig, jill, and jeff would have administrative control
over 'farside'. In the second case, only 'bill' would have control of
'larson'.

If a user does not have control over a host, they will not be able to see it
in the administrative web interface.

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 80
Date: Tue, 05 Apr 2011 16:55:02 -0400
From: Bowie Bailey <Bowie_Bailey AT BUC DOT com>
Subject: Re: [BackupPC-users] Managing connections to the
    administration    interface
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D9B81A6.7080309 AT BUC DOT com>
Content-Type: text/plain; charset=iso-8859-1

On 4/5/2011 4:37 PM, Carl Wilhelm Soderstrom wrote:
> On 04/05 10:22 , Grégoire COUTANT wrote:
>> I can not find in the documentation the possibility of restricting
>> access to the administration.
>> It may be useful for some users to access the backup but without being
>> able to restore a server.
> in /etc/backuppc/hosts there are examples like this:
>
> #farside    0      craig  jill,jeff    # <--- example static IP host entry
> #larson    1      bill                  # <--- example DHCP host entry
>
> In the first case, craig, jill, and jeff would have administrative control
> over 'farside'. In the second case, only 'bill' would have control of
> 'larson'.
>
> If a user does not have control over a host, they will not be able to see it
> in the administrative web interface.

I think the question should have been:

"Is it possible to allow a user to view the backup information without
being able to start a restore job?"

AFAIK, that is not possible.  Access to a host in the web interface is
all or nothing.

--
Bowie



------------------------------

Message: 81
Date: Wed, 6 Apr 2011 10:11:59 +1000
From: "Mark Wass" <mark AT market-analyst DOT com>
Subject: [BackupPC-users] Question about $Conf{DumpPostUserCmd}
To: <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <004501cbf3ef$3f20f950$bd62ebf0$@com>
Content-Type: text/plain; charset="us-ascii"

Hi Guys



When I use $Conf{DumpPostUserCmd} to execute the following script backuppc
does not know that the script is finished running and so backuppc continues
to think the backup has not finished and eventually times out as a failed
backup.



Line from config.pl

$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host
/backup/scripts/alfrescostart.sh';



Script I'm running - alfrescostart.sh



#!/bin/bash
# This script starts up alfresco after a cold backup by backuppc
/etc/init.d/alfresco start
echo "ALFRESCO HAS STARTED"
exit 0



Can anyone tell me why backuppc does not know when this script is
finished? Alfresco takes about 15 seconds to finish executing the init.d
script.



Thanks



Mark



-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 82
Date: Wed, 6 Apr 2011 10:32:35 +1000
From: "Mark Wass" <mark AT market-analyst DOT com>
Subject: Re: [BackupPC-users] Question about $Conf{DumpPostUserCmd}
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <005301cbf3f2$1fde03f0$5f9a0bd0$@com>
Content-Type: text/plain; charset="us-ascii"

Hi  Guys



Figured this one out by myself: all I did was send the output of the init.d
script to /dev/null and backuppc was happy.



#!/bin/bash
# This script starts up alfresco after a cold backup by backuppc
/etc/init.d/alfresco start > /dev/null 2>&1
sleep 20
echo "ALFRESCO HAS STARTED"
exit 0
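
The redirect likely helps because ssh does not exit until the remote command's stdout and stderr are closed, so a long-running init script that inherits them keeps the DumpPostUserCmd ssh session (and thus BackupPC) waiting. A minimal local sketch of the pattern (the backgrounded subshell stands in for the slow service start; this is an illustration, not BackupPC's documented behavior):

```shell
# The hook's own output still reaches the caller; the slow child's
# descriptors point at /dev/null, so nothing holds the session open.
start_service() {
  ( sleep 1; echo "service up" ) > /dev/null 2>&1 &  # stand-in for the init.d script
}
start_service
echo "ALFRESCO HAS STARTED"
```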



-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 83
Date: Tue, 05 Apr 2011 23:04:09 -0700
From: Saturn2888 <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1302069849.m2f.352688 AT www.backupcentral DOT com>

How would I cpan install this one?

+----------------------------------------------------------------------
|This was sent by Saturn2888 AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------





------------------------------

Message: 84
Date: Wed, 6 Apr 2011 08:46:17 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: [BackupPC-users] Making errors in log stand out
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <002f01cbf426$54c0d7d0$fe428770$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain; charset="iso-8859-1"

Hi all,

Is it possible to make any errors on the BPC-log page stand out more, like eg
use red characters instead of the standard black? If yes, how would I go about
doing that?

Thanks.

--
BW,
    Sorin
-----------------------------------------------------------
# Sorin Srbu        [Sysadmin, Systems Engineer]
# Dept of Medicinal Chemistry,    Phone: +46 (0)18-4714482 >3 rings> GSM
# Div of Org Pharm Chem,    Mobile: +46 (0)701-718023
# Box 574, Uppsala University,    Fax: +46 (0)18-4714482
# SE-751 23 Uppsala, Sweden    Visit: BMC, Husargatan 3, D5:512b
#            Web: http://www.orgfarm.uu.se
-----------------------------------------------------------
# ()  ASCII ribbon campaign - Against html E-mail
# /\  http://www.asciiribbon.org
#
# MotD follows:
# Geeky it-haiku #04: Looking at my screen/Waiting for the connection/Really
makes me sigh.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 2135 bytes
Desc: not available

------------------------------

Message: 85
Date: Wed, 06 Apr 2011 09:24:05 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: sorin.srbu AT orgfarm.uu DOT se, "General list for user discussion,
    questions    and support" <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1302078245.2949.1.camel@baal>
Content-Type: text/plain; charset="UTF-8"

Edit the interface .pl/cgi files, and add <font color="red"></font> tags
around any error message output.

Regards,
Tyler

On Wed, 2011-04-06 at 08:46 +0200, Sorin Srbu wrote:
> Hi all,
>
> Is it possible to make any errors on the BPC-log page stand out more, like eg
> use red characters instead of the standard black? If yes, how would I go about
> doing that?
>
> Thanks.
>
> ------------------------------------------------------------------------------
> Xperia(TM) PLAY
> It's a major breakthrough. An authentic gaming
> smartphone on the nation's most reliable network.
> And it wants your games.
> http://p.sf.net/sfu/verizon-sfdev
> _______________________________________________ BackupPC-users mailing list BackupPC-users AT lists.sourceforge DOT net List: https://lists.sourceforge.net/lists/listinfo/backuppc-users Wiki: http://backuppc.wiki.sourceforge.net Project: http://backuppc.sourceforge.net/

--
"The intellectual is constantly betrayed by his vanity. Godlike he
blandly assumes that he can express everything in words; whereas the
things one loves, lives, and dies for are not, in the last analysis
completely expressible in words."
  -- Anne Morrow Lindbergh




------------------------------

Message: 86
Date: Wed, 6 Apr 2011 10:58:14 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <004301cbf438$c3b29e00$4b17da00$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain; charset="utf-8"

>-----Original Message-----
>From: Tyler J. Wagner [mailto:tyler AT tolaris DOT com]
>Sent: Wednesday, April 06, 2011 10:24 AM
>To: sorin.srbu AT orgfarm.uu DOT se; General list for user discussion, questions
>and support
>Subject: Re: [BackupPC-users] Making errors in log stand out
>
>Edit the interface .pl/cgi files, and add <font color="red"></font> tags
>around any error message output.

Thanks.

Are we speaking of the file /usr/share/backuppc/cgi-bin/BackupPC_Admin, referred
to in /etc/httpd/conf.d/backuppc.conf?

I don't see any .pl-files, except for those customized machine config-files that
do not use the default backup config setup.

This is on CentOS v5.5 i386, BTW.
--
/Sorin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 2135 bytes
Desc: not available

------------------------------

Message: 87
Date: Wed, 06 Apr 2011 10:10:40 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: sorin.srbu AT orgfarm.uu DOT se, "General list for user discussion,
    questions    and support" <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1302081040.2949.9.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Wed, 2011-04-06 at 10:58 +0200, Sorin Srbu wrote:
> Are we speaking of the file /usr/share/backuppc/cgi-bin/BackupPC_Admin, referred
> to in /etc/httpd/conf.d/backuppc.conf?

Sorry. Try these:

/usr/share/backuppc/lib/BackupPC/CGI/
/usr/share/backuppc/lib/BackupPC/Lang/

CGI is the better place, but for your own needs you may find editing
Lang/en.pm is faster.

If you want to contribute, you could always create se.pm, although I
don't think color edits there would be accepted.

Regards,
Tyler

--
"No one can terrorize a whole nation, unless we are all his accomplices."
  -- Edward R. Murrow




------------------------------

Message: 88
Date: Wed, 6 Apr 2011 11:23:02 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <005301cbf43c$3a54eb50$aefec1f0$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain; charset="utf-8"

>-----Original Message-----
>From: Tyler J. Wagner [mailto:tyler AT tolaris DOT com]
>Sent: Wednesday, April 06, 2011 11:11 AM
>To: sorin.srbu AT orgfarm.uu DOT se; General list for user discussion, questions
>and support
>Subject: Re: [BackupPC-users] Making errors in log stand out
>
>> Are we speaking of the file /usr/share/backuppc/cgi-bin/BackupPC_Admin,
>> referred
>> to in /etc/httpd/conf.d/backuppc.conf?
>
>Sorry. Try these:
>
>/usr/share/backuppc/lib/BackupPC/CGI/
>/usr/share/backuppc/lib/BackupPC/Lang/

Hmm... I don't have those folders. Running a locate on en.pm and going for lunch
in the meantime. 8-)

Thanks.
--
/Sorin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 2135 bytes
Desc: not available

------------------------------

Message: 89
Date: Wed, 6 Apr 2011 12:25:17 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: <sorin.srbu AT orgfarm.uu DOT se>, "'General list for user discussion,
    questions and support'" <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <006701cbf444$ec7ab7d0$c5702770$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain; charset="utf-8"

>-----Original Message-----
>From: Sorin Srbu [mailto:sorin.srbu AT orgfarm.uu DOT se]
>Sent: Wednesday, April 06, 2011 11:23 AM
>To: 'General list for user discussion, questions and support'
>Subject: Re: [BackupPC-users] Making errors in log stand out
>
>>-----Original Message-----
>>From: Tyler J. Wagner [mailto:tyler AT tolaris DOT com]
>>Sent: Wednesday, April 06, 2011 11:11 AM
>>To: sorin.srbu AT orgfarm.uu DOT se; General list for user discussion, questions
>>and support
>>Subject: Re: [BackupPC-users] Making errors in log stand out
>>
>>> Are we speaking of the file /usr/share/backuppc/cgi-bin/BackupPC_Admin,
>>> referred
>>> to in /etc/httpd/conf.d/backuppc.conf?
>>
>>Sorry. Try these:
>>
>>/usr/share/backuppc/lib/BackupPC/CGI/
>>/usr/share/backuppc/lib/BackupPC/Lang/
>
>Hmm... I don't have those folders. Running a locate on en.pm and going for lunch
>in the meantime. 8-)

Found it in /usr/lib/BackupPC/Lang/en.pm. This was a standard install, so I guess
this path is the default for installing BPC.

--
/Sorin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 2135 bytes
Desc: not available

------------------------------

Message: 90
Date: Wed, 6 Apr 2011 07:49:31 -0500
From: Carl Wilhelm Soderstrom <chrome AT real-time DOT com>
Subject: Re: [BackupPC-users] Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <20110406074931.E25800 AT real-time DOT com>
Content-Type: text/plain; charset=us-ascii

On 04/05 11:04 , Saturn2888 wrote:
> How would I cpan install this one?

If at all possible I would avoid cpan. It works well enough for developers,
but any future/other administrators of the machine will hate you for using
it.

If you install all your software with packages (dpkg/rpm), upgrading and
checking for patches is generally a matter of 'apt-get update; apt-get
upgrade' (or whatever the command is in the tool you use other than 'apt').
If you install software from source, there's no package management.
Therefore you won't know if there's a security patch for that piece of
software; you won't know what you have installed (which makes it harder to
replicate your configuration); and you might end up clobbering software
installed from source with software installed from a package (which leads to
various strange behaviors at times).

So if at all possible, for the sake of your future sanity and the sanity of
anyone else who has to administrate the machine in question, please install
from a package. If it's absolutely necessary to get a version of some
software that isn't packaged, it's not that hard to learn how to build a
package for yourself.

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



------------------------------

Message: 91
Date: Wed, 6 Apr 2011 14:54:53 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <000f01cbf459$d30afb10$7920f130$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain; charset="utf-8"

>-----Original Message-----
>From: Sorin Srbu [mailto:sorin.srbu AT orgfarm.uu DOT se]
>Sent: Wednesday, April 06, 2011 12:25 PM
>To: sorin.srbu AT orgfarm.uu DOT se; 'General list for user discussion, questions
>and support'
>Subject: RE: [BackupPC-users] Making errors in log stand out
>
>>>> Are we speaking of the file /usr/share/backuppc/cgi-bin/BackupPC_Admin,
>>>> referred to in /etc/httpd/conf.d/backuppc.conf?
>>>
>>>Sorry. Try these:
>>>
>>>/usr/share/backuppc/lib/BackupPC/CGI/
>>>/usr/share/backuppc/lib/BackupPC/Lang/
>>
>>Hmm... I don't have those folders. Running a locate on en.pm and going for lunch
>>in the meantime. 8-)
>
>Found it in /usr/lib/BackupPC/Lang/en.pm. This was a standard install, so I guess
>this path is the default for installing BPC.

Was looking into changing the log message "2011-02-09 22:01:48 Backup failed on
mach059.x.y.z (Child exited prematurely)" so I looked for the text "failed" using

    # find -iname "*.pm" | xargs grep -iR "failed" > failed.log.txt

in /usr/lib/BackupPC. Nothing was found, so I went to / and did the same
search. I had somewhat better success there, but nothing that stands out as a
likely suspect. The closest I get is

    ./usr/lib/BackupPC/Lang/en.pm:$Lang{Reason_backup_failed}  = "backup failed";

I'm not quite sure how to add the red HTML tags referenced earlier to this.

Can somebody with better knowledge of this than me help some more?

Thanks.
--
/Sorin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 2135 bytes
Desc: not available

------------------------------

Message: 92
Date: Wed, 6 Apr 2011 15:06:08 +0200
From: Grégoire COUTANT <gregoire.coutant AT gmail DOT com>
Subject: Re: [BackupPC-users] Managing connections to the
    administration    interface
To: "General list for user discussion,    questions and support"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <BANLkTimy_Qgk-yj4c6UtL-qPQZ14HwC_dA AT mail.gmail DOT com>
Content-Type: text/plain; charset=ISO-8859-1

> I think the question should have been:
> "Is it possible to allow a user to view the backup information without
> being able to start a restore job?"

Yes it was the question. (excuse my bad english !)

> AFAIK, that is not possible.  Access to a host in the web interface is
> all or nothing.

Ok thanks

Greg



------------------------------

Message: 93
Date: Wed, 06 Apr 2011 14:06:23 +0100
From: "Tyler J. Wagner" <tyler AT tolaris DOT com>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: sorin.srbu AT orgfarm.uu DOT se, "General list for user discussion,
    questions    and support" <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <1302095183.2949.17.camel@baal>
Content-Type: text/plain; charset="UTF-8"

On Wed, 2011-04-06 at 14:54 +0200, Sorin Srbu wrote:
>     ./usr/lib/BackupPC/Lang/en.pm:$Lang{Reason_backup_failed}  = "backup failed";

See if this works. In /usr/lib/BackupPC/Lang/en.pm:

$Lang{Reason_backup_failed}  = "backup <font color="red">failed</font>";

Do the same for any other backup errors in the lang file. This is a hack
(this doesn't belong in the Lang data), but it's a fast way to do what
you want.

Regards,
Tyler

--
"Anti-vax people will try to avoid discussing how much mercury is in a
vaccine vs a tuna fish sandwich, they just keep repeating a facetiously
simplistic party line that toxins are bad. Thimerosal has nonetheless
been phased out of vaccines to humor poorly informed, panicky people
with poor risk estimation skills."
  -- Soren Ragsdale




------------------------------

Message: 94
Date: Wed, 6 Apr 2011 15:34:34 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <000d01cbf45f$5dd39950$197acbf0$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain;    charset="UTF-8"

>-----Original Message-----
>From: Tyler J. Wagner [mailto:tyler AT tolaris DOT com]
>Sent: Wednesday, April 06, 2011 3:06 PM
>To: sorin.srbu AT orgfarm.uu DOT se; General list for user discussion, questions
>and support
>Subject: Re: [BackupPC-users] Making errors in log stand out
>
>>     ./usr/lib/BackupPC/Lang/en.pm:$Lang{Reason_backup_failed}  ="backup failed";
>
>See if this works. In /usr/lib/BackupPC/Lang/en.pm:
>
>$Lang{Reason_backup_failed}  = "backup <font color="red">failed</font>";
>
>Do the same for any other backup errors in the lang file. This is a hack
>(this doesn't belong in the Lang data), but it's a fast way to do what
>you want.

Not quite there yet.

BackupPC::Lib->new failed
2011-04-06 15:31:30 User root requested backup of machnnn.x.y.z (machnnn.x.y.z)
Bareword found where operator expected at /usr/lib/BackupPC/Lang/en.pm line 1238, near ""<font color="red"
    (Missing operator before red?)
String found where operator expected at /usr/lib/BackupPC/Lang/en.pm line 1238, near "red">backup failed</font>""
Couldn't execute language file /usr/lib/BackupPC/Lang/en.pm: syntax error at /usr/lib/BackupPC/Lang/en.pm line 1238, near ""<font color="red"
BackupPC::Lib->new failed

Seems like BPC doesn't quite like it. If this doesn't belong in usr/lib/BackupPC/Lang/en.pm, where else would it belong better?
--
/Sorin





------------------------------

Message: 95
Date: Wed, 06 Apr 2011 09:43:47 -0400
From: Bowie Bailey <Bowie_Bailey AT BUC DOT com>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <4D9C6E13.1000608 AT BUC DOT com>
Content-Type: text/plain; charset=ISO-8859-1

On 4/6/2011 9:34 AM, Sorin Srbu wrote:
>> -----Original Message-----
>> From: Tyler J. Wagner [mailto:tyler AT tolaris DOT com]
>> Sent: Wednesday, April 06, 2011 3:06 PM
>> To: sorin.srbu AT orgfarm.uu DOT se; General list for user discussion, questions
>> and support
>> Subject: Re: [BackupPC-users] Making errors in log stand out
>>
>>>     ./usr/lib/BackupPC/Lang/en.pm:$Lang{Reason_backup_failed}  ="backup failed";
>> See if this works. In /usr/lib/BackupPC/Lang/en.pm:
>>
>> $Lang{Reason_backup_failed}  = "backup <font color="red">failed</font>";
>>
>> Do the same for any other backup errors in the lang file. This is a hack
>> (this doesn't belong in the Lang data), but it's a fast way to do what
>> you want.
> Not quite there yet.
>
> BackupPC::Lib->new failed
> 2011-04-06 15:31:30 User root requested backup of machnnn.x.y.z (machnnn.x.y.z)
> Bareword found where operator expected at /usr/lib/BackupPC/Lang/en.pm line 1238, near ""<font color="red"
>     (Missing operator before red?)
> String found where operator expected at /usr/lib/BackupPC/Lang/en.pm line 1238, near "red">backup failed</font>""
> Couldn't execute language file /usr/lib/BackupPC/Lang/en.pm: syntax error at /usr/lib/BackupPC/Lang/en.pm line 1238, near ""<font color="red"
> BackupPC::Lib->new failed
>
> Seems like BPC doesn't quite like it. If this doesn't belong in usr/lib/BackupPC/Lang/en.pm, where else would it belong better?

I think you just need to escape the quotes.  Try this:

$Lang{Reason_backup_failed}  = "backup <font color=\"red\">failed</font>";

--
Bowie



------------------------------

Message: 96
Date: Wed, 6 Apr 2011 16:11:55 +0200
From: "Sorin Srbu" <sorin.srbu AT orgfarm.uu DOT se>
Subject: Re: [BackupPC-users] Making errors in log stand out
To: "'General list for user discussion,    questions and support'"
    <backuppc-users AT lists.sourceforge DOT net>
Message-ID: <001e01cbf464$95eee740$c1ccb5c0$@srbu AT orgfarm.uu DOT se>
Content-Type: text/plain;    charset="Windows-1252"

>-----Original Message-----
>From: Bowie Bailey [mailto:Bowie_Bailey AT BUC DOT com]
>Sent: Wednesday, April 06, 2011 3:44 PM
>To: backuppc-users AT lists.sourceforge DOT net
>Subject: Re: [BackupPC-users] Making errors in log stand out
>
>>
>> BackupPC::Lib->new failed
>> 2011-04-06 15:31:30 User root requested backup of machnnn.x.y.z (machnnn.x.y.z)
>> Bareword found where operator expected at /usr/lib/BackupPC/Lang/en.pm line 1238, near ""<font color="red"
>>     (Missing operator before red?)
>> String found where operator expected at /usr/lib/BackupPC/Lang/en.pm line 1238, near "red">backup failed</font>""
>> Couldn't execute language file /usr/lib/BackupPC/Lang/en.pm: syntax error at /usr/lib/BackupPC/Lang/en.pm line 1238, near ""<font color="red"
>> BackupPC::Lib->new failed
>>
>> Seems like BPC doesn't quite like it. If this doesn't belong in
>> /usr/lib/BackupPC/Lang/en.pm, where else would it belong better?
>
>I think you just need to escape the quotes.  Try this:
>
>$Lang{Reason_backup_failed}  = "backup <font color=\"red\">failed</font>";

Thanks Bowie. That seems to have done the trick, but I don't see anything red
in the logs. 8-/ I checked all the logs, the machine-specific ones as well as
the general summary log.

2011-04-06 16:05:43 User root requested backup of machnnn.x.y.z (machnnn.x.y.z)
2011-04-06 16:05:44 Started incr backup on machnnn.x.y.z (pid=31858,
share=Profiles)
2011-04-06 16:05:49 Backup failed on machnnn.x.y.z (inet connect: Connection
refused)

I turned off DeltaCopy on the Windows Server in order to get a "backup
failed"-message, and then I requested a manual incremental backup of machnnn.
The above message is what I get in the general/summary log.

--
/Sorin





------------------------------

Message: 97
Date: Wed, 06 Apr 2011 09:07:46 -0700
From: Saturn2888 <backuppc-forum AT backupcentral DOT com>
Subject: [BackupPC-users]  Another BackupPC Fuse filesystem
To: backuppc-users AT lists.sourceforge DOT net
Message-ID: <1302106066.m2f.352736 AT www.backupcentral DOT com>

I appreciate the warning, but then, what am I going to do about the error messages having installed from aptitude?

+----------------------------------------------------------------------
|This was sent by Saturn2888 AT gmail DOT com via Backup Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------





------------------------------

------------------------------------------------------------------------------
Xperia(TM) PLAY
It's a major breakthrough. An authentic gaming
smartphone on the nation's most reliable network.
And it wants your games.
http://p.sf.net/sfu/verizon-sfdev

------------------------------

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


End of BackupPC-users Digest, Vol 60, Issue 1
*********************************************