> BackupPC_refCountUpdate: doing fsck on <host> #1188 since there are no poolCnt files
> BackupPC_refCountUpdate: doing fsck on <host> #1190 since there are no poolCnt files
> ...
> BackupPC_refCountUpdate: host <host> got 0 errors (took 5 secs)
The backups in question seem to be fully intact; some are full backups, some are incremental. It only affects a minority of backups (approx. 15 out of 350), and fortunately small ones where fsck does not take ages, so it does not bother me too much. Nevertheless, can the missing poolCnt data be recomputed? fsck seems to do the counting from scratch; can the result be stored?
This is perfectly ok. BackupPC 4.0.0alpha3 and prior 4.x versions didn't store reference counts per backup. Only the reference counts for the entire host were maintained (in addition to the pool totals for all hosts). In 4.0.0, I changed that so reference counts were also stored per-backup (which makes it easier to delete backups and to recompute the per-host ref counts). So BackupPC_refCountUpdate is simply adding reference counts to backups done by BackupPC 4.0.0alpha3. It's a one-time thing.
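The per-backup counts being rebuilt here can be pictured with a toy counter. This is illustrative only: the real poolCnt files are binary and keyed by the pool md5 digests recorded in attrib files, not by re-hashing file contents as done below.

```python
import collections
import hashlib
import os


def count_pool_refs(backup_dir):
    """Toy per-backup reference counter (a sketch, not BackupPC's
    on-disk format): count how many times each content digest appears
    under one backup tree -- the kind of per-backup total that
    BackupPC_refCountUpdate recomputes during fsck and then stores so
    it doesn't have to count from scratch next time."""
    counts = collections.Counter()
    for root, _dirs, files in os.walk(backup_dir):
        for name in files:
            with open(os.path.join(root, name), "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            counts[digest] += 1
    return counts
```

Summing these per-backup counters across all of a host's backups gives the per-host totals, which is why storing them per backup makes deleting a backup and recomputing the host counts much cheaper.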
There might be an issue that an incremental done by BackupPC 4.0.0alpha3 with no changes will have an empty backup tree, and BackupPC_refCountUpdate will continually report that there are no poolCnt files for that backup. That's benign. In 4.0.0, BackupPC_dump flags that case by creating a file "HOST/NNN/refCnt/noPoolCntOk", which makes BackupPC_refCountUpdate quietly ignore that backup. Perhaps I should have BackupPC_refCountUpdate notice that legacy case and create the noPoolCntOk file...
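That suggested fix could be sketched roughly like this (directory layout and the "empty tree" test are assumptions based on the description above, not BackupPC's actual code):

```python
import os


def flag_legacy_empty_backups(host_dir):
    """For each numbered backup under host_dir whose tree is empty and
    whose refCnt/ directory has no poolCnt files, create
    refCnt/noPoolCntOk so BackupPC_refCountUpdate quietly skips it.
    A sketch only; the real tool would use BackupPC's own config and
    backup metadata."""
    for nnn in sorted(os.listdir(host_dir)):
        bdir = os.path.join(host_dir, nnn)
        if not (nnn.isdigit() and os.path.isdir(bdir)):
            continue
        refcnt = os.path.join(bdir, "refCnt")
        has_pool_cnt = os.path.isdir(refcnt) and any(
            f.startswith("poolCnt") for f in os.listdir(refcnt))
        # "empty tree" here means no entries besides refCnt itself
        tree_empty = all(e == "refCnt" for e in os.listdir(bdir))
        if tree_empty and not has_pool_cnt:
            os.makedirs(refcnt, exist_ok=True)
            open(os.path.join(refcnt, "noPoolCntOk"), "w").close()
```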
> BackupPC_refCountUpdate: missing pool file 00000000000000000000000000000000 count 30
> BackupPC_refCountUpdate: missing pool file 0601e1b90a7f92ce4cffa588ef2cc9da count 1
> ...
> BackupPC_refCountUpdate: missing pool file ea1bd7ab2e0000000000000000000000 count 1
This is a bug in rsync-bpc (and BackupPC::XS) that was fixed a couple of weeks ago. It happened about 2% of the time when the attrib file for a large directory was written (attrib file sizes >256k, approx 5k files depending on file name lengths). If the md5 digest of the last file written to the 256k staging buffer ended exactly at the end of the buffer, the digest for that file wasn't written correctly (yes, I had "<" instead of "<="... doh!).
Future backups with 4.0.0 (assuming the same file exists on the client) will be updated with the correct digest, but the old backups will still have the wrong one. The errors will go away when the corresponding backups eventually expire.
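For anyone curious, the boundary condition is easy to reproduce with a toy version of the staging buffer (a sketch, not the actual rsync-bpc code; the real buffer is 256k):

```python
BUF_SIZE = 16  # stand-in for the real 256k staging buffer


def flush_digests(digests, buggy=True):
    """Append digests to a fixed-size staging buffer and return what
    actually got written. With buggy=True the bounds check uses '<',
    so a digest that ends exactly at BUF_SIZE is silently dropped;
    with buggy=False the '<=' check accepts it."""
    written = []
    pos = 0
    for d in digests:
        end = pos + len(d)
        fits = end < BUF_SIZE if buggy else end <= BUF_SIZE
        if fits:
            written.append(d)
            pos = end
    return written


digests = ["aaaa", "bbbb", "cccccccc"]  # last one ends exactly at 16
print(flush_digests(digests, buggy=True))   # last digest lost
print(flush_digests(digests, buggy=False))  # all digests written
```

The dropped digest is exactly the "missing pool file" case above: the attrib entry ends up pointing at a digest that was never written correctly, so the refcount pass can't find a matching pool file.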
Craig