Hi,
I asked because I learned about a tool that calculates and stores checksums of individual files as extended attributes. The basic idea is to make it easy to verify data after a transfer. I could imagine a similar concept helping to track down nasty things happening on both the BackupPC side and the client side, such as checking the sums of files that shouldn't change. The point is, when one instance of a hard link is corrupted, all the others are too.
I believe ZFS can help with that, but ZFS is not always an option. Something like the tool mentioned above would make integrity checking less dependent on the filesystem.
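As a minimal sketch of that idea (assuming a Linux filesystem with user extended attributes enabled; the attribute name here is illustrative, not any particular tool's convention):

```python
import hashlib
import os

XATTR_NAME = b"user.checksum.sha256"  # illustrative attribute name


def sha256_of(path):
    """Compute the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def store_checksum(path):
    """Record the file's current checksum as an extended attribute."""
    os.setxattr(path, XATTR_NAME, sha256_of(path).encode())


def verify_checksum(path):
    """True if the file still matches its stored checksum,
    False if it changed, None if no checksum was recorded yet."""
    try:
        stored = os.getxattr(path, XATTR_NAME).decode()
    except OSError:
        return None
    return sha256_of(path) == stored
```

Since the checksum travels with the file's metadata, a later `verify_checksum()` run can flag silent corruption regardless of which filesystem the data sits on.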
Jan
On 01/02/2017 01:47 PM, Andreas Piening
wrote:
Hi Jan,
the file-based deduplication is based on checksums: if a new file arrives with the same name and file size, it is only stored as a new file if the checksum is different. If the checksum matches, a hard link is created pointing at the already existing copy.
But these checksums are used for deduplication only; as far as I know there is no additional integrity check, for example on a restore.
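The general principle (a sketch of checksum-keyed pooling with hard links, not BackupPC's actual on-disk layout or code):

```python
import hashlib
import os
import shutil


def store_deduplicated(src, pool_dir, dest):
    """Store src at dest. If a file with the same checksum already
    exists in the pool, hard-link to it instead of storing the data
    again. Illustrative only; BackupPC's real pool layout differs."""
    with open(src, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    pooled = os.path.join(pool_dir, digest)
    if not os.path.exists(pooled):
        shutil.copy2(src, pooled)  # first copy: store the data once
    os.link(pooled, dest)          # duplicates become hard links
    return digest
```

This also shows why corruption of one hard-linked instance affects all of them: every link points at the same inode, so there is only one copy of the data on disk.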
Honestly, I don’t think it is really needed. I’m using a ZFS volume for BackupPC, which has built-in block-level checksums for integrity.
Perhaps that is an option for you as well?
Kind regards
Andreas
Hi,
does BackupPC do any data integrity checks on stored files or files to-be-stored? Something like regular md5sum checks.
Jan
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/