Subject: Re: [BackupPC-users] File restore integrity
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Thu, 17 Jun 2010 11:26:29 -0400

Jonathan Schaeffer wrote at about 16:29:19 +0200 on Thursday, June 17, 2010:
 > Hi all,
 > 
 > I'm administering a BackupPC server and I'm concerned about the security
 > of the whole system.
 > 
 > I configured the Linux clients with unprivileged users doing sudo for
 > rsync, to limit the risk of intrusion from the BackupPC server to the
 > clients, as described in the FAQ:
 > http://backuppc.sourceforge.net/faq/ssh.html#how_can_client_access_as_root_be_avoided
 > 
 > But I found a simple way to screw up the client when the BackupPC server
 > is compromised:
 > 
 > It is easy to empty some (or all) files of a backup:
 > 
 > root@backuppc:/data/backuppc/pc/172.16.2.44/3/f%2f/fhome/fjschaeff# cat /dev/null > f.bashrc
 > 
 > And then, when the client restores the file, it gets an empty file.
 > 
 > Is there a checking mechanism to ensure the integrity of the restored
 > files? That is, can the server check that the file it is about to restore
 > is the same as the one it stored previously?
 > 

Not automatically or officially, though it might be a good feature to
add in the future.

If you use rsync checksum caching, I have written a routine that lets
you check some or all pool or pc files for consistency between the
full-file MD4 checksum stored by rsync and the actual file content.

One could also do other checks, such as verifying a pool file's name
against its contents using the partial-file MD5 digest that BackupPC
uses to name pool files, or comparing the file size stored in the
attrib file against the actual (uncompressed) size; a sketch of the
latter check follows.

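For example, here is a minimal sketch of the attrib-file size check. It
assumes the BackupPC 3.x Perl library layout (BackupPC::Lib,
BackupPC::Attrib, BackupPC::FileZIO), a compressed pool, and a
compression level of 3 (adjust $compress to match your
$Conf{CompressLevel}). It walks a single backup directory and flags any
regular file whose uncompressed length differs from the size recorded
in the attrib file:
----------------------------------------------------------
#!/usr/bin/perl
# Sketch only: compare the size recorded in a backup directory's attrib
# file against the uncompressed size of each stored file in that directory.

use strict;
use lib "/usr/share/BackupPC/lib";
use BackupPC::Lib;
use BackupPC::Attrib qw(:all);
use BackupPC::FileZIO;

my $dir = shift or die "usage: $0 <pc/host/nnn/... backup directory>\n";

my $bpc = BackupPC::Lib->new or die "BackupPC::Lib->new failed\n";
my $compress = 3;    # assumption: compression level; use your $Conf{CompressLevel}

my $attr = BackupPC::Attrib->new({ compress => $compress });
$attr->read($dir) or die "Cannot read attrib file in $dir\n";

my $files = $attr->get();                 # all entries in this attrib file
foreach my $name ( sort keys %$files ) {
    my $info = $files->{$name};
    next unless $info->{type} == BPC_FTYPE_FILE;
    my $mangled = "$dir/" . $bpc->fileNameMangle($name);
    my $fh = BackupPC::FileZIO->open($mangled, 0, $compress);
    if ( !defined($fh) ) {
        print "MISSING OR UNREADABLE: $name\n";
        next;
    }
    # Uncompress the stored file and count its bytes
    my ($size, $data) = (0, '');
    $size += length($data) while $fh->read(\$data, 65536) > 0;
    $fh->close();
    print "SIZE MISMATCH: $name (attrib=$info->{size}, actual=$size)\n"
        if $size != $info->{size};
}
----------------------------------------------------------
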
Here is my routine for verifying the rsync checksum digests:
----------------------------------------------------------
#!/usr/bin/perl
#Validate rsync digest

use strict;
use Getopt::Std;

use lib "/usr/share/BackupPC/lib";
use BackupPC::Xfer::RsyncDigest;
use BackupPC::Lib;
use File::Find;

use constant RSYNC_CSUMSEED_CACHE     => 32761;
use constant DEFAULT_BLOCKSIZE     => 2048;


my $dotfreq=100;
my %opts;
if ( !getopts("cCpdv", \%opts) || @ARGV !=1
         || ($opts{c} + $opts{C} + $opts{p} > 1)
         || ($opts{d} + $opts{v} > 1)) {
    print STDERR <<EOF;
usage: $0 [-c|-C|-p] [-d|-v] [File or Directory]
  Verify Rsync digest in compressed files containing digests.
  Ignores directories and files without digests
  Only prints if digest does not match content unless verbose flag
  (firstbyte = 0xd7)
  Options:
    -c   Consider path relative to cpool directory
    -C   Entry is a single cpool file name (no path)
    -p   Consider path relative to pc directory
    -d   Print a '.' for every $dotfreq digest checks
    -v   Verbose - print result of each check;

EOF
exit(1);
}

die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new) );
#die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new("", "", 
"", 1)) ); #No user check

my $Topdir = $bpc->TopDir();
my $root;
$root = $Topdir . "/pc/" if $opts{p};
$root = "$bpc->{CPoolDir}/" if $opts{c};
$root =~ s|//*|/|g;

my $path = $ARGV[0];
if ($opts{C}) {
        $path = $bpc->MD52Path($ARGV[0], 1, $bpc->{CPoolDir});
        $path =~ m|(.*/)|;
        $root = $1; 
}
else {
        $path = $root . $ARGV[0];
}
my $verbose=$opts{v};
my $progress= $opts{d};

die "$0: Cannot read $path\n" unless (-r $path);


my ($totfiles, $totdigfiles, $totbadfiles) = (0, 0 , 0);
find(\&verify_digest, $path); 
print "\n" if $progress;
print "Looked at $totfiles files including $totdigfiles digest files of which 
$totbadfiles have bad digests\n";
exit;

sub verify_digest {
        return -200 unless (-f);
        $totfiles++;
        return -200 unless -s > 0;
        return -201 unless BackupPC::Xfer::RsyncDigest->fileDigestIsCached($_); #Not a cached type (i.e., first byte not 0xd7)
        $totdigfiles++;

        my $ret = BackupPC::Xfer::RsyncDigest->digestAdd($_, DEFAULT_BLOCKSIZE,
                                                         RSYNC_CSUMSEED_CACHE, 2);  #2=verify
        #Note: setting blocksize=0 also results in using the default blocksize of 2048,
        #but it generates an error message.
        #Also leave out the final protocol_version input, since setting it undefined
        #makes it be read from the digest itself.
        $totbadfiles++ if $ret!=1;

        (my $file = $File::Find::name) =~ s|$root||;
        if ($progress && !($totdigfiles%$dotfreq)) {
                print STDERR "."; 
                ++$|; # flush print buffer
        }
        if ($verbose || $ret!=1) {
                my $inode = (stat($File::Find::name))[1];
                print "$inode $ret $file\n";
        }
        return $ret;
}

# Return codes:
# -100: Wrong RSYNC_CSUMSEED_CACHE or zero file size
# -101: Bad/missing RsyncLib
# -102: ZIO can't open file
# -103: sysopen can't open file
# -104: sysread can't read file
# -105: Bad first byte (not 0x78, 0xd6 or 0xd7)
# -106: Can't seek to end of file
# -107: First byte not 0xd7
# -108: Error on readin digest
# -109: Can't seek when trying to position to rewrite digest data (shouldn't happen if only verifying)
# -110: Can't write digest data (shouldn't happen if only verifying)
# -111: Can't seek looking for extraneous data after digest (shouldn't happen if only verifying)
# -112: Can't truncate extraneous data after digest (shouldn't happen if only verifying)
# -113: If can't sysseek back to file beginning (shouldn't happen if only verifying)
# -114: If can't write out first byte (0xd7) (shouldn't happen if only verifying)
# 1: Digest verified
# 2: Digest wrong


