Subject: [BackupPC-users] Migrate local data into BackupPC pool for remote client
From: cardiganimpatience <backuppc-forum AT backupcentral DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Tue, 17 May 2016 09:22:30 -0700
It worked! Thanks so much for spelling it out for me!

My previous backups are not tarred, so I'm having to compress them into a
tar.gz first to make this work. Is it possible to import them directly from the
(uncompressed) source? Could I change the TarClientCmd entry from 'zcat' to
'cat'? If so, what would follow it? Just the path to the root?

Thanks,
Mike


On 2016-05-14 07:38, Johan Ehnberg wrote: 
Hi, 

You are correct. The script, as it is, expects a .tar.gz file at
$FILEAREA/target. Note that this is the file itself, not a directory. The script
manages it as a symlink to the actual file so that you do not have to
change the setting in BackupPC for every host separately.

Looking at your results (zcat complaining about the directory) I would 
assume all you have to do is point zcat at the tar.gz file 
($FILEAREA/target) instead of the directory containing it ($FILEAREA). 

The file covers the whole host and is run as a partial full dump, so it's all
or nothing in a single run. Any subsequent runs with different mounts
will either replace the previous ones or not be used, depending on your
BackupPC settings.

The script itself simply runs BackupPC_dump -f -v HOSTNAME. That works
nicely manually as well for a single host; just set the symlink to point
at your tar.gz file:
ln -s HOSTNAME.tar.gz target 
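
After that, forcing a full dump for that host is just (a sketch using the
BPCBIN and BPCUSR values from your configuration further down; adjust them to
your install):

sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_dump -f -v HOSTNAME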

Using the script with the variables you propose should work with one change: 
FILEAREA=/tmp/bpctar 

Note, however, that pointing at the actual .tar.gz file instead of the
$FILEAREA/target symlink works only for a single host. You can
literally set TarClientCmd to 'zcat /tmp/bpctar/target' and the script
will handle the symlink for you, which lets you run it for many hosts as well.
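
Put differently, per host the script does roughly the following (my paraphrase,
not the script verbatim; the variable names are the ones from its configuration):

for HOST in $TARGETS; do
    ln -sf "$FILEAREA/$HOST.tar.gz" "$FILEAREA/target"     # re-point the shared symlink at this host's dump
    sudo -u "$BPCUSR" "$BPCBIN/BackupPC_dump" -f -v "$HOST" # force a full dump seeded from the tar stream
done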

Some further notes that may be of interest: 

You can also use plain .tar files; simply change 'zcat' to 'cat'. The input
has to be in tar format for BackupPC to understand it, though. Skipping
compression can be faster if your files already exist in directories.
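
For example, something along these lines for one host (untested; the staging
and source paths are only examples based on your layout):

tar -cf /tmp/bpctar/my_sql_server.tar -C /home/backup/my_sql_server/cur .  # pack the existing copy, uncompressed
ln -sf /tmp/bpctar/my_sql_server.tar /tmp/bpctar/target
# then set TarClientCmd to 'cat /tmp/bpctar/target'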

Beyond that, if you are seeding many hosts or large numbers of files that
already exist in directories in a batch manner, and do not want to create
tar.gz files, you can also try using 'tar' instead of 'cat'. This requires
using tar's various flags to tune the paths so that they match the actual
host to be backed up.
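
For instance, TarClientCmd could be set to something along these lines (a
guess based on your paths, untested; the -C directory must be chosen so that
the paths in the stream match the share on the real host):

tar -c -f - -C /home/backup/my_sql_server/cur/samba_shares .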

I assume the tar.gz file you use was not created by BackupPC. Thus, the
next likely thing to do after a successful seed is to ensure that the
paths you get in BackupPC from the seeding match those that you get
when backing up the actual host.

I have updated the script's documentation based on your experience. Thanks!

Best regards, 
Johan 

On 2016-05-14 00:03, cardiganimpatience wrote: 
Hey thanks a lot for the response Johan! It's taken me a while to figure out 
how this is supposed to work and I'm getting closer but still not there. 

The file refers to 'tar dumps', but it's unclear to me what that means. Does it
assume that my local backups are in tar.gz format? They are not; they're
uncompressed and simply live in a folder named after the hostname.

So I created a tar.gz of one of my folders and tried to work on that but 
BackupPC_dump doesn't seem to find it: 

Running: zcat /home/backup/my_sql_server/my_sql_server_copy 
full backup started for directory /samba_shares/ 
started full dump, share=/samba_shares/ 
Xfer PIDs are now 6134,6133 
xferPids 6134,6133 
cmdExecOrEval: about to exec zcat /home/backup/my_sql_server/my_sql_server_copy 
gzip: /home/backup/my_sql_server/my_sql_server_copy is a directory -- ignored 
Tar exited with error 512 () status 

What is the expected value of TarClientCmd? Is it the name of the .gz file, or
a folder which contains the .gz file(s)? Same question for FILEAREA.

If I were to run BackupPC_dump directly, what values can I pass to it?

It appears the script is looking for a .tar.gz file named after the hostname. 
Is that accurate? Am I able to import one folder/mount at a time or is it an 
all-or-nothing deal? 

If I'm guessing correctly, would the following import local files into the
BackupPC pool for the server named "my_web_server"?

# - Set TarClientCmd to 'zcat /tmp/bpctar/my_web_server.tar.gz' (as set in FILEAREA below)
...
TARGETS="my_sql_server" # Manual target list
...
# Your environment
FILEAREA=/tmp/bpctar/my_web_server.tar.gz
### unused for seed ### NEWBPC=/mnt/backuppc # Where new backuppc dir is mounted, if moving
### unused for seed ### OLDBPC=/srv/backuppc # Where current backuppc dir is mounted, if moving
### unused for seed ### BPCLNK=/var/lib/backuppc # Where config.pl points to in the config, if moving
BPCBIN=/usr/share/BackupPC/bin # Where BackupPC_* scripts are located
BPCUSR=backuppc # User that runs BackupPC

Thanks again for your help! 

On 2016-05-08 08:24, Johan Ehnberg wrote: 
Hi, 

Version 4 supports matching files from the pool. 

If you are using version 3, the path has to be the same, so you would 
have to process the tar file to match the host to be backed up. This 
works fine; I used a similar method here:

http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/
 

You may be able to use tar with --strip-components to strip the extra
leading paths on the fly.
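
For example, repacking an existing dump so that the leading staging
directories are dropped (a rough sketch with GNU tar; the paths are
placeholders and the --strip-components count depends on how the archive was
created):

mkdir /tmp/repack && cd /tmp/repack
# drop e.g. the leading home/backup/<servername>/cur/ components
tar -xzf /path/to/old_dump.tar.gz --strip-components=4
tar -czf /path/to/seed/my_sql_server.tar.gz .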

Good luck! 

Johan 


On 2016-05-06 17:19, cardiganimpatience wrote: 
BackupPC is installed and working great for new hosts. Is there a way to take 
the hundreds of GB from old hosts that exist on the backup server and import 
them into the BackupPC storage pool? 

The old backup system uses rsync to dump all files to a local disk on the same 
server where BackupPC is installed, albeit with incorrect file ownership. I 
don't want to re-transfer that data over our narrow bandwidth connection if I 
don't have to. I believe rsync will sort out permissions and timestamps on its 
own. 

So far I've created a host in BackupPC, changed the transfer type to 'tar', and
successfully imported one of its mount points, but now the tar sharename is
called "/home/backup/<servername>/cur/<share>/", whereas the actual share on the
host is simply called /<share>.

My intention is to flip the Xfer method from 'tar' to 'rsync' after I get most 
of the larger shares imported via local tar. Is it necessary to associate the 
imported files with a specific host or does hard-linking take care of all that? 

Thanks!



