BackupPC-users

Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

From: <stephen AT physics.unc DOT edu>
To: Christian Völker <chrischan AT knebb DOT de>
Date: Tue, 10 Nov 2015 08:54:23 -0500
Christian,

If you can resign yourself to staying on ext4, dumping and restoring the filesystem may be your best bet. You can resize it after (and/or before if needed) to the desired size.
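A sketch of the dump/restore route, using dump(8)/restore(8); the device names are hypothetical, and the target filesystem is simply created at the desired size (resize2fs can adjust it later):

 mkfs.ext4 /dev/vg_new/backuppc
 mount /dev/vg_new/backuppc /mnt/new
 cd /mnt/new
 # level-0 dump of the old filesystem, restored into the new one
 dump -0f - /dev/vg_old/backuppc | restore -rf -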

If that's not possible, the second-best solution is to stand up the new server from scratch with no data, disable backups on your old server, wait for the new server to build a sufficient backup history (1, 2, 6 months, whatever), then kill off the old server. If you do this, consider starting with BackupPC v4 on the new server: v4 pools files by digest instead of hard links, which eliminates this migration problem in the future.

But it is possible to move the backups if you're determined. Here's my cookbook method, which I've used successfully in the past. The last time I did this, I moved a ~6TB cpool on a server with 4GB of RAM. It took days (about 5, if I remember correctly, over gigabit Ethernet), during which time I stopped as many processes as possible, including BackupPC, in order to keep the data static.

Also note that I install BackupPC to /opt/BackupPC and store my BackupPC data in /srv/BackupPC, which are non-standard locations. Adjust to suit your needs. Also note that this worked *for me*. I cannot guarantee it will work for you. YMMV; good luck.

---------
cat migrate-from-old-server

BackupPC uses hard links within the (c)pool filesystem for deduplication. This makes normal copying with rsync difficult: to preserve hard links, rsync must keep a table of every multiply-linked inode in memory, and with millions of pool files that exhausts RAM.

The following procedure works. It assumes only the cpool has valid data. If not compressing, adjust to use pool instead. If using both, do both.

0. Install BackupPC on the new server. Disable BackupPC on new and old servers. The data should be quiescent while being manipulated.
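For example, with systemd (a sketch; the unit name may differ on your distribution, and older init systems use "service backuppc stop" / "chkconfig backuppc off" instead):

 # run on both the old and the new server
 systemctl stop backuppc
 systemctl disable backuppc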

1. Configure the new storage on the new server. The use of an external (SAS, FC, iSCSI) array is highly recommended to ease future mobility. The use of LVM is also highly recommended (lvextend, lvremove ftw). If there is any chance of wanting to shrink the filesystem later, remember that XFS can grow but cannot shrink, so you may wish to consider ext4 if in doubt. A sketch of the setup follows.
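(Device, VG, and LV names here are hypothetical; adjust to your hardware.)

 pvcreate /dev/sdb
 vgcreate vg_backup /dev/sdb
 lvcreate -L 2T -n backuppc vg_backup
 mkfs.ext4 /dev/vg_backup/backuppc   # or mkfs.xfs if you will never need to shrink
 # growing later is then trivial:
 #   lvextend -L +500G /dev/vg_backup/backuppc
 #   resize2fs /dev/vg_backup/backuppc   # use xfs_growfs for XFS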

2. Temporarily mount the old storage on the new server. This may require being creative. It may also require a multi-step process (that is, running these steps on the old server to create a new, more mobile, datastore which can be moved to the new server). Hint: DAS is fast but NFS does work.
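For example, mounting the old datastore read-only over NFS (hostname and export path are hypothetical; the export itself is configured in /etc/exports on the old server):

 mount -t nfs -o ro old-server:/srv/BackupPC /old/BackupPC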

3. Ensure that new storage is mounted at /srv/BackupPC and the old storage is mounted at /old/BackupPC.

4. Open a screen session, as root, on the new server. This is recommended because some of these steps take a long time to complete.
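For example:

 screen -S migrate   # detach with Ctrl-a d; reattach later with: screen -r migrate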

5. Copy the cpool using any technique, ignoring hard links (within the cpool itself each file has only one name, so nothing needs to track links). We'll use tar:
 cd /old/BackupPC
 tar -cpf - cpool | ( cd /srv/BackupPC && tar -xpf - )

6. Copy the pc directory using BackupPC_tarPCCopy:
 # work as the BackupPC user; -s forces a real shell even if its login shell is nologin
 su -s /bin/bash backup
 cd /srv/BackupPC
 mkdir -p pc
 chown backup pc
 cd pc
 # BackupPC_tarPCCopy emits a tar stream that recreates the pc tree as hard
 # links into the cpool just copied; tar's -P keeps path names exactly as written
 /opt/BackupPC/bin/BackupPC_tarPCCopy /old/BackupPC/pc | tar xvPf -

This can be done without mounting both sets of storage on the new server (piping the commands over ssh instead), at the cost of additional complexity.
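A sketch of the ssh variant, run from the new server ("old-server" and its paths are hypothetical):

 ssh old-server 'cd /srv/BackupPC && tar -cpf - cpool' | ( cd /srv/BackupPC && tar -xpf - )
 ssh old-server '/opt/BackupPC/bin/BackupPC_tarPCCopy /srv/BackupPC/pc' | ( cd /srv/BackupPC/pc && tar xPf - )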

7. Copy the BackupPC status file (/var/BackupPC/status.pl) and the /etc/BackupPC directory from the old server to the new one; set permissions identically. If you make changes, consult and understand the docs first (an insufficient retention period can quickly remove the data you just took care to copy!).
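For example (a sketch, run as root on the new server so ownership is preserved; the paths follow my layout):

 rsync -a old-server:/etc/BackupPC/ /etc/BackupPC/
 rsync -a old-server:/var/BackupPC/status.pl /var/BackupPC/status.pl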

8. Start BackupPC on new server. Test GUI, confirm settings are correct. Test restores, backups. Fix problems. Lather, rinse, repeat. Fin! Enjoy favorite beverage.
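Again assuming systemd:

 systemctl enable backuppc
 systemctl start backuppc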

Cheers, Stephen

On Mon, 9 Nov 2015, Christian Völker wrote:

Hi all,

I want to transfer my pool from ext4 to xfs. The pool is around 1.3TB
with approx 15 hosts backing up.

Well, the obvious rsync -avH is said to be too memory-consuming
because of the hard links.

So I started the way listed here:
http://roland.entierement.nu/blog/2013/12/02/rsyncing-a-backuppc-storage-pool-efficiently.html

The rsync of the cpool is done (took quite a while!)

I started the "store-hardlinks.pl" script, and so far top tells me:

top - 20:01:12 up  3:32,  2 users,  load average: 1.36, 1.08, 1.03
Tasks: 107 total,   1 running, 106 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.3%us,  0.8%sy,  0.0%ni,  0.0%id, 98.7%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   8061568k total,  7778968k used,   282600k free,   442732k buffers
Swap:  4063228k total,     7672k used,  4055556k free,    12848k cached

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
1644 root      20   0 5760m 5.5g 1008 D  1.3 71.5   7:09.30 store-hardlinks

So it is already consuming nearly 6GB of memory!

Does anyone have a better idea how to transfer the pool in a reasonable
amount of time, with reasonable memory consumption?

Greetings

Christian


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/