BackupPC-users

Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

From: Kurt Tunkko <kurt.tunko AT web DOT de>
Date: Tue, 05 Aug 2008 13:30:17 +0200
Hello Holger,

thanks for your detailed answer, even though I now get the feeling that 
I don't want to copy the pool data :-/

As far as I understood, preserving hard links while copying such a 
massive number of files may be a problem.

Other options:

1) Using dd to copy the old hard drive to the new one. Because the old 
drive uses LVM and the new one is a RAID, I don't know if this will work.

2) Using LVM and appending the RAID to the LVM volume - this sounds like 
a good solution, but I only want /var/lib/backuppc on the RAID, no 
other files.

3) Renaming the 'old' /var/lib/backuppc directory (keeping it until 
everything is working) and mounting the RAID at /var/lib/backuppc.
This means starting with a new (empty) pool - after some time all 
backups should be completed, and as soon as everything is working and 
I'm sure I don't need the old backups, I can delete the old 
/var/lib/backuppc directory.
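The rename-and-remount in option 3 can be rehearsed in a sandbox first. This is only a simulation in a temp directory: on the real system the paths would be /var/lib/backuppc and the RAID mount point, BackupPC would have to be stopped before the rename, and the mkdir stands in for the actual 'mount /dev/md0' step.

```shell
# Simulate option 3: rename the old pool aside, put an empty directory
# (stand-in for the freshly mounted RAID) in its place.
root=$(mktemp -d)
mkdir "$root/backuppc"
echo olddata > "$root/backuppc/pool"
mv "$root/backuppc" "$root/backuppc.old"   # keep the old pool around
mkdir "$root/backuppc"                     # stand-in for mounting the RAID here
test -f "$root/backuppc.old/pool" && echo "old pool intact"
test -z "$(ls -A "$root/backuppc")" && echo "new pool empty"
```

The old data is never copied or deleted, so nothing is lost if the new setup misbehaves.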

I'll go with option 3 and let you know if and how this works.

- Kurt



Holger Parplies wrote:
> Hi,
> 
> Kurt Tunkko wrote on 2008-08-04 23:00:35 +0200 [[BackupPC-users] Just to make 
> sure: How to move/copy /var/lib/backuppc to another place (RAID1)]:
>> [...]
>> I found: 'change archive directory' on the backuppc wiki
>> http://backuppc.wiki.sourceforge.net/change+archive+directory
>>
>> Option 1 suggests using:
>>
>>      cp -pR /var/lib/backuppc /mnt/md0
>>
>> while Option 3 suggests moving the directory to another place.
>>
>> In order to be safe in case something bad happens while transferring 
>> the data to the RAID, I don't want to use 'move'.
> 
> ['mv'? Really? Just suppose *that* gets interrupted part way through ...]
> 
>> Just to make sure that I don't do something stupid before copying tons 
>> of GB of backup-data I would like have a short feedback regarding the 
>> command in option 1. Will this do the job?
> 
> It is *guaranteed* not to. Whoever put that in the wiki either does not have
> the slightest clue what he is writing about, or he is talking about an empty
> pool (read: BackupPC was freshly installed and *no backups done*) and didn't
> make that unmistakably clear.
> 
> Even the potentially correct 'cp -dpR ...' will not work in the general case.
> The command from the wiki does *not* preserve hard links. Your pool will
> explode to at least twice the size, and that's assuming every pooled file is
> only used once (which would practically mean you've only got one backup). If
> you've got the space, you *could* get away with it, because future backups
> would be pooled, but for current backups, the benefits of pooling would be
> forever lost.
> The next run of BackupPC_nightly would empty the pool (so you might as well
> not copy it in the first place), and the files would need to be re-compressed
> during future backups.
> So: while it is conceivable that someone might use this as a last resort, you
> don't want to migrate your pool like this.
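The difference is easy to demonstrate in a sandbox. This sketch (assuming GNU coreutils, i.e. Linux) builds a tiny "pool" of one hard-linked pair and copies it both ways; stat's %h field prints the link count:

```shell
# cp -pR duplicates hard-linked files; cp -dpR preserves the links.
pool=$(mktemp -d); a=$(mktemp -d); b=$(mktemp -d)
echo data > "$pool/f1"
ln "$pool/f1" "$pool/f2"          # two names, one inode, as in the pool
cp -pR  "$pool/." "$a/"           # the wiki command: links NOT preserved
cp -dpR "$pool/." "$b/"           # link-preserving variant
stat -c %h "$a/f1"                # link count 1: the data was doubled
stat -c %h "$b/f1"                # link count 2: pooling survived
```

Scaled up to a real pool, the first copy is what makes the data "explode to at least twice the size".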
> 
> The version that *does* preserve hard links (cp's -d option) will work for
> structures up to a certain limited size.
> 
> There seem to be people on the list who repeatedly insist that it worked for
> them, so it will work for you (despite the thread already containing an
> explanation to the contrary). Apparently it has even made it into the wiki.
> 
> 
> On the other hand, there have also been countless reports of problems with
> *any* file-based copying of the BackupPC pool using general-purpose tools -
> cp, rsync, and tar spring to mind. They either run out of memory or take too
> long (read: days to weeks, meaning they are usually aborted at some point; I'm not
> sure if they would eventually finish). This is, basically, due to the fact
> that you cannot create a hard link to an inode based on the inode number. You
> need a path name, i.e. a name of the file to link to. For a few hundred files
> with a link count of more than one, it's no problem to store the information
> in memory (and that is what general-purpose tools are probably expecting). For
> 100 million files with more than one link, that obviously won't work any more.
> Add to that the delays of chaotically seeking from one end of the pool disk to
> the other (the kernel needs to look up the paths you're linking to, and
> there's not much chance of finding anything in the cache ...), and you'll get
> an idea of where the problem is. Lots of memory will help, preferably enough
> to fit the pool into cache altogether ;-).
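One way to get a feel for how much hard-link state a file-based copy would have to track is to count the file names pointing at multiply-linked inodes. A small sketch (GNU find assumed), run here against a toy directory rather than a real pool:

```shell
# Count names whose inode has more than one hard link -- each of these
# is an entry a link-preserving copy must remember in memory.
pool=$(mktemp -d)
echo a > "$pool/f1"; ln "$pool/f1" "$pool/f2"   # one shared inode
echo b > "$pool/single"                         # one unshared file
multi=$(find "$pool" -type f -links +1 | wc -l)
echo "$multi names with link count > 1"
```

On a real BackupPC pool, substituting the pool path for $pool, this number can reach tens or hundreds of millions, which is where the memory blow-up comes from.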
> 
> 
> Your best options remain to either do a block level copy of the file system
> (dd) or to start over. You can, of course, *try* cp/rsync/tar and hope your
> pool is small enough (hint: count your pool files). I'm not saying it never
> works, only to be aware of what you're facing. Remember, for "cp" you need
> "-d", for "rsync" "-H" and for "dd" a destination partition at least as big
> as the source file system. I haven't heard reports of problems with file
> system resizers and BackupPC pools, but I'd be cautious just the same.
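The block-level route can be sketched safely on an image file instead of real devices. On the actual system, if= and of= would be the source LVM volume and the RAID partition (device names below are only placeholders); here a 1 MiB file stands in for the source file system:

```shell
# Block-level copy with dd, demonstrated on ordinary files.
# Real-world form (hypothetical names): dd if=/dev/vg0/backuppc of=/dev/md0 bs=64K
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"   # 1 MiB stand-in for the source fs
dd if="$src" of="$dst" bs=64K status=none
cmp -s "$src" "$dst" && echo "block-level copy is identical"
```

Because dd copies the file system image byte for byte, all inode and hard-link structure comes across untouched, which is exactly why it sidesteps the problems above.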
> 
>> Can I just remount my RAID to 
>> /var/lib/backuppc afterwards and be sure that everything is working?
> 
> If anyone has good ideas how to *test* the result of copying a pool, I'd also
> be interested (and please don't suggest 'diff -r'). I can imagine a lot of
> things going wrong that BackupPC would *not* notice.
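One partial check that goes beyond 'diff -r' (which compares contents but says nothing about hard-link structure) is to compare per-file link counts and relative paths on both sides. A sketch with GNU find and bash, shown on a toy source/copy pair:

```shell
# Compare (link count, relative path) listings of source and copy.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/f1"; ln "$src/f1" "$src/f2"
cp -dpR "$src/." "$dst/"                 # a link-preserving copy
diff <(cd "$src" && find . -type f -printf '%n %P\n' | sort) \
     <(cd "$dst" && find . -type f -printf '%n %P\n' | sort) \
  && echo "link structure matches"
```

This catches lost or extra hard links, but not, for example, corrupted file contents, so it is only one piece of a verification story.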
> 
> Regards,
> Holger
> 


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/