BackupPC-users

Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

Subject: Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy
From: Holger Parplies <wbppc AT parplies DOT de>
To: Mike Dresser <mdresser_l AT windsormachine DOT com>
Date: Sun, 2 Oct 2011 05:39:15 +0200
Hi,

Mike Dresser wrote on 2011-09-29 14:11:20 -0400 [[BackupPC-users] Fairly large 
backuppc pool (4TB) moved with backuppc_tarpccopy]:
> [...]
> Old disks were 10 x 1TB in raid10, new is 6 x 3TB's in raid6 (which in 
> itself has been upgraded many times).. the new raid6 is FAR faster in 
> both iops and STR than the old one.

Trust the list to jump on that part (which doesn't seem to be a question) and
ignore ...

> [...] Did see a few errors, all of them were related to the attrib files,
> similar to "Can't find xx/116/f%2f/fvar/flog/attrib in pool, will copy file"
> [...]
> Out of curiosity, where are those errors (the attrib in pool ones) 
> coming from?

(which is a question, and a good one).

I can't promise that this is the correct answer, but it's a possibility: prior
to BackupPC 3.2.0, *top-level* attrib files (i.e. those for the directory
containing all the share subdirectories) were linked into the pool with an
incorrect digest, provided there was more than one share. This would mean that
BackupPC_tarPCCopy would not find the content in the pool, because it would
look for a file with the *correct* digest (i.e. file name). Please note that
your quote above does *not* reference a *top-level* attrib file (that would be
"xx/116/attrib"), and, beyond that, you don't seem to have multiple shares,
so it might well be a different problem.
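For reference, here is a rough sketch (my own Python, not BackupPC's actual Perl code) of how that lookup works in BackupPC 3.x: the pool fans out over the first three hex digits of the digest, one directory level each, with `_0`, `_1`, ... suffixes for hash-collision chains. So a file linked in under a wrong digest simply won't be found at the path the *correct* digest maps to. The helper names are mine:

```python
import os

def pool_path(pool_top, digest_hex):
    # BackupPC 3.x fans the pool out over the first three hex
    # digits of the digest, one directory level each.
    return os.path.join(pool_top,
                        digest_hex[0], digest_hex[1], digest_hex[2],
                        digest_hex)

def in_pool(pool_top, digest_hex):
    # Hash collisions get _0, _1, ... appended to the file name,
    # so check the base name and the start of the collision chain.
    base = pool_path(pool_top, digest_hex)
    return os.path.exists(base) or os.path.exists(base + "_0")
```

(A real check would also compare file contents to pick the right member of a collision chain; the digest computation itself, which hashes the file size plus part of the contents, is not reproduced here.)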

According to the ChangeLog, Jeffrey should have pointed this out, because he
discovered the bug and supplied a patch ;-).

I noticed this problem on my pool while investigating where the longest hash
collision chain came from: it was a chain of top-level attrib files - all for
the same host and with different contents and thus certainly different digests.

> I still have the old filesystem online if it's something I 
> should look at.

I don't think it's really important. If the attrib file was not in the pool
previously, then that may simply have wasted a small amount of space. As I
understand the matter, the file will remain unpooled in the copy. You could
fix that with one of Jeffrey's scripts or just live with a few wasted bytes.
If you are running a BackupPC version < 3.2.0, pooling likely won't work for
those attrib files anyway.

It might be interesting to determine whether the non-top-level attrib files
you got errors for are also, in fact, pooled under an incorrect pool file
name, though that would involve finding the pool file by inode number and
calculating the correct pool hash (or ruling out the existence of a pool file
due to a link count of 1 :-).
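A link count of 1 is the telltale sign of an unpooled file, so such files are easy to enumerate in bulk. A minimal sketch (my own helper, not one of the scripts mentioned above) that walks a pc/ tree:

```python
import os

def unpooled_attribs(pc_dir):
    """Yield attrib files whose hard-link count is 1, i.e. files
    that have no counterpart linked into the pool."""
    for root, _dirs, files in os.walk(pc_dir):
        for name in files:
            if name == "attrib":
                path = os.path.join(root, name)
                if os.stat(path).st_nlink == 1:
                    yield path
```

Going the other way - finding which pool file a *pooled* attrib actually links to - is then a matter of taking the inode number from stat and running something like `find <pool> -inum <n>`.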

> So it IS possible to move a large number of files, it just takes 
> awhile.

Sure, why shouldn't it be? However, I would have recommended Jeffrey's script
(or mine, though I'm not sure what state it is in) to cut down the duration by
many hours.

> I will know in a couple more days if it was fully successful, 
> but i see no reason why it won't work.

Neither do I. Please do sum up how long it finally takes.

Regards,
Holger

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/