Subject: Re: [BackupPC-users] Remote mirror sanity checks
From: "Johannes H. Jensen" <joh AT pseudoberries DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Sun, 28 Feb 2010 19:44:00 +0100

Inode usage doesn't look like the problem here:

$ df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/BACKUP   37429248 4138304 33290944   12% /backup

31 is the error code as far as I can see (on Linux, errno 31 is EMLINK,
"Too many links"). It's the same for all the link attempts...
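
A quick way to spot pool files that are already close to ext3's
32000-link ceiling (just a sketch; /backup/cpool is an assumption,
adjust it to your pool layout):

$ find /backup/cpool -type f -links +31000 -printf '%n %p\n' | head

Each line shows the current link count followed by the pool file.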

Best regards,

Johannes H. Jensen



On Sun, Feb 28, 2010 at 7:29 PM, Michael Kuss <michael.w.kuss AT gmail DOT com> 
wrote:
>
>
> On Sat, Feb 27, 2010 at 5:22 PM, dan <dandenson AT gmail DOT com> wrote:
>>
>> Are you running into the actual hard link limit or an inode limit? ext3 has
>> a hard-coded hard link limit, but hard links are also limited by available
>> inodes.  You can check your available inodes with
>>
>> tune2fs -l /dev/disk|grep -e "Free inodes" -e "Inode count"
>
>  df -i does the same, and you don't need to be root.
>
> Michael
>
>>
>> if you have very few or none left then this is your problem.  You can't
>> change the inode count on an existing ext3 filesystem as far as I know, but
>> if you re-create the filesystem you can do
>> mkfs.ext3 -N ##### /dev/disk
>> Change the ##### to suit your needs.  You should know the current number
>> from the tune2fs command above.  I would just take your current filesystem
>> usage (let's say 62% for the math) and take `current number` * 3 / .62,
>> so that you have enough inodes for today PLUS headroom for when
>> the disks are fuller.
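>>
>> A sketch of that arithmetic, reading the used inode count from df -i
>> (this interprets `current number` as the inodes in use today; the 0.62
>> and the x3 margin are just the example figures above):
>>
>> USED=$(df -i /backup | awk 'NR==2 {print $3}')
>> NEW=$(awk -v u="$USED" 'BEGIN { printf "%d\n", u * 3 / 0.62 }')
>> mkfs.ext3 -N "$NEW" /dev/disk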
>>
>>
>>
>> On Sat, Feb 27, 2010 at 6:12 AM, Johannes H. Jensen
>> <joh AT pseudoberries DOT com> wrote:
>>>
>>> Thank you for your input,
>>>
>>> On Sat, Feb 27, 2010 at 3:38 AM, dan <dandenson AT gmail DOT com> wrote:
>>> > if [ -e /var/lib/backuppc/testfile ];
>>> >    then rsync xxxx;
>>> >    else echo "uh oh!";
>>> > fi
>>> >
>>> > should make sure that the filesystem is mounted.
>>>
>>> Yes, that's definitely a good idea. However, it doesn't verify the
>>> integrity of the BackupPC pool itself. If only a small subset of the
>>> backup pool gets removed or corrupted, that would still get mirrored
>>> to the remote side. I would prefer some BackupPC-oriented way of doing
>>> this (maybe BackupPC_serverMesg status info?) if someone could provide
>>> me with the details.
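>>>
>>> Something like this is what I'm picturing as a pre-sync check (only a
>>> sketch; the paths, the file-count threshold and BackupPC_serverMesg
>>> being in PATH are all assumptions):
>>>
>>> #!/bin/sh
>>> POOL=/var/lib/backuppc
>>> # the pool filesystem must actually be mounted
>>> mountpoint -q "$POOL" || { echo "pool not mounted"; exit 1; }
>>> # the (compressed) pool should not look suspiciously empty
>>> COUNT=$(find "$POOL/cpool" -type f | head -n 1000 | wc -l)
>>> [ "$COUNT" -ge 1000 ] || { echo "pool looks too small"; exit 1; }
>>> # the daemon should answer a status query (run this before stopping it)
>>> su -s /bin/sh backuppc -c 'BackupPC_serverMesg status info' >/dev/null || exit 1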
>>>
>>> > you could also first do a dry run
>>> > rsync -avnH --delete /source /destination > /tmp/list
>>> > then identify what would be deleted:
>>> > grep '^deleting ' /tmp/list | sed 's|^deleting |/destination/|' > /tmp/delete-list
>>> >
>>> > now you have a list of everything that WOULD be deleted with the
>>> > --delete
>>> > option.  Run your normal sync and keep this file for later.
>>> >
>>> > You could then send this file list to the remote system
>>> >
>>> > scp /tmp/delete-list remotehost:/list-`date +%Y%m%d`
>>> >
>>> > on the remote system
>>> >
>>> > xargs -d '\n' rm -f < /list-*
>>> >
>>> > to delete the files in the list.  You could do this weekly or monthly or
>>> > whenever
>>> > you needed.
>>>
>>> That's a good idea. My original thought was to manually run the rsync
>>> with the --delete option once a week or so, but we've already run into
>>> filesystem (ext3) problems where we exceed the maximum number of hard
>>> links after a few days because we don't --delete... I guess we could
>>> use another filesystem with a higher limit instead...
>>>
>>>
>>> Best regards,
>>>
>>> Johannes H. Jensen
>>>
>>>
>>>
>>> > On Fri, Feb 26, 2010 at 6:27 AM, Johannes H. Jensen
>>> > <joh AT pseudoberries DOT com>
>>> > wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> We're currently syncing our local BackupPC pool to a remote server
>>> >> using rsync -aH /var/lib/backuppc/ remote:/backup/backuppc/
>>> >>
>>> >> This is executed from a script which takes care of stopping BackupPC
>>> >> while rsync is running, as well as logging and e-mail notification. The
>>> >> script runs nightly as a cron job.
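>>> >>
>>> >> Roughly, the script does something like this (a simplified sketch;
>>> >> the init script path, log file and mail address are placeholders):
>>> >>
>>> >> #!/bin/sh
>>> >> /etc/init.d/backuppc stop
>>> >> rsync -aH /var/lib/backuppc/ remote:/backup/backuppc/ > /var/log/pool-mirror.log 2>&1
>>> >> STATUS=$?
>>> >> /etc/init.d/backuppc start
>>> >> mail -s "pool mirror exit status: $STATUS" admin@example.com < /var/log/pool-mirror.log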
>>> >>
>>> >> This works fairly well, except it won't remove old backups from the
>>> >> remote server. Apart from using up unnecessary space, this has also
>>> >> caused problems like hitting the remote filesystem's hard link limit.
>>> >>
>>> >> Now I'm aware of rsync's --delete option, but I find it very risky.
>>> >> If for some reason the local backup server fails and
>>> >> /var/lib/backuppc/ is somehow empty (disk failure, etc.), then --delete
>>> >> would cause rsync to remove *all* of the mirrored files on the remote
>>> >> server. This kind of ruins the whole point of having a remote
>>> >> mirror...
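>>> >>
>>> >> (One partial safeguard, though it says nothing about pool integrity:
>>> >> rsync's --max-delete option caps how many files a run may delete; the
>>> >> 10000 below is an arbitrary threshold.)
>>> >>
>>> >> rsync -aH --delete --max-delete=10000 /var/lib/backuppc/ remote:/backup/backuppc/
>>> >> # rsync stops deleting once the limit is hit and exits with code 25,
>>> >> # so an accidentally empty local pool can't wipe the whole mirror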
>>> >>
>>> >> So my question is: how do I make sure that the local backup pool
>>> >> is sane and up-to-date without risking losing the entire remote pool?
>>> >>
>>> >> I have two ideas of which I'd love some input:
>>> >>
>>> >> 1. Perform some sanity check before running rsync to ensure that the
>>> >> local backuppc directory is indeed healthy. I'm unsure how this sanity
>>> >> check should be performed. Maybe check for the existence of some
>>> >> file, or examine the output of `BackupPC_serverMesg status info'?
>>> >>
>>> >> 2. Run another instance of BackupPC on the remote server, using the
>>> >> same pc and hosts configuration as the local server but with
>>> >> $Conf{BackupsDisable} = 2 in the global config. This instance should
>>> >> then keep the remote pool clean (with BackupPC_trashClean and
>>> >> BackupPC_nightly), or am I mistaken? Of course, this instance also has
>>> >> to be stopped while rsyncing from the local server.
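>>> >>
>>> >> (The relevant line in the remote instance's global config.pl would
>>> >> simply be:)
>>> >>
>>> >> # disable all backups on the remote BackupPC instance
>>> >> $Conf{BackupsDisable} = 2;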
>>> >>
>>> >> If someone could provide some more info on how this can be done
>>> >> safely, it would be greatly appreciated!
>>> >>
>>> >>
>>> >> Best regards,
>>> >>
>>> >> Johannes H. Jensen

------------------------------------------------------------------------------
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/