Subject: Re: [BackupPC-users] errors in cpool after e2fsck corrections
From: Matthias Meyer <matthias.meyer AT gmx DOT li>
To: backuppc-users AT lists.sourceforge DOT net
Date: Sun, 18 Jan 2009 19:03:07 +0100
Johan Ehnberg wrote:

> Matthias Meyer wrote:
>>> Matthias Meyer wrote:
>>>> Thanks for your sympathy :-)
>>>> I believe the filesystem should be OK by now. e2fsck needed to run
>>>> 3 or 4 times and took more than 2 days in total. Afterwards,
>>>> lost+found contained approximately 10% of my data :-( There is no
>>>> chance to reconstruct all of it.
>>>>
>>>> 1) So you would recommend:
>>>> mv /var/lib/backuppc/cpool /var/lib/backuppc/cpool.sav
>>>> mkdir /var/lib/backuppc/cpool
>>>> I believe the hardlinks
>>>> from /var/lib/backuppc/pc/<host>/<backup-number> would then point
>>>> into cpool.sav instead of cpool?
>>>> The disadvantage is that from now on every file has to be created in
>>>> the new cpool; none of the existing files (in cpool.sav) can be
>>>> reused. As old backups are deleted over the coming months, cpool.sav
>>>> should empty out and can then be removed.
>>>>
>>>> 2) I believe that every backed-up file is checked against the
>>>> cpool. If it is not identical, a new file is created in the cpool.
>>>> When old backups are deleted, old (possibly corrupt) files in the
>>>> cpool are deleted as well. So any corrupt files in the cpool should
>>>> disappear automatically over the coming months.
>>>>
>>>> Which strategy would you prefer?
>>>>
>>>> Thanks
>>> In 1) I was a bit vague: I meant moving all data (to be used only if
>>> needed, including cpool) and making fresh backups altogether. And
>>> exactly that will make it effortless for you - the new pool is clean.
>>>
>>> In 2) you are correct unless you are using checksum caching. To clean
>>> out unused files you need BackupPC_nightly, and to use that you want
>>> a clean pool.
>>>
>>> Go for 2) if there are few errors that you can correct yourself to keep
>>> BackupPC running smoothly with an unbroken line of backups.
>>>
>>> However, 10 GB sounds like you'll save time and trouble by allowing
>>> backuppc to make new backups - if you can afford the bandwidth. At the
>>> same time you won't have to worry about many factors that could go
>>> wrong.
>>>
>>> Regards,
>>> Johan
>>>
>> OK, I will give 2) a chance and will test it for at least one month.
>> 
>> Should I delete all directories in /var/lib/backuppc/cpool/?/?/?/* or
>> would BackupPC_nightly do this job?
>> Should I reactivate BackupPC_nightly?
>> 
>> Regards
>> Matthias
> 
> In 2) you should not delete anything - only when filesystem errors are
> causing trouble. You need the nightly.
> 
> Other than that - read the other posts too, they have good pointers to
> actually dealing with the problem behind all this as well as some ideas
> about how to get the pool in order! If your data is not critical you are
> of course at liberty to play around. In a production system I would
> assume a month's testing is not acceptable on loose grounds.
> 
> Good luck!
> 
> /johan

Fine. I will not delete anything, but will verify whether
BackupPC_nightly does the job.
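One way to check that (a sketch in a throwaway directory, since the real pool paths differ per install): unreferenced pool files are the ones whose hardlink count has dropped to 1, i.e. no backup under pc/ links them any more, and those are the files the nightly cleanup is supposed to remove.

```shell
# Illustrative only: a scratch directory stands in for the real cpool.
tmp=$(mktemp -d)
echo a > "$tmp/referenced"
ln "$tmp/referenced" "$tmp/backup-copy"   # a backup still links this pool file
echo b > "$tmp/orphan"                    # link count 1: nothing references it
# The link-count-1 files are the cleanup candidates:
find "$tmp" -type f -links 1              # prints only .../orphan
rm -rf "$tmp"
```

Running `find /var/lib/backuppc/cpool -type f -links 1 | wc -l` before and after a nightly run should show the count dropping if cleanup is working.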
The data are not really critical, so I will play around :-)
I will report the current state in the next few days.
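For the record, the hardlink worry from option 1) is harmless: hardlinks reference inodes, not paths, so after `mv cpool cpool.sav` the links under pc/<host>/ keep sharing the same data. A minimal sketch in a scratch directory (illustrative paths, not the real pool layout):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/cpool"
echo data > "$tmp/cpool/f"
mkdir -p "$tmp/pc/host"
ln "$tmp/cpool/f" "$tmp/pc/host/f"   # the backup-tree hardlink BackupPC creates
mv "$tmp/cpool" "$tmp/cpool.sav"     # rename the pool directory
stat -c %h "$tmp/pc/host/f"          # prints 2: the link survived the rename
rm -rf "$tmp"
```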
br
Matthias
-- 
Don't Panic


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/