Jeffrey J. Kosowsky wrote at about 20:26:35 -0400 on Thursday, October 30, 2008:
> John Rouillard wrote at about 20:13:15 +0000 on Thursday, October 30, 2008:
> > On Thu, Oct 30, 2008 at 10:04:26AM -0400, Jeffrey J. Kosowsky wrote:
> > > Holger Parplies wrote at about 11:29:49 +0100 on Thursday, October 30, 2008:
> > > > Hi,
> > > >
> > > > Jeffrey J. Kosowsky wrote on 2008-10-30 03:55:16 -0400 [[BackupPC-users] Duplicate files in pool with same CHECKSUM and same CONTENTS]:
> > > > > I have found a number of files in my pool that have the same
> > > > > checksum (other than a trailing _0 or _1) and also the SAME
> > > > > CONTENT. Each copy has a few links to it, by the way.
> > > > >
> > > > > Why is this happening?
> > > >
> > > > Presumably creating a link sometimes fails, so BackupPC copies
> > > > the file, assuming the hard link limit has been reached. I
> > > > suspect problems with your NFS server, though not a "stale NFS
> > > > file handle" in this case, since copying the file succeeds. Strange.
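
The fallback described above can be sketched in shell. This is a hypothetical illustration of the behavior only (BackupPC itself is written in Perl, and this is not its actual code); the file names are temporary placeholders:

```shell
# Hypothetical illustration of the link-or-copy fallback: try to
# hard-link the pool file into place; if ln fails (e.g. the
# filesystem's hard-link limit is reached, or NFS misbehaves), fall
# back to a full copy -- producing a second file with identical content.
tmp=$(mktemp -d)
echo "pool data" > "$tmp/pool_file"

if ! ln "$tmp/pool_file" "$tmp/new_file" 2>/dev/null; then
    cp -p "$tmp/pool_file" "$tmp/new_file"
fi

cmp -s "$tmp/pool_file" "$tmp/new_file" && echo "identical content"
rm -rf "$tmp"
```

If the `ln` failure is spurious (an NFS hiccup rather than a real link-limit), the copy path still succeeds, which would explain duplicate pool entries with identical content.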
> > >
> > > Yes - I am beginning to think that may be true. However, as I
> > > mentioned in the other thread, the syslog on the NFS server is
> > > clean, and the one on the client shows only about a dozen NFS
> > > timeouts over the past 12 hours, which is the time period I am
> > > looking at now. Otherwise, I don't see any NFS errors.
> > > So if it is an NFS problem, something seems to be happening
> > > somewhat randomly and invisibly to the filesystem.
> >
> > IIRC you are using the 'soft' NFS mount option, right? Writing to
> > an NFS share mounted 'soft' is not recommended. Try changing it to
> > a 'hard' mount and see if the problem goes away. I only use soft
> > mounts on read-only filesystems.
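
For anyone following along, a 'hard' mount could look like the fragment below. The server name and paths are placeholders, not from this thread:

```shell
# Example fstab entry for a 'hard' NFS mount (server and paths are
# placeholders): 'hard' makes the client retry indefinitely instead
# of returning an error after a timeout, which is safer for
# read-write shares such as a BackupPC pool.
#
#   server:/export/backuppc  /var/lib/backuppc  nfs  rw,hard,intr  0  0
#
# To apply the change to an already-mounted share, unmount and remount:
#   umount /var/lib/backuppc && mount /var/lib/backuppc
```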
>
> True -- I changed it to 'hard' but am still encountering the
> problem... FRUSTRATING...
>
> It's really weird in that it seems to work the first time a directory
> is read but after a directory has been read a few times, it starts
> messing up. It's almost like the results are being stored in cache and
> then the cache is corrupted.
In fact, I have found two ways to reliably make the directory readable
again (at least for a few minutes or tries, until it gets corrupted
again):
1. Remount the NFS share
2. Read the directory directly on the server (without NFS)
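
For reference, the duplicate pool entries from the original report can be confirmed with a byte-for-byte comparison. This is a sketch with placeholder files standing in for real cpool entries whose names differ only in the trailing _0/_1 suffix:

```shell
# Sketch: verify that two pool entries differing only in the _0/_1
# suffix really have identical content (placeholder files stand in
# for real cpool paths such as cpool/a/b/c/<hash>_0).
tmp=$(mktemp -d)
printf 'same data\n' > "$tmp/abc123_0"
printf 'same data\n' > "$tmp/abc123_1"

if cmp -s "$tmp/abc123_0" "$tmp/abc123_1"; then
    echo "duplicate content: abc123_0 and abc123_1"
fi
rm -rf "$tmp"
```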
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/