Re: [BackupPC-users] Problems with hardlink-based backups...

From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Mon, 31 Aug 2009 17:59:20 -0400
Les Mikesell wrote at about 16:36:37 -0500 on Monday, August 31, 2009:
 > Jeffrey J. Kosowsky wrote:
 > > 
 > > I have seen problems where the attrib files are not synchronized with
 > > the backups or when the pc tree is broken. In fact, that is the reason
 > > I wrote several of my routines to identify and fix such problems. Now
 > > true, the cause is typically due to crashes or disk/filesystem issues
 > > outside of the direct scope of BackupPC but there are real-world
 > > synchronization and integrity issues that can arise.
 > 
 > But nothing you've proposed will make any difference in this respect. 
 > Well, maybe different, but nothing to enforce additional synchronization 
 > with the file content.

Except that it is easier to back up a database than to back up
thousands, if not millions, of scattered attrib files. Also, there are
well-known tools for checking database consistency, while you need to
write custom ones for attrib files.

But the main point is that I was never claiming this as a particular
advantage of databases, I was merely answering your question about
whether I had ever had problems with attrib files.
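To make the "custom tools" point concrete, here is a minimal sketch of
the kind of check attrib files force you to write yourself (this is not
one of my actual routines, just an illustration; it assumes BackupPC's
convention of prefixing mangled file names with "f" and storing one
attrib file per directory):

```python
import os

def find_missing_attribs(backup_root):
    """Walk a backup tree and flag directories that contain mangled
    files (names starting with 'f') but no attrib file."""
    missing = []
    for dirpath, _dirnames, filenames in os.walk(backup_root):
        has_entries = any(f.startswith("f") for f in filenames)
        if has_entries and "attrib" not in filenames:
            missing.append(dirpath)
    return missing
```

With a database you would instead run an integrity check that already
exists (e.g. sqlite's "PRAGMA integrity_check") rather than writing and
maintaining tree walkers like this.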

 > > Also, the slowness of reconstructing incrementals whether in the
 > > web-based view or in the fuser view is in a large part due to both the
 > > slow nature of attrib files and the fact that you have to crawl
 > > backwards through the backup tree to find which is the latest visible
 > > version of each file. Just think how many attrib files you need to
 > > find, open, read, decompress, parse, etc. just in order to see what
 > > files are visible and think how this scales when you have large
 > > directories and many levels of incrementals.
 > 
 > OK, but I almost never do that, and probably wouldn't even if it were 
 > faster.  And I don't use incremental levels because even without the 
 > attrib files you'd still have to do extra work to merge the directories.

OK. Then we have different use cases. For example, I like to use the fuser
implementation to look for old files or old versions of files.
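For readers unfamiliar with why the crawl is expensive: to display one
directory of an incremental, every attrib layer beneath it must be
found, opened, decompressed, and parsed, and the results merged with
newer entries overriding older ones. A simplified sketch of that merge
(the real attrib format is a compressed serialized hash, stubbed out
here as plain dicts):

```python
def merge_view(layers):
    """Merge one directory's entries across backup layers.

    layers: list of dicts {name: attrs}, oldest (full) first;
    a value of None marks a file deleted in that layer.
    """
    view = {}
    for layer in layers:
        for name, attrs in layer.items():
            if attrs is None:
                view.pop(name, None)   # deletion hides the older copy
            else:
                view[name] = attrs     # newer layer overrides older
    return view

full  = {"a.txt": {"size": 10}, "b.txt": {"size": 20}}
incr1 = {"b.txt": {"size": 25}, "c.txt": {"size": 5}}
incr2 = {"a.txt": None}  # a.txt deleted in the newest incremental

print(sorted(merge_view([full, incr1, incr2])))
```

The merge itself is cheap; the cost is that each of those layers is a
separate attrib file that has to be located and decompressed, per
directory, per level of incremental.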

 > 
 > > Indeed, there is no
 > > question in my mind that a single well-constructed relational database
 > > would be orders of magnitude faster here.
 > 
 > Until you go to get the data, which is kind of the point.

I can only tell you how slow and non-optimized the current
implementation is. Do you really believe that a relational database
wouldn't be significantly faster than the current approach of
finding, opening, reading, decompressing, and parsing multiple
layers of attrib files?

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
