BackupPC-users

Re: [BackupPC-users] Problems with hardlink-based backups...

Subject: Re: [BackupPC-users] Problems with hardlink-based backups...
From: Peter Walter <pwalter AT itlsys DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Mon, 31 Aug 2009 17:14:20 -0400
Les Mikesell wrote:
> Peter Walter wrote:
>   
>> Les Mikesell wrote:
>>     
>>> Peter Walter wrote:
>>>
>>>> For me, the matter could be resolved if a 
>>>> way was found to at least backup a backuppc server in a reasonable 
>>>> fashion without requiring particular filesystems and utilities such as 
>>>> zfs send/receive.
>>>>
>>> But there is a reasonable way: unmount the partition and image-copy the 
>>> raw disk or partition.  Given that the issue with other approaches is 
>>> that the head has to seek all over the place to access the same amount 
>>> of data through the filesystem, this solves the problem neatly with one 
>>> linear pass.  Or, get the same effect by raid-mirroring to your backup 
>>> device so you only have to unmount momentarily to fail/remove the other 
>>> copy.   Zfs improves on this since it has an incremental mode that is 
>>> still based on the block device.
>>>
>> Yeah - but you need physical access to do that, and it presumes your 
>> storage devices are "real". My needs are to back up a backuppc server 
>> where the server doing the backup is at a remote location from the 
>> backuppc server, and physical access to either server is difficult - I 
>> am dozens (sometimes hundreds) of miles away from either server. In 
>> addition, I have access to "cloud storage" I would like to take 
>> advantage of, but can't because of the hardlink issue. My (klugey) 
>> solution at present is to use a backuppc server to back up the backuppc 
>> server, but even incrementals take days to run.
>>     
>
> You can always trade bandwidth for access.  If you have sufficient 
> bandwidth you can do an image copy anywhere you want - even to a huge 
> file in cloud storage.  If you don't have the bandwidth - or enough 
> space for that image file, then you need to use something specialized to 
> work around your problem, like zfs incremental send/receive.  But, if 
> your backup window permits 2 runs, the easy solution is to just run a 
> 2nd backuppc server hitting the same targets with rsync and forget about 
> copying the server itself.
>
Terabyte image copies between servers are not feasible with the WAN 
bandwidth I have available. The second backup server does not (and 
cannot) back up the original targets directly - the second backup server 
may only access the primary backup servers remotely, not the targets 
that the primary backup servers access. Using zfs is not an option 
because I don't control the configuration of the primary backup server, 
except that I am allowed to configure backuppc on it.

I am therefore restricted to copying the primary backup server itself. 
The intent is not to recover the targets directly - the aim is to 
recover the primary backup server and, from there, recover the targets. 
If I had a method of backing up just the changed files on the backup 
server, and a method of dumping the hardlinks in such a way that they 
could be reconstituted later, that would suffice.
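
To illustrate the idea (this is a hypothetical sketch, not a BackupPC 
tool): the dump only needs to group paths by inode, because that is 
enough information to rebuild the links on the destination after a 
file-level transfer has turned them into independent copies.

```python
#!/usr/bin/env python3
# Hypothetical sketch: record hardlink groups in a tree so they can be
# re-created after a file-level copy (which breaks hardlinks).
import os

def dump_hardlinks(root):
    """Group paths under root by (device, inode) for multiply-linked files."""
    groups = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_nlink > 1:
                groups.setdefault((st.st_dev, st.st_ino), []).append(path)
    # keep only groups with more than one link inside this tree
    return [sorted(paths) for paths in groups.values() if len(paths) > 1]

def restore_hardlinks(groups):
    """Re-link each group: keep the first path, hardlink the rest to it."""
    for paths in groups:
        first = paths[0]
        for dup in paths[1:]:
            if os.path.lexists(dup):
                os.remove(dup)  # drop the independent copy made by the transfer
            os.link(first, dup)
```

On a real backuppc pool this list would be enormous, so in practice it 
would have to be streamed to a file rather than held in memory - but it 
shows that an inode-keyed dump is all that "reconstituting the 
hardlinks later" would require.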

Peter

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
