I want a backup that lets me get the server back up and running within a few
minutes, plus the download time of the image and the restore time from
partimage.
It is OK to lose the files created since last night's backup run; I see this
more as a "life insurance". The documents are already backed up with daily and
weekly backup sets via BackupPC.
Anyway, I think I should keep at least one additional copy of the image
backup: if the latest backup was not created correctly, I can use the image
created the night before. Losing 50 GB of backup space is fine with me if it
lets me sleep better.
Your suggestion of rsyncing from the snapshot has its benefits: I can mount
the NTFS partition (aka the C:\ drive) of the Windows Server 2008 machine and
do the backup from the mountpoint. This way I would save a lot of space and
get an incremental backup set reaching, say, two weeks into the past almost
for free.
But I asked here on the list a few weeks ago whether it is reliable to do a
backup this way without losing file permissions, shadow copies, file
attributes or anything like that. I have already done this with Linux servers
and know that it works perfectly well there (OK, I need to install GRUB
manually after the files are restored), but I'm quite unsure about Windows. I
have not yet tried to create a new device, format it with mkfs.ntfs, sync the
files back and boot from it. Nobody has told me that they have ever tried
this successfully, or that it should work at all, and I have come to know
Windows boot drives as very "fragile".
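The untested restore sequence I have in mind would look roughly like this
(a sketch only: device name, mountpoint and backup path are made-up examples,
and a dry-run guard prints the commands instead of executing anything
destructive):

```shell
run=echo                 # dry-run guard: prints commands; remove "echo" to really run them
DEV=/dev/vdb1            # example target partition for the new system volume
MNT=/mnt/restore         # example mountpoint

$run mkfs.ntfs -Q -L SYSTEM "$DEV"
$run mount -t ntfs-3g "$DEV" "$MNT"
# Whether -A/-X actually map ACLs and attributes onto NTFS metadata through
# ntfs-3g is exactly the open question here:
$run rsync -aHAX --numeric-ids /backups/win2008/latest/ "$MNT/"
```

The boot sector and BCD are not ordinary files, so rsync cannot bring them
back; presumably that is the "fragile" part, which would still need fixing
from a Windows recovery environment (e.g. bootrec /fixmbr, /fixboot).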
But I can't believe that I'm the only one who needs to back up virtual
Windows machines over the network and is not willing to pay 900 bucks per
server for an Acronis True Image license! The only real difference there
would be that Acronis can store incremental diffs on top of an already
created backup, but after a week or so I would need to do a full backup there
too. The performance and space efficiency of Acronis are better than with
partimage, but not "I would spend over 1000 EUR for that" better...
I'm already happy if I can get rsync to make differential transfers of my
image files, no matter whether I waste several gigs of space...
Andreas
On 12 May 2012 at 15:28, Tim Fletcher wrote:
> On 12/05/12 11:57, Andreas Piening wrote:
>> Hi Les,
>>
>> I already thought about that, and I agree that the handling of large image
>> files is problematic in general. I need to make images of the Windows-based
>> virtual machines to get them back running when a disaster happens. If I move
>> away from BackupPC for transferring these images, I don't see any benefits
>> (maybe because I just don't know of an imaging solution that solves my
>> problems better).
>> As I already use BackupPC to back up the data partitions (all Linux based),
>> I don't want my backups to become more complex than necessary.
>> I can live with the amount of hard disk space the compressed images will
>> consume, and the I/O while merging the files is acceptable for me, too.
>> I can tell the imaging software (partimage) to cut the image into 2 GB
>> volumes, but I doubt that this enables effective pooling, since the system
>> volume I make the image from stores temporary files, profiles, databases
>> and so on. If every image file has changes (even if only a few megs are
>> altered), I expect the rsync algorithm to be less effective than when
>> comparing large files, where a long "unchanged" stretch uninterrupted by
>> the artificial file-size boundaries from the 2 GB volume splitting is more
>> likely.
>>
>> I hope I made my situation clear.
>> If anyone has experience in handling large image files that I might benefit
>> from, please let me know!
>
> The real question is what you are trying to do: do you want a backup (i.e.
> another single copy of a recent version of the image file) or an archive
> (i.e. a series of daily or weekly snapshots of the images as they change)?
>
> BackupPC is designed to produce archives, mainly of small to medium sized
> files, and it stores the full file rather than changes (aka deltas), so for
> large files (multi-gigabyte in your case) that change with each backup it is
> much less efficient.
>
> To my mind, if you already have BackupPC backing up your data partitions and
> the issue is that you want to back up the raw disk images of your virtual
> machines' OS disks, the best thing is to snapshot them as you have already
> set up and then simply rsync that snapshot to another host, which will
> transfer only the deltas between the disk images. This will leave you with
> BackupPC providing an ongoing archive of your data partitions and a simple
> rsync backup of your root disks that will at worst mean you lose a day's
> changes in case of a total failure.
>
> --
> Tim Fletcher <tim AT night-shade.org DOT uk>
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/