On 5/25/2010 12:28 PM, Frank J. Gómez wrote:
>
> The last successful full backup of RED5 took just over 8 hours. 37,457
> files totaling just under 25 GB were backed up at a rate of 0.87
> MB/sec. RED5 is a laptop, and it's only on the network for about 8
> hours per day, so I have to be able to do better than this.
Is that the initial or second run with this batch of files? The third and
subsequent runs should be faster if you have enabled checksum caching.
Or is that a typical amount of change between backups?
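For reference, checksum caching is turned on in config.pl by adding
--checksum-seed=32761 to the rsync argument lists. A sketch of the
relevant entries (BackupPC 3.x style; check your version's defaults
before copying, since the other flags shown are just the usual ones):

```perl
# Enable rsync checksum caching so 3rd-and-later fulls can skip
# re-reading unchanged files. The seed value 32761 is the one the
# BackupPC docs call for.
$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive',
    '--checksum-seed=32761',
];
$Conf{RsyncRestoreArgs} = [
    # ... your existing restore flags, plus:
    '--checksum-seed=32761',
];
```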
> On to the hardware: The current BackupPC server is not a dedicated
> machine; I plan to change that with the new machine, so right away I
> should see some improvements in performance. The tower is running on a
> single hard drive, which is living a little more dangerously than I'd
> like. I want hard disk redundancy via RAID on the new machine, but I
> understand RAID5 is slow for writes and I don't know much about the
> different possible RAID configurations.
Considering how cheap disks are these days, I like simple RAID1 mirrors
where they are practical for the total size you need. Building
something new today, I'd be tempted to use laptop drives, except that the
ones over 640G use 4K sectors, which are a problem for Linux. There are
some nice 2-bay swappable enclosures that fit in the space of a floppy
drive.
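On Linux software RAID, setting up a two-disk mirror like that is a
one-liner with mdadm (device names and mount point here are
hypothetical; substitute your own partitions):

```shell
# Sketch: create a two-disk md RAID1 mirror for the pool
# (hypothetical /dev/sdb1 and /dev/sdc1).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0            # or your filesystem of choice
mount /dev/md0 /var/lib/backuppc
cat /proc/mdstat              # watch the initial resync progress
```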
> I think the hard drive is part
> of the bottleneck in the old system; the BackupPC pool lives on an
> external USB drive.
USB is as much the bottleneck as the drive. If you use external drives, use
eSATA, even if you have to add a card for it.
> I've read that for increased performance, it's
> recommended that the pool be on a separate disk from the operating
> system -- does this also apply for RAIDed systems? On BackupPC systems,
> where do the bottlenecks tend to be: hard drive, memory, processor, network?
Moving a disk head is orders of magnitude slower than any other computer
operation, and BackupPC does enough of it on its own that you don't want
anything else competing for the heads. If you use RAID, pick a level that
lets the heads work mostly independently on reads. More memory is good,
since it is used as disk cache and eliminates some seeks on reads.
You probably can't saturate a 100 Mbit network, but avoid wireless if
possible. There is some CPU use for compression and ssh encryption, but
anything reasonably current is fine.
> Lastly, am I wrongheaded in trying to solve this problem with BackupPC?
> Is there a better solution for a transient host with this much data to
> back up?
BackupPC's rsync will be slower than stock rsync because it is
implemented in Perl, doesn't track the latest protocol version, and
works against a compressed copy on the server. If speed of transfer
from one or a few machines is your main goal, you might provide server
space for a full uncompressed copy that you can rsync to directly -
then let BackupPC back that up to keep its history in a more efficient
format.
--
Les Mikesell
lesmikesell AT gmail DOT com
------------------------------------------------------------------------------
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/