Re: [BackupPC-users] Poor BackupPC Performance
2011-07-27 15:45:38
> Depending on how comfortable you are building your own packages,
> Fedora has 3.2.1 almost ready to go. We had to package two perl
> modules for the added FTP support.
>
> If you are willing to try them but don't want to build yourself I
> could probably build them for you.
>
> Thanks,
> Richard
Sure, I'll try BackupPC 3.2.1. I have not built packages myself before, so I
wouldn't mind trying your Fedora builds first to see whether the performance
will actually be good for me.
> What is the CPU utilization during the BackupPC backup? What is the
> network utilization? What is the disk utilization? Is the machine
> swapping? Is it doing something else?
The CPU utilization used to be very high until I disabled gzip compression last
week. That did not improve performance, only processor usage. The machine is not
swapping and even has free memory. The server is a clean CentOS install with
minimal packages, although the server did not perform any better on Debian. I
installed packages via 1) EPEL and 2) the Debian repository.
During backups the load average often rises to 2 or 3, and CPU utilisation is
still somewhat high, although much lower than before disabling gzip compression.
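For reference, this is the setting involved: compression is controlled by CompressLevel in BackupPC's config.pl (the path below is typical for RPM-based installs and may differ on other systems). Disabling gzip compression amounts to:

```perl
# In /etc/BackupPC/config.pl (path may vary by distribution):
$Conf{CompressLevel} = 0;   # 0 disables compression; 1-9 trade CPU time for pool space
```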
> Start with "vmstat 1" on the BackupPC box and watch it. See what the
> usage pattern looks like. Try to find out what is limiting your
> performance.
I will read up on vmstat to see whether the problem is I/O-related or something
else. Thanks for the tip.
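A rough sketch of what to look for in that output (the column positions assume the stock procps vmstat layout, and flag_iowait is a hypothetical helper name, not a standard tool):

```shell
# Key vmstat columns when hunting a backup bottleneck:
#   wa (16th column) - % CPU time waiting on I/O; consistently high => disk-bound
#   si/so            - swap-in/out pages; nonzero => memory pressure
#   b                - processes blocked in uninterruptible I/O
# Flag one-second samples whose iowait exceeds a threshold:
flag_iowait() { awk -v max="$1" 'NR > 2 && $16 > max { print "high iowait: " $16 "%" }'; }
# Real use during a backup would be:  vmstat 1 | flag_iowait 20
# Demonstration on a canned sample line (us sy id wa = 10 5 60 25):
printf 'procs\nheader\n 1 2 0 4096 128 256 0 0 50 900 300 400 10 5 60 25 0\n' | flag_iowait 20
```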
> We've been trying to tell you that you're seeing abnormal performance. I
> get much better performance with a single 1.5GHz VIA processor, 512MB RAM
> and a single SATA spindle. Something is wrong here. We are not going to
> be able to tell you what: you will have to dig a little deeper and see
> what your machine is doing during a backup.
Yes, I seem to be poor at troubleshooting bad performance with BackupPC.
> Lots of tiny files, especially combined with a shortage of RAM (rsync
> transfers
> the entire directory listing and holds it in memory before starting the
> transfer).
>
> Huge sparse files (but not many other things handle them well either).
>
> Running in a VM. Worst case is probably a VM with an LVM on a virtual disk
> with sparse allocation (growing as needed).
>
> Anything else with activity on the same physical disk competing for head
> position.
The server certainly has enough RAM. There are a lot of tiny files, but only
because BackupPC is configured to back up all files/directories starting from
/. A fresh CentOS/Debian install already has around 30,000 files; the most
crowded server has 200,000.
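A quick way to gauge those counts, and thus the size of the file list rsync must build and hold in memory, is a plain find (the TARGET path here is an illustrative stand-in; on the real servers it would be /):

```shell
# Count entries on a single filesystem; -xdev stops find from descending into
# other mounted filesystems, so e.g. a mounted backup volume is not included.
TARGET=/etc
find "$TARGET" -xdev 2>/dev/null | wc -l
```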
Thanks for all your replies. I am purchasing two new servers next month and plan
to install BackupPC 3.2.1 on one of them and benchmark backing up a fresh Linux
install.
------------------------------------------------------------------------------
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/