>
> Details please! Is the client Windows or Linux? What backup method are
> you using (rsync, smb, etc)? How are the client and server connected
> (LAN, vpn, ssh tunnel)?
>
> This is not normal. My 300GB backup only takes 15 hours. Something is
> slowing down the transfer.
>
> --
> Bowie
>
Linux client, Debian 6.0.1, using rsync.
Connected via SSH; both servers are on the same 1 Gbit switch.
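One way to narrow this down is to measure raw sequential write speed on the pool filesystem, independent of rsync and SSH (a sketch; the pool path and test size are assumptions, adjust as needed):

```shell
#!/bin/sh
# Rough sequential-write benchmark for the BackupPC pool disk.
# POOL is an assumption (the Debian default); point it at any writable dir.
POOL=${POOL:-/var/lib/backuppc}
# conv=fdatasync forces the data to disk before dd reports throughput
dd if=/dev/zero of="$POOL/ddtest.tmp" bs=1M count=256 conv=fdatasync 2>&1
rm -f "$POOL/ddtest.tmp"
```

If this reports throughput far below what the disk should manage, the bottleneck is the VM/disk layer rather than rsync or the network.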
root@backuppc:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 1.4G 17G 8% /
tmpfs 502M 0 502M 0% /lib/init/rw
udev 497M 112K 497M 1% /dev
tmpfs 502M 0 502M 0% /dev/shm
/dev/mapper/vg0-lvol0
197G 1.1G 194G 1% /var/lib/backuppc
root@backuppc:~# date
Thu Jun 30 23:35:52 CEST 2011
root@backuppc:~#
The backup is still running...
root@artemis:~# ps aux | grep rsyn
root 1674 8.8 1.4 19384 7240 ? Ss 19:00 24:23 /usr/bin/rsync
--server --sender --numeric-ids --perms --owner --group -D --links --hard-links
--times --block-size=2048 --recursive --ignore-times . /
root 4726 0.0 0.1 7548 852 pts/2 S+ 23:36 0:00 grep rsyn
root@artemis:~# date
Thu Jun 30 23:36:39 CEST 2011
I don't understand how the backup can be this slow while CPU usage is this high:
1800 backuppc 20 0 90320 33m 1308 R 78.9(% CPU) 3.4 178:52.82
BackupPC_dump
On Thu, Jun 30, 2011 at 03:39:37PM -0500, Les Mikesell wrote:
>
> Running in a VM imposes a lot of overhead. Running LVM on top of a file
> based disk image pretty much guarantees that your disk block writes
> won't be aligned with the physical disk which makes things much slower.
> Can you at least give the vm a real partition if that isn't one
> already? And you definitely need to be sure you aren't sharing that
> physical disk with anything else. More ram would probably help just by
> providing more filesystem buffering even if you don't see it being used
> otherwise. You can turn off compression, but unless CPU capacity is the
> problem it won't help and might make things worse due to more physical
> disk activity.
>
I did not use LVM before repartitioning my backup disk, and the speed was the same.
People told me to use LVM, so I did. I will try turning off compression and see
how that affects performance.
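For anyone following along: compression in BackupPC is controlled by CompressLevel in config.pl (a sketch; the path is the Debian default, adjust for your install):

```perl
# /etc/backuppc/config.pl (Debian default location -- adjust if needed)
# 0 disables compression entirely; 1-9 trade CPU time for pool space.
$Conf{CompressLevel} = 0;
```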
> Backuppc will never be as fast as other systems, but the main situations
> where the difference should be big are where you have a huge number of
> small files (enough that the copy of the directory that is transferred
> first pushes into swap) or when copying huge files with differences
> where the server has to uncompress the existing copy and reconstruct it.
>
> After you have completed 2 fulls, you may see a speed increase on
> unchanged files if you are using the --checksum-seed option.
>
Yes, I am aware that speed should improve once the first full backup has
completed, because incremental backups only include about 5% of the files.
Why would BackupPC never be as fast as other systems? Is it because of the
deduplication?
I am using a fairly standard BackupPC configuration
(http://www.howtoforge.com/linux_backuppc) and really hope one of you can help
me figure out why performance is so poor and how to improve it.
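Regarding the --checksum-seed option Les mentions: in BackupPC 3.x it is enabled by appending it to the rsync argument lists in config.pl (a sketch; 32761 is the fixed seed value the BackupPC documentation uses for checksum caching — verify against your version):

```perl
# /etc/backuppc/config.pl -- append to the existing rsync argument arrays
push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';
```

The cached checksums only start paying off after the second full backup, which matches Les's note above.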
--
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/