Subject: Re: [BackupPC-users] Backing up many, many files over a medium speed link - kills CPU and fails halfway. Any help?
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Wed, 25 Nov 2009 12:24:52 +1100

GB wrote:
> Hi all,
>
> So here's my backup scenario. I have one host with about 1.5TB of files,
> split into two major directories (and into many subdirectories
> thereafter). There are about 370,000 files per "major" directory. I need
> to do full/incremental backups of this host. Under normal conditions, I
> can get about 1MB/sec from it, so for a 'full' backup, I have no problem
> running it for 2-3 days at a time. The problem is that the backups
> usually time out somewhere halfway. One of two things happens: either the
> server load hits 20-30+ on a 15min interval and we're forced to kill
> rsync (using v3.0.6 in rsyncd mode with --enable-checksum-seed set to
> 32k), or it'll just get some network error along the way.
>
> Not to mention that if the backup runs for too long, the two start
> running in parallel and that really hurts the server (I have the host
> virtual-aliased to two hostnames, so that I can successfully back up
> both directories separately - since rsyncd can't support two different
> directories per 'share').
>
> Can anybody suggest some intelligent solutions for doing this? How can I
> speed up rsync / set it to use less CPU? I searched for some solutions,
> but everyone talks about limiting rsync _transfer_ speed, which is
> hardly the issue here - I'm backing up over a cable modem, heh.
>
> Any help appreciated. Thanks!
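
One aside first: rsyncd itself can serve more than one module per host,
and as far as I know BackupPC's $Conf{RsyncShareName} will also take a
list of modules, so the dual-hostname trick shouldn't be needed. A
minimal rsyncd.conf sketch (the module names and paths here are made
up, adjust to suit):

    # /etc/rsyncd.conf -- two modules exported from the same host
    [data1]
        path = /srv/data1
        read only = yes

    [data2]
        path = /srv/data2
        read only = yes

with something like $Conf{RsyncShareName} = ['data1', 'data2']; on the
BackupPC side to back up both under the one host.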

Look at swap space usage on the BackupPC server as well as on your
BackupPC client. If either side runs out of RAM, you will see a massive
slowdown in performance.
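
A quick way to watch for that while a backup is running (standard
procps tools, run them on both the server and the client):

    free -m     # current RAM and swap usage
    vmstat 5    # watch the si/so columns; sustained non-zero
                # swap-in/swap-out means the box is thrashing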

BTW, I have a mail server with the following stats:
Files: approx 300,000
Size: 26GB
New files: 400-500MB/day
A full backup takes around 2 hours; the backup runs over an 8M DSL link
with --bwlimit=640.
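
(For reference, that cap is just a plain rsync option; the paths below
are placeholders, substitute your own:

    # cap the transfer at 640 KBytes/sec, leaving headroom on the link
    rsync -a --bwlimit=640 /var/mail/ backupserver:/backups/mail/

)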

BTW, did you mean you have 370,000 files in a single directory, or two
top-level directories with a number of sub-directories under each,
together holding 370,000 files? 370,000 files in a single directory
would be very inefficient, depending on your filesystem of choice...
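
If you want hard numbers, something like this (untested sketch,
/path/to/share is a placeholder) will show how the files are spread
across the top-level sub-directories:

    # file count per immediate sub-directory, biggest first
    for d in /path/to/share/*/ ; do
        printf '%s %s\n' "$(find "$d" -type f | wc -l)" "$d"
    done | sort -rn | head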

Regards,
Adam

--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
