Subject: Re: [BackupPC-users] extremely long backup time
From: "Phil K." <phillip.kennedy AT yankeeairmuseum DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Thu, 30 May 2013 08:10:55 -0400
Just to take things in a different direction:

What do your transfer logs say? Is this an OS disk, or is it strictly data? If you're seeing strings of errors when reading files (crypto and AV related files are notorious for this), you may want to adjust your include/exclude files. This will improve read time and, in turn, transfer times.
~Phil
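
For reference, a minimal sketch of what a per-share exclude list can look
like in BackupPC's config.pl, assuming an smb transfer; the share name and
paths below are purely illustrative:

    # Keys are share names; values are lists of share-relative paths.
    # These example paths are hypothetical -- adjust to what is actually
    # on the share being backed up.
    $Conf{BackupFilesExclude} = {
        'C$' => [
            '/pagefile.sys',
            '/hiberfil.sys',
            '/Windows/Temp',
        ],
    };

If I remember the docs correctly, with smb only one of
$Conf{BackupFilesExclude} and $Conf{BackupFilesOnly} is honoured for a
given share.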

Nicola Scattolin <nick AT ser-tec DOT org> wrote:
On 30/05/2013 12:56, Adam Goryachev wrote:
On 30/05/13 18:13, Nicola Scattolin wrote:
On 30/05/2013 10:04, Adam Goryachev wrote:
On 30/05/13 16:57, Nicola Scattolin wrote:
Hi,
I have a problem with full backups of a 2TB disk.
When BackupPC does a full backup it takes 1866.0 minutes on average, while
an incremental backup takes around 20 minutes.
Do you think there is something wrong, or is it just down to the amount of
data being backed up?
Most likely this is a limitation of bandwidth, CPU, or memory on either
the backuppc server, or the machine being backed up.

Have you enabled checksum-seed in your config?
Are you even using rsync?

Remember, a full backup will read the full content of every file (talking
about rsync because I will assume that is what you are using) on both
the client and backuppc server. An incremental only looks at file
attributes such as size and timestamp.
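
Only relevant if rsync is the transfer method (it does nothing over smb):
the checksum-seed option mentioned above lives in config.pl. A minimal
sketch, assuming BackupPC 3.x and that $Conf{RsyncArgs} and
$Conf{RsyncRestoreArgs} already hold the stock defaults:

    # Append the checksum-seed flag so block/file checksums are cached
    # after the first couple of full backups, speeding up later fulls.
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';

    # Occasionally re-verify a small fraction of cached checksums
    # (0.01 is the documented default).
    $Conf{RsyncCsumCacheVerifyProb} = 0.01;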

Can you be more detailed about your configuration, and during a full
backup look at memory utilisation on both the backuppc server and the client?

PS, this question is asked regularly, so you should also look at the
archives to see the previous discussions (which have been very detailed,
and sometimes heated).

Regards,
Adam

I use smb to transfer files, and there should not be any CPU or bandwidth
limitation; it's a local server.
Where is the checksum-seed option? I can't find it.

OK, so this is even more obvious.

An incremental will only look at the timestamp, and transfer all files
newer than the timestamp of the previous backup.
A full will transfer ALL files, so it is limited by disk I/O and network
bandwidth.

2TB of data will take about 335 minutes at 1Gbps (assuming you can read from
the source disk at 1Gbps or better, write to the destination disk at
1Gbps, utilise 100% of source/destination disk bandwidth as well as
100% of network bandwidth, and there is no overhead for handling each
individual filename, etc.).

You are getting just under 20MB/sec, which is probably not unreasonable.
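
For reference, a rough sanity check of those figures, assuming 2 TiB of data
and roughly 100 MB/sec of usable gigabit throughput:

    use strict;
    use warnings;

    my $size_mb   = 2 * 1024 * 1024;   # 2 TiB expressed in MB
    my $best_rate = 100;               # assumed usable MB/sec on gigabit
    my $full_mins = 1866;              # reported full-backup duration

    # Best case: limited only by the assumed network rate.
    printf "best case: %.0f minutes\n", $size_mb / $best_rate / 60;    # ~350

    # Observed: effective throughput implied by the 1866-minute full.
    printf "observed:  %.1f MB/sec\n", $size_mb / ($full_mins * 60);   # ~18.7

which lands in the same ballpark as the 335-minute estimate and the
"just under 20MB/sec" figure above.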

As mentioned, if you want it faster, you will need to determine where
the bottleneck is, which means looking at disk IO (most likely), network
bandwidth, CPU (especially if you use compression on the backuppc
server), etc...

Regards,
Adam


I have checked the disk usage and the I/O figure that BackupPC reports on
the summary page, and 7.37 MB/sec is the value I got.
The server is virtualized, but the hard disk is attached directly to the
virtual machine in a mirrored RAID. Do you think that is a good speed, or
could it be better?

--
Phil Kennedy
Yankee Air Museum
Systems Admin
Phillip.kennedy AT yankeeairmuseum DOT org

Sent from my Android phone with K-9 Mail.
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/