Subject: Re: [BackupPC-users] extremely long backup time
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Thu, 30 May 2013 22:52:06 +1000
On 30/05/13 21:53, Nicola Scattolin wrote:
> On 30/05/2013 12:56, Adam Goryachev wrote:
>> On 30/05/13 18:13, Nicola Scattolin wrote:
>>> On 30/05/2013 10:04, Adam Goryachev wrote:
>>>> On 30/05/13 16:57, Nicola Scattolin wrote:
>>>>> Hi, I have a problem with full backups of a 2TB disk. When
>>>>> BackupPC does a full backup it takes 1866.0 minutes on
>>>>> average, while an incremental backup takes around 20 minutes.
>>>>> Do you think there is something wrong, or is it just the
>>>>> amount of data to be backed up?
>>>> Most likely this is a limitation of bandwidth, CPU, or memory
>>>> on either the backuppc server or the machine being backed up.
>>>>
>>>> Have you enabled checksum-seed in your config? Are you even
>>>> using rsync?
>>>>
>>>> Remember a full backup will read the full content of every file
>>>> (talking about rsync because I will assume that is what you are
>>>> using) on both the client and backuppc server. An incremental
>>>> only looks at file attributes such as size and timestamp.
>>>>
>>>> Can you be more detailed about your configuration? Also,
>>>> during a full backup, look at memory utilisation on both the
>>>> backuppc server and the client.
>>>>
>>>> PS, this question is asked regularly, so you should also look
>>>> at the archives to see the previous discussions (which have
>>>> been very detailed, and sometimes heated).
>>>>
>>>> Regards, Adam
>>>>
>>> I use smb to transfer files, and there should not be a CPU or
>>> bandwidth limitation; it's a local server. Where is the
>>> checksum-seed option? I can't find it.
>>
>> OK, so this is even more obvious.
>>
>> An incremental will only look at the timestamp, and transfer all
>> files newer than the timestamp of the previous backup. A full
>> will transfer ALL files; it is therefore limited by disk I/O and
>> network bandwidth.
>>
>> 2TB of data will take about 335 minutes at 1Gbps (2,000,000MB at
>> roughly 100MB/sec of effective gigabit throughput is about
>> 20,000 seconds). That assumes you can read from the source disk
>> at 1Gbps, write to the destination disk at 1Gbps, and utilise
>> 100% of source/destination disk bandwidth as well as 100% of
>> network bandwidth, with nil overhead for handling each
>> individual filename/etc.
>>
>> You are getting just under 20MB/sec, which is probably not
>> unreasonable.
>>
>> As mentioned, if you want it faster, you will need to determine
>> where the bottleneck is, which means looking at disk IO (most
>> likely), network bandwidth, CPU (especially if you use compression
>> on the backuppc server), etc...
>>
>> Regards, Adam
>>
>>
> I have checked the disk usage and the I/O that BackupPC reports
> on the summary page, and 7.37MB/sec is the value I got. The
> server is virtualized, but the hard disk is attached directly to
> the virtual machine in a RAID mirror. Do you think that is a good
> speed, or could it be better?

Well, you failed to provide complete information in the original post. You said:

2TB disk, full backup takes on average 1866.0 minutes.
So, 2,000,000MB / (1866 * 60 secs) = 17.86MB/sec

From the above, it would sound like the 2TB disk is only about 40% full:
7.37MB/sec * 1866 mins * 60 secs/min = ~825GB of used space...

In any case, I would expect you could back up a full 2TB of data in much less than the 31 hours it is currently taking you to back up only 825GB. I would suggest you investigate where the bottleneck is.

Are the two machines on the same LAN? What speed?
Can the VM actually get decent disk performance? Don't just use dd; test random read speed as well.
What speed can you transfer files with smbclient between the backuppc server and this VM?
Actually look at, and provide information about, CPU utilisation on both the backuppc server and the VM.
The same goes for disk I/O, network bandwidth, and memory usage (see the sketch of commands below).
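
For example, something along these lines (a rough sketch only; the host, share, user and file names are placeholders for your own setup, and fio/sysstat may need to be installed first):

    # Random read test with fio (dd only shows sequential throughput)
    fio --name=randread --filename=/tmp/fio.test --size=1G \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio --runtime=60

    # Raw SMB transfer speed, bypassing BackupPC entirely
    # (smbclient prints an average rate after the get completes)
    smbclient //fileserver/share -U backupuser -c 'get somebigfile /dev/null'

    # Watch both machines while a full backup is running
    top            # CPU and memory, per process
    iostat -x 5    # per-disk utilisation and wait times (sysstat)
    vmstat 5       # overall memory pressure and I/O wait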

Consider changing the backup protocol to something more efficient. Maybe tar would be more efficient (or less), or perhaps rsync, which reduces network bandwidth at the cost of CPU, provides better backups, and puts less disk load on the backuppc server thanks to the checksum-seed option.
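
If you do try rsync: checksum-seed is not a separate config variable, it is an rsync argument you append in config.pl (or the per-host config). A minimal sketch for BackupPC 3.x, assuming the standard $Conf{RsyncArgs} defaults are already loaded:

    $Conf{XferMethod} = 'rsync';

    # Enable rsync checksum caching so later full backups don't
    # have to re-read every unchanged file on the backuppc server:
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';

Check the rsync checksum caching notes in the BackupPC documentation before turning this on.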

You will actually need to do a lot more work before any really useful comments/suggestions can be made. You should verify the achievable performance outside of backuppc first, to ensure you don't have a real problem somewhere else (e.g., in the virtualisation layer). Also, consider other loads on the same physical machine; in particular, if the disk is shared with other VMs, check what disk I/O they are doing.

Regards,
Adam


--
Adam Goryachev
Website Managers
www.websitemanagers.com.au

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/