Re: [Bacula-users] Network performance
2010-08-11 10:52:46
Thomas Mueller wrote:
> On Tue, 10 Aug 2010 15:13:07 +0100, Hugo Silva wrote:
>
>> Hello,
>>
>> I'm backing up a server in Germany from a director in The Netherlands.
>> Using Bacula, I can't seem to get past ~3000KB/s.
>>
>> Here's an iperf result:
>> [  3] local [fd-addr] port 16625 connected with [dir-addr] port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.1 sec  110 MBytes   91.2 Mbits/sec
>
> You speak of a server in Germany and a director in the Netherlands. The
> SD is also on the director machine, and the FD sends data to the SD
> directly - it could also be a routing issue.
>
> And, as mentioned in many other threads: backing up a filesystem with
> thousands or millions of files can't be compared to a sequential read
> with dd.
>
> And: did you run the btape tests on the SD to check the performance?
>
>
> - Thomas
Hi,

Thank you for your input.

The SD is indeed on the director machine. I don't think it's a routing
issue - the iperf test above was run between these two machines, with
excellent results.
I'm using disk storage, so btape doesn't help here:

btape: btape.c:302 btape only works with tape storage.

I'm aware that a dd test vs. many small files isn't comparable - but at
least it rules out the SD storage as the bottleneck (see the sketch
below, and the test further down).
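(A plain dd against the SD's volume directory is a reasonable substitute
for btape on disk storage - something like this, where /bacula/volumes
is a placeholder for wherever the File device points:)

# on the SD machine; /dev/zero is fine as long as the
# filesystem isn't compressing the writes away
dd if=/dev/zero of=/bacula/volumes/ddtest bs=128k count=8192
rm /bacula/volumes/ddtest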
What I'd like to know is whether there are known ways to speed up the
backup process when it runs over the internet. This is my first Bacula
configuration backing up FDs in remote countries.
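(I'm aware of FD-side software compression, which trades client CPU for
bytes on the wire - something like the Options line below, where the
GZIP level is just an example - but I've deliberately left it off in the
test that follows, to isolate the raw network path.)

Options {
  compression = GZIP6   # GZIP1 (fastest) .. GZIP9 (smallest) also valid
  ...
}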
Consider the following:
# zfs create storage/test
# zfs set mountpoint=/test storage/test
# zfs set compression=off storage/test
# dd if=/dev/urandom of=/test/testfile bs=128k count=4096
4096+0 records in
4096+0 records out
536870912 bytes transferred in 7.020243 secs (76474691 bytes/sec)
Now, at the director, I create a FileSet backing up this one file. To
give Bacula every possible advantage, I first pull the file into the OS
cache on the FD:
# dd if=/test/testfile of=/dev/null bs=128k
4222+0 records in
4222+0 records out
553385984 bytes transferred in 2.910288 secs (190148180 bytes/sec)
And finally, the backup job, using this FileSet:
FileSet {
  Name = "TestFileSet"
  Include {
    Options {
      #Compression=gzip
      Signature=SHA1
      Onefs=yes
      Honor nodump flag=yes
      Noatime=yes
    }
    File = /test/testfile
  }
}
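(The job itself is nothing special - roughly this shape, with
placeholder names for the client, storage and pool:)

Job {
  Name = "TestJob"
  Type = Backup
  Level = Full
  Client = remote-fd        # the FD in Germany
  FileSet = "TestFileSet"
  Storage = File            # disk storage on the director/SD machine
  Pool = Default
  Messages = Standard
}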
Notice the read rate on that second dd. At this point, consider that:

* An iperf test used the link at ~93%.
* The SD's disk can write at least 70MB/s.
* The FD's disk (ok, the ZFS cache) can read at least 180MB/s.

It follows, I believe, that this test should show transfer rates close
to 100Mbit/s. This is one big file, and the disk is easily capable of
sustaining the ~12.5MB/s sequential read needed to saturate the link
(far more, as demonstrated above).
However:

            Traffic        Peak           Total
em0  in     4.863MB/s      4.863MB/s      16.461GB
     out    137.977KB/s    137.977KB/s    495.591MB
To the three points made above, consider that:

* Bacula is using the network link at only ~38.4% during this test
  (4.863MB/s * 8 is roughly 39Mbit/s on a ~100Mbit link).

I had to remove my Maximum Network Buffer Size override in the
meantime - coincidence or not, the director started throwing "unknown
errors" while connecting to storage - so this test runs with the default
buffer sizes. That shouldn't be a problem: iperf reached 91-93% of the
maximum link speed with default buffer sizes too.
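(For reference, the directive in question goes in the FileDaemon
resource of bacula-fd.conf and the Storage resource of bacula-sd.conf;
the value below is only an illustration, not the one I had:)

# bacula-fd.conf
FileDaemon {
  ...
  Maximum Network Buffer Size = 524288
}

# bacula-sd.conf
Storage {
  ...
  Maximum Network Buffer Size = 524288
}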
This test:

* Uses TLS encryption [encrypted comms]
* Uses PKI encryption [encrypted backup data]
* Does not use compression

I don't think TLS/PKI is the cause - there is plenty of idle CPU while
the job runs - but I could investigate that further.
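(The obvious next experiment would be rerunning the job with data
encryption switched off on the FD - roughly the change below, with a
placeholder keypair path - and comparing the rates:)

FileDaemon {
  ...
  PKI Signatures = yes   # keep signing the data
  PKI Encryption = no    # disable data encryption for one test run
  PKI Keypair = "/etc/bacula/fd.pem"
}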
Not sure what to try next. Any suggestions?
Thanks for reading.
Hugo