Re: [Bacula-users] Slow restore performance
2008-08-21 08:08:42
Dear Christoph
Yes, the interleaving of concurrent jobs slows down restore performance.
I use job migration for my full backups from disc to tape, and the
migration removes the interleaving. If the disc volume is 40 GB and it
contains a small 1 GB full backup, the migration takes about 40 minutes;
a 20 GB backup in the same volume also takes about 40 minutes. The tape
unit's maximum transfer rate is about 50 to 60 GB/hour, almost matching
the discs.
The great benefit is that restores are quick. I leave incremental
backups on disc; the discs are 1 TB FireWire 400.
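For reference, a disc-to-tape migration setup like the one I describe can
be sketched roughly as follows (Bacula 2.x syntax; all resource names here
are illustrative assumptions, not my actual configuration):

# Hedged sketch: migrate every volume of a disc pool to tape.
Job {
  Name = "MigrateFullsToTape"
  Type = Migrate
  Client = some-client-fd        # required by the Job resource; illustrative
  FileSet = "Full Set"
  Messages = Standard
  Pool = FullDiskPool            # source (disc) pool
  Selection Type = Volume
  Selection Pattern = ".*"       # match all volumes in the pool
}

Pool {
  Name = FullDiskPool
  Pool Type = Backup
  Storage = FileStorage          # disc storage where the volumes live
  Next Pool = FullTapePool       # migration target (tape pool)
}

Because the migration job reads a whole volume and writes each job out in
one stream, the copy that lands on tape is no longer interleaved.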
Regards
Stephen Carr
Christoph Litauer wrote:
> Christoph Litauer schrieb:
>> Dear bacula users,
>>
>> I am running Bacula 2.4.2 on a Linux box. My backups are written to
>> disk1, a RAID that can read/write at about 80 MB/s.
>>
>> I just did a 'restore all' to another locally connected disk (disk2,
>> also about 80 MB/s read/write). While the restore was running I
>> watched the throughput with iostat: disk1 was reading at about 70-80
>> MB/s, but disk2 was only writing at 5-6 MB/s.
>> It seems as if Bacula does not read just the essential data from the
>> backup device; instead it reads nearly all the backup data and then
>> writes out only the data that should be restored?
>>
>> My disk device is configured
>>
>> #
>> # File Storage
>> #
>> Device {
>>   Name = FileStorage
>>   Device Type = File
>>   Media Type = File
>>   Archive Device = /storage
>>   LabelMedia = yes;
>>   Random Access = Yes;
>>   AutomaticMount = yes;
>>   RemovableMedia = no;
>>   AlwaysOpen = no;
>>   Block Positioning = yes;
>>   Maximum Volume Size = 10737418240  # 10 GB
>> }
>>
>> I think a restore rate of 5-6 MB/s is rather slow and I'm sure Bacula
>> could do better. Is there a configuration mistake?
>>
>
> OK, I'll try to answer myself: as far as I read the documentation, the
> performance gap between reading and writing comes from Bacula not
> storing the exact position of each file. Only the job's position is
> stored in the database, so restore jobs have to read through the whole
> job's data.
>
> As a conclusion, if I understand this correctly: interleaving
> (concurrent writing of up to 20 clients) causes a significant slowdown
> while restoring? And would spooling help? Does spooling unscramble the
> interleaved data?
>
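On the spooling question: data spooling is switched on per job with the
Spool Data directive. A minimal sketch (resource names here are
assumptions for illustration):

Job {
  Name = "backup-client1"        # illustrative name
  Type = Backup
  Client = client1-fd
  FileSet = "Full Set"
  Pool = FullTapePool
  Messages = Standard
  Spool Data = yes               # stage data on disc, then despool to tape
}

Each job despools to tape as one contiguous stream, so data written with
spooling enabled is not interleaved on tape; volumes already written
without spooling stay interleaved, though.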
--
Stephen Carr
Computing Officer
School of Civil and Environmental Engineering
The University of Adelaide
Tel +618-8303-4313
Fax +618-8303-4359
Email sgcarr AT civeng.adelaide.edu DOT au
CRICOS Provider Number 00123M