Subject: Re: [Bacula-users] Bacula Linux and the LTO-4 tape speed
From: Alan Brown <ajb2 AT mssl.ucl.ac DOT uk>
To: lst_hoe02 AT kwsoft DOT de
Date: Tue, 21 Aug 2012 18:06:51 +0100
On 21/08/12 17:46, lst_hoe02 AT kwsoft DOT de wrote:

>> That means there's some room for improvement in despooling speed,
>> but the big bottleneck at the moment is disk->fd and fd->sd, not
>> sd->tape - the best achieved there is a sustained 52MB/s, and that
>> virtually maxes out a 1Gb/s NIC. Even if the network block sizing is
>> optimized, I need to look at 10Gb NICs and at simultaneous
>> spooling/despooling.
>
> We plan to tackle this one with parallel running jobs doing spooling to
> SSD and despooling to tape. As far as I understand, spooling/despooling
> can happen in parallel if using different jobs.

I already do this (7 drives, 6 pools), but it still means individual 
TB-class jobs take too long to run.
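Since this setup comes up often, here is a rough sketch of the directives
involved; names, paths, and sizes below are placeholders, so adjust them
for your own hardware:

# bacula-dir.conf - allow several jobs to run at once and spool their data
Storage {
  Name = LTO4-Library
  ...
  Maximum Concurrent Jobs = 6
}

Job {
  Name = "fileserver-full"
  ...
  Spool Data = yes       # stage to the spool area, then despool to tape
  Spool Size = 200GB     # per-job cap so one job can't hog the spool disk
}

# bacula-sd.conf - put the spool area on fast disk (SSD here)
Device {
  Name = LTO4-Drive-0
  Media Type = LTO4
  Archive Device = /dev/nst0
  Spool Directory = /ssd/bacula-spool
  Maximum Spool Size = 800GB        # total spool space for this device
  Maximum Job Spool Size = 200GB
}

With the concurrency limits raised like that, one job can be despooling to
a drive while the others are still spooling to the SSD.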

>> WRT "offsite backups" - I'm more inclined to use a good firesafe in
>> another building than pass media to a 3rd-party company. Far too
>> many people back up to tape but then don't take care of the media.
>
> Ours are in a bank safe at the other end of town, and in the
> future they will also be encrypted, thanks to Bacula :-)

In the present they can be encrypted too, on LTO-4 and higher. Hardware 
encryption is a lot faster than software encryption.
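For the software route, the client-side setup is a handful of FD
directives; the key paths below are placeholders. For the hardware route
on Linux, a tool such as stenc can load a key into the drive before Bacula
touches it (flags from memory, check the man page of your version):

# bacula-fd.conf - software (PKI) encryption, done on the client
FileDaemon {
  Name = client-fd
  ...
  PKI Signatures = Yes                       # sign the file data
  PKI Encryption = Yes                       # encrypt before data leaves the client
  PKI Keypair = /etc/bacula/client-fd.pem    # this client's certificate + private key
  PKI Master Key = /etc/bacula/master.cert   # public half only, for disaster recovery
}

# Hardware alternative: set the encryption key on an LTO-4+ drive, e.g.
#   stenc -f /dev/nst0 -e on -k /etc/tape.key
# and let the drive encrypt at full line speed.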




