Subject: Re: [Bacula-users] tuning lto-4
From: Brian Debelius <bdebelius AT intelesyscorp DOT com>
To: gary artim <gartim AT gmail DOT com>
Date: Thu, 01 Dec 2011 11:20:40 -0500
I believe (it's been a while since I have needed to change my 
configuration) that my LTO-3 drive does not do hardware compression on 
blocks over 512K.  I am using 256K blocks right now, and I did not see 
any improvement above that.  I am using spooling on a pair of striped 
hard disks, and despooling happens at 65-80 MB/s.
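
A minimal sketch of the Device resource being described, with the spool 
path and sizes as assumed placeholders rather than the actual file:

  Device {
    Name = LTO-3
    Media Type = LTO-3
    Archive Device = /dev/nst0
    Maximum Block Size = 256K          # no gain observed above 256K here
    Spool Directory = /spool/bacula    # assumed path; striped disk pair
    Maximum Spool Size = 100G          # assumed size
    AutomaticMount = yes
    AlwaysOpen = yes
    RemovableMedia = yes
    RandomAccess = no
  }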


On 12/1/2011 10:50 AM, gary artim wrote:
> Thanks much! I will try testing with btape. BTW, I ran with a 20GB
> maximum file size and a 2MB maximum block size (see bacula-sd.conf
> below) and got the results below: a 20 MB/s increase; the job ran 20
> minutes faster, at 50 MB/s. Now if I can just double the speed, I
> could back up 15TB in about 45 hrs. I don't have that much data yet,
> but I'm hovering at 2TB and looking to expand sharply over time. I'm
> not doing any networking; it's just straight from a RAID 5 to an
> autochanger/LTO-4. gary
>
>    Build OS:               x86_64-redhat-linux-gnu redhat
>    JobId:                  6
>    Job:                    Prodbackup.2011-11-30_18.49.24_06
>    Backup Level:           Full
>    Client:                 "bacula-fd" 5.0.3 (04Aug10)
> x86_64-redhat-linux-gnu,redhat,
>    FileSet:                "FileSetProd" 2011-11-30 15:23:58
>    Pool:                   "FullProd" (From Job FullPool override)
>    Catalog:                "MyCatalog" (From Client resource)
>    Storage:                "LTO-4" (From Job resource)
>    Scheduled time:         30-Nov-2011 18:49:15
>    Start time:             30-Nov-2011 18:49:26
>    End time:               30-Nov-2011 20:14:56
>    Elapsed time:           1 hour 25 mins 30 secs
>    Priority:               10
>    FD Files Written:       35,588
>    SD Files Written:       35,588
>    FD Bytes Written:       257,543,092,723 (257.5 GB)
>    SD Bytes Written:       257,548,504,514 (257.5 GB)
>    Rate:                   50203.3 KB/s
>    Software Compression:   None
>    VSS:                    no
>    Encryption:             no
>    Accurate:               no
>    Volume name(s):         f2
>    Volume Session Id:      2
>    Volume Session Time:    1322707293
>    Last Volume Bytes:      257,600,822,272 (257.6 GB)
>    Non-fatal FD errors:    0
>    SD Errors:              0
>    FD termination status:  OK
>    SD termination status:  OK
>    Termination:            Backup OK
>
> bacula-sd.conf:
> Device {
>    Name = LTO-4
>    Media Type = LTO-4
>    Archive Device = /dev/nst0
>    AutomaticMount = yes;               # when device opened, read it
>    AlwaysOpen = yes;
>    RemovableMedia = yes;
>    RandomAccess = no;
>    #Maximum File Size = 12GB
>    Maximum File Size = 20GB
>    #Maximum Network Buffer Size = 65536
>    Maximum block size = 2M
>    #Spool Directory = /db/bacula/spool/LTO4
>    #Maximum Spool Size     = 200G
>    #Maximum Job Spool Size = 150G
>    Autochanger = yes
>    Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
>    Alert Command = "sh -c 'smartctl -H -l error %c'"
> }
>
>
>
> On Wed, Nov 30, 2011 at 11:48 PM, Andrea Conti <alyf AT alyf DOT net>  wrote:
>> On 30/11/11 19.43, gary artim wrote:
>>> Thanks much, I'll try the block size change first today, then try the
>>> spooling. Don't have any unused disk, but may have to try on a shared
>>> drive.
>>> The "maximum file size" should be okay? g.
>> Choosing a max file size is mainly a tradeoff between write performance
>> (as the drive will stop and restart at the end of each file to write an
>> EOF mark) and restore performance (as the drive can only seek to a file
>> mark and then sequentially read through the file until the relevant data
>> blocks are found).
>>
>> I usually set maximum file size so that there are 2-3 filemarks per tape
>> wrap (3GB for LTO3, 5GB for LTO4), but if you don't plan to do regular
>> restores, or if you always restore the whole contents of a volume, 12GB
>> is fine.
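>>
>> (For scale: an LTO-4 cartridge holds roughly 800GB native, laid down
>> across 56 wraps, i.e. about 14GB per wrap, so a 5GB maximum file size
>> yields two to three filemarks per wrap.)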
>>
>> Anyway, with the figures you're citing, your problem is *not* maximum
>> file size.
>>
>> Try to assess tape performance alone with btape test (which has a
>> "speed" command); you can try different block sizes and configurations
>> and see which one gives the best results.
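>>
>> For example (paths assume a default install):
>>
>>    btape -c /etc/bacula/bacula-sd.conf /dev/nst0
>>    *speed
>>
>> btape takes its block size from the Device resource, so edit Maximum
>> Block Size in bacula-sd.conf between runs to compare settings. Note
>> that a volume written with a larger block size cannot be read back
>> with a smaller configured maximum, so plan on fresh volumes after
>> changing it.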
>>
>> Doing so will give you a clear indication of whether your bottleneck
>> is in tape or disk throughput.
>>
>> andrea
>>

