Subject: Re: [Bacula-users] Configuring autochanger with SAS LTO-5 drives
From: Kern Sibbald <kern AT sibbald DOT com>
To: Ivan Adzhubey <iadzhubey AT rics.bwh.harvard DOT edu>, bacula-users AT lists.sourceforge DOT net
Date: Sat, 3 Jun 2017 10:05:47 +0200
Hello,

Thanks for pointing this out. I have made a few updates to the 7.9 manual (on www.bacula.org).

In fact, it is the Minimum Block Size that is used to force a fixed block size. That said, for any modern (LTO) tape drive, variable block sizes are best, so you should leave Minimum Block Size at 0.

The current maximum for Maximum Block Size is 4,000,000. If you want to use something bigger than this, you must edit the source code yourself and set a larger value. If you do so, you will possibly encounter two problems:

1. Writing blocks bigger than 1GB can potentially lead to more drive data errors and thus possible data loss.

2. Your restore times for individual files will be longer since the index granularity for file restoration is the Maximum File Size.

Some people recommend very large block sizes -- 2-4MB. You are welcome to do it, but the performance gain is not enormous, and I personally prefer to be conservative and to know that my data is safe.
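
For reference, all of the directives discussed above go in the Device resource of bacula-sd.conf. A minimal sketch (the name, device path and sizes below are only illustrative, not recommendations for your hardware):

  Device {
    Name = "LTO5-Drive-0"            # illustrative name
    Media Type = "LTO-5"
    Archive Device = /dev/nst0       # illustrative device path
    Autochanger = yes
    # Leave Minimum Block Size at 0 (or simply omit it) to keep variable blocks
    Minimum Block Size = 0
    # May be raised up to 4,000,000 bytes without patching the source
    Maximum Block Size = 262144
    # A file mark is written every 10 GB; this is also the index
    # granularity used when seeking during restores
    Maximum File Size = 10G
  }

Keep in mind that tapes written with one block size setting may not be readable after the setting is changed, so settle on a value before putting the drive into production.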

Best regards,

Kern


On 06/02/2017 07:31 PM, Ivan Adzhubey wrote:
Hi Rudolf,

Thanks for the prompt reply. Please scroll down for inline comments.

On Friday, June 02, 2017 09:52:58 AM Cejka Rudolf wrote:
> Ivan Adzhubey wrote (2017/06/01):
> > b) What is the effect of MaximumFileSize option and what would be its
> > optimal value for my IBM LTO-5 SAS drives? I have used 8GB value found in
> > one of the list posts, while the documentation suggests 2GB for LTO-4.
> > But even set at 8GB this would create lots of EOF marks on a 1.5TB tape,
> > do we really need so many?
> Hi, I do use 16 GB. Every EOF mark means around 3 seconds delay. So if you
> have over 200 files on the tape using 8 GB, it is around 10 minutes extra
> per tape.
Searching through the list, I see widely varying numbers quoted for working
LTO tape configurations, so I guess the parameter is not critical. But thanks
for the estimate; it looks like a 10GB value makes sense.
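
For what it is worth, a rough back-of-the-envelope check using your ~3 seconds
per file mark:

  1.5 TB /  8 GB per mark ~ 188 marks -> 188 * 3 s ~ 9-10 minutes per full tape
  1.5 TB / 16 GB per mark ~  94 marks ->  94 * 3 s ~ 4-5  minutes per full tape

so the overhead roughly halves each time Maximum File Size is doubled.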

> > c) Should I try increasing the tape block size? Set it to fixed? I know
> > this
> I think that there is no reason to use fixed tape block size. On the other
> side, increasing tape block size would help, there are many discussions
> about that.
The documentation states (rather vaguely, I admit) that you would only want to
set MaximumBlockSize in order to use fixed block sizes:

"Maximum block size = size-in-bytes On most modern tape drives, you will not
need to specify this directive. If you do so, it will most likely be to use
fixed block sizes (see Minimum block size above)."

http://www.bacula.org/7.0.x-manuals/en/main/Storage_Daemon_Configuratio.html#SECTION001730000000000000000

I am not sure I understand what the documentation is trying to tell us, but, as
others have already mentioned, whatever it was, it is probably no longer true
for more recent drive models, HBAs and server hardware configurations.
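
In case it is useful, one quick way to check whether a particular drive/HBA/st
driver combination actually accepts larger blocks, before touching
bacula-sd.conf, is a short write test on a scratch tape (the device path below
is just an example, and this will overwrite the tape):

  # put the drive into variable block mode
  mt -f /dev/nst0 setblk 0
  mt -f /dev/nst0 rewind
  # try writing a few 2MB blocks; an I/O error here usually means the
  # HBA/driver path cannot pass blocks this large to the drive
  dd if=/dev/zero of=/dev/nst0 bs=2M count=64
  mt -f /dev/nst0 status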

One related question: the default maximum block size is actually specified as
126*512 = 64512 bytes, which is equal to neither 64KB (base 1000) nor 64K (base
1024). Do you know why it is two 512-byte blocks short? Should I reserve the
same amount in my own maximum block size setting as well?
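
(Just spelling out the arithmetic behind the "two blocks short" observation:

  126 * 512 = 64512   (Bacula's default maximum block size)
  128 * 512 = 65536   (64K, base 1024)
  65536 - 64512 = 1024 = 2 * 512

so the default is exactly two 512-byte blocks below 64K.)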

> > topic has been rather controversial, so any recent experience from similar
> > drives/system would be appreciated. I plan to use spooling to a dedicated
> > RAID volume, so hopefully hard drives should not be a bottleneck.
> Hard drives are potential bottleneck, they can not handle several write
> streams and at the same time one read stream going 150-300 MB/s. Rather
> use SSD drives.
I have seen the SSD argument brought up several times but have never been able
to find any supporting benchmarks. Maybe I am missing something important, but
how can a RAID volume be the limiting factor when our 3Ware/LSI SAS RAID
controller easily provides 350 MB/sec sustained read rates (750 MB/sec peak)?
I do not plan to run spooling and despooling jobs concurrently on the same
partition, so a drop in performance due to concurrency is unlikely as well;
even for concurrent read/write, my benchmarks show a sustained 150 MB/sec on
our RAID6 array.
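
For completeness, the spooling side of this is only a couple of directives; a
minimal sketch (the paths and sizes are placeholders, and the other required
Job directives are omitted):

  # bacula-dir.conf, in the Job or JobDefs resource
  Job {
    Name = "Backup-Example"           # placeholder
    ...
    Spool Data = yes
  }

  # bacula-sd.conf, in the Device resource for the tape drive
  Device {
    ...
    Spool Directory = /spool/bacula   # the dedicated RAID (or SSD) volume
    Maximum Spool Size = 500G
  }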

I would love to see any numbers for recent SSD models since we do not have any
installed here and I do not want to shell out large amounts of money on
something I am not sure will give us any significant benefits.

Thanks,
Ivan



