Subject: Re: [Bacula-users] Understanding Maximum Spool Size?
From: "Clark, Patricia A." <clarkpa AT ornl DOT gov>
To: bacula-users <bacula-users AT lists.sourceforge DOT net>
Date: Wed, 4 Dec 2013 10:13:59 -0500
For a per-job spool size limit, use:
Maximum Job Spool Size = 50G

You will also want to adjust the Maximum Spool Size parameter to match the
space actually available in your spool directory.
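
For reference, a minimal sketch of how the two directives might sit together
in the bacula-sd.conf Device resource quoted below (the 15G per-job figure is
only illustrative, not a recommendation):

Device {
  Name = neo200s-drive                  # same device as in the quoted config
  Archive Device = /dev/nst0
  Media Type = LTO-5
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 50G              # total spool space shared by all jobs
                                        # running on this device
  Maximum Job Spool Size = 15G          # illustrative per-job cap, kept well
                                        # below the device-wide total
  # ... remaining directives as in the original Device resource ...
}

With a per-job cap in place, each job despools once it reaches its own limit
rather than only when the shared device-wide pool fills up.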

Patti Clark
Linux System Administrator
Research and Development Systems Support, Oak Ridge National Laboratory

From: Radosław Korzeniewski <radoslaw AT korzeniewski DOT net>
Date: Wednesday, December 4, 2013 4:41 AM
To: Brice Figureau <brice+bacula AT daysofwonder DOT com>
Cc: bacula-users <bacula-users AT lists.sourceforge DOT net>
Subject: Re: [Bacula-users] Understanding Maximum Spool Size?

Hello,

2013/12/4 Brice Figureau <brice+bacula AT daysofwonder DOT com>
Hi,

I'm setting up a brand new Bacula configuration (Bacula 5.2.6 on Debian
Wheezy) on new hardware with a powerful LTO-5 autochanger.
As usual, I'm activating data spooling in the sd configuration like
this:
Device {
  Name = neo200s-drive
  Drive Index = 0
  Media Type = LTO-5
  ArchiveDevice = /dev/nst0
  LabelMedia = yes
  RandomAccess = no
  AutomaticMount = yes
  RemovableMedia = yes
  AlwaysOpen = yes
  AutoChanger = yes
  Maximum Spool Size = 50G
  Maximum Block Size = 1032192
  Maximum Network Buffer Size = 65536
  Spool Directory = /var/spool/bacula
}

(/var/spool/bacula ends up on a 4 drive RAID10 volume).

With 3 concurrent jobs, I observed that the spool file is at most 12GB per
job, far from the 50GB maximum I've set up.
Eventually the 3rd job finished, but the 2 remaining jobs were still
spooling around 12GB each:

03-Dec 18:44 backup2.internal-sd JobId 14: Spooling data again ...
03-Dec 18:45 backup2.internal-sd JobId 14: User specified spool size reached.
03-Dec 18:45 backup2.internal-sd JobId 14: Writing spooled data to Volume. 
Despooling 11,355,276,196 bytes ...
03-Dec 18:50 backup2.internal-sd JobId 14: Despooling elapsed time = 00:02:55, 
Transfer rate = 64.88 M Bytes/second

And at the same time, for the other job:
03-Dec 18:47 backup2.internal-sd JobId 15: Spooling data again ...
03-Dec 18:48 backup2.internal-sd JobId 15: User specified spool size reached.
03-Dec 18:48 backup2.internal-sd JobId 15: Writing spooled data to Volume. 
Despooling 12,277,034,372 bytes ...
03-Dec 18:53 backup2.internal-sd JobId 15: Despooling elapsed time = 00:03:30, 
Transfer rate = 58.46 M Bytes/second

I'm quite surprised to see that only 11GB+12GB=23GB of spool is used, when I
declared that the maximum should be 50GB.

I don't have any per-job limit on the spool size (I'm using the defaults
here).

Does anyone know what rules Bacula uses to decide when to stop spooling?


I guess there are some other jobs consuming your spool space. The Maximum
Spool Size parameter defines the maximum for all jobs running on the selected
Device.
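
A rough reading of the log excerpts above under that rule (a sketch of the
accounting only, not a statement of Bacula's exact internals):

  JobId 14 spool   ~11.4 GB  (from the despool message above)
  JobId 15 spool   ~12.3 GB  (from the despool message above)
  3rd job  spool   unknown, but drawn from the same pool
  -----------------------------------------------------------
  combined total   counted against the single 50G Maximum Spool Size

so no individual job is guaranteed anywhere near 50G to itself; a guaranteed
amount per job is what the Maximum Job Spool Size directive is for.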

best regards
--
Radosław Korzeniewski
radoslaw AT korzeniewski DOT net

