Subject: [Bacula-users] tuning Bacula - Maximum Spool Size
From: Robert A Threet <robert3t AT netzero DOT net>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 28 Apr 2015 15:37:33 -0500
Looks like I have about 4TB of local SAS drives to play with on my Dell 720.

I was thinking of bumping Maximum Spool Size up about 10x, to 240GB, and
Maximum Job Spool Size 10x, to 80G. Based on that, it seems logical to set
Maximum Concurrent Jobs = 3 (not 21 as in the current config).
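
(My arithmetic, assuming Maximum Spool Size is the total spool for one device
and Maximum Job Spool Size is the per-job cap on that device:

    240G total spool / 80G per job = 3 jobs spooling at once per drive
    4 drives x 240G                = 960G of spool, well within the 4TB of SAS)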

Q: Does this sound reasonable?


Device {                             # I have 4 of these in a Dell TL-4000 tape library
  Name = tl4000-0
  Drive Index = 0
  Media Type = LTO6
  Archive Device = /dev/tape/by-id/scsi-35000e1116097b001-nst  # /dev/nst0
  AutomaticMount = yes;              # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Autoselect = yes
  # Offline On Unmount = yes
  Maximum File Size = 16 G
  Maximum Job Spool Size = 8G
  Maximum Spool Size = 20G
  Maximum Concurrent Jobs = 21
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}
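
If I go ahead with this, only the spool and concurrency lines would change on
each of the four drives; something like the sketch below (the 240G/80G/3
figures are just the proposal above, not values I have tested, and the
comments reflect my reading of the directives):

Device {
  Name = tl4000-0
  Drive Index = 0
  Media Type = LTO6
  Archive Device = /dev/tape/by-id/scsi-35000e1116097b001-nst  # /dev/nst0
  AutomaticMount = yes;              # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Autoselect = yes
  Maximum File Size = 16 G
  Maximum Job Spool Size = 80G       # was 8G  (per-job spool limit on this device)
  Maximum Spool Size = 240G          # was 20G (total spool for this device)
  Maximum Concurrent Jobs = 3        # was 21  (240G / 80G = 3)
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}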


-- 
Robert A Threet <robert3t AT netzero DOT net>