Bacula-users

Re: [Bacula-users] Maximum Concurrent Jobs Problem

Subject: Re: [Bacula-users] Maximum Concurrent Jobs Problem
From: Steffen Knauf <Steffen.Knauf AT renderforce DOT de>
To: John Drescher <drescherjm AT gmail DOT com>
Date: Wed, 15 Oct 2008 10:32:17 +0200
I don't have different priorities. I thought the default of this option
was 2 in the client section and 10 in the storage section, so I didn't
set the option explicitly. I have now set it in the following
configs/sections:
Client / Storage / Director / SD / FD
Is it important that the value of "Maximum Concurrent Jobs" is the same
in all of them?
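
For reference, here is a rough sketch of where I put the directive now
(the values and resource names below are only illustrative, not copied
from my real configs):

# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 20    # jobs the Director may run at once
}
Client {
  Name = big-fs-fd                # illustrative name
  ...
  Maximum Concurrent Jobs = 20    # per-client limit seen by the Director
}
Storage {
  Name = File                     # illustrative name
  ...
  Maximum Concurrent Jobs = 20    # per-storage limit seen by the Director
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 20    # limit inside the Storage daemon itself
}

# bacula-fd.conf
FileDaemon {
  ...
  Maximum Concurrent Jobs = 20    # limit inside the File daemon itself
}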

Steffen


> On Tue, Oct 14, 2008 at 11:25 AM, Steffen Knauf
> <Steffen.Knauf AT renderforce DOT de> wrote:
>   
>> Hello,
>>
>> On every first Friday, Bacula starts a backup of a 4 TB partition.
>> This will take a while ;) , so the other backup jobs should run
>> concurrently.
>> But nothing happens; the other jobs don't start until this huge job
>> has finished.
>> If this job freezes, the other jobs don't start either.
>> The jobs are in different pools. Perhaps I forgot something?
>>
>> Director:
>>
>> Maximum Concurrent Jobs = 2
>>
>> Storage/File Daemon:
>>
>> Maximum Concurrent Jobs = 20
>>
>>     
>
> Depending on the level of concurrency you want, you may need this in 5
> places in your bacula-dir.conf file. Do you have it in the client and
> storage sections of bacula-dir.conf? Are you using different
> priorities?
>
> John
>
>   
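
Regarding the priority question: none of my Job resources set Priority
explicitly, so (if I understand it correctly) they should all use the
default and nothing on that side should block concurrency. Roughly:

Job {
  Name = "monthly-big-backup"     # illustrative name
  ...
  # no Priority line, so the default applies; as far as I know, jobs
  # only refuse to run at the same time when their priorities differ
}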


