Subject: Re: [Bacula-users] Catalog backup while job running?
From: Stephen Thompson <stephen AT seismo.berkeley DOT edu>
To: Phil Stracchino <alaric AT metrocast DOT net>
Date: Thu, 05 Apr 2012 11:41:12 -0700
On 04/02/2012 03:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>>>> Well, we've made the leap from MyISAM to InnoDB; it seems we win
>>>> on transactions but lose on read speed.
>>>
>>> If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
>>> pool is probably too small.
>>
>> This is probably true, but I have limited system resources and my
>> File table is almost 300 GB in size.
>
> Ah, well, sometimes there's only so much you can allocate.
>
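(For anyone reading this in the archives: the knob in question is
innodb_buffer_pool_size.  The numbers below are illustrative only, not
a recommendation for a catalog with a 300 GB File table.)

    # my.cnf fragment -- sizes are examples; tune to the RAM you can spare
    [mysqld]
    innodb_buffer_pool_size = 8G        # as large as the box allows
    innodb_flush_method     = O_DIRECT  # skip double-buffering in the OS cache
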
>>> --skip-lock-tables is referred to in the mysqldump documentation, but
>>> isn't actually a valid option.  This is actually an increasingly
>>> horrible problem with mysqldump.  It has been very poorly maintained,
>>> and has barely developed at all in ten or fifteen years.
>>>
>>
>> This has me confused.  I have jobs that run and insert records into
>> the File table while I am dumping the Catalog.  It's only at the
>> tail end that a few jobs get the error above.  Wouldn't a locked
>> File table cause all concurrent jobs to fail?
>
> Hmm.  I stand corrected.  I've never seen it listed as an option in the
> man page, despite there being one reference to it, but I see that
> mysqldump --help does explain it even though the man page doesn't.
>
> In that case, the only thing I can think of is that you have multiple
> jobs trying to insert attributes at the same time and the last ones in
> line are timing out.
>


This appears to be the root cause.  After watching a few more nights,
the correlation with the Catalog dump did not hold up.  The timeouts
hit a few jobs each night, at different times and on different jobs,
and sometimes when no Catalog dump is running at all.
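
For reference, the dump itself is just a nightly mysqldump along these
lines (user and paths are placeholders; --single-transaction is what
gives a consistent InnoDB snapshot without read-locking the tables):

    # nightly catalog dump of an InnoDB bacula database
    mysqldump --single-transaction --skip-lock-tables \
              --user=bacula bacula > /backup/bacula-catalog.sql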

I think it's simply that several batch inserts wind up running at the
same time and the last ones in line run out of time.  Rather than
raising my timeout arbitrarily (10 minutes did not solve the problem),
I am curious about what you say below.
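
(For the record, the timeout I have been raising is the one I assume
matters here, innodb_lock_wait_timeout.  On MySQL 5.5 it can be
changed on the fly:)

    -- check and raise the InnoDB lock wait timeout (in seconds);
    -- 600 is the 10 minutes mentioned above
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
    SET GLOBAL innodb_lock_wait_timeout = 600;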

> (Locking the table for batch attribute insertion actually isn't
> necessary; MySQL can be configured to interleave auto_increment
> inserts.  However, that's the way Bacula does it.)

Are you saying that if I turn on interleaved auto_increment inserts in
MySQL, it won't matter whether or not bacula asks for locks during
batch inserts?  Or does bacula also need to be configured (patched)
not to take locks during batch inserts?
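
(For what it's worth, the setting I assume you mean is
innodb_autoinc_lock_mode.  It is a startup-only option, so it has to
go in my.cnf rather than be set at runtime:)

    # my.cnf fragment -- interleaved auto_increment inserts
    # note: mode 2 is only replication-safe with row-based binlogging
    [mysqld]
    innodb_autoinc_lock_mode = 2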

And lastly, why does the bacula documentation claim that locks are
'essential' for batch inserts, while you say they are not?

I'm surprised more folks running bacula on MySQL InnoDB aren't hitting
this problem, since I stumbled into it so easily.  :)  Perhaps the
usual migration path is MySQL MyISAM --> Postgres instead.


>
> Don't know that I have any helpful suggestions there, then...  sorry.
>
>
>

thanks!
Stephen
-- 
Stephen Thompson               Berkeley Seismological Laboratory
stephen AT seismo.berkeley DOT edu    215 McCone Hall # 4760
404.538.7077 (phone)           University of California, Berkeley
510.643.5811 (fax)             Berkeley, CA 94720-4760
