Subject: Re: [Bacula-users] Catalog backup while job running?
From: Martin Simmons <martin AT lispworks DOT com>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 3 Apr 2012 11:28:20 +0100
>>>>> On Mon, 02 Apr 2012 15:06:31 -0700, Stephen Thompson said:
> 
> >> That aside, I'm seeing something unexpected.  I am now able to
> >> successfully run jobs while I use mysqldump to dump the bacula Catalog,
> >> except at the very end of the dump there is some sort of contention.  A
> >> few of my jobs (3-4 out of 150) that are attempting to despool
> >> attributes at the tail end of the dump yield this error:
> >>
> >> Fatal error: sql_create.c:860 Fill File table Query failed: INSERT INTO
> >> File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq) SELECT
> >> batch.FileIndex, batch.JobId, Path.PathId,
> >> Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM batch
> >> JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name =
> >> Filename.Name): ERR=Lock wait timeout exceeded; try restarting transaction
> >>
> >> I have successful jobs before and after this 'end of the dump' timeframe.
> >>
> >> It looks like I might be able to "fix" this by increasing my
> >> innodb_lock_wait_timeout, but I'd like to understand WHY I need to
> >> increase it.  Anyone know what's happening at the end of a dump like
> >> this that would cause the above error?
> >>
> >> mysqldump -f --opt --skip-lock-tables --single-transaction bacula
> >>   >>bacula.sql
> >>
> >> Is it the commit on this 'dump' transaction?
> >
> > --skip-lock-tables is referred to in the mysqldump documentation, but
> > isn't actually a valid option.  This is actually an increasingly
> > horrible problem with mysqldump.  It has been very poorly maintained,
> > and has barely developed at all in ten or fifteen years.
> >
> 
> This has me confused.  I have jobs that can run, and insert records into 
> the File table, while I am dumping the Catalog.  It's only at the 
> tail-end that a few jobs get the error above.  Wouldn't a locked File 
> table cause all concurrent jobs to fail?
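
Before raising any timeouts, it is worth seeing what that batch INSERT is
actually waiting on at the moment the error fires.  This is only a sketch,
assuming MySQL 5.5 or later where the InnoDB transaction tables in
information_schema are available; run it while a job is sitting in the lock
wait:

  -- Show which transaction is blocking which.  The trx_query columns
  -- should reveal whether it is the dump connection or another job that
  -- holds the locks the batch INSERT is waiting for.
  SELECT r.trx_mysql_thread_id AS waiting_thread,
         r.trx_query           AS waiting_query,
         b.trx_mysql_thread_id AS blocking_thread,
         b.trx_query           AS blocking_query
  FROM information_schema.innodb_lock_waits w
  JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
  JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;

On older servers without those tables, SHOW ENGINE INNODB STATUS shows the
same lock waits in its TRANSACTIONS section, just less conveniently.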

Are you sure that jobs are inserting records into the File table whilst they
are running?  With spooling, file records are not inserted until the end of
the job.
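
That is easy to check against the catalog while a job runs.  A sketch only:
the 12345 below is a placeholder for the JobId of a currently running job
(as reported by "list jobs" in bconsole).  With attribute spooling enabled,
the count should stay at zero until the job finishes and despools:

  -- Illustrative check: with spooled attributes (or batch inserts), no
  -- File rows appear for the job until the end of the job.
  SELECT COUNT(*) AS file_rows
  FROM File
  WHERE JobId = 12345;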

Likewise, in batch mode (as above), the File table is only updated once at the
end.
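
If you still decide to raise innodb_lock_wait_timeout as a workaround while
you investigate, note that the stock default is 50 seconds and that on
recent MySQL versions the variable is dynamic, so no restart is needed.
The 120 below is only an example value, not a recommendation:

  -- Check the current setting (default 50 seconds).
  SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

  -- Example only: applies to connections opened after the change; put the
  -- same value in my.cnf to make it persistent.  On older servers where
  -- the variable is not dynamic, my.cnf plus a restart is the only way.
  SET GLOBAL innodb_lock_wait_timeout = 120;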

__Martin
