Bacula-users

Re: [Bacula-users] Catalog backup while job running?

From: Phil Stracchino <alaric AT metrocast DOT net>
To: Stephen Thompson <stephen AT seismo.berkeley DOT edu>
Date: Mon, 02 Apr 2012 16:11:09 -0400
On 04/02/2012 01:49 PM, Stephen Thompson wrote:
> Well, we've made the leap from MyISAM to InnoDB, seems like we win on 
> transactions, but lose on read speed.

If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
pool is probably too small.
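
For a point of reference, you can check how big the pool currently is
and then raise it in my.cnf.  A minimal sketch; the 2G figure is only a
placeholder (the usual rule of thumb is most of the RAM on a dedicated
DB host), and at this vintage the change needs a mysqld restart:

mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# then, in my.cnf:
[mysqld]
innodb_buffer_pool_size = 2G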

> That aside, I'm seeing something unexpected.  I am now able to 
> successfully run jobs while I use mysqldump to dump the bacula Catalog, 
> except at the very end of the dump there is some sort of contention.  A 
> few of my jobs (3-4 out of 150) that are attempting to despool 
> attributes at the tail end of the dump yield this error:
> 
> Fatal error: sql_create.c:860 Fill File table Query failed: INSERT INTO 
> File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq) SELECT 
> batch.FileIndex, batch.JobId, Path.PathId, 
> Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM batch 
> JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name = 
> Filename.Name): ERR=Lock wait timeout exceeded; try restarting transaction
> 
> I have successful jobs before and after this 'end of the dump' timeframe.
> 
> It looks like I might be able to "fix" this by increasing my 
> innodb_lock_wait_timeout, but I'd like to understand WHY I need to 
> increase it.  Anyone know what's happening at the end of a dump like
> this that would cause the above error?
> 
> mysqldump -f --opt --skip-lock-tables --single-transaction bacula 
>  >>bacula.sql
> 
> Is it the commit on this 'dump' transaction?
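
To answer the timeout question first: on a reasonably current server
(5.5, or 5.1 with the InnoDB plugin) innodb_lock_wait_timeout is
dynamic, so you can inspect and raise it without a restart.  A rough
sketch, with a placeholder value; note that SET GLOBAL only affects
connections opened after the change:

mysql -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';"
mysql -e "SET GLOBAL innodb_lock_wait_timeout = 120;"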

--skip-lock-tables is referred to in the mysqldump documentation, but
isn't actually a valid option.  This is actually an increasingly
horrible problem with mysqldump.  It has been very poorly maintained,
and has barely developed at all in ten or fifteen years.

Table locks are the default behavior of mysqldump, as part of the
default --opt group.  To override that, you actually have to use
--skip-opt, then add back in the rest of the options from the --opt
group that you actually wanted.  There is *no way* to get mysqldump to
Do The Right Thing for both transactional and non-transactional tables
in the same run.  It is simply not possible.
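
Concretely, you end up with something along these lines; the add-back
list here is illustrative rather than complete:

mysqldump --skip-opt --single-transaction --quick --extended-insert \
    --create-options --set-charset bacula > bacula.sql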

My suggestion would be to look at mydumper instead.  It was written by
a couple of former MySQL AB support engineers who started with a clean
sheet of paper, and it is what mysqldump should have become ten years
ago.  It dumps tables in parallel for better speed, doesn't require you
to exclude schemas that shouldn't be dumped because it already knows
they shouldn't be, doesn't require long strings of arguments to tell it
how to handle transactional and non-transactional tables because it
understands both and just Does The Right Thing on a table-by-table
basis, can dump binlogs as well as tables, and separates the data from
the schemas.

Give it a try.

That said, I make my MySQL dump job a lower priority job and run it only
after all other jobs have completed.  This makes sure I get the most
current possible data in my catalog dump.  I just recently switched to a
revised MySQL backup job that uses mydumper with the following simple
shell script as a ClientRunBeforeJob on a separate host from the actual
DB server.  (Thus, if the backup client goes down, I still have the live
DB, and if the DB server goes down, I still have the DB backups on disk.)
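
On the Bacula side that's just the usual Priority knob; a sketch with
placeholder names (higher numbers run later, and the default is 10):

Job {
  Name = "CatalogDump"
  JobDefs = "DefaultJob"
  FileSet = "DB-Dumps"
  Priority = 15
  ClientRunBeforeJob = "/usr/local/sbin/db-dump.sh"
}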


#!/bin/bash

RETAIN=5            # how many dump directories to keep
USER=xxxxxxxxxx
PASS=xxxxxxxxxx
DUMPDIR=/dbdumps
HOST=babylon4
PORT=6446
TIMEOUT=300         # long-query guard for mydumper, in seconds
FMT='%Y%m%d-%T'
DEST=${DUMPDIR}/${HOST}-$(date +${FMT})

# Dump directories are date-stamped, so a reverse listing is
# newest-first; delete everything past the RETAIN newest.
for dir in $(ls -r ${DUMPDIR} | tail -n +$((RETAIN + 1)))
do
   echo "Deleting ${DUMPDIR}/${dir}"
   rm -rf "${DUMPDIR:?}/${dir}"
done

# -C compresses the client/server protocol, -c compresses the output
# files, -e writes a file even for empty tables, -P is the port, and
# -l is the long-query guard.
mydumper -C -c -e -h ${HOST} -P ${PORT} -u ${USER} --password=${PASS} \
    -o ${DEST} -l ${TIMEOUT}


Then my Bacula fileset for the DB-backup job just backs up the entire
/dbdumps directory.
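
Roughly like this, again with placeholder names:

FileSet {
  Name = "DB-Dumps"
  Include {
    Options {
      signature = MD5
    }
    File = /dbdumps
  }
}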


-- 
  Phil Stracchino, CDK#2     DoD#299792458     ICBM: 43.5607, -71.355
  alaric AT caerllewys DOT net   alaric AT metrocast DOT net   phil AT co.ordinate DOT org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
                 It's not the years, it's the mileage.
