Bacula-users

Subject: Re: [Bacula-users] Bacula + Postgres : copy batch problem
From: Rory Campbell-Lange <rory AT campbell-lange DOT net>
To: Martin Simmons <martin AT lispworks DOT com>
Date: Tue, 3 Aug 2010 13:17:25 +0100
Thanks very much for your response, Martin.

On 03/08/10, Martin Simmons (martin AT lispworks DOT com) wrote:
> >>>>> On Tue, 3 Aug 2010 10:15:18 +0100, Rory Campbell-Lange said:

> > I have 3.4GB free in /var where Postgresql is located. At the end of a
> > large backup job (7,643,966 files taking up 7.265TB of space) Postgres
> > bails out copying a batch file into the File table due to a mysterious
> > "no space left on device" error.
> > 
> > Questions:
> > 1. How should I size my postgresql partition?
> 
> I expect 7.6 million records to need at least 800MB when inserted and the
> batch tables will need a similar amount during the backup.  It is difficult to
> predict what the hash-join temporary file will need because it depends on the
> internals of PostgreSQL.
> 
> Firstly though I suggest running df frequently during the backup to verify
> that the problem really is /var filling up.

My server logs over the backup period still show over 2GB free in /var
(where PostgreSQL is held) and 8GB in /tmp. Thanks, however, for the
rule-of-thumb sizes for the records.
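For anyone else watching their partitions during a long job, a one-shot
snapshot like the following, run from cron every minute or so, is enough
to catch the peak (the paths and log location are just examples):

```shell
# Snapshot free space on the partitions of interest; schedule via cron, e.g.
#   * * * * * /usr/local/bin/df-snapshot >> /var/log/df-backup.log
# /var and /tmp are examples -- point this at wherever PostgreSQL and its
# temp files actually live on your system.
date '+%F %T'
df -h /var /tmp
```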

> > 2. Can I stop this needless after-backup insertion? I tried setting
> >    Spool Attributes to NO but it did not work
> 
> You need to rebuild Bacula with the --disable-batch-insert option, but it
> might run quite slowly.  Setting synchronous_commit = off in postgresql.conf
> might help to make it faster.

Thanks for the note about the --disable-batch-insert compile-time
option. Setting synchronous_commit to off will speed up inserts into the
database, which is great, but it won't affect the size of the batch
file. Could you clarify why you are suggesting it here?
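For reference, the change Martin suggests is a one-line edit to
postgresql.conf (followed by a reload). It only relaxes WAL flushing at
commit time, so as noted it should not change how much space the batch
tables need:

```
# postgresql.conf -- trade commit durability for insert speed.
# If the server crashes, the last few transactions may be lost,
# but the database itself remains consistent.
synchronous_commit = off
```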

> > 3. Why is Bacula using a batch file at all? Why not simply do a straight
> >    insert?
> 
> Because 7,643,966 inserts would be much slower.

Really? I've logged Bacula's performance on the server and the inserts
run at around 0.35 ms and updates at around 0.5 ms. 

8 million inserts at 0.35 ms each would take about 47 minutes. But it
would be quite possible for Bacula to do this asynchronously while it
does the job of writing data from disk to tape, which in this case takes
several days. Perhaps this is something the developers could consider?
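The arithmetic is just rate times count; a quick sanity check using the
figures from this thread:

```python
# Back-of-the-envelope: time for ~8 million individual inserts
# at the measured average latency.
inserts = 8_000_000        # roughly the 7,643,966 File records in this job
ms_per_insert = 0.35       # measured average INSERT latency on this server

total_minutes = inserts * ms_per_insert / 1000 / 60
print(round(total_minutes))  # ~47 minutes, dwarfed by a multi-day tape run
```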


In the meantime I will move Postgres to a 10G dedicated XFS partition
and try again.

-- 
Rory Campbell-Lange
rory AT campbell-lange DOT net

_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users