Bacula-users

Subject: Re: [Bacula-users] Bacula + Postgres : copy batch problem
From: Dan Langille <dan AT langille DOT org>
To: Rory Campbell-Lange <rory AT campbell-lange DOT net>
Date: Fri, 30 Jul 2010 08:22:38 -0400

On 7/30/2010 3:53 AM, Rory Campbell-Lange wrote:
> Bacula has bailed out near the end of a 6.5TB backup (which is really
> frustrating!)
>
>      Fatal error: sql_create.c:843 Batch end postgresql.c:748 error ending
>      batch mode: ERROR:  could not extend relation 1663/17472/17828:
>      wrote only 4096 of 8192 bytes at block 98374
>      HINT:  Check free disk space.
>
> on postgresql 8.3.
>
> This is the same issue as Holger Rauch's problems reported here:
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg41952.html
> This is with a backup spooling to a local holding disk. The job spool
> sizes are set at 50G on a spool directory size of 300G. No problems
> there.
>
> My database is here:
>      /dev/sda3             9.2G  5.4G  3.4G  62% /var
> only 1% of the inodes are used.
>
> The database itself is only just over 500MB.

I think you're suggesting Bacula used up the remaining 3.4GB in a single 
query.

How many files are you backing up?
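
If you don't know offhand, the catalog will tell you.  Assuming the 
standard Bacula PostgreSQL schema and a catalog database named bacula 
(adjust to your setup), something like:

    psql bacula -c "SELECT count(*) FROM File;"
    psql bacula -c "SELECT pg_size_pretty(pg_total_relation_size('file'));"

There is one row in File per file per backup, so that count is a decent 
proxy for how big the insert at the end of the job will be.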

> I've done some searching and it appears that the best response to this
> problem is from PostgreSQL's Tom Lane:
> http://markmail.org/message/shclbb4iaphypswv His suggestion is that the
> query made a massive temporary file that caused /var to fill up. Also
> see http://www.mail-archive.com/pgsql-performance@postgresql.org/msg31231.html
>
> Clearly there is a problem with the size of the temporary file used
> during the batch copy update. Since there are successful inserts into
> the log table milliseconds later, this points to a problem in the way
> Bacula inserts data in batch mode.
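
For background, my understanding of batch mode (a simplified sketch, 
not the literal queries from sql_create.c) is that Bacula COPYs one row 
per file into a temporary table, then populates Path, Filename, and 
File with INSERT ... SELECT joins.  It's those joins and sorts, not the 
COPY itself, that spill large temporary files onto /var:

    CREATE TEMPORARY TABLE batch (fileindex int, jobid int, path text,
                                  name text, lstat text, md5 text);
    COPY batch FROM STDIN;   -- one row per backed-up file
    INSERT INTO Path (Path)
      SELECT DISTINCT path FROM batch
      WHERE NOT EXISTS (SELECT 1 FROM Path p WHERE p.Path = batch.path);
    -- Filename is handled the same way, then:
    INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5)
      SELECT fileindex, jobid, Path.PathId, Filename.FilenameId, lstat, md5
      FROM batch
      JOIN Path ON Path.Path = batch.path
      JOIN Filename ON Filename.Name = batch.name;

With tens of millions of rows in batch, the sort/hash work for those 
joins can easily need more space than your whole 9.2GB /var.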

Umm, or it could be a problem with the way you have your computer system 
configured.  :)  It's a matter of perspective.  In a 6.5TB backup, I'm 
going to guess there are a very large number of files, which is why /var 
filled up.  Can you extend /var?  Or create a symlink to another 
filesystem to give PostgreSQL the space it needs.
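
If extending /var is awkward, PostgreSQL 8.3 can also be told to put 
its temporary files on another filesystem via temp_tablespaces.  A 
sketch, with /bigdisk standing in for wherever you actually have room:

    mkdir /bigdisk/pg_temp
    chown postgres /bigdisk/pg_temp
    psql -U postgres -c "CREATE TABLESPACE temp_ts LOCATION '/bigdisk/pg_temp';"

then in postgresql.conf:

    temp_tablespaces = 'temp_ts'    # sort/temp spill files go here

and reload the server.  That keeps the batch insert's spill files off 
the 9.2GB /var partition without moving the database itself.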

> Is it not possible to change this
> arrangement to use sequential inserts instead?

Look at the Spool Attributes directive.  Set it to no.  This way, the 
attributes of each file will be added to the database right after that 
file is backed up, rather than batch-inserted at the end of the job.
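
In the Job resource that would look something like this (a sketch; only 
the Spool Attributes line matters here):

    Job {
      Name = "BigBackup"        # your existing job
      ...
      Spool Data = yes          # keep spooling the backup data itself
      Spool Attributes = no     # insert file attributes as you go
    }

The trade-off is many small inserts during the job instead of one big 
batch insert at the end, so it may run a bit slower, but it won't need 
a huge temporary file on /var.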

> I'm also keen to know if I can append to this large job to try and
> retrieve the set of data, or do I have to start again?

Start again.

-- 
Dan Langille - http://langille.org/
