
Re: [Bacula-users] bacula-dir virtual memory limit during restore

From: John Kloss <John.Kloss AT jhmi DOT edu>
To: baculausers <bacula-users AT lists.sourceforge DOT net>
Date: Wed, 4 Jun 2008 17:06:20 -0400
Arno,

On Jun 3, 2008, at 8:18 PM, Arno Lehmann wrote:
> Hi, again...
>
> 03.06.2008 16:52, John Kloss wrote:
> ...
>> I am trying to restore 2.5 terabytes of data composed of 6.5 million
>> files.
> ...
>> How does one recover 2.5 terabytes and 6.5 million files using the
>> latest version of bacula?
>
> I forgot to mention that, besides creating a bootstrap file manually
> and "running" that, you could also use bextract to get at your data.
> As long as you don't run jobs concurrently without spooling, creating
> a usable bootstrap file is simple, though time-consuming. But all the
> data you need exist in the catalog... it's just a matter of finding
> the right query.

I didn't need to go that far, thank goodness.

> See here for how a bootstrap file is designed:
> http://www.bacula.org/en/dev-manual/Bootstrap_File.html#BootstrapChapter

Yes, I reread that section, much more carefully than I had before.
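
For anyone else reading along: per that chapter, the heart of a hand-built bootstrap file is one block per volume, each pairing the volume name with the job's session identifiers. A minimal sketch might look like this (the volume name, session values, and file-index range below are placeholders, not values from my job):

```
Volume = Full-0001
VolSessionId = 57
VolSessionTime = 1212537600
FileIndex = 1-6500000
```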

> The (SQL) code that builds the bootstrap file is - probably - in the
> src/cats directory of the source distribution...

It was the query.sql file that was most instructive.  Following its
examples, I wrote a simple query to pull out the JobId, StartTime,
VolumeName, VolSessionId, and VolSessionTime for a single JobId.
That was all I needed to create a 24-line bootstrap file.  I then
made a simple restore job that uses that bootstrap file, and now I
am able to recover the data.
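
For the record, the query I used was along these lines (written in the old-style join form that query.sql uses, against the standard Job, JobMedia, and Media catalog tables; substitute your own JobId for 1234 and adjust to taste):

```sql
-- Pull everything needed to hand-build a bootstrap file for one job:
-- the job's session identifiers plus every volume the job wrote to.
SELECT DISTINCT Job.JobId, Job.StartTime, Media.VolumeName,
       Job.VolSessionId, Job.VolSessionTime
  FROM Job, JobMedia, Media
 WHERE Job.JobId = 1234
   AND JobMedia.JobId = Job.JobId
   AND JobMedia.MediaId = Media.MediaId
 ORDER BY Media.VolumeName;
```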

Thank you for the suggestion, Arno.  It was well received.

However, this still does not answer my other, now less pressing,
question: why does the director appear to go into an endless loop of
acquiring and releasing memory buffers when the restore command is
used to restore several million files?  My suspicion is that the
aberrant behavior is triggered by the number of files rather than the
amount of data, especially since I did not observe this issue with
bacula version 1.36.

I ran a diff on the smartall.{c,h} and mem_pool.{c,h} files from 1.36
and 2.2.8 and did not see much that had changed.  I'm certain that
it's in these routines that bacula is spinning, because with
truss -u *:: I can watch the mutex lock for the mem_pool routine,
then the mutex lock for the smartalloc routine, then either a libc
malloc or free, then a mutex unlock for smartalloc, and finally a
mutex unlock for mem_pool.  This cycle repeats over and over for
hours.  So I don't think the issue is in the smartalloc/mem_pool
code itself; it must lie somewhere else.

Thank you.

        John Kloss.

_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users