Subject: Re: [Bacula-users] Catalog too big / not pruning?
From: Martin Simmons <martin AT lispworks DOT com>
To: bacula-users AT lists.sourceforge DOT net
Date: Thu, 6 Aug 2009 18:11:20 +0100
>>>>> On Thu, 6 Aug 2009 05:59:24 -0700, Jeremy Koppel said:
> 
> We're running Postgresql 8.0.8; we can't currently update this machine
> (we'll have to move Bacula to a newer box when we have one available).  Ran
> that query, and the top 4 do have very large numbers:
> 
> 
>              relname             |  reltuples  | relpages
> ---------------------------------+-------------+----------
>  file                            | 3.28168e+07 |   592614

OK, that is 147 bytes per row, which is about what you would expect.
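(592614 pages at 8192 bytes each is roughly 4.9 GB, and divided by 32.8
million rows that comes out to about 148 bytes per row.)  If you want to
recompute it yourself, something along these lines should work, assuming
those figures came from pg_class and the default 8 kB block size:

  -- rough estimate of on-disk bytes per row (assumes the default 8 kB block size)
  SELECT relname,
         reltuples,
         relpages,
         (relpages::bigint * 8192) / NULLIF(reltuples, 0) AS bytes_per_row
  FROM pg_class
  WHERE relname = 'file';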


>  file_fp_idx                     | 3.28168e+07 |   378580
>  file_jobid_idx                  | 3.28168e+07 |   368832
>  file_pkey                       | 3.28168e+07 |   364870
> 
> And running vacuumdb with the --analyze flag says:
> 
> INFO:  "file": found 0 removable, 32828342 nonremovable row versions in 
> 592867 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> Nonremovable row versions range from 113 to 154 bytes long.
> 
> ...
> 
> I ran the full vacuum after that, and now it is down to 5.9GB, so I guess
> all those records really weren't taking up much space.  Also, the indexes
> actually got bigger:
> 
>          relname         |  reltuples  | relpages
> -------------------------+-------------+----------
>  file                    | 3.28283e+07 |   592684
>  file_fp_idx             | 3.28283e+07 |    90029
>  file_jobid_idx          | 3.28283e+07 |    71896
>  file_pkey               | 3.28283e+07 |    71895
> 
> I read up on it and saw that this was expected behavior, and that running a
> reindex on the table should fix it.  So I ran REINDEX TABLE file;, but that
> didn't have any effect.  I'll do some looking into that today.

Look again at the sizes -- they actually got 5x smaller!  Initially they were
very bloated compared to the table size.
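In round numbers, file_fp_idx went from 378580 pages (about 3 GB at 8 kB per
page) down to 90029 pages (about 0.7 GB), and the other two indexes shrank by
roughly 5x.  If you want to keep an eye on the sizes directly, a query along
these lines should show them in megabytes -- again assuming the standard
pg_class columns and the default 8 kB block size:

  -- approximate on-disk size of the file table and its indexes, in MB
  SELECT relname,
         relpages,
         relpages::bigint * 8192 / (1024 * 1024) AS size_mb
  FROM pg_class
  WHERE relname IN ('file', 'file_fp_idx', 'file_jobid_idx', 'file_pkey');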


> Also, I found the output from dbcheck curious.  Of all the orphaned records
> it found, the File records came to a suspiciously round number: 10,000,000.
> It almost seems like dbcheck can only clean 10,000,000 records at a time.  : )
> So, I have just now started running it again, and so far it has found 0 bad
> Path records, 0 bad Filename records, etc., all 0 this time, until it got to
> File records, where it says again: Found 10000000 File records, Deleting
> 10000000 orphaned File records.

Yes, I think there is a limit on the number of file records it will delete
each time.
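
If you want to check how many orphans are left without waiting for another
full dbcheck pass, a count like this should show the same thing -- as far as
I know an "orphaned" File record is just one whose JobId no longer exists in
the Job table (the table and column names here are the standard Bacula
catalog ones):

  -- count File rows whose job no longer exists in the catalog
  SELECT count(*)
  FROM File
  WHERE NOT EXISTS (SELECT 1 FROM Job WHERE Job.JobId = File.JobId);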

__Martin
