Re: [Bacula-users] 11TB backup run out of memory and dies
2010-06-14 13:37:35
On 06/14/10 13:23, Alan Brown wrote:
> On 14/06/10 17:06, ekke85 wrote:
>> I have 710 files in that directory/NFS share/fileset. There are only
>> one or two files that are over 1TB; I might try backing up one of
>> those files on its own and see what it does.
>>
>>> how many are there already in the database?
>>>
>>>
>> I am no expert with the DB side, but I have 12760 rows in the "File" table.
>
> These are very small numbers. Your memory is fine for a MySQL database
> with this number of entries.
>
> The files themselves must be enormous, though. You need to pay careful
> attention to the size of the spool partition.
Honestly, with a backup set that large and individual files in the
terabyte range, you might want to reconsider whether spooling is the
correct approach at all. Remember, the primary purposes of spooling are
to allow a fast dump of small backup sets to disk followed by a slower
write out to tape (in order to free up clients as quickly as possible),
or to prevent shoeshining when clients and/or the network cannot
transfer data fast enough to keep a high-speed tape drive streaming.
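If spooling turns out to be the wrong fit here, it can be disabled per
job with the Spool Data directive. A minimal sketch of a Job resource,
assuming hypothetical job/client/pool names (only the two Spool
directives are the point; everything else is a placeholder):

```conf
# Hypothetical job definition -- resource names are placeholders,
# not taken from this thread.
Job {
  Name = "BigNFS-Backup"
  Type = Backup
  Client = nfs-client-fd
  FileSet = "BigNFS-FileSet"
  Storage = TapeLibrary
  Pool = Monthly
  # Write directly to tape instead of staging in the spool partition;
  # with multi-terabyte files, the spool would need to be at least as
  # large as the biggest file to be of any use.
  Spool Data = no
  # Attribute spooling is cheap and worth keeping on, so catalog
  # inserts still happen in one batch at the end of the job.
  Spool Attributes = yes
}
```

With Spool Data = no, the trade-off is that a slow client or network
can cause the drive to shoeshine, so this makes the most sense when the
data path can keep the drive streaming on its own.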
--
Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
alaric AT caerllewys DOT net alaric AT metrocast DOT net phil AT
co.ordinate DOT org
Renaissance Man, Unix ronin, Perl hacker, Free Stater
It's not the years, it's the mileage.
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users