Frank Sweetser wrote:
> When I've seen this problem, it was because the total amount of data contained
> in the spool directory really was only a few megs below the limit. Typically
> it was either due to other concurrent jobs, in which case I avoided the
> problem by upping the spool size, or leftover spool files from crashed jobs,
> which can be safely deleted.
No concurrent jobs running or queued{1}.
Definitely the full 20Gb of space was available. df showed 29Gb free and
there were no other files in the spool directory (I've been caught by that before).
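For anyone hitting the same thing, the two checks above (df headroom plus a scan for leftover spool files from crashed jobs) can be sketched as a quick shell snippet. The spool path is an assumption here; substitute your SD's actual SpoolDirectory:

```shell
# Sanity-check the spool directory before blaming the spool limit.
# SPOOL defaults to /tmp for illustration; the real SpoolDirectory is site-specific.
SPOOL=${SPOOL:-/tmp}

# Free space (in KB) on the filesystem holding the spool directory.
AVAIL_KB=$(df -k "$SPOOL" | awk 'NR==2 {print $4}')
echo "available: ${AVAIL_KB} KB"

# Leftover *.spool files from crashed jobs can silently eat the quota.
ls -lh "$SPOOL"/*.spool 2>/dev/null || echo "no leftover spool files"
```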
Initially I was thinking it was this particular client and maybe a
damaged network cable was involved, but today it also occurred directly
off one platter in the fileserver where the application runs. Hence the
head scratching.
{1} The job/client where it regularly occurs is actually the third of
three queued clients.
I specified the 20Gb space as I have a number of client full backups
that will fit into it easily.
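For reference, that spool limit lives in the Storage Daemon's Device resource (bacula-sd.conf); a minimal sketch, with the paths being placeholders rather than anything from this setup:

```
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup              # placeholder path
  Spool Directory = /var/bacula/spool   # placeholder path
  Maximum Spool Size = 20GB             # the 20Gb limit discussed above
}
```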
--
Terry Collins {:-)}}}}}
Email: terryc200710 - at - woa.com.au Web: http://www.woa.com.au/terryc
Bicycles, Bushwalking, GIS, Appropriate Technology, Natural Environment,
Welding
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users