Subject: Re: [Bacula-users] using auto-clean on an autochanger
From: Michael Stauffer <mgstauff AT gmail DOT com>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 16 Jul 2013 13:02:52 -0400
Thanks Patrick, and to others.
For now I'll play it safe and do the cleanings manually, although changing
the timeout sounds like a good future option.

-M


> From: Jummo <jummo4 AT yahoo DOT de>
> Subject: Re: [Bacula-users] using auto-clean on an autochanger
> To: Michael Stauffer <mgstauff AT gmail DOT com>
> Cc: bacula-users AT lists.sourceforge DOT net
> Message-ID: <[email protected]>
> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
> Hi Michael,
>
> Contrary to my statement in [1], I have seen several timeouts with failed
> jobs. I have changed my tape library configuration to only notify me if a
> cleaning request is raised by a tape drive. All jobs will still run
> (hopefully with correct data on tape, since the drive should still be able
> to write correctly even though it will have problems in the near future;
> can someone confirm this?). Then I will start the cleaning manually.
>
> As mentioned by Arno Lehmann in [2], you could increase the timeout Bacula
> will wait for the storage.
>
> Best Regards,
> Patrick
>
> [1] http://adsm.org/lists/html/Bacula-users/2013-01/msg00228.html
> [2] http://adsm.org/lists/html/Bacula-users/2008-12/msg00451.html
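
[Editor's note: the timeout Patrick refers to is set per Device in the
Storage Daemon configuration. Below is a minimal sketch of a bacula-sd.conf
Device resource using the standard "Maximum Changer Wait" directive; the
resource name, device nodes, and media type are hypothetical examples, and
the exact value should be tuned to how long a cleaning cycle takes on your
library.]

    # Sketch only -- names and device paths are examples, not from the thread.
    Device {
      Name = LTO4-Drive-0              # hypothetical drive name
      Media Type = LTO-4
      Archive Device = /dev/nst0       # example tape device node
      Autochanger = yes
      Changer Device = /dev/sg3        # example changer control device
      # Raise the wait so a drive that is busy with a cleaning cycle does
      # not cause the job to fail on a changer timeout (default is 5 minutes):
      Maximum Changer Wait = 30 min
    }

[A reload/restart of the Storage Daemon is needed for the change to take
effect.]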

