Hi,
I am now testing the Cancel Running Duplicates directive, and I have
noticed some strange behaviour.
My Job configuration includes:
Allow Duplicate Jobs = no
Allow Higher Duplicates = no
Cancel Running Duplicates = yes
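For context, these directives live in the Job resource in bacula-dir.conf.
A minimal sketch of the resource (only the names shown in the session below
are real; the rest is an assumed skeleton, not my exact config):

    Job {
      Name = "QemuImages"
      Type = Backup
      Client = darkstar-fd
      FileSet = "QemuImages_FileSet"
      Pool = Tescik
      Allow Duplicate Jobs = no
      Allow Higher Duplicates = no
      Cancel Running Duplicates = yes
    }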
The first backup was canceled and the second backup finished OK. That is
the expected behaviour for this duplicate configuration, but the fatal
error is worrisome. I have tried this several times and it happens every
time. Here are the steps to reproduce:
*run job=QemuImages storage=UP pool=Paktos
Run Backup job
JobName: QemuImages
Level: Full
Client: darkstar-fd
FileSet: QemuImages_FileSet
Pool: Paktos (From User input)
Storage: UP (From command line)
When: 2010-01-31 02:26:50
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=249
*run job=QemuImages storage=UrzadzeniePlikowe pool=Tescik
Run Backup job
JobName: QemuImages
Level: Full
Client: darkstar-fd
FileSet: QemuImages_FileSet
Pool: Tescik (From Job resource)
Storage: UrzadzeniePlikowe (From command line)
When: 2010-01-31 02:27:06
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=250
You have messages.
*messages
31-sty 02:26 darkstar-dir JobId 249: Start Backup JobId 249,
Job=QemuImages.2010-01-31_02.26.57_33
31-sty 02:26 darkstar-dir JobId 249: Using Device "UPDev"
31-sty 02:26 darkstar-sd JobId 249: Volume "pppp" previously written,
moving to end of data.
31-sty 02:26 darkstar-sd JobId 249: Ready to append to end of Volume
"pppp" size=2179759754
31-sty 02:27 darkstar-dir JobId 250: Cancelling duplicate JobId=249.
31-sty 02:27 darkstar-sd JobId 249: JobId=249
Job="QemuImages.2010-01-31_02.26.57_33" marked to be canceled.
31-sty 02:27 darkstar-sd JobId 249: Job write elapsed time = 00:00:54,
Transfer rate = 29.30 M Bytes/second
31-sty 02:27 darkstar-sd JobId 249: JobId=249
Job="QemuImages.2010-01-31_02.26.57_33" marked to be canceled.
31-sty 02:27 darkstar-dir JobId 249: Bacula darkstar-dir 5.0.0
(26Jan10): 31-sty-2010 02:27:53
Build OS: x86_64-unknown-linux-gnu debian 5.0.3
JobId: 249
Job: QemuImages.2010-01-31_02.26.57_33
Backup Level: Full
Client: "darkstar-fd" 5.0.0 (26Jan10)
x86_64-unknown-linux-gnu,debian,5.0.3
FileSet: "QemuImages_FileSet" 2009-12-02 19:55:06
Pool: "Paktos" (From User input)
Catalog: "MojaBazaBaculi" (From Client resource)
Storage: "UP" (From Pool resource)
Scheduled time: 31-sty-2010 02:26:50
Start time: 31-sty-2010 02:26:59
End time: 31-sty-2010 02:27:53
Elapsed time: 54 secs
Priority: 10
FD Files Written: 2
SD Files Written: 2
FD Bytes Written: 1,582,301,184 (1.582 GB)
SD Bytes Written: 1,582,301,384 (1.582 GB)
Rate: 29301.9 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): pppp
Volume Session Id: 10
Volume Session Time: 1264883012
Last Volume Bytes: 3,763,206,791 (3.763 GB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: Canceled
Termination: Backup Canceled
31-sty 02:27 darkstar-dir JobId 249: Fatal error: Unable to authenticate
with File daemon at "darkstar:9102". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the FD or
FD networking messed up (restart daemon).
Please see
http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION003760000000000000000
for help.
31-sty 02:27 darkstar-dir JobId 249: Failed to connect to File daemon.
31-sty 02:28 darkstar-dir JobId 250: Start Backup JobId 250,
Job=QemuImages.2010-01-31_02.27.08_34
31-sty 02:28 darkstar-dir JobId 250: Using Device "UrzadzeniePlikoweDev"
31-sty 02:28 darkstar-sd JobId 250: Volume "zzzz" previously written,
moving to end of data.
31-sty 02:28 darkstar-sd JobId 250: Ready to append to end of Volume
"zzzz" size=1583475341
* status dir
.....
.....
Running Jobs:
Console connected at 30-sty-10 22:00
Console connected at 31-sty-10 02:22
JobId Level Name Status
======================================================================
250 Full QemuImages.2010-01-31_02.27.08_34 is running
====
....
....
My Maximum Concurrent Jobs configuration is correct. Does anybody know
why I received this fatal error?
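For reference, this is how I understand the FD-side setting that the error
message mentions, in bacula-fd.conf (the value 20 is only an example, not
necessarily my real one):

    FileDaemon {
      Name = darkstar-fd
      Maximum Concurrent Jobs = 20
    }

Since JobId 250 started and ran fine immediately after 249 was canceled, a
genuinely exceeded limit seems unlikely here.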
Many thanks.
gani
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users