Subject: Re: [Bacula-users] [Bacula-devel] bacula hang issue. was: bacula sometimes gets stuck when volume wanted is already in a different drive
From: Silver Salonen <silver AT ultrasoft DOT ee>
To: bacula-users AT lists.sourceforge DOT net
Date: Sat, 24 Jan 2009 18:54:05 +0200
Hi.

It seems I'm experiencing the same problem on FreeBSD 6.3. I ran bacula-sd in
gdb, and when the backups started, a few of them ran and completed successfully
but then just stayed in "terminated" status; the other jobs never started
running at all. When I sent an ordinary kill to the process, gdb reported that
the program had terminated. The output of gdb:

(gdb) run -f -c /usr/local/etc/bacula-sd.conf
Starting program: /usr/local/sbin/bacula-sd -f -c /usr/local/etc/bacula-
sd.conf
(no debugging symbols found)...(no debugging symbols found)...warning: Unable 
to get location for thread creation breakpoint: generic error
[New LWP 100405]
(no debugging symbols found)...(no debugging symbols found)...(no debugging 
symbols found)...(no debugging symbols found)...(no debugging symbols 
found)...(no debugging symbols found)...(no debugging symbols found)...(no 
debugging symbols found)...(no debugging symbols found)...[New Thread 
0x80c0200 (LWP 100057)]

Program received signal SIGTERM, Terminated.
[Switching to Thread 0x80c0200 (LWP 100057)]
0x281075db in pthread_testcancel () from /lib/libpthread.so.2
(gdb) backtrace
#0  0x281075db in pthread_testcancel () from /lib/libpthread.so.2
#1  0x280f4c25 in sigaction () from /lib/libpthread.so.2
#2  0x280f4f11 in sigaction () from /lib/libpthread.so.2
#3  0x280f56f0 in sigaction () from /lib/libpthread.so.2
#4  0x280f589c in sigaction () from /lib/libpthread.so.2
#5  0x280ffeec in pthread_mutexattr_init () from /lib/libpthread.so.2
#6  0x280d8450 in ?? ()
(gdb) quit
The program is running.  Exit anyway? (y or n) y

PS. Sorry if I used gdb incorrectly; I'm not very experienced with it. Let me
know what to do better next time ;)
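
Next time I will try something like the following to capture the state of all
threads at once. This is only a sketch I pieced together from the gdb
documentation (it assumes the daemon binary carries debug symbols, e.g. was
compiled with -g, and that pgrep is available):

# attach to the already-running storage daemon and dump a
# backtrace of every thread in one non-interactive batch run
gdb -quiet -batch \
    -ex 'set pagination off' \
    -ex 'thread apply all bt' \
    /usr/local/sbin/bacula-sd $(pgrep bacula-sd)

As I understand it, 'thread apply all bt' prints a stack for every thread
rather than only the one that happened to receive the signal, which should be
what is needed to see where the SD is stuck.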

-- 
Silver

On Tuesday 06 January 2009 19:56:19 Kern Sibbald wrote:
> Hello,
> 
> It looks like a bad or incomplete backtrace to me, which is something SuSE 10.2
> is well known for in my experience (the reason I dumped SuSE as a development
> platform).
> 
> Without a complete traceback that clearly shows the problem, there is not much
> we can do.
> 
> Regards,
> 
> Kern
> 
> 
> On Tuesday 06 January 2009 18:44:31 Bob Hetzel wrote:
> > Kern Sibbald wrote:
> > > Hello,
> > >
> > > On the surface, this looks like a support problem rather than a bug.  The
> > > backtrace is perfectly normal -- no sign of a problem.  However, if your
> > > SD is freezing up, it is possibly a bug.   I would suggest that you post
> > > a backtrace (similar to what you posted here), but for your SD.
> > >
> > > After seeing the SD backtrace, I can make some suggestions on what the
> > > problem might be and your possibilities for resolving it.
> > >
> > > Regards,
> > >
> > > Kern
> >
> > # ps fax |grep bacula
> > 28775 pts/0    S+     0:00  |                   \_ grep bacula
> > 21829 ?        Ssl  111:02 /usr/sbin/bacula-sd -u root -g bacula -v -c
> > /etc/bacula/bacula-sd.conf
> >
> >
> > # ./btraceback /usr/sbin/bacula-sd 21829
> > ./btraceback: line 23: 28781 Aborted                 gdb -quiet -batch -x
> > /etc/bacula/btraceback.gdb $1 $2 >${WD}/bacula.$$.traceback 2>&1
> >
> > Looks like it had a problem... Here's the contents of the message that got
> > e-mailed to root.
> >
> > Using host libthread_db library "/lib/libthread_db.so.1".
> > [Thread debugging using libthread_db enabled]
> > [New Thread -1212381488 (LWP 21829)]
> > [New Thread -1349526640 (LWP 28214)]
> > [New Thread -1341133936 (LWP 26749)]
> > [New Thread -1246848112 (LWP 26720)]
> > [New Thread -1263633520 (LWP 26639)]
> > [New Thread -1332741232 (LWP 26586)]
> > [New Thread -1322382448 (LWP 26574)]
> > [New Thread -1297204336 (LWP 26495)]
> > [New Thread -1255240816 (LWP 26485)]
> > [New Thread -1272026224 (LWP 26476)]
> > [New Thread -1372738672 (LWP 26393)]
> > [New Thread -1238238320 (LWP 26311)]
> > [New Thread -1280418928 (LWP 26261)]
> > [New Thread -1305597040 (LWP 26250)]
> > [New Thread -1313989744 (LWP 26043)]
> > [New Thread -1288811632 (LWP 25948)]
> > [New Thread -1213060208 (LWP 25671)]
> > [New Thread -1229845616 (LWP 21835)]
> > [New Thread -1221452912 (LWP 21834)]
> > 0xb7fd3410 in __kernel_vsyscall ()
> > $1 = "gyrus-sd", '\0' <repeats 21 times>
> > $2 = 0x80c1e00 "bacula-sd"
> > $3 = 0x80c20b0 "/usr/sbin/"
> > $4 = 0x0
> > $5 = 0x80b950b "2.4.4 (28 December 2008)"
> > $6 = 0x80b9524 "i686-pc-linux-gnu"
> > $7 = 0x80b9536 "suse"
> > $8 = 0x80b953b "10.2"
> > #0  0xb7fd3410 in __kernel_vsyscall ()
> > #1  0xb7c86a41 in ___newselect_nocancel () from /lib/libc.so.6
> > #2  0x08088119 in bnet_thread_server (addrs=0x80c2850, max_clients=41,
> > client_wq=0x80bf5c0,
> >      handle_client_request=0x8065e3e <handle_connection_request(void*)>) at bnet_server.c:161
> > #3  0x0804c3ed in main (argc=0, argv=0xbf8fac44) at stored.c:265
> >
> > Thread 19 (Thread -1221452912 (LWP 21834)):
> > #0  0xb7fd3410 in __kernel_vsyscall ()
> > #1  0xb7e1c7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > #2  0x080a37c3 in watchdog_thread (arg=0x0) at watchdog.c:307
> > #3  0xb7e18112 in start_thread () from /lib/libpthread.so.0
> > #4  0xb7c8d2ee in clone () from /lib/libc.so.6
> >
> > Thread 18 (Thread -1229845616 (LWP 21835)):
> > #0  0xb7fd3410 in __kernel_vsyscall ()
> > #1  0xb7e1c7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > #2  0x080a37c3 in watchdog_thread (arg=0x0) at watchdog.c:307
> > #3  0xb7e18112 in start_thread () from /lib/libpthread.so.0
> > #4  0xb7c8d2ee in clone () from /lib/libc.so.6
> >
> > Thread 17 (Thread -1213060208 (LWP 25671)):
> > #0  0xb7fd3410 in __kernel_vsyscall ()
> > #1  0xb7e1c556 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > #2  0x0809e6d6 in rwl_writelock (rwl=0x80bf900) at rwlock.c:239
> > #3  0x080797a5 in _lock_volumes () at reserve.c:148
> > #4  0x0804fd9f in release_device (dcr=0x8156ad8) at acquire.c:426
> > #5  0x08052e47 in do_append_data (jcr=0x81aa868) at append.c:331
> > #6  0x0806a3b7 in append_data_cmd (jcr=0x81aa868) at fd_cmds.c:194
> > #7  0x08069cf6 in do_fd_commands (jcr=0x81aa868) at fd_cmds.c:165
> > #8  0x0806a4d5 in run_job (jcr=0x81aa868) at fd_cmds.c:128
> > #9  0x0806af4f in run_cmd (jcr=0x81aa868) at job.c:210
> > #10 0x0806648d in handle_connection_request (arg=0x80d6a88) at dircmd.c:232
> > #11 0x080a41ee in workq_server (arg=0x80bf5c0) at workq.c:357
> > #12 0xb7e18112 in start_thread () from /lib/libpthread.so.0
> > #13 0xb7c8d2ee in clone () from /lib/libc.so.6
> >
> > Thread 16 (Thread -1288811632 (LWP 25948)):
> > #0  0xb7fd3410 in __kernel_vsyscall ()
> > #1  0xb7e1c556 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > #2  0x0806f913 in DEVICE::r_dlock (this=0x80c4b48) at lock.c:214
> > #3  0x0806f9db in DEVICE::dblock (this=0x80c4b48, why=3) at lock.c:142
> > #4  0x08050601 in acquire_device_for_append (dcr=0x8152858) at acquire.c:344
> > #5  0x08051f00 in do_append_data (jcr=0x81be0d8) at append.c:85
> > #6  0x0806a3b7 in append_data_cmd (jcr=0x81be0d8) at fd_cmds.c:194
> > #7  0x08069cf6 in do_fd_commands (jcr=0x81be0d8) at fd_cmds.c:165
> > #8  0x0806a4d5 in run_job (jcr=0x81be0d8) at fd_cmds.c:128
> > #9  0x0806af4f in run_cmd (jcr=0x81be0d8) at job.c:210
> > #10 0x0806648d in handle_connection_request (arg=0x81c1ab8) at dircmd.c:232
> > #11 0x080a41ee in workq_server (arg=0x80bf5c0) at workq.c:357
> > #12 0xb7e18112 in start_thread () from /lib/libpthread.so.0
> > #13 0xb7c8d2ee in clone () from /lib/libc.so.6
> >
> > Thread 15 (Thread -1313989744 (LWP 26043)):
> > #0  0xb7fd3410 in __kernel_vsyscall ()
> > #1  0xb7e1ec4e in __lll_mutex_lock_wait () from /lib/libpthread.so.0
> > #2  0xb7e1aa3c in _L_mutex_lock_88 () from /lib/libpthread.so.0
> >
> > > On Tuesday 06 January 2009 15:59:13 Bob Hetzel wrote:
> > >> Hi all,
> > >>
> > >> I've had a problem for a while whereby bacula hangs waiting for storage.
> > >>
> > >> Here's the message I posted to the bacula-users list previously.
> > >> http://marc.info/?l=bacula-users&m=123004380923706&w=2
> > >>
> > >> Yesterday I upgraded to 2.4.4 and I think I've still got that
> > >> problem--bacula still stops processing backups, but the message output
> > >> is different.  Here are the last messages from the console this time.
> > >>
> > >> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "WMI Writer", State:
> > >> 0x1 (VSS_WS_STABLE)
> > >> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "MSDEWriter", State:
> > >> 0x1 (VSS_WS_STABLE)
> > >> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "Microsoft Writer
> > >> (Bootable State)", State: 0x1 (VSS_WS_STABLE)
> > >> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "Microsoft Writer
> > >> (Service State)", State: 0x1 (VSS_WS_STABLE)
> > >> 05-Jan 18:28 gyrus-sd JobId 53263: Job write elapsed time = 01:25:13,
> > >> Transfer rate = 1.977 M bytes/second
> > >> 05-Jan 18:28 gyrus-sd JobId 53263: Committing spooled data to Volume
> > >> "LTO295L2". Despooling 10,120,678,805 bytes ...
> > >> 05-Jan 19:30 gyrus-dir JobId 53310: Prior failed job found in catalog.
> > >> Upgrading to Full.
> > >> 05-Jan 19:30 gyrus-dir JobId 53311: Prior failed job found in catalog.
> > >> Upgrading to Full.
> > >> 05-Jan 19:30 gyrus-dir JobId 53313: Prior failed job found in catalog.
> > >> Upgrading to Full.
> > >> 05-Jan 19:30 gyrus-dir JobId 53323: Prior failed job found in catalog.
> > >> Upgrading to Full.
> > >> 06-Jan 08:55 gyrus-dir JobId 53280: Fatal error: Network error with FD
> > >> during Backup: ERR=Connection timed out
> > >> 06-Jan 08:55 gyrus-sd JobId 53280: Job
> > >> regcomm-gx280.2009-01-05_11.53.02.12 marked to be canceled.
> > >> 06-Jan 08:55 gyrus-dir JobId 53280: Fatal error: No Job status returned from FD.
> > >> 06-Jan 08:55 gyrus-dir JobId 53280: Error: Bacula gyrus-dir 2.4.4 (28Dec08): 06-Jan-2009 08:55:07
> > >>    Build OS:               i686-pc-linux-gnu suse 10.2
> > >>    JobId:                  53280
> > >>    Job:                    regcomm-gx280.2009-01-05_11.53.02.12
> > >>    Backup Level:           Incremental, since=2009-01-04 12:00:41
> > >>    Client:                 "regcomm-gx280" 2.4.0 (04Jun08) Linux,Cross-compile,Win32
> > >>    FileSet:                "cd-drive-dirs" 2007-11-05 19:00:00
> > >>    Pool:                   "Default" (From Job resource)
> > >>    Storage:                "Dell-PV136T" (From Job resource)
> > >>    Scheduled time:         05-Jan-2009 11:53:02
> > >>    Start time:             05-Jan-2009 17:13:48
> > >>    End time:               06-Jan-2009 08:55:07
> > >>    Elapsed time:           15 hours 41 mins 19 secs
> > >>    Priority:               15
> > >>    FD Files Written:       0
> > >>    SD Files Written:       0
> > >>    FD Bytes Written:       0 (0 B)
> > >>    SD Bytes Written:       0 (0 B)
> > >>    Rate:                   0.0 KB/s
> > >>    Software Compression:   None
> > >>    VSS:                    no
> > >>    Storage Encryption:     no
> > >>    Volume name(s):
> > >>    Volume Session Id:      92
> > >>    Volume Session Time:    1231165805
> > >>    Last Volume Bytes:      281,632,167,936 (281.6 GB)
> > >>    Non-fatal FD errors:    0
> > >>    SD Errors:              0
> > >>    FD termination status:  Error
> > >>    SD termination status:  Error
> > >>    Termination:            *** Backup Error ***
> > >>
> > >>
> > >> I then ran a status director and saw that jobs have been stalled for
> > >> more than 12 hours.
> > >> Upon running a status storage, the bconsole program became unresponsive.
> > >> *status storage
> > >> Automatically selected Storage: Dell-PV136T
> > >> Connecting to Storage daemon Dell-PV136T at gyrus:9103
> > >>
> > >> gyrus-sd Version: 2.4.4 (28 December 2008) i686-pc-linux-gnu suse 10.2
> > >> Daemon started 05-Jan-09 09:30, 79 Jobs run since started.
> > >>   Heap: heap=3,608,576 smbytes=3,262,825 max_bytes=3,328,738 bufs=567 max_bufs=569
> > >>   Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8
> > >>
> > >> Running Jobs:
> > >> Writing: Full Backup job krr6-d830.2009-01-05_11 JobId=53240
> > >> Volume="LTO298L2" pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=22,666 Bytes=16,473,267,441 Bytes/sec=259,119
> > >>      FDReadSeqNo=437,933 in_msg=373469 out_msg=9 fd=29
> > >> Writing: Incremental Backup job lcc3-o755.2009-01-05_11 JobId=53243
> > >> Volume="LTO298L2"
> > >>      pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=0 Bytes=0 Bytes/sec=0
> > >>      FDReadSeqNo=6 in_msg=6 out_msg=4 fd=49
> > >> Writing: Full Backup job lsk2.2009-01-05_11 JobId=53248
> > >> Volume="LTO298L2" pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=1
> > >>      Files=51,118 Bytes=6,367,622,723 Bytes/sec=107,850
> > >>      FDReadSeqNo=528,490 in_msg=378838 out_msg=9 fd=50
> > >> Writing: Full Backup job mes179-d630.2009-01-05_11 JobId=53254
> > >> Volume="LTO298L2" pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=0 Bytes=0 Bytes/sec=0
> > >>      FDReadSeqNo=6 in_msg=6 out_msg=5 fd=16
> > >> Writing: Full Backup job mje42-gx280.2009-01-05_11 JobId=53256
> > >> Volume="LTO298L2" pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=1
> > >>      Files=26,703 Bytes=4,115,465,652 Bytes/sec=69,705
> > >>      FDReadSeqNo=285,444 in_msg=209770 out_msg=9 fd=23
> > >> Writing: Full Backup job mlc10-d830.2009-01-05_11 JobId=53257
> > >> Volume="LTO295L2" pool="Default" device="IBMLTO2-2" (/dev/nst1)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=4,705 Bytes=7,970,089,786 Bytes/sec=135,003
> > >>      FDReadSeqNo=158,333 in_msg=145785 out_msg=5 fd=6
> > >> Writing: Full Backup job mxg86.2009-01-05_11 JobId=53263
> > >> Volume="LTO295L2" pool="Default" device="IBMLTO2-2" (/dev/nst1)
> > >>      spooling=0 despooling=0 despool_wait=1
> > >>      Files=48,131 Bytes=10,109,816,521 Bytes/sec=171,248
> > >>      FDReadSeqNo=557,760 in_msg=417784 out_msg=9 fd=27
> > >> Writing: Full Backup job nma9.2009-01-05_11 JobId=53267
> > >> Volume="LTO295L2" pool="Default" device="IBMLTO2-2" (/dev/nst1)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=8,203 Bytes=8,329,933,955 Bytes/sec=141,283
> > >>      FDReadSeqNo=194,492 in_msg=171633 out_msg=5 fd=37
> > >> Writing: Full Backup job ody.2009-01-05_11 JobId=53271 Volume="LTO298L2"
> > >>      pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=0 Bytes=0 Bytes/sec=0
> > >>      FDReadSeqNo=6 in_msg=6 out_msg=4 fd=41
> > >> Writing: Full Backup job pyromancer1.2009-01-05_11 JobId=53276
> > >> Volume="LTO295L2" pool="Default" device="IBMLTO2-2" (/dev/nst1)
> > >>      spooling=0 despooling=0 despool_wait=1
> > >>      Files=61,700 Bytes=33,704,216,119 Bytes/sec=573,718
> > >>      FDReadSeqNo=1,037,077 in_msg=857720 out_msg=9 fd=12
> > >> Writing: Full Backup job rbd2-gx280.2009-01-05_11 JobId=53279
> > >> Volume="LTO295L2" pool="Default" device="IBMLTO2-2" (/dev/nst1)
> > >>      spooling=0 despooling=0 despool_wait=1
> > >>      Files=37,744 Bytes=8,917,776,587 Bytes/sec=151,820
> > >>      FDReadSeqNo=457,402 in_msg=346240 out_msg=9 fd=43
> > >> Writing: Incremental Backup job regcomm-gx280.2009-01-05_11 JobId=53280
> > >> Volume="LTO298L2"
> > >>      pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=0 Bytes=0 Bytes/sec=0
> > >>      FDReadSeqNo=6 in_msg=6 out_msg=5 fd=46
> > >> Writing: Incremental Backup job rxm23.2009-01-05_11 JobId=53285
> > >> Volume="" pool="Default" device="IBMLTO2-1" (/dev/nst0)
> > >>      spooling=0 despooling=0 despool_wait=0
> > >>      Files=0 Bytes=0 Bytes/sec=0
> > >>      FDSocket closed
> > >> ====
> > >>
> > >> Jobs waiting to reserve a drive:
> > >>     3605 JobId=53285 wants free drive but device "IBMLTO2-1" (/dev/nst0) is busy.
> > >> ====
> > >>
> > >> Terminated Jobs:
> > >>   JobId  Level    Files      Bytes   Status   Finished        Name
> > >> ===================================================================
> > >>   53230  Full     24,670    3.577 G  OK       05-Jan-09 16:44  jxc37-gx280.2009-01-05_11
> > >>   53242  Full     13,891    7.358 G  OK       05-Jan-09 16:48  kxs45-o755.2009-01-05_11
> > >>   53219  Full     24,002    16.59 G  OK       05-Jan-09 16:59  jc-gx620.2009-01-05_11
> > >>   53251  Incr         67    1.157 G  OK       05-Jan-09 16:59  lxm12.2009-01-05_11
> > >>   53237  Full     60,759    20.41 G  OK       05-Jan-09 17:02  klh16-o755.2009-01-05_11
> > >>   53252  Incr         68    9.517 M  OK       05-Jan-09 17:02  marlon.2009-01-05_11
> > >>   53250  Full     21,339    4.143 G  OK       05-Jan-09 17:03  lxh68-gx620.2009-01-05_11
> > >>   53262  Incr         61    204.7 M  OK       05-Jan-09 17:04  msj2-gx260.2009-01-05_11
> > >>   53273  Incr        101    323.2 M  OK       05-Jan-09 17:07  plp5-o755.2009-01-05_11
> > >>   53258  Incr      1,147    107.3 M  OK       05-Jan-09 17:07  mls7-gx620.2009-01-05_11
> > >> ====
> > >>
> > >> Device status:
> > >> Autochanger "Dell-PV136T" with devices:
> > >>     "IBMLTO2-1" (/dev/nst0)
> > >>     "IBMLTO2-2" (/dev/nst1)
> > >> Device "IBMLTO2-1" (/dev/nst0) is mounted with:
> > >>      Volume:      LTO298L2
> > >>      Pool:        Default
> > >>      Media type:  LTO-2
> > >>      Slot 43 is loaded in drive 0.
> > >>      Total Bytes=282,273,481,728 Blocks=4,375,518 Bytes/block=64,512
> > >>      Positioned at File=284 Block=9,941
> > >> Device "IBMLTO2-2" (/dev/nst1) is mounted with:
> > >>      Volume:      LTO295L2
> > >>      Pool:        Default
> > >>      Media type:  LTO-2
> > >>      Slot 40 is loaded in drive 1.
> > >>      Total Bytes=87,572,201,472 Blocks=1,357,455 Bytes/block=64,512
> > >>      Positioned at File=87 Block=8,956
> > >> ====
> > >>
> > >> Used Volume status:
> > >>
> > >> [output ends suddenly here before getting back to a * prompt]
> > >>
> > >> I can leave it like this for the next couple hours just in case somebody
> > >> has any ideas for something else they'd like me to try or send to assist
> > >> in solving this problem.
> > >>
> > >> Here's the traceback file...
> > >>
> > >> Using host libthread_db library "/lib/libthread_db.so.1".
> > >> [Thread debugging using libthread_db enabled]
> > >> [New Thread -1214118192 (LWP 21853)]
> > >> [New Thread -1426953328 (LWP 26587)]
> > >> [New Thread -1393382512 (LWP 26575)]
> > >> [New Thread -1367471216 (LWP 26496)]
> > >> [New Thread -1401775216 (LWP 26488)]
> > >> [New Thread -1435346032 (LWP 26405)]
> > >> [New Thread -1375863920 (LWP 26262)]
> > >> [New Thread -1384256624 (LWP 26044)]
> > >> [New Thread -1418560624 (LWP 25949)]
> > >> [New Thread -1341043824 (LWP 25672)]
> > >> [New Thread -1332544624 (LWP 22749)]
> > >> [New Thread -1324143728 (LWP 22748)]
> > >> [New Thread -1315751024 (LWP 22747)]
> > >> [New Thread -1307350128 (LWP 22746)]
> > >> [New Thread -1298949232 (LWP 22745)]
> > >> [New Thread -1290556528 (LWP 22744)]
> > >> [New Thread -1282159728 (LWP 22743)]
> > >> [New Thread -1273767024 (LWP 22742)]
> > >> [New Thread -1265374320 (LWP 22741)]
> > >> [New Thread -1256977520 (LWP 22740)]
> > >> [New Thread -1248584816 (LWP 22737)]
> > >> [New Thread -1231582320 (LWP 22736)]
> > >> [New Thread -1239975024 (LWP 22223)]
> > >> [New Thread -1223189616 (LWP 21856)]
> > >> [New Thread -1214796912 (LWP 21855)]
> > >> 0xb7fb7410 in __kernel_vsyscall ()
> > >> $1 = "gyrus-dir", '\0' <repeats 20 times>
> > >> $2 = 0x80f8e00 "bacula-dir"
> > >> $3 = 0x80f90b0 "/usr/sbin/"
> > >> $4 = 0x80f91e0 "MySQL"
> > >> $5 = 0x80ed38b "2.4.4 (28 December 2008)"
> > >> $6 = 0x80ed3a4 "i686-pc-linux-gnu"
> > >> $7 = 0x80ed3b6 "suse"
> > >> $8 = 0x80ed3bb "10.2"
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc0876 in __nanosleep_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a8347 in bmicrosleep (sec=60, usec=0) at bsys.c:71
> > >> #3  0x08071be4 in wait_for_next_job (one_shot_job_to_run=0x0) at scheduler.c:130
> > >> #4  0x0804de85 in main (argc=0, argv=0xbfa58da4) at dird.c:288
> > >>
> > >> Thread 25 (Thread -1214796912 (LWP 21855)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7b28a41 in ___newselect_nocancel () from /lib/libc.so.6
> > >> #2  0x080a9a85 in bnet_thread_server (addrs=0x80f9860, max_clients=20,
> > >> client_wq=0x80f64e0,
> > >>      handle_client_request=0x808d5fe <handle_UA_client_request>) at
> > >> bnet_server.c:161
> > >> #3  0x0808d5f6 in connect_thread (arg=0x80f9860) at ua_server.c:84
> > >> #4  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #5  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 24 (Thread -1223189616 (LWP 21856)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x080ccff3 in watchdog_thread (arg=0x0) at watchdog.c:307
> > >> #3  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #4  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 23 (Thread -1239975024 (LWP 22223)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x82d3b28, ptr=0xb61771b4
> > >> ";\r\016\b�q\027�\217#", nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x82d3b28) at bsock.c:381
> > >> #4  0x080a90ed in bnet_recv (bsock=0x82d3b28) at bnet.c:187
> > >> #5  0x0808efef in do_storage_status (ua=0x8172fc0, store=0x823b5a0) at
> > >> ua_status.c:325
> > >> #6  0x0808f776 in status_cmd (ua=0x8172fc0, cmd=0x8172c98 "status
> > >> storage") at ua_status.c:134
> > >> #7  0x08076e8c in do_a_command (ua=0x8172fc0, cmd=0x8172c98 "status
> > >> storage") at ua_cmds.c:180
> > >> #8  0x0808d70f in handle_UA_client_request (arg=0x816e4c8) at ua_server.c:147
> > >> #9  0x080cda1e in workq_server (arg=0x80f64e0) at workq.c:357
> > >> #10 0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #11 0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 22 (Thread -1231582320 (LWP 22736)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0x8302638) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0x8302638) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0x8302638) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0x8302638) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 21 (Thread -1248584816 (LWP 22737)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0x832a4c8) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0x832a4c8) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0x832a4c8) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0x832a4c8) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 20 (Thread -1256977520 (LWP 22740)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x82704b8, ptr=0xb513ff44 "��˷(",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x82704b8) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x82704b8) at getmsg.c:109
> > >> #5  0x0806c14b in start_storage_daemon_job (jcr=0xaf819f50, rstore=0x0,
> > >> wstore=0xaf81ae00) at msgchan.c:285
> > >> #6  0x080517e3 in do_backup (jcr=0xaf819f50) at backup.c:142
> > >> #7  0x08063e29 in job_thread (arg=0xaf819f50) at job.c:290
> > >> #8  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #9  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #10 0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 19 (Thread -1265374320 (LWP 22741)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0xaf80f3e0) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0xaf80f3e0) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0xaf80f3e0) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0xaf80f3e0) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 18 (Thread -1273767024 (LWP 22742)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0x8310518) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0x8310518) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0x8310518) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0x8310518) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 17 (Thread -1282159728 (LWP 22743)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0xaf842440,
> > >>      ptr=0xb393bf44
> > >> "��˷(�\223�h.3\b�*3\b\030\0222\b��\223�\201�\005\b@$\204���˷H�\223�|
\222
> > >>\n\ b@$\204�\224�1\b\004",
> > >>
> > >>      nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0xaf842440) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0xaf842440) at getmsg.c:109
> > >> #5  0x0806c14b in start_storage_daemon_job (jcr=0xaf81b7b0, rstore=0x0,
> > >> wstore=0xaf81c660) at msgchan.c:285
> > >> #6  0x080517e3 in do_backup (jcr=0xaf81b7b0) at backup.c:142
> > >> #7  0x08063e29 in job_thread (arg=0xaf81b7b0) at job.c:290
> > >> #8  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #9  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #10 0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 16 (Thread -1290556528 (LWP 22744)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0x8335af0) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0x8335af0) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0x8335af0) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0x8335af0) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 15 (Thread -1298949232 (LWP 22745)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x827e880, ptr=0xb2939024
> > >> "`\231\017\b\202", nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x827e880) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x827e880) at getmsg.c:109
> > >> #5  0x080513f8 in wait_for_job_termination (jcr=0xaf8016d0) at backup.c:274
> > >> #6  0x08051abd in do_backup (jcr=0xaf8016d0) at backup.c:235
> > >> #7  0x08063e29 in job_thread (arg=0xaf8016d0) at job.c:290
> > >> #8  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #9  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #10 0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 14 (Thread -1307350128 (LWP 22746)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0xaf809e38) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0xaf809e38) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0xaf809e38) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0xaf809e38) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 13 (Thread -1315751024 (LWP 22747)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc0876 in __nanosleep_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a8347 in bmicrosleep (sec=2, usec=0) at bsys.c:71
> > >> #3  0x08066122 in jobq_server (arg=0x80f6340) at jobq.c:590
> > >> #4  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #5  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 12 (Thread -1324143728 (LWP 22748)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cbd7dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
> > >> #2  0x0806b094 in wait_for_storage_daemon_termination (jcr=0x831e3b8) at msgchan.c:409
> > >> #3  0x080514e2 in wait_for_job_termination (jcr=0x831e3b8) at backup.c:304
> > >> #4  0x08051abd in do_backup (jcr=0x831e3b8) at backup.c:235
> > >> #5  0x08063e29 in job_thread (arg=0x831e3b8) at job.c:290
> > >> #6  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #7  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #8  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 11 (Thread -1332544624 (LWP 22749)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0xaf8425b8, ptr=0xb092f024
> > >> "`\231\017\b\202", nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0xaf8425b8) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0xaf8425b8) at getmsg.c:109
> > >> #5  0x080513f8 in wait_for_job_termination (jcr=0x8307878) at backup.c:274
> > >> #6  0x08051abd in do_backup (jcr=0x8307878) at backup.c:235
> > >> #7  0x08063e29 in job_thread (arg=0x8307878) at job.c:290
> > >> #8  0x08065a5f in jobq_server (arg=0x80f6340) at jobq.c:466
> > >> #9  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #10 0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 10 (Thread -1341043824 (LWP 25672)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x8334e10, ptr=0xb0114114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x8334e10) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x8334e10) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0x8302638) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 9 (Thread -1418560624 (LWP 25949)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0xaf843388, ptr=0xab727114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0xaf843388) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0xaf843388) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0x8307878) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 8 (Thread -1384256624 (LWP 26044)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x816acb8, ptr=0xad7de114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x816acb8) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x816acb8) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0x8310518) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 7 (Thread -1375863920 (LWP 26262)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0xaf801128, ptr=0xadfdf114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0xaf801128) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0xaf801128) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0x831e3b8) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 6 (Thread -1435346032 (LWP 26405)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x82935e0, ptr=0xaa725114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x82935e0) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x82935e0) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0x832a4c8) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 5 (Thread -1401775216 (LWP 26488)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0xaf843798, ptr=0xac729114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0xaf843798) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0xaf843798) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0x8335af0) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 4 (Thread -1367471216 (LWP 26496)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x82cc080, ptr=0xae7e0114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x82cc080) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x82cc080) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0xaf8016d0) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 3 (Thread -1393382512 (LWP 26575)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x8186008, ptr=0xacf2a114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x8186008) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x8186008) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0xaf809e38) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 2 (Thread -1426953328 (LWP 26587)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc002b in __read_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a93cc in read_nbytes (bsock=0x82a0090, ptr=0xaaf26114 "",
> > >> nbytes=4) at bnet.c:82
> > >> #3  0x080abae8 in BSOCK::recv (this=0x82a0090) at bsock.c:381
> > >> #4  0x0805de81 in bget_dirmsg (bs=0x82a0090) at getmsg.c:109
> > >> #5  0x0806b3b1 in msg_thread (arg=0xaf80f3e0) at msgchan.c:374
> > >> #6  0xb7cb9112 in start_thread () from /lib/libpthread.so.0
> > >> #7  0xb7b2f2ee in clone () from /lib/libc.so.6
> > >>
> > >> Thread 1 (Thread -1214118192 (LWP 21853)):
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #1  0xb7cc0876 in __nanosleep_nocancel () from /lib/libpthread.so.0
> > >> #2  0x080a8347 in bmicrosleep (sec=60, usec=0) at bsys.c:71
> > >> #3  0x08071be4 in wait_for_next_job (one_shot_job_to_run=0x0) at scheduler.c:130
> > >> #4  0x0804de85 in main (argc=0, argv=0xbfa58da4) at dird.c:288
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> #0  0xb7fb7410 in __kernel_vsyscall ()
> > >> No symbol table info available.
> > >> #1  0xb7cc0876 in __nanosleep_nocancel () from /lib/libpthread.so.0
> > >> No symbol table info available.
> > >> #2  0x080a8347 in bmicrosleep (sec=60, usec=0) at bsys.c:71
> > >> 71          stat = nanosleep(&timeout, NULL);
> > >> Current language:  auto; currently c++
> > >> timeout = {tv_sec = 60, tv_nsec = 0}
> > >> tv = {tv_sec = 0, tv_usec = 2}
> > >> tz = {tz_minuteswest = 2, tz_dsttime = 0}
> > >> stat = 0
> > >> #3  0x08071be4 in wait_for_next_job (one_shot_job_to_run=0x0) at scheduler.c:130
> > >> 130            bmicrosleep(next_check_secs, 0); /* recheck once per minute */
> > >> jcr = (JCR *) 0xaf85cef8
> > >> job = (JOB *) 0x820acb8
> > >> run = (RUN *) 0x8241a70
> > >> now = 53354
> > >> prev = 1231214400
> > >> next_job = (job_item *) 0x0
> > >> first = false
> > >> #4  0x0804de85 in main (argc=0, argv=0xbfa58da4) at dird.c:288
> > >> 288         while ( (jcr = wait_for_next_job(runjob)) ) {
> > >> ch = -1
> > >> jcr = (JCR *) 0xaf85cef8
> > >> no_signals = false
> > >> test_config = false
> > >> uid = 0xbfa5a4fe "bacula"
> > >> gid = 0xbfa5a508 "bacula"
> > >> #0  0x00000000 in ?? ()
> > >> No symbol table info available.
> > >> #0  0x00000000 in ?? ()
> > >> No symbol table info available.
> > >> #0  0x00000000 in ?? ()
> > >> No symbol table info available.
> > >>

_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users