[Bacula-users] Some sql_get.c sqlite error messages with bacula

From: Russell Sutherland <russ AT quist DOT ca>
To: bacula-users AT lists.sourceforge DOT net
Date: Tue, 21 Jul 2009 11:56:29 -0400
I've had some small problems trying to get bacula up and running on an
older Linux system. Specifically:

Debian 3.0 aka woody
# uname -a
Linux postoffice.burg.com  2.2.20 #1 Thu Feb 26 18:12:28 UTC 2004 i686 unknown

I installed bacula 2.2.8 from source and am using sqlite 2.4.7 from
the Debian packages:

# bconsole
Connecting to Director postoffice:9101
1000 OK: postoffice-dir Version: 2.2.8 (26 January 2008)

# dpkg --list | grep sqli
ii  libsqlite-dev  2.4.7-1        SQLite development files
ii  libsqlite0     2.4.7-1        SQLite shared library
ii  sqlite         2.4.7-1        A command line interface for SQLite
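
For what it's worth, the query that fails in the job log below can be
fed straight to the sqlite 2 shell to check whether sqlite itself
rejects it. The catalog path here is a guess on my part; substitute
whatever your Catalog resource points at:

# sqlite /var/lib/bacula/bacula.db
sqlite> SELECT VolumeName, MAX(VolIndex)
   ...>   FROM JobMedia, Media
   ...>  WHERE JobMedia.JobId = 7
   ...>    AND JobMedia.MediaId = Media.MediaId
   ...>  GROUP BY VolumeName
   ...>  ORDER BY 2 ASC;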

I am backing up to an SMB-mounted Windows disk, and everything works
except for some error messages at the SQL level:

21-Jul 11:20 postoffice-dir JobId 7: Start Backup JobId 7,
Job=postoffice-data.2009-07-21_11.20.03
21-Jul 11:20 postoffice-dir JobId 7: Using Device "FileStorage"
21-Jul 11:20 nas-sd JobId 7: Volume "Inc-0005" previously written,
moving to end of data.
21-Jul 11:20 nas-sd JobId 7: Ready to append to end of Volume
"Inc-0005" size=9133277
21-Jul 11:21 nas-sd JobId 7: Job write elapsed time = 00:00:23,
Transfer rate = 417.5 K bytes/second
21-Jul 11:21 postoffice-dir JobId 7: Fatal error: sql_get.c:359
sql_get.c:359 query SELECT VolumeName,MAX(VolIndex) FROM
JobMedia,Media WHERE JobMedia.JobId=7 AND
JobMedia.MediaId=Media.MediaId GROUP BY VolumeName ORDER BY 2 ASC
failed:
ORDER BY expressions should not be constant
21-Jul 11:21 postoffice-dir JobId 7: sql_get.c:359 SELECT
VolumeName,MAX(VolIndex) FROM JobMedia,Media WHERE JobMedia.JobId=7
AND JobMedia.MediaId=Media.MediaId GROUP BY VolumeName
ORDER BY 2 ASC
21-Jul 11:21 postoffice-dir JobId 7: Bacula postoffice-dir 2.2.8
(26Jan08): 21-Jul-2009 11:21:02
  Build OS:               i686-pc-linux-gnu debian 3.0
  JobId:                  7
  Job:                    postoffice-data.2009-07-21_11.20.03
  Backup Level:           Incremental, since=2009-07-21 10:56:47
  Client:                 "postoffice-fd" 2.2.8 (26Jan08)
i686-pc-linux-gnu,debian,3.0
  FileSet:                "Full Set" 2009-07-21 10:28:16
  Pool:                   "Inc-Pool" (From Job IncPool override)
  Storage:                "Storage-postoffice-disk" (From Job
resource)
  Scheduled time:         21-Jul-2009 11:20:31
  Start time:             21-Jul-2009 11:20:38
  End time:               21-Jul-2009 11:21:02
  Elapsed time:           24 secs
  Priority:               10
  FD Files Written:       171
  SD Files Written:       171
  FD Bytes Written:       9,577,579 (9.577 MB)
  SD Bytes Written:       9,602,651 (9.602 MB)
  Rate:                   399.1 KB/s
  Software Compression:   None
  VSS:                    no
  Storage Encryption:     no
  Volume name(s):
  Volume Session Id:      1
  Volume Session Time:    1248188363
  Last Volume Bytes:      18,748,058 (18.74 MB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

21-Jul 11:21 postoffice-dir JobId 7: Begin pruning Jobs.
21-Jul 11:21 postoffice-dir JobId 7: No Jobs found to prune.
21-Jul 11:21 postoffice-dir JobId 7: Begin pruning Files.
21-Jul 11:21 postoffice-dir JobId 7: No Files found to prune.
21-Jul 11:21 postoffice-dir JobId 7: End auto prune.

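If I read the error right, sqlite 2.4.x takes the positional
"ORDER BY 2" to be a constant integer expression rather than a
reference to the second select column, hence the complaint. A rewrite
that names the aggregate instead of numbering it might sidestep the
problem; this is just a sketch, untested on this box:

SELECT VolumeName, MAX(VolIndex) AS MaxVolIndex
  FROM JobMedia, Media
 WHERE JobMedia.JobId = 7
   AND JobMedia.MediaId = Media.MediaId
 GROUP BY VolumeName
 ORDER BY MaxVolIndex ASC;
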
Do I need to worry about this error?

The machine will be upgraded in due course, but I need bacula running
for at least the next three months.

-- 
Russell Sutherland
russ AT quist DOT ca
+1.416.696.7600

