Author: Victor Hugo dos Santos <listas.vhs AT gmail DOT com>
Date: Wed, 11 Mar 2009 18:01:13 -0300
Hello, I have this network topology: ~40 servers/clients, 1 director, 2 storages; all servers connect through Gigabit switches and have Gigabit network cards too. :-) All the Bacula machines (director and storages) [...]
Bacula does much more than what rsync etc. do. Bacula also updates the Catalog database, for example. That is the usual bottleneck. Note: I believe that the bandwidth you are looking at is the 'data transfer' [...]
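If the Catalog updates are indeed the bottleneck, one commonly suggested mitigation is attribute spooling, which batches the per-file attribute inserts at job end instead of interleaving them with data writes. A minimal sketch of where the directive goes (the Job name is made up; only the Spool Attributes line is the point):

```
# bacula-dir.conf -- hypothetical Job resource for illustration
Job {
  Name = "example-client-job"
  # ... Client, FileSet, Schedule, Storage, Pool as usual ...
  Spool Attributes = yes   # send file attributes to the Catalog in one batch
}
```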
Author: John Drescher <drescherjm AT gmail DOT com>
Date: Wed, 11 Mar 2009 17:37:02 -0400
This is expected for incremental backups, because the hard drive spends most of its time thrashing, finding the files to back up. However, in most cases, even with the low data rate, this is much faster than [...]
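The seek-bound behaviour is easy to see in miniature. This is a sketch in Python, not Bacula's actual code: an incremental pass must stat() every file to compare its mtime against the last backup time, so on spinning disks the run is dominated by seeks even when almost nothing has changed.

```python
import os
import tempfile
import time


def files_changed_since(root, cutoff):
    """Walk the tree and keep files modified after the cutoff time.

    Note the cost model: one stat() per file, changed or not.
    """
    changed = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mtime > cutoff:
                changed.append(path)
    return changed


with tempfile.TemporaryDirectory() as root:
    old = os.path.join(root, "old.txt")
    new = os.path.join(root, "new.txt")
    for p in (old, new):
        open(p, "w").close()
    cutoff = time.time() - 3600
    # Backdate one file so it falls before the "last backup" cutoff.
    os.utime(old, (cutoff - 86400, cutoff - 86400))
    changed = files_changed_since(root, cutoff)
    print(changed)
```

Only the recently touched file survives the filter, but every file in the tree was stat()ed to decide that.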
Author: Victor Hugo dos Santos <listas.vhs AT gmail DOT com>
Date: Thu, 12 Mar 2009 09:38:46 -0300
On Wed, Mar 11, 2009 at 6:37 PM, John Drescher <drescherjm AT gmail DOT com> wrote: [...] the same problem with a FULL backup. Look:

==
  Build OS:  i486-pc-linux-gnu debian lenny/sid
  JobId:     9610
  Job:       red-[...]
Author: Victor Hugo dos Santos <listas.vhs AT gmail DOT com>
Date: Thu, 12 Mar 2009 09:52:26 -0300
I have jobs with fewer than 25 files and 200 MB, running alone (no parallel jobs on the director or storage). This should not be a bottleneck for the catalog!! :-( I'm looking at information in the logs of executed jobs: [...]
Author: John Drescher <drescherjm AT gmail DOT com>
Date: Thu, 12 Mar 2009 09:19:12 -0400
You are using software compression. That will drastically slow down a backup. John
Author: Thomas Glatthor <Thomas.Glatthor AT ic3s DOT de>
Date: Thu, 12 Mar 2009 14:22:31 +0100
Victor Hugo dos Santos wrote: Have you ever tried without compression? Maybe the CPU is the bottleneck. -- ic3s Information, Computer und Solartechnik AG, Bäckerbarg 6, 22889 Tangstedt, Germany [...]
What John said. Turn off software compression. -- Dan Langille BSDCan - The Technical BSD Conference: http://www.bsdcan.org/ PGCon - The PostgreSQL Conference: http://www.pgcon.org/
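For reference, software compression is enabled per-FileSet in the Options block of the director configuration; a minimal sketch (the FileSet name and file path are made up) showing the line to remove or comment out:

```
# bacula-dir.conf -- hypothetical FileSet; only the compression line matters
FileSet {
  Name = "example-fileset"
  Include {
    Options {
      signature = MD5
      # compression = GZIP9   # commented out: software compression disabled
    }
    File = /home
  }
}
```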
Author: John Drescher <drescherjm AT gmail DOT com>
Date: Thu, 12 Mar 2009 10:28:10 -0400
I am not sure what the OP is trying to optimize here; however, I do have a few ideas if I guess the problem correctly. Assumption: need to back up 40 desktops/servers to a disk array with reasonable performance [...]
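Since the list of ideas is cut off above, here is one of the usual knobs for a many-client setup like this (an assumption on my part, not necessarily what John had in mind): running jobs concurrently. Maximum Concurrent Jobs must be raised consistently in the Director, Storage daemon, and File daemon resources, e.g.:

```
# bacula-dir.conf (sketch) -- the same directive must also be raised in the
# matching Storage daemon and File daemon resources, or it has no effect
Director {
  Name = "example-dir"
  # ...
  Maximum Concurrent Jobs = 10
}
```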
Author: Victor Hugo dos Santos <listas.vhs AT gmail DOT com>
Date: Fri, 13 Mar 2009 17:34:35 -0300
[...] well, it's true: when I configure jobs to use compression, the time is much longer and the rate (rate = MB / time, I believe) is low. :-( Before, I had the GZIP option configured as 9, and: time 03:55:4[...]
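The GZIP level trade-off is easy to measure outside Bacula. A small Python sketch (illustrative only, not Bacula code) comparing compression levels on repetitive data:

```python
import gzip
import time

# Repetitive sample data: highly compressible, like logs or SQL dumps.
data = b"bacula backup rate test line\n" * 50000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = gzip.compress(data, compresslevel=level)
    elapsed = time.perf_counter() - t0
    print(f"level {level}: {len(out)} bytes in {elapsed:.4f}s")

# Higher levels trade CPU time for (often modest) extra space savings,
# which is why GZIP9 can crater the backup rate on a fast network.
```

On an otherwise fast Gigabit link, the CPU cost of level 9 easily becomes the limiting factor, which matches the timings reported above.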
Author: Bruno Friedmann <bruno AT ioda-net DOT ch>
Date: Sat, 14 Mar 2009 07:25:18 +0100
Hi Hugo, Well, now that you know about the different gzip levels, keep an eye on how much space the resulting saves take. For example, I have some Windows servers which contain Oracle full SQL dumps (2[...]