Re: [Bacula-users] Migration from Arkeia, Other Questions
2008-05-01 19:39:34
> With spooling, the answer is no. I limit my spool size to 2 to 5 GB, but
> others set it much larger. I know Arno uses a much larger spool file that
> approximates the size of the tape.
>
>> Or do I need to have each job write to a new volume (in the same pool)
>> with "Use Volume Once = yes" or "Maximum Volume Jobs = 1"? That doesn't
>> seem efficient, though. I'd expect some option to "write the job to a
>> unique volume, but once that job is done, another job can append" (to
>> eliminate the interleaved-volume-blocks issue when simultaneous jobs
>> write). If we're trying to keep the total space used by a pool to 3 TB
>> max, it doesn't seem efficient to use one job per volume if the volume
>> can't fill up. Recycling by a maximum number of volumes won't work (use
>> space efficiently / keep as much data as possible) if the volumes aren't
>> full. And from what I can tell, there's no "max pool size" option.
>>
>> Anyone follow?
>>
>>
> Yes, I follow. I could go into more detail but this has been discussed
> on the list dozens of times. Please check the archives for the pros
> and cons of all of these methods...
This is driving me crazy (searching the mailing list archives). There
doesn't seem to be a clear-cut answer, or I'm looking for the wrong thing. I
just don't see why disk-based backups need to be spooled when that I/O could
be used for additional simultaneous jobs. I'm looking to run 10-20+ jobs
concurrently without setting up 10-20 devices and 10-20 pools (as I
understand things need to be done). It's basically one device/pool/volume
per job? Ten concurrent jobs, ten devices, if you don't want spooling or
sloppy volume writing? It seems like there would be a much easier way. Am I
missing something?
Sorry for all the questions, but this part isn't clicking. And apparently
I'm not the only one, considering the activity on this topic.
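For context, the concurrency settings being discussed live in each daemon's
config. A minimal sketch of a file-storage setup follows; the resource
names and the NFS path are hypothetical, not taken from this thread:

```conf
# bacula-sd.conf -- minimal file-storage sketch (names and paths hypothetical)
Storage {
  Name = backup-sd
  SDPort = 9103
  WorkingDirectory = /var/bacula/working
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20   # allow many jobs into this SD at once
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /mnt/netapp/bacula   # NFS-mounted volume directory
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```

With a single File device like this, concurrent jobs either interleave
their blocks on the shared volume or get serialized, which is exactly the
spooling-vs-multiple-devices trade-off this thread is about.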
Thomas Krwawecz III
--
Blue Gravity Communications, Inc.
3495 Haddonfield Rd, Suite 6
Pennsauken, NJ 08109
Toll Free: 1-877-8 HOSTING
Tel: (856) 662-9100, Fax: (856) 662-9101
Web: http://www.bluegravity.com
----- Original Message -----
From: "John Drescher" <drescherjm AT gmail DOT com>
To: "Thomas Krwawecz III" <tom AT bluegravity DOT com>
Cc: "Bob Hetzel" <beh AT case DOT edu>; <Bacula-users AT lists.sourceforge DOT
net>
Sent: Thursday, May 01, 2008 12:47 PM
Subject: Re: [Bacula-users] Migration from Arkeia, Other Questions
> On Thu, May 1, 2008 at 12:23 PM, Thomas Krwawecz III
> <tom AT bluegravity DOT com> wrote:
>> We're doing disk-based backups over NFS to a NetApp, so hardware
>> compatibility is not a concern. In no way are we switching to save
>> money; I just need something reliable. With Arkeia, I'm lucky if
>> scheduled backups run consistently for 2-4 weeks before breaking.
>>
>> Final questions (and I can pay if there's someone available for quick
>> phone support until we get set up):
>>
>>
> I know Arno Lehmann can provide paid support if you would like that.
>
>>
>> 1) Where should I send suggestions/feature requests? To the devel list?
>>
>>
> I believe Arno collects these, although there is currently a long wish
> list...
>
>>
>> 2) YES/NO: Backing up concurrent jobs/clients requires the following in
>> "bacula-dir.conf", as well as the other confs, correct?
>>
>> Director {
>>   ...
>>   Maximum Concurrent Jobs = 10
>>   ...
>> }
>>
> I have this set in quite a few places:
> # grep -R oncurrent *
> bacula-dir.conf: Maximum Concurrent Jobs = 5
> bacula-fd.conf: Maximum Concurrent Jobs = 5
> bacula-sd.conf: Maximum Concurrent Jobs = 20
> include/bacula-dir-storage.conf: Maximum Concurrent Jobs = 5
> include/bacula-dir-storage.conf: Maximum Concurrent Jobs = 5
> include/bacula-dir-storage.conf: Maximum Concurrent Jobs = 5
> include/bacula-dir-storage.conf: Maximum Concurrent Jobs = 5
> include/bacula-dir-clients-linux.conf: Maximum Concurrent Jobs = 2
> include/bacula-dir-clients-linux.conf: Maximum Concurrent Jobs = 2
> include/bacula-dir-clients-linux.conf: Maximum Concurrent Jobs = 2
> include/bacula-dir-clients-win.conf: Maximum Concurrent Jobs = 2
>
> It looks like this is set for the fd and the local sd, and also for some
> of the client and storage resources in bacula-dir.conf, as well as in the
> main bacula-dir.conf. BTW, the files in the include directory are pulled
> into bacula-dir.conf using the @ syntax.
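The @ syntax John mentions simply inlines another file into the Director
config at that point. A sketch, with hypothetical paths matching the file
names from his grep output:

```conf
# bacula-dir.conf -- each @ line is replaced by the contents of that file
@/etc/bacula/include/bacula-dir-storage.conf
@/etc/bacula/include/bacula-dir-clients-linux.conf
@/etc/bacula/include/bacula-dir-clients-win.conf
```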
>
>>
>> 3) How should pools be defined for concurrent backups? Is there a
>> problem
>> with multiple jobs writing to the same volume?
>>
> With spooling, the answer is no. I limit my spool size to 2 to 5 GB, but
> others set it much larger. I know Arno uses a much larger spool file that
> approximates the size of the tape.
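The spooling setup John describes might be sketched as below. The job name,
spool path, and size are illustrative only (and both resources would need
their other mandatory directives, omitted here):

```conf
# Job resource: spool job data to local disk, then despool to the volume
Job {
  Name = nightly-backup        # hypothetical job name
  Spool Data = yes
}

# Device resource: cap the spool, as John does with his 2-5 GB limit
Device {
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 5g
}
```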
>
>> Or do I need to have each job write to a new volume (in the same pool)
>> with "Use Volume Once = yes" or "Maximum Volume Jobs = 1"? That doesn't
>> seem efficient, though. I'd expect some option to "write the job to a
>> unique volume, but once that job is done, another job can append" (to
>> eliminate the interleaved-volume-blocks issue when simultaneous jobs
>> write). If we're trying to keep the total space used by a pool to 3 TB
>> max, it doesn't seem efficient to use one job per volume if the volume
>> can't fill up. Recycling by a maximum number of volumes won't work (use
>> space efficiently / keep as much data as possible) if the volumes aren't
>> full. And from what I can tell, there's no "max pool size" option.
>>
>> Anyone follow?
>>
>>
> Yes, I follow. I could go into more detail but this has been discussed
> on the list dozens of times. Please check the archives for the pros
> and cons of all of these methods...
>
>> Here's what I have configured now:
>>
>> Pool {
>>   Name = Weekly-Pool
>>   Pool Type = Backup
>>   AutoPrune = yes
>>   Recycle = yes
>>   Recycle Oldest Volume = yes
>>   Label Format = "Weekly-Volume-"
>>   Volume Retention = 14 days
>>   Maximum Volumes = 48
>>   Maximum Volume Bytes = 25000000000
>> }
>
> You can use Maximum Volume Bytes = 25 G
>
> instead. Much easier to tell with the G ...
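John's point about size suffixes, applied to the Pool above (25000000000
bytes is roughly 25 G; other directives omitted from this sketch):

```conf
# Same Pool as above, with the byte count written using a unit suffix
Pool {
  Name = Weekly-Pool
  Pool Type = Backup
  Maximum Volumes = 48
  Maximum Volume Bytes = 25 G   # ~25000000000, far easier to read
}
```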
>
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users