Subject: Re: [Bacula-users] Maximum Volume Jobs not honoured?
From: tballin <t.ballin AT studiorakete DOT de>
To: bacula-users AT lists.sourceforge DOT net
Date: Fri, 25 Jul 2014 10:14:54 +0200

Hi,

thanks for the reply. Yes, I have set Maximum Concurrent Jobs to 3. I have
two drives in one library which can (must!) handle two jobs at a time, so I
set the maximum on the Storage resource, since I thought that was the way
to go. Actually, I would love to spread one job over two drives, but I have
been told that's not possible.

I have set Maximum Concurrent Jobs to 3 in the Storage resource (the
Director has 20) because I have another 15 TB remote job which uses
spooling. Although that third job can spool 3 TB of data, it will only
start once it can lock a drive, so the maximum number of jobs I have seen
running concurrently was 2.
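
In case it helps, this is roughly the shape of it - only a trimmed sketch,
not our full config; the "..." stands for the directives I left out:

    # bacula-dir.conf (sketch, heavily trimmed)
    Director {
      Name = shelfspace-dir
      Maximum Concurrent Jobs = 20   # Director-wide cap
      ...
    }

    Storage {
      Name = Neo400
      Maximum Concurrent Jobs = 3    # two tape drives + one spooling job
      ...
    }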

If I understand you correctly, the "media sharing" would stop if I used a
different pool for every job? I did not do that because all jobs actually
back up the same snapshot, so all expiry times are the same - it is the
same backup set. Would that be more of a workaround, or is this the
expected behaviour? To me, "Maximum Volume Jobs = 1" sounds like any given
volume will hold only one job, or a part of one job. And that's what we
need.
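
If separate pools per job are indeed the way to go, I guess they would look
roughly like this - a sketch only, with hypothetical pool names derived
from our job names, and the retention being our current 3,024,000 seconds
(= 35 days):

    Pool {
      Name = Revolver02a-Full        # hypothetical: one pool per job
      Pool Type = Backup
      Maximum Volume Jobs = 1
      Volume Retention = 35 days
    }

    Pool {
      Name = Revolver02b-Full        # hypothetical: pool for the other job
      Pool Type = Backup
      Maximum Volume Jobs = 1
      Volume Retention = 35 days
    }

If that behaves as I expect, a catalog query (e.g. via sqlquery in
bconsole) should then return at most one distinct JobId per volume, for
example for MediaId 20 (RB0101L6) from the listing below:

    SELECT DISTINCT JobId FROM JobMedia WHERE MediaId=20;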

Timo

On 07/24/2014 08:28 PM, Martin Simmons wrote:
> It looks like you have Maximum Concurrent Jobs set to > 1, so you have jobs
> interleaving on the volumes because they specify the same pool.
>
> I don't know how that is supposed to interact with Maximum Volume Jobs = 1,
> but it looks like it doesn't work.  Did you expect it to do something
> specific?
>
> __Martin
>
>
>>>>>> On Thu, 24 Jul 2014 19:11:38 +0200, tballin said:
>> Hi all,
>>
>> we started using Bacula with a Neo400 (incl. 2 LTO6 drives) for backup.
>> Although we have quite big amounts of data to back up, the job should be
>> pretty simple, since mostly it just needs to copy the files to tapes for
>> an off-site backup. Since we are using ZFS as the file system, Bacula
>> has a relaxed time window of one week for the backup.
>>
>> But we have one important requirement: one job per volume (so we can
>> easily take the tapes off site and back). After some interesting
>> experiences with "purge jobs", our full backup, as the base for the
>> incrementals, was finally done:
>>
>>     Build OS:               x86_64-unknown-linux-gnu redhat
>>     JobId:                  65
>>     Job:                    IncrementalToRevolver02a.2014-07-18_14.10.42_13
>>     Backup Level:           Full (upgraded from Incremental)
>>     Client:                 "shelfspace-fd" 7.0.3 (12May14) x86_64-unknown-linux-gnu,redhat,
>>     FileSet:                "NoahSet_A" 2014-07-11 16:47:40
>>     Pool:                   "Revolver02-Full" (From Job FullPool override)
>>     Catalog:                "MyCatalog" (From Client resource)
>>     Storage:                "Neo400" (From Pool resource)
>>     Scheduled time:         18-Jul-2014 14:10:42
>>     Start time:             18-Jul-2014 14:10:42
>>     End time:               24-Jul-2014 07:21:14
>>     Elapsed time:           5 days 17 hours 10 mins 32 secs
>>     Priority:               10
>>     FD Files Written:       11,719,314
>>     SD Files Written:       11,719,314
>>     FD Bytes Written:       24,262,705,996,248 (24.26 TB)
>>     SD Bytes Written:       24,265,651,316,062 (24.26 TB)
>>     Rate:                   49131.5 KB/s
>>     Software Compression:   None
>>     VSS:                    no
>>     Encryption:             no
>>     Accurate:               no
>>     Volume name(s):         RB0104L6|RB0101L6|RB0110L6|RB0107L6|RB0219L6|RB0112L6|RB0109L6|RB0216L6|RB0211L6|RB0210L6|RB0208L6|RB0218L6
>>     Volume Session Id:      14
>>     Volume Session Time:    1405092165
>>     Last Volume Bytes:      2,628,260,877,312 (2.628 TB)
>>     Non-fatal FD errors:    2
>>     SD Errors:              0
>>     FD termination status:  OK
>>     SD termination status:  OK
>>     Termination:            Backup OK -- with warnings
>>
>> And for the second drive:
>>
>>
>> 20-Jul 05:41 shelfspace-dir JobId 66: Bacula shelfspace-dir 7.0.3 (12May14):
>>     Build OS:               x86_64-unknown-linux-gnu redhat
>>     JobId:                  66
>>     Job:                    IncrementalToRevolver02b.2014-07-18_14.10.42_14
>>     Backup Level:           Full (upgraded from Incremental)
>>     Client:                 "shelfspace-fd" 7.0.3 (12May14) x86_64-unknown-linux-gnu,redhat,
>>     FileSet:                "NoahSet_B" 2014-07-11 16:47:40
>>     Pool:                   "Revolver02-Full" (From Job FullPool override)
>>     Catalog:                "MyCatalog" (From Client resource)
>>     Storage:                "Neo400" (From Pool resource)
>>     Scheduled time:         18-Jul-2014 14:10:42
>>     Start time:             18-Jul-2014 14:10:46
>>     End time:               20-Jul-2014 05:41:43
>>     Elapsed time:           1 day 15 hours 30 mins 57 secs
>>     Priority:               10
>>     FD Files Written:       1,719,165
>>     SD Files Written:       1,719,165
>>     FD Bytes Written:       6,878,789,531,036 (6.878 TB)
>>     SD Bytes Written:       6,879,192,891,412 (6.879 TB)
>>     Rate:                   48354.7 KB/s
>>     Software Compression:   None
>>     VSS:                    no
>>     Encryption:             no
>>     Accurate:               no
>>     Volume name(s):         RB0101L6|RB0110L6|RB0107L6|RB0219L6|RB0112L6
>>     Volume Session Id:      15
>>     Volume Session Time:    1405092165
>>     Last Volume Bytes:      1,699,995,386,880 (1.699 TB)
>>     Non-fatal FD errors:    0
>>     SD Errors:              0
>>     FD termination status:  OK
>>     SD termination status:  OK
>>     Termination:            Backup OK
>>
>>
>> There are on average ~2.8 TB on every tape. Also, when I query the jobs
>> for a tape, I get two JobIds, 66 and 65, as the result. But when I get
>> the details for the volume via e.g. llist volume=RB0101L6, it says there
>> is only one job on the volume - and there should be only one job on the
>> volume ... as far as I understand:
>>
>> *llist volume=RB0101L6
>> Automatically selected Catalog: MyCatalog
>> Using Catalog "MyCatalog"
>>             MediaId: 20
>>          VolumeName: RB0101L6
>>                Slot: 21
>>              PoolId: 9
>>           MediaType: LTO-6
>>        FirstWritten: 2014-07-18 14:13:27
>>         LastWritten: 2014-07-18 21:30:03
>>           LabelDate: 2014-07-18 14:10:46
>>             VolJobs: 1
>>            VolFiles: 160
>>           VolBlocks: 49,609,582
>>           VolMounts: 2
>>            VolBytes: 3,200,413,418,496
>>           VolErrors: 0
>>           VolWrites: 50,849,663
>>    VolCapacityBytes: 0
>>           VolStatus: Used
>>             Enabled: 1
>>             Recycle: 1
>>        VolRetention: 3,024,000
>>      VolUseDuration: 0
>>          MaxVolJobs: 1
>>         MaxVolFiles: 0
>>         MaxVolBytes: 0
>>           InChanger: 1
>>             EndFile: 160
>>            EndBlock: 6,541
>>            VolParts: 0
>>           LabelType: 0
>>           StorageId: 2
>>            DeviceId: 0
>>          LocationId: 0
>>        RecycleCount: 1
>>        InitialWrite: 0000-00-00 00:00:00
>>       ScratchPoolId: 0
>>       RecyclePoolId: 2
>>       ActionOnPurge: 0
>>             Comment: NULL
>>
>>
>> Every pool has Maximum Volume Jobs = 1 set. I'll admit we are doing a
>> lot of experimenting and purging, but this option has always been set.
>> In total, the amount of data written to the tapes matches the job sizes
>> ... more or less. But there are just too many volumes per job ...
>>
>> I am a little clueless here ... I would appreciate any hints.
>>
>> With kind regards
>>
>> Timo Ballin
>>


-- 
STUDIO RAKETE GmbH
Timo Ballin, System Administration
Schomburgstr. 120
D - 22767 Hamburg

t.ballin AT studiorakete DOT de
Tel:+49 (0)40 - 380 375 69 - 0
Fax:+49 (0)40 - 380 375 69 - 99

------------------------------------------------------
Mandatory disclosures under the German Commercial Code (HGB) and the
German Limited Liability Companies Act (GmbHG):

STUDIO RAKETE GmbH
Schomburgstr. 120, D - 22767 Hamburg

www.studiorakete.de / info AT studiorakete DOT de

Managing Director: Jana Bohl

The company is registered in the commercial register of the Amtsgericht
Hamburg (Hamburg Local Court) under number HR B 95660.
VAT ID No.: DE 245787817



_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users
