Bacula-users

Subject: Re: [Bacula-users] copy jobs
From: Ana Emília M. Arruda <emiliaarruda AT gmail DOT com>
To: Lukas Hejtmanek <xhejtman AT ics.muni DOT cz>
Date: Thu, 13 Aug 2015 13:48:28 -0300
Hello Lukas,

I forgot to mention that you can have two copy jobs running and sending their copies to different pools. In this case you would need more than one copy pool per client, but this way your backups end up replicated to two different groups of volumes. You do this by setting the next pool in the Run directives of the Schedule resource:

Schedule {
    Name = CopyJobSchedule
    Run = ... Next Pool=CopyJobPool1 daily at 23:00
    Run = ... Next Pool=CopyJobPool2 daily at 23:00
}
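
To make the idea concrete, here is a rough sketch of how a single copy job and the two destination pools could tie into that schedule. The resource names (CopyJob, SourcePool, CopyJobPool1/2 and the storages) are placeholders, not taken from your configuration:

Job {
    Name = "CopyJob"                  # placeholder name
    Type = Copy
    Pool = SourcePool                 # pool holding the original backup jobs
    Schedule = CopyJobSchedule        # runs twice at 23:00, once per Run line above
    Selection Type = SQLQuery         # e.g. the client-specific query quoted below
    Selection Pattern = "..."         # see the query in my earlier post
    ...                               # Client, FileSet, Storage, Messages as usual
}

Pool {
    Name = CopyJobPool1               # first group of copy volumes
    Pool Type = Backup
    Storage = CopyStorage1            # placeholder storage resource
}

Pool {
    Name = CopyJobPool2               # second group of copy volumes
    Pool Type = Backup
    Storage = CopyStorage2
}

Note that Next Pool is not set in the job or in the source pool here; it comes from the Run lines of the schedule.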

You would need to configure Maximum Concurrent Jobs for the devices being used, for the client, and for the copy job. Also, since the two runs are scheduled at the same time (or if their run times overlap), you may need to set Allow Duplicate Jobs for the copy job.
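
As an illustration only (the directive names are standard, but the resource names and the value 2 are assumptions to adapt to your setup), the concurrency-related settings would look roughly like this:

# bacula-dir.conf: let the copy job run twice at the same time
Job {
    Name = "CopyJob"
    ...
    Allow Duplicate Jobs = yes
    Maximum Concurrent Jobs = 2
}

# bacula-dir.conf: client side of the concurrency
Client {
    Name = "myclient-fd"
    ...
    Maximum Concurrent Jobs = 2
}

# bacula-sd.conf: each device involved in the copies
Device {
    Name = "FileStorage"
    ...
    Maximum Concurrent Jobs = 2
}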

In this situation it will not be possible to use the PoolUncopiedJobs selection type. Instead, you could use an SQLQuery selection like the one I sent in my earlier post (quoted below).

Best regards,
Ana

On Wed, Aug 12, 2015 at 12:11 AM, Ana Emília M. Arruda <emiliaarruda AT gmail DOT com> wrote:
Hello Lukas,

Regarding your first post, could you give more details about "is it possible to set up a copy job for a single client so that all the jobs' data is stored on two different volumes"? Do you mean the same data being replicated into two different volumes in the same pool? Sorry, I cannot see the goal here. I see copy jobs as intended for offline backups, so the data goes to a pool different from the original one.

I tried some configurations with a cloned copy job, but if you have more than one job to be copied, let's say 2 jobs, the original copy job and its clone will each copy only one of the original jobs. That is not the objective here.

On Mon, Aug 10, 2015 at 11:36 AM, Lukas Hejtmanek <xhejtman AT ics.muni DOT cz> wrote:
Hello,

is it possible to set up a copy job for a single client so that all the jobs' data
is stored on two different volumes? But without specifying an extra pool for
such a client. If I specify an extra pool, I can use PoolUncopiedJobs.
That works, but not for a single client only.

If I specify e.g. Client as a selection rule, I end up in a recursion, because
all the copy jobs are selected for copying the next time as they match the client name.

Maybe you could use an SQLQuery for this:

  Selection Type = SQLQuery
  Selection Pattern = "SELECT DISTINCT Job.JobId FROM \
                       Client,Job,JobMedia,Media WHERE Client.Name='myclient-fd' \
                       AND Client.ClientId=Job.ClientId AND Job.Type='B' \
                       AND JobStatus IN ('T') AND JobMedia.JobId=Job.JobId \
                       AND JobMedia.MediaId=Media.MediaId ORDER BY Job.StartTime DESC LIMIT X;"

This will include all the jobs for a specific client, just the backup jobs (not the copy jobs), and if you know how many jobs for this client should be copied each time the copy job runs, you can set that number as 'X' at the end of the query.
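
If you want to see which JobIds the selection would pick before wiring it into the copy job, you can paste the query (with your client name and a real number in place of X) into bconsole's sqlquery command, assuming you have console access to the catalog:

*sqlquery
SELECT DISTINCT Job.JobId FROM Client,Job,JobMedia,Media
  WHERE Client.Name='myclient-fd' AND Client.ClientId=Job.ClientId
  AND Job.Type='B' AND JobStatus IN ('T') AND JobMedia.JobId=Job.JobId
  AND JobMedia.MediaId=Media.MediaId ORDER BY Job.StartTime DESC LIMIT X;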

Best regards,
Ana
 

So, any chance here?

--
Lukáš Hejtmánek

------------------------------------------------------------------------------
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users

