Daily Schedule - What processes and order do you use?

droach

ADSM.ORG Senior Member
I am new to the 'PROTECT STGPOOL' and 'REPLICATE NODE *' world, and since adding these processes to my daily schedule I am wondering whether I should rethink the order of the tasks in it, in particular the 'Expiration' process.

The Expiration task has always been the last task in my schedule, but with the two new replication tasks I am wondering whether Expiration should run prior to protect and replicate. I mean, why replicate expired or soon-to-be-expired data, right?

Here is what the Daily schedule on my 'source' currently looks like:

BAckup DB DEVclass=db_backup Wait=Yes Type=Full Scratch=Yes
Prepare Source=DBBackup Wait=Yes
BAckup DEVCONFig Filenames=L:\TSM\DEVCONFIG\devconf.out
BAckup VOLHistory Filenames=L:\TSM\DEVCONFIG\volhist.out
MOVe DRMedia * WHERESTate=VAULTRetrieve TOSTate=ONSITERetrieve Wait=Yes
MOVe DRMedia * WHERESTate=MOuntable TOSTate=VAult Wait=Yes
PROTect STGPool AZUREFILE Wait=Yes
REPLicate Node * Wait=Yes
EXPIre Inventory Quiet=Yes Type=ALl REsource=9 SKipdirs=No

Any comments or suggestions are appreciated.
 
You should do your protect/replicate before the backup db.

The idea is that as soon as your clients are done backing up, you protect and replicate the data. The quicker you get your copy to the target location, the better protected you are. It's the same as when you had a traditional pool: you'd run your backup stgpool right after the client backups.
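Using the commands from your own schedule, a reordered version might look roughly like this (only a sketch; where you slot the DRM moves depends on your environment):

PROTect STGPool AZUREFILE Wait=Yes
REPLicate Node * Wait=Yes
BAckup DB DEVclass=db_backup Wait=Yes Type=Full Scratch=Yes
Prepare Source=DBBackup Wait=Yes
BAckup DEVCONFig Filenames=L:\TSM\DEVCONFIG\devconf.out
BAckup VOLHistory Filenames=L:\TSM\DEVCONFIG\volhist.out
MOVe DRMedia * WHERESTate=VAULTRetrieve TOSTate=ONSITERetrieve Wait=Yes
MOVe DRMedia * WHERESTate=MOuntable TOSTate=VAult Wait=Yes
EXPIre Inventory Quiet=Yes Type=ALl REsource=9 SKipdirs=No

That way the database backup also captures the metadata from the protect/replicate run, and expiration stays at the end as housekeeping.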
 

Attachments: SP 24hr Circle.jpg
I understand your point about expiration, but expiration does nothing to get your data protected. And you wouldn't be replicating what's expired; you're replicating new backups while expiration removes the old data.
 
Marclant, thanks for your replies. After thinking about it for a while, I realized the "soon to be expired" data has really already been replicated, so running Expiration before the protect/replicate doesn't make sense. Monday-morning brain cramp, I guess.

I see your point about running the Protect/Replicate first...and thanks for the scheduling wheel.
 
What about an 'Expiration' process on the source and target servers? I was under the impression that 'Replicate Node *' would take care of expiration and that I didn't need a specific 'Expire Inv' task in my daily schedule.

That does not seem to be true, as my target and source nodes now show very large discrepancies in the number of files and in total occupancy. I'm seeing a lot of inactive files on my target nodes that are gone from the source nodes. As a test I ran 'Expire Inv Node=nodename' on my target server, and it cleaned up the occupancy for that node and brought it back into sync with its source node.
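For anyone wanting to check the same drift, running the same occupancy query on both servers and comparing the output is one way to see it (MYNODE is just a placeholder for the node name):

query occupancy MYNODE
select node_name, sum(num_files) as total_files, sum(logical_mb) as total_mb from occupancy where node_name='MYNODE' group by node_name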
 
I'm not sure about that one. The help for the REPLICATE NODE command states:
Files that are no longer stored on the source replication server, but that exist on the target replication server, are deleted during this process.
source: https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.7/srv.reference/r_cmd_node_replicate.html

However, I did hear that expiration should run on source and target.

Are you using dissimilar policies? I'm guessing not, since you are expecting the same storage use on source and target.
 
The policies are exactly the same. I saw that comment about 'Replicate Node' deleting files on the target when they are deleted on the source. If that were working, I wouldn't have a problem. It appears to mark the files for deletion, but it does not actually delete them. I think Expire Inventory is still needed to do the deletion; at least that is what my tests have shown.
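If expiration really is needed on both sides, I'm thinking something like an administrative schedule defined on each server, along these lines (the schedule name and start time are only placeholders):

/* define on the source server and again on the target server */
DEFine SCHedule DAILY_EXPIRE Type=Administrative CMD="expire inventory quiet=yes resource=9" ACTive=Yes STARTTime=06:00 PERiod=1 PERUnits=Days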

Isn't there a configuration document that spells out the proper way to set up node replication? I just want the filespaces on my target server to match my source server exactly. It seems like this should be the default behavior for a "vanilla" replication setup like mine.

I have tried all kinds of "tricks" to get them in sync, and so far nothing has worked. I think I am getting closer, but it feels like I am in uncharted territory. It has gotten so bad that my target storage pool is almost full at 89%, while my source storage pool is only at 65%. We have a ticket open with IBM to investigate and explain what tasks are needed on the source and target servers to successfully implement node replication.
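For what it's worth, those percentages are just what a plain storage pool query reports on each server (AZUREFILE is my source pool; the pool name differs on the target side):

Query STGpool AZUREFILE Format=Detailed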
 