
Thread: EXCH Cluster

  1. #1 (Member)

    Is anyone running EXCH in a cluster that is more than active/passive, like 3 active, 1 passive? If so, please help me understand how you set up the command file. Currently the command file on each active node has a specific server name, but how does this work when they fail over?
    Can you have more than one statement in the command file?

    tdpexcc backup * full /tsmoptfile=dsm.opt /excserver=msg003 /logfile=excsch.log >> excfull.log

    I understand how you do this on an active/passive pair, but on a 3-node cluster I don't get it. Would you have to edit the command file each time one fails over to the passive node?

  2. #2 JohanW (Moderator)

    You need a TSM node for every group. Every group has its own <excserver> name, so no problem there. The name always points to the active node, so you don't have to edit anything. You can have more than one statement in the command file, but they'll run sequentially; run them in parallel and you'll lose the logs unless each statement writes to its own log file. Or you can have more than one command file.
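
    For example, two sequential statements in one command file, each with its own log. The storage group names, opt file and server name here are invented; substitute your own:
    Code:
    tdpexcc backup "SG One" full /tsmoptfile=dsm.opt /excserver=msg003 /logfile=excsg1.log >> excsg1_out.log
    tdpexcc backup "SG Two" full /tsmoptfile=dsm.opt /excserver=msg003 /logfile=excsg2.log >> excsg2_out.log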

  3. #3 (Member)

    Thanks for the reply. I have a 4 active, 1 passive cluster. I have the EXCH TDP installed on each server, with the dsm.opt file, dsmsched log and error log on the respective drive that would fail over. The command file is in the default location on the C drive.
    All have cluster schedules that have been working fine. Yesterday we did some testing and failed one of them over to the passive node. I set up the schedules and brought it online in cluster admin. No problem, it picked up the schedule. But the command failed with an RC 1. I created the shortcut so I was able to open and run the backup, but that got me worried about how to do the command file on the passive node.
    So you are saying I can have more than one command in the command file?
    for example:
    tdpexcc backup * full /tsmoptfile=dsm.opt /excserver=msg003
    tdpexcc backup * full /tsmoptfile=dsm.opt /excserver=msg002
    tdpexcc backup * full /tsmoptfile=dsm.opt /excserver=msg001

    and if msg002 happens to be failed over, will it know to run it?

    I like the idea of more than one command file, but not sure how that works.

  4. #4 moon-buddy (Moderator)

    Quote Originally Posted by JohanW View Post
    You need a node for every group. Every group has its own <excserver> name, so no problem there. The name always points to the active node, so you don't have to edit anything. You can have more than one statement in the command file, but they'll run sequentially, or you'll lose the logs. Or you can have more than one command file.
    I agree if this was a normal non-clustered environment. What happens during a fail over?

    The setup is a 4 node: active-active-active-passive environment. This tells me that at any time, node 1 can fail over to node 4 (assuming node 4 is the passive node). Node 4 should be able to back up the files for node 1. Similarly, node 2 can fail over to node 1 (when node 4 is the active one), and node 1 can back up the resources for node 2, and so on.

    The trick is to find the combination of fail over possibilities to properly design the TSM fail over scenario.

    I haven't worked out a system like this but I think this is what needs to be done to ease the "heartache" when configuring the system:

    1. Limit the fail over possibilities - meaning the preferred fail over partner: 1 -> 4, 2 ->, 3 -> 4
    2. Establish the TSM fail over setup based on 1 above.
    Ed

  5. #5 JohanW (Moderator)

    No, not that way. This way:
    Code:
    start tdpexcc backup            *  full /tsmoptfile=h:\ExchangeTSM\dsmNotShipping.opt /excserver=BRGMAIL /configfile=h:\ExchangeTSM\tdpexc.cfg >> %LogNotShipping%
    start tdpexcc backup "SG Shipping" full /tsmoptfile=h:\ExchangeTSM\dsmShipping.opt    /excserver=BRGMAIL /configfile=h:\ExchangeTSM\tdpexc.cfg >> %LogShipping%
    which provides for (manual) striping, which the XCHG TDP won't do on its own.

    Your problem is something different, however.

    You have 4 cluster groups. All of those have a cluster name and a clustered XCHG instance. They also have their own TSM nodes, right? Right? And their own schedules? So you point the schedule at the cluster name (as in the resource, which fails over with the XCHG instance), and you always end up on the node which has the XCHG instance. No need for multiple commands in the command file.

  6. #6 JohanW (Moderator)

    Moon-buddy: no need. The groups that fail over each contain the name, the disk, the store and the TSM client. It's no different from an active-active cluster, or even from an active-passive cluster really.

  7. #7 (Member)

    Yes, you are correct... 4 active, 1 passive. Each has its own schedule created in cluster admin. Each has its own dsm.opt, and on the C drive of each server its own command file with the specific name of that server.

    Thanks for your example. It's kind of making sense to me now.

    Each of the active nodes can fail over at any time to the passive node, but none of the actives can fail over to each other. So basically it's the command file on the passive node that I need to fix.

  8. #8 JohanW (Moderator)

    The C drive does not belong to the cluster but to the node. You can share the executables if all nodes are installed identically, but I prefer to keep per-group configuration on the disk in that group, so I'm sure it's available where it's needed.

  9. #9 JohanW (Moderator)

    You could make things easier on yourself if you stopped thinking in active and passive nodes, because any cluster group can be on any node, and there is no limit to the number of groups on one node. All groups could run on one node if all the other nodes failed.

    Instead, think in cluster groups, which have resources, and where you want to do something (entirely contained within that group). The TSM client is a cluster resource and fails over with the name, the disk and the store.

  10. #10 (Member)

    JohanW
    Do you have an example of the command file on your passive node?

  11. #11 JohanW (Moderator)

    There is no cluster group on a passive node. So there is no command file.

    You need a command file per cluster group. This is why I have references to H: in my example. The group owns H:, so configuration is always available within the group.
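
    To make that concrete, a possible layout for one group's disk (H: here; the folder and file names are just an example):
    Code:
    H:\ExchangeTSM\
        dsm.opt        (options file for this group's TSM node)
        tdpexc.cfg     (TDP for Exchange configuration)
        excfull.cmd    (the scheduled command file)
        excsch.log     (logs, failing over with the disk)
    Everything the backup needs then travels with the group wherever it goes.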

  12. #12 (Member)

    Okay, so maybe I am overthinking this. So for example... my exch01 server has drives S and T. I have the dsm.opt file for the TDP on the S drive, and the shortcut points to the S drive.
    S and T fail over to the passive node. So let me make sure I am understanding this... do I put the command file on the S drive and not in the default TDP location on the C drive?

  13. #13 (Member)

    JohanW, you said "any cluster group can be on any node".
    I was told by the EXCH team that any storage group can fail over to the passive server... but they cannot fail over to another active server.
    So storage group msg-001 cannot fail over to msg-002, which is active.
    msg-003 cannot fail to msg-001, and so on. They can only fail over to the passive server.

  14. #14 (Member)

    JohanW...thanks for all your help.
    Let me just ask one other question.
    We have a basic exch schedule that looks like the following:

    02/20/2009 02:46:04 Schedule Name: EXCH_NIGHTLY_8PM
    02/20/2009 02:46:04 Action: Command
    02/20/2009 02:46:04 Objects:C:\progra~\tivoli\tsm\TDPexchange\excfull.cmd

    Could I not create new TSM schedules for each of my exch cluster groups, put the command file in the S drive location, and point the command at, say, S:\TSM\TDPExchange\excfull.cmd?

    for example:
    02/20/2009 02:46:04 Schedule Name: EXCH_NIGHTLY_001
    02/20/2009 02:46:04 Action: Command
    02/20/2009 02:46:04 Objects:S:\TSM\TDPExchange\excfull.cmd

    and associate that server with this backup? And do the same for all the others?
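
    On the TSM server side I imagine the definitions would be something like this (the domain name, schedule name and node name here are made up):
    Code:
    define schedule STANDARD EXCH_NIGHTLY_001 action=command objects="S:\TSM\TDPExchange\excfull.cmd" starttime=20:00
    define association STANDARD EXCH_NIGHTLY_001 MSG-001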

  15. #15 THE_WIPET (Senior Member)

    I have Exchange 2007 in Active/Active/Passive mode. This is what I did. It's almost the same as configuring an active/passive setup... but with a little twist.

    I installed the BA client and the TDP on all 3 of them.
    Let's call them PS1, PS2, PS3 for the physical servers
    and Quorum, Exch1, Exch2 for your cluster names.

    Create a TSM node for PS1, PS2, PS3, Quorum, Exch1 and Exch2.

    1 - Create a folder TSMDATA on the quorum drive and put the dsm.opt + logs of the quorum node in it.
    If you want, create a shortcut or a cmd file that calls dsmc -optfile:*:\tsmdata\dsm.opt for your quorum node.

    2 - In the cluster admin of PS1, create a shared resource (described in great detail in the backup client installation guide, under "Setting up a cluster schedule").

    3 - Create the cluster sched with this command (be aware, the sched name has to be the same as the cluster resource name; don't forget to add the registry key after creating the sched):
    dsmcutil install SCHEDuler /name:"TSM_SCHED_QUORUM" /clientdir:"c:\Program Files\tivoli\tsm\baclient" /optfile:*:\tsmdata\dsm.opt /node:quorum /password:quorum /validate:yes /autostart:no /startnow:no /clusternode:yes /clustername:quorum

    4 - I deactivated the automatic failover in the "Generic Service" resource for TSM. If the service does not start, I don't want TSM to be what initiates the failover.

    5 - For Exchange, I created a folder TSMDATA on the drive dedicated to the Exchange logs of Exch1.

    6 - All the config and logs for Exch1 are in that folder.

    7 - I created a cmd file that calls the right Exchange server. This is my command line:
    tdpexc /excserver=EXCH1 /tsmoptfile=M:\TSMDATA\dsm.opt /configfile=M:\TSMDATA\tdpexc.cfg
    This automatically brings up the GUI pointed at the right server.

    7.1 - This is the cmd file that the TSM scheduler calls: tdpexcc backup * full /excserver=EXCH1 /tsmoptfile=M:\TSMDATA\dsm.opt /configfile=M:\TSMDATA\tdpexc.cfg /logfile=M:\TSMDATA\Exchange_full_EXCH1.log

    Do steps 3 and 4 for the Exchange node too, to configure its sched in cluster mode.

    8 - Repeat the same thing on the other server and the other nodes.

    N.B. In our Exchange cluster configuration, Exch1 runs only on PS1 or PS2 and Exch2 runs only on PS2 or PS3 (Quorum can run on PS1, 2 or 3). So modify your generic cluster service according to your setup.

    On the TSM server side I created two command schedules, backup Exch1 and backup Exch2, calling the cmd files from the TSMDATA folder on the log drive. If you do incrementals, you create another sched calling the incr cmd file.

    I know I poured out a lot of info... if you need help just PM me.

    Hope this helps.

    N.B. It's the same thing as configuring an active/passive cluster. The twist is that one server will have the configuration for both Exch1 and Exch2 on the box. And you need to define 1 sched per Exchange node in TSM, since the cmd files will not reside on the same drive letter.
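
    For completeness, step 3 adapted for the Exch1 group would be something like this (the node name and password here are placeholders, pick your own):
    Code:
    dsmcutil install SCHEDuler /name:"TSM_SCHED_EXCH1" /clientdir:"c:\Program Files\tivoli\tsm\baclient" /optfile:M:\TSMDATA\dsm.opt /node:exch1 /password:exch1 /validate:yes /autostart:no /startnow:no /clusternode:yes /clustername:exch1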
    Last edited by THE_WIPET; 03-06-2009 at 12:14 PM.

  16. #16 JohanW (Moderator)

    Kyle, you have the right idea with the command file. Regarding the failover, your XCHG team are limiting themselves, it's not the architecture doing that. If they want to do that, fine, but design for the general case where any group can be on any node. Because you can.

    Wipet, thanks for writing that out. One thing: do you have one TSMDATA folder on M: for both XCHG cluster groups? You should have a TSMDATA for Exch1 on M: and a separate TSMDATA for Exch2 on N: (if that's its disk). Each cluster group should be completely self-contained. (Exception: you can use stuff on local drives (not belonging to the cluster) on the nodes, if you make sure the nodes are installed identically. But do not use that for configuration or logging, just for executables.)

    I'm off on vacation now. Good luck.

  17. #17 (Member)

    Just an update to everyone who sent me responses....

    What I finally ended up doing....
    I will use my Active EXCH-004 server as an example.
    Originally I installed the TDP for exch to the default location on the C drive, but I ended up moving everything to the S drive, which is the data drive that fails over.

    I then created a new TSM scheduler that is only associated to this node.
    Objects: S:\TSM\TDPExchange\excfull.cmd

    My cmd file is on the S drive along with all the other files needed to run the backup.

    tdpexcc backup * full /tsmoptfile=S:\TSM\TDPExchange\dsm.opt /excserver=chw-msg-004 /configfile=S:\TSM\TDPExchange\tdpexc.cfg /logfile=S:\TSM\TDPExchange\excsch.log >> excfull.log

    I created a scheduler in cluster admin that points to the S drive dsm.opt file and sched/error logs. Exch-004 was failed over to the passive node. I set the schedules up over there and brought them online with no problems.

    I believe this way everything needed to perform the database backup is located on the S drive. When it fails to the passive node, all the information needed is there.

    Does this seem correct?

  18. #18 THE_WIPET (Senior Member)

    Yes it is. This is how I configure it. And since your "passive" Exchange server will take over for all the other servers, you have to do this with Exch1, Exch2 and Exch3.

    If the Exchange servers have different database drive letters, you have to modify your script to reflect each drive.

    If you use TSM central scheduling, you will have to create a backup sched for every node, since you call a cmd file and they are not on the same drive.

    JohanW: No... my Exchange servers have unique drive letters for database and logs. They are self-contained.
    E.g.: EXCH1 - DB is M:\ and log is L:\
    EXCH2 - DB is S:\ and log is T:\
    Last edited by THE_WIPET; 03-12-2009 at 08:05 AM. Reason: Add response to JohanW

  19. #19 (Member)

    THE_WIPET... thanks, that is exactly what I did. Each exch server has different drives that fail over, so each one has its own TSM schedule with the specific path. So now if any fail over, like you said, they are self-contained.
    Thanks again for your assistance.
