Storage Foundation and High Availability Solutions 7.4 HA and DR Solutions Guide for Microsoft Exchange 2010 - Windows

Product(s): InfoScale & Storage Foundation (7.4)
Platform: Windows

Configuring the cluster using the Cluster Configuration Wizard

After installing the software, set up the components required to run Cluster Server. The VCS Cluster Configuration Wizard (VCW) sets up the cluster infrastructure, including LLT and GAB, configures the user account for the VCS Helper service, and provides an option for configuring the VCS Authentication Service in the cluster. The wizard also configures the ClusterService group, which contains resources for notification and global clusters (GCO). You can also use VCW to modify or delete cluster configurations.
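
After VCW completes, you can confirm the components it configures from the command line. The following is a minimal sketch, run from an elevated PowerShell prompt, assuming the VCS command-line utilities installed with the product are in the system path:

  # Check GAB port membership; ports a (GAB) and h (HAD) should list every node:
  gabconfig -a

  # Check LLT node states and link status on the private network:
  lltstat -nvv

  # Summarize cluster, system, and service group states:
  hastatus -summary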

Note:

After configuring the cluster you must not change the names of the nodes that are part of the cluster. If you wish to change a node name, run VCW to remove the node from the cluster, rename the system, and then run VCW again to add that system to the cluster.

Note the following prerequisites before you proceed:

  • The required network adapters (NICs) and SCSI controllers are installed and connected to each system.

    Veritas recommends the following actions for network adapters:

    • Disable the ethernet auto-negotiation options on the private NICs to prevent:

      • Loss of heartbeats on the private networks

      • VCS from mistakenly declaring a system as offline

      Contact the NIC manufacturer for details on this process.

    • Remove TCP/IP from the private NICs to lower system overhead.

  • Verify that the public network adapters on each node use static IP addresses (DHCP is not supported) and name resolution is configured for each node.

  • Veritas recommends that you use three network adapters per system: two NICs exclusively for the VCS private network and one NIC for the public network. You can implement the second private link as a low-priority link over a public interface. Route each private NIC through a separate hub or switch to avoid single points of failure. As noted above, removing TCP/IP from the private NICs lowers system overhead. A PowerShell sketch of these network prerequisites appears after this list.

    Note:

    If you wish to use Windows NIC teaming, you must select the Static Teaming mode; no other teaming mode is currently supported.

  • Use independent hubs or switches for each VCS communication network (GAB and LLT). You can use cross-over Ethernet cables for two-node clusters. GAB supports hub-based or switched network paths, as well as direct network links in two-system clusters.

  • Verify the DNS settings for all systems on which the application is installed and ensure that the public adapter is the first adapter in the Connections list.

    When enabling DNS name resolution, make sure that you use the public network adapters, and not those configured for the VCS private network.

  • The logged on user must have local Administrator privileges on the system where you run the wizard. The user account must be a domain user account.

  • The logged on user must have administrative access to all systems selected for cluster operations. Add the domain user account to the local Administrators group of each system.

  • If you plan to create a new user account for the VCS Helper service, the logged on user must have Domain Administrator privileges or must belong to the Domain Account Operators group.

  • When configuring a user account for the Veritas VCS Helper service, make sure that the user account is a domain user. The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.

  • Make sure the VCS Helper service domain user account has the "Add workstations to domain" privilege enabled in Active Directory.

  • Verify that each system can access the storage devices and each system recognizes the attached shared disk.

    Use Windows Disk Management on each system to verify that the attached shared LUNs (virtual disks) are visible.

  • If you plan to set up a disaster recovery (DR) environment, you must configure the wide-area connector process for global clusters.

  • If you are setting up a Replicated Data Cluster configuration, add only the systems in the primary zone (zone 0) to the cluster at this time.

  • In an any-to-any configuration, you can add the systems for all the Exchange servers when creating the cluster, rather than only the system for the first Exchange server.
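
The following PowerShell sketch illustrates checks for some of the network prerequisites above. It is an example only: the adapter and team names (Public, Private1, Public1, Public2, PublicTeam) are hypothetical, and the NetAdapter and NetLbfo cmdlets assume Windows Server 2012 or later.

  # List the adapters and confirm that the required NICs are present and connected:
  Get-NetAdapter | Format-Table Name, Status, LinkSpeed

  # Confirm that the public adapter uses a static IPv4 address (DHCP is not supported):
  Get-NetIPInterface -InterfaceAlias "Public" -AddressFamily IPv4 | Select-Object InterfaceAlias, Dhcp

  # Remove TCP/IP from a private NIC to lower system overhead:
  Disable-NetAdapterBinding -Name "Private1" -ComponentID ms_tcpip

  # If you use Windows NIC teaming, only the Static Teaming mode is supported:
  New-NetLbfoTeam -Name "PublicTeam" -TeamMembers "Public1","Public2" -TeamingMode Static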

To configure a VCS cluster using the wizard

  1. Start the VCS Cluster Configuration Wizard from Start > All Programs > Veritas > Veritas Cluster Server > Configuration Tools > Cluster Configuration Wizard or, on Windows Server 2012 operating systems, from the Apps menu in the Start screen.
  2. Read the information on the Welcome panel and click Next.
  3. On the Configuration Options panel, click Cluster Operations and click Next.
  4. On the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.

    To discover information about all systems and users in the domain, do the following:

    • Clear Specify systems and users manually.

    • Click Next.

      Proceed to step 8.

    To specify systems and user names manually (recommended for large domains), do the following:

    • Select Specify systems and users manually.

      Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting the appropriate check boxes.

    • Click Next.

      If you chose to retrieve the list of systems, proceed to step 6. Otherwise, proceed to the next step.

  5. On the System Selection panel, type the name of each system to be added, click Add, and then click Next.

    Do not specify systems that are part of another cluster.

    Proceed to step 8.

  6. On the System Selection panel, specify the systems for the cluster and then click Next.

    Do not select systems that are part of another cluster.

    Enter the name of the system and click Add to add it to the Selected Systems list, or select the system in the Domain Systems list and then click the > (right-arrow) button.

  7. The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier. Review the status and then click Next.

    Select a system to see its validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again.

    A system can be rejected for any of the following reasons:

    • The system does not respond to ping.

    • WMI access is disabled on the system.

    • The wizard is unable to retrieve the system architecture or operating system.

    • The product is either not installed or there is a version mismatch.

  8. On the Cluster Configuration Options panel, click Create New Cluster and then click Next.
  9. On the Cluster Details panel, specify the details for the cluster and then click Next.

    Specify the cluster details as follows:

    Cluster Name

    Type a name for the new cluster. Veritas recommends a maximum length of 32 characters for the cluster name.

    Cluster ID

    Select a cluster ID from the suggested cluster IDs in the drop-down list, or type a unique ID for the cluster. The cluster ID can be any number from 0 to 65535.

    Note:

    If you chose to specify systems and users manually in step 4 or if you share a private network between more than one domain, make sure that the cluster ID is unique.

    Operating System

    From the drop-down list, select the operating system.

    All the systems in the cluster must have the same operating system and architecture.

    Available Systems

    Select the systems that you wish to configure in the cluster.

    Check the Select all systems check box to select all the systems simultaneously.

    The wizard discovers the NICs on the selected systems. For single-node clusters with the required number of NICs, the wizard prompts you to configure a private link heartbeat. In the dialog box, click Yes to configure a private link heartbeat.

  10. The wizard validates the selected systems for cluster membership. After the systems are validated, click Next.

    If a system is not validated, review the message associated with the failure and restart the wizard after rectifying the problem.

    If you chose to configure a private link heartbeat in step 9, proceed to the next step. Otherwise, proceed to step 12.

  11. On the Private Network Configuration panel, configure the VCS private network and then click Next. You can configure the VCS private network either over ethernet or over the User Datagram Protocol (UDP) layer, using an IPv4 or IPv6 network.

    Do one of the following:

    • To configure the VCS private network over ethernet, complete the following steps:

      • Select Configure LLT over Ethernet.

      • Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links.

        Veritas recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one of the NICs and use the low-priority NIC for both public and private communication.

      • If there are only two NICs on a selected system, Veritas recommends that you lower the priority of at least one NIC that will be used for private as well as public network communication.

        To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.

      • If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N" where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC, formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Veritas recommends that you do not select teamed NICs for the private network.

        The wizard configures the LLT service (over ethernet) on the selected network adapters.

    • To configure the VCS private network over the User Datagram Protocol (UDP) layer, complete the following steps:

      • Select Configure LLT over UDP on IPv4 network or Configure LLT over UDP on IPv6 network, depending on the IP protocol that you wish to use.

        The IPv6 option is disabled if the network does not support IPv6.

      • Select the check boxes next to the NICs to be assigned to the private network. You can assign a maximum of eight network links. Veritas recommends reserving two NICs exclusively for the VCS private network.

      • For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In case of IPv4, each IP address can be in a different subnet.

        The IP address is used for the VCS private communication over the specified UDP port.

      • Specify a unique UDP port for each of the links. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. A quick way to confirm that a port is free is sketched after this procedure. Click OK.

        For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.

        The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.

  12. On the VCS Helper Service User Account panel, specify the name of a domain user for the VCS Helper service.

    The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.

    Specify the domain user details as follows:

    • To specify an existing user, do one of the following:

      • Click Existing user and select a user name from the drop-down list.

      • If you chose not to retrieve the list of users in step 4, type the user name in the Specify User field and then click Next.

    • To specify a new user, click New user and type a valid user name in the Create New User field and then click Next.

      Do not append the domain name to the user name; do not type the user name as Domain\user or user@domain.

    • In the Password dialog box, type the password for the specified user and click OK, and then click Next.

  13. On the Configure Security Service Option panel, specify security options for the cluster communications and then click Next.

    Do one of the following:

    • To use VCS cluster user privileges, click Use VCS User Privileges and then type a user name and password.

      The wizard configures this user as a VCS Cluster Administrator. In this mode, communication between cluster nodes and clients, including Cluster Manager (Java Console), occurs using the encrypted VCS cluster administrator credentials. The wizard uses the VCSEncrypt utility to encrypt the user password.

      The default user name for the VCS administrator is admin and the password is password. Both are case-sensitive. You can accept the default user name and password for the VCS administrator account or type a new name and password.

      Veritas recommends that you specify a new user name and password.

    • To use the single sign-on feature, click Use Single Sign-on.

      In this mode, the VCS Authentication Service is used to secure communication between cluster nodes and clients by using digital certificates for authentication and SSL to encrypt communication over the public network. VCS uses SSL encryption and platform-based authentication. The Veritas High Availability Engine (HAD) and Veritas Command Server run in secure mode.

      The wizard configures all the cluster nodes as root brokers (RB) and authentication brokers (AB). Authentication brokers serve as intermediate registration and certification authorities. Authentication brokers have certificates signed by the root. These brokers can authenticate clients such as users and services. The wizard creates a copy of the certificates on all the cluster nodes.

  14. Review the summary information on the Summary panel, and click Configure.

    The wizard configures the VCS private network. If the selected systems have LLT or GAB configuration files, the wizard displays an informational dialog box before overwriting the files. In the dialog box, click OK to overwrite the files. Otherwise, click Cancel, exit the wizard, move the existing files to a different location, and rerun the wizard.

    The wizard starts running commands to configure VCS services. If an operation fails, click View configuration log file to see the log.

  15. On the Completing Cluster Configuration panel, click Next to configure the ClusterService group; this group is required to set up components for notification and for global clusters.

    To configure the ClusterService group later, click Finish.

    At this stage, the wizard has collected the information required to set up the cluster configuration. After the wizard completes its operations, with or without the ClusterService group components, the cluster is ready to host application service groups. The wizard also starts the VCS engine (HAD) and the Veritas Command Server at this stage. Command-line checks that you can run at this point are sketched after this procedure.

  16. On the Cluster Service Components panel, select the components to be configured in the ClusterService group and then click Next.

    Do the following:

    • Check the Notifier Option check box to configure notification of important events to designated recipients.

    • Check the GCO Option check box to configure the wide-area connector (WAC) process for global clusters. The WAC process is required for inter-cluster communication.

      Configure the GCO Option using this wizard only if you are configuring a Disaster Recovery (DR) environment and are not using the Disaster Recovery wizard.

      You can configure the GCO Option using the DR wizard. The Disaster Recovery chapters in the application solutions guides discuss how to use the Disaster Recovery wizard to configure the GCO option.
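
If you configure LLT over UDP (step 11), the UDP ports you specify must not be in use by another service. A quick check with standard Windows tools, shown here for the default port numbers 50000 and 50001, before you run the wizard:

  # No output means that no listener is bound to the port:
  netstat -an | findstr ":50000"
  netstat -an | findstr ":50001"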
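
After the wizard completes, you can verify the new cluster from the command line, as noted in step 15. The following is a minimal sketch, assuming the VCS command-line utilities are in the system path; the ClusterService resources exist only if you configured that group:

  # Display the cluster name that you specified in the wizard:
  haclus -value ClusterName

  # Check whether the cluster runs in secure mode (single sign-on); 1 means secure:
  haclus -value SecureClus

  # Each node should report RUNNING once the VCS engine (HAD) is started:
  hasys -state

  # List the notifier and wide-area connector resources, if you configured them:
  hagrp -resources ClusterService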

More Information

Configuring notification