Storage Foundation and High Availability Solutions 7.4.1 HA and DR Solutions Guide for Microsoft Exchange 2010 - Windows
- Section I. Introduction and Concepts
- Introducing Storage Foundation and High Availability Solutions for Microsoft Exchange Server
- How VCS monitors storage components
- Introducing the VCS agent for Exchange 2010
- Section II. Configuration Workflows
- Configuring high availability for Exchange Server with InfoScale Enterprise
- Reviewing the HA configuration
- Reviewing a standalone Exchange Server configuration
- Reviewing the Replicated Data Cluster configuration
- Reviewing the disaster recovery configuration
- Disaster recovery configuration
- Notes and recommendations for cluster and application configuration
- Configuring disk groups and volumes for Exchange Server
- About managing disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- Using the Solutions Configuration Center
- Section III. Deployment
- Installing Exchange Server 2010
- Configuring Exchange Server for failover
- Configuring the service group in a non-shared storage environment
- Configuring campus clusters for Exchange Server
- Configuring Replicated Data Clusters for Exchange Server
- Setting up the Replicated Data Sets (RDS)
- Configuring a RVG service group for replication
- Configuring the resources in the RVG service group for RDC replication
- Configuring the RVG Primary resources
- Adding the nodes from the secondary zone to the RDC
- Verifying the RDC configuration
- Deploying disaster recovery for Exchange Server
- Reviewing the disaster recovery configuration
- Setting up your replication environment
- Configuring replication and global clustering
- Configuring the global cluster option for wide-area failover
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Testing fault readiness by running a fire drill
- About the Fire Drill Wizard
- About post-fire drill scripts
- Prerequisites for a fire drill
- Preparing the fire drill configuration
- Running a fire drill
- Deleting the fire drill configuration
- Section IV. Reference
- Appendix A. Using Veritas AppProtect for vSphere
- Appendix B. Troubleshooting
Reviewing the HA configuration
In a typical example of an Exchange high availability configuration, the Exchange Mailbox Server role is installed on one or more cluster nodes. The Exchange mailbox databases are created on shared storage that is accessible from all the mailbox servers in the cluster. The shared storage is monitored by specific VCS storage agents. The Exchange mailbox databases are managed by a service group configured with a set of cluster nodes.
The mailbox databases are active on the node where the service group is online. If the active node fails, the mailbox databases are moved to an alternate mailbox server configured in the service group.
The following figure illustrates a typical two-node Exchange 2010 failover configuration that uses shared storage. System1 and System2 are the mailbox servers that are part of the Exchange database service group. When the service group is online on System1, the mailbox databases are active on System1. The Client Access server directs all client requests to the mailbox server on System1. System2 acts as a redundant mailbox server as well as an additional Client Access server at the site. If System1 fails, all the mailbox databases are moved to System2, and the mailbox server on System2 starts accepting client requests for those databases.
In this configuration, System2 functions as a redundant failover target for the mailbox databases that are active on System1. However, you can also configure System2 to host a different set of mailbox databases. You create a separate service group for those databases and then bring the service group online on System2. Thus System1 and System2 can both host mailbox databases and at the same time act as failover targets for each other.
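In the cluster configuration file (main.cf), such a failover configuration is expressed as a service group containing the storage and Exchange resources with their dependencies. The following fragment is a minimal, hypothetical sketch for the two-node example above; the resource names, drive letter, database name, and the attribute names shown for the Exchange database resource are illustrative assumptions, not values taken from this guide.

```
group Exch_SG (
    SystemList = { System1 = 0, System2 = 1 }
    AutoStartList = { System1 }
    )

    VMDg Exch_DG (
        DiskGroupName = ExchDG
        )

    MountV Exch_Mount (
        MountPath = "E:"
        VolumeName = Exch_Vol
        VMDGResName = Exch_DG
        )

    Exch2010DB Exch_DB (
        DBName = MailboxDB1
        )

    Exch_Mount requires Exch_DG
    Exch_DB requires Exch_Mount
```

The dependency statements ensure that VCS imports the disk group and mounts the volume before it brings the mailbox database online, and reverses that order when taking the group offline.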
The following figure illustrates such a configuration where multiple database service groups are configured on multiple mailbox servers; each server hosts a different set of mailbox databases and also serves as a failover target for the other mailbox databases configured in the cluster.
System1, System2, and System3 are the three mailbox servers that host Exchange service groups. Exchange mailbox databases DB1 and DB2 are configured in Exch_SG1 and are active on System1, DB3 and DB4 are configured in Exch_SG2 and are active on System2, and DB5 and DB6 are configured in Exch_SG3 and are active on System3.
All the cluster nodes are part of each database service group, which means that each service group can fail over to any of the three cluster nodes. If System1 fails, Exch_SG1 (DB1, DB2) is moved to System2. System2 then hosts Exch_SG1 and Exch_SG2 at the same time. Similarly, if System2 fails, DB3 and DB4 are moved to System3. All the mailbox servers in the cluster host separate mailbox databases while simultaneously acting as failover targets for the other databases configured in the cluster.
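In main.cf terms, this any-to-any failover arrangement corresponds to listing all three nodes in each service group's SystemList attribute. The following fragment is a sketch under the naming of the example above; the priority values are illustrative. In VCS, a lower number indicates a more preferred node, so if System1 fails, Exch_SG1 fails over to System2 before System3.

```
group Exch_SG1 (
    SystemList = { System1 = 0, System2 = 1, System3 = 2 }
    AutoStartList = { System1 }
    )

group Exch_SG2 (
    SystemList = { System2 = 0, System3 = 1, System1 = 2 }
    AutoStartList = { System2 }
    )

group Exch_SG3 (
    SystemList = { System3 = 0, System1 = 1, System2 = 2 }
    AutoStartList = { System3 }
    )
```

A planned move can also be performed manually, for example with `hagrp -switch Exch_SG1 -to System2`.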
Databases DB1 and DB2, together with their respective log volumes, reside on the same disk group. Similarly, DB3 and DB4 share one disk group, and DB5 and DB6 share another. When the Exch_SG1 service group fails over from System1 to System2, both databases, DB1 and DB2, are moved to System2. This configuration thus provides multiple-database mobility.
If you want to control database mobility on a per-database basis, configure each database and its volumes in an independent disk group.
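Expressed in main.cf, per-database mobility means each database gets its own disk group and mount resources, which can then be placed in separate service groups that fail over independently. A hypothetical fragment for two such databases follows; all resource names, disk group names, drive letters, and volume names are illustrative assumptions.

```
    VMDg DG_DB1 (
        DiskGroupName = ExchDG_DB1
        )

    MountV Mount_DB1 (
        MountPath = "F:"
        VolumeName = DB1_Vol
        VMDGResName = DG_DB1
        )

    VMDg DG_DB2 (
        DiskGroupName = ExchDG_DB2
        )

    MountV Mount_DB2 (
        MountPath = "G:"
        VolumeName = DB2_Vol
        VMDGResName = DG_DB2
        )
```

Because DB1 and DB2 no longer share a disk group, a failure affecting one database's service group moves only that database, leaving the other online on its current node.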