InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Setting up VVR replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Configuring the secondary site
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
About the Steward process: Split-brain in two-cluster global clusters
Failure of all heartbeats between any two clusters in a global cluster indicates one of the following:
- The remote cluster is faulted.
- All communication links between the two clusters are broken.
In global clusters with three or more clusters, VCS queries the connected clusters to confirm that the remote cluster is truly down. This mechanism is called inquiry.
In a two-cluster setup, VCS uses the Steward process to minimize the chances of a wide-area split-brain. The process runs as a standalone binary on a system outside of the global cluster configuration.
To add redundancy, you can configure the Steward process in one of the following ways:
- Configure high availability for a single Steward process: Run the Steward process in a two-node cluster. If the node that runs the Steward fails, the process fails over to the other node in the cluster.
- Configure multiple Stewards: Run a Steward at each of several sites. If the communication links between one Steward and one of the clusters are lost, a Steward at another site can still respond to the inquiry.
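For example, multiple Stewards can be recorded in the cluster definition in main.cf through the Stewards cluster attribute. The fragment below is a sketch only: the cluster name and IP addresses are placeholders, and the exact values depend on your configuration:

```
cluster vcs_clus1 (
        ClusterAddress = "10.182.10.145"
        Stewards = { "192.168.10.5", "192.168.20.5" }
        )
```

The same attribute can typically be changed on a running cluster with haclus -modify Stewards, followed by the list of Steward IP addresses.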
Figure: Steward process: Split-brain in two-cluster global clusters depicts how the Steward process minimizes the chances of a split-brain in a two-cluster setup.
When all communication links between any two clusters are lost, each cluster contacts the Steward with an inquiry message. The Steward sends an ICMP ping to the cluster in question and responds with a negative inquiry if the cluster is running, or with a positive inquiry if the cluster is down. With multiple Stewards, the cluster sends the inquiry to all Stewards simultaneously. If at least one Steward responds with a negative inquiry, VCS assumes that the other cluster is running and takes no corrective action.
The Steward can also be used in configurations with more than two clusters. VCS provides the option of securing communication between the Steward process and the wide-area connectors.
In non-secure configurations, you can run the Steward process on a platform that is different from that of the global cluster nodes. Secure configurations have not been tested with the Steward process running on a different platform.
For example, you can run the Steward process on a Windows system for a global cluster that runs on Linux systems. However, the VCS release for Linux contains the Steward binary for Linux only; you must copy the Steward binary for Windows from the VCS installation directory on a Windows cluster, typically C:\Program Files\VERITAS\Cluster Server.
A Steward is effective only if there are independent paths from each cluster to the host that runs the Steward. If there is only one path between the two clusters, you must prevent split-brain manually by confirming with administrators at the remote site, by telephone or a messaging system, whether a failure has actually occurred. By default, VCS global clusters fail over an application across cluster boundaries only after administrator confirmation. You can configure automatic failover by setting the ClusterFailOverPolicy attribute to Auto.
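To illustrate the attribute, the following main.cf fragment is a sketch (the group, system, and cluster names are placeholders) that marks a global service group for automatic cross-cluster failover; the default value of the attribute is Manual:

```
group appsg (
        SystemList = { sys1 = 0, sys2 = 1 }
        ClusterList = { clus1 = 0, clus2 = 1 }
        ClusterFailOverPolicy = Auto
        )
```

The same change can typically be made on a running cluster with hagrp -modify appsg ClusterFailOverPolicy Auto.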
For more information on configuring the Steward process, see the Cluster Server Administrator's Guide.
The default port for the Steward process is 14156.