InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Setting up VVR replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Configuring the secondary site
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
Configuring the Steward process (optional)
In a two-cluster global cluster setup, you can configure a Steward to prevent potential split-brain conditions, provided the required network infrastructure exists.
See "About the Steward process: Split-brain in two-cluster global clusters."
To configure the Steward process for clusters not running in secure mode
- Identify a system that will host the Steward process.
To configure the Steward in a dual-stack configuration, ensure that both IPv4 and IPv6 are enabled on the system that will host the Steward process, and plumb both the IPv4 and IPv6 addresses on that system.
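For example, you can confirm that both address families are plumbed on the host with a quick check (interface names vary by system):
# ip -4 addr show
# ip -6 addr show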
- Make sure that both clusters can connect to the system through a ping command.
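For example, run the following from a node in each cluster, where 10.212.100.165 is a placeholder for the Steward system's address:
# ping -c 3 10.212.100.165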
- Install the VRTSvcs, VRTSvlic, and VRTSperl RPMs on the Steward system. From the directory that contains the RPM files, run:
# rpm -ivh VRTSperl-*.rpm VRTSvlic-*.rpm VRTSvcs-*.rpm
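You can confirm that the packages are installed with a standard rpm query:
# rpm -q VRTSperl VRTSvlic VRTSvcs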
- In both clusters, set the Stewards attribute to the IP address of the system running the Steward process.
The Stewards attribute must contain the IPv4 or IPv6 address of the Steward server, matching the address family of the cluster node: when a cluster node is configured with IPv4, set the attribute to the Steward server's IPv4 address; when it is configured with IPv6, set it to the IPv6 address.
For example:
cluster cluster1938 (
    UserNames = { admin = gNOgNInKOjOOmWOiNL }
    ClusterAddress = "10.182.147.19"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    Stewards = { "10.212.100.165", "10.212.101.162" }
    )
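If the clusters are already running, you can set the attribute from the command line instead of editing main.cf. The following sketch uses the placeholder addresses from the example above; verify the exact syntax for list-valued cluster attributes in your release:
# haconf -makerw
# haclus -modify Stewards 10.212.100.165 10.212.101.162
# haconf -dump -makero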
- On the system designated to host the Steward, start the Steward process:
# /opt/VRTSvcs/bin/steward -start
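To confirm that the process started, you can run a simple process check:
# ps -ef | grep steward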
To configure the Steward process for clusters running in secure mode
- Verify that the prerequisites for securing Steward communication are met.
To verify that the wac process runs in secure mode, do the following:
Check the value of the wac resource attributes:
# hares -value wac StartProgram
The value must be "/opt/VRTSvcs/bin/wacstart - secure."
# hares -value wac MonitorProcesses
The value must be "/opt/VRTSvcs/bin/wac - secure."
List the wac process:
# ps -ef | grep wac
The wac process must run as "/opt/VRTSvcs/bin/wac -secure".
- Identify a system that will host the Steward process.
- Make sure that both clusters can connect to the system through a ping command.
- Perform this step only if VCS is not already installed on the Steward system. If VCS is already installed, skip to the next step.
Install the VRTSvcs and VRTSperl RPMs.
If the cluster UUID is not configured, configure it by using /opt/VRTSvcs/bin/uuidconfig.pl, as shown in the sketch below.
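For example, the following invocations display and configure the cluster UUID, where sys1 is a placeholder node name; confirm the exact options against the Cluster Server documentation for your release:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -display sys1
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure sys1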
- On the system that is designated to run the Steward process, run the installvcs -securityonenode command.
The installer prompts for a confirmation if VCS is not configured or if VCS is not running on all nodes of the cluster. Enter y when the installer prompts whether you want to continue configuring security.
For more information about the -securityonenode option, see the Cluster Server Configuration and Upgrade Guide.
- Generate credentials for the Steward using /opt/VRTSvcs/bin/steward_secure.pl, or perform the following steps:
# unset EAT_DATA_DIR
# unset EAT_HOME_DIR
# unset EAT_HOME_DIR
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat createpd -d VCS_SERVICES -t ab
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat addprpl -t ab -d VCS_SERVICES -p STEWARD -s password
# mkdir -p /var/VRTSvcs/vcsauth/data/STEWARD
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
# /opt/VRTSvcs/bin/vcsat setuptrust -s high -b localhost:14149
- Set up trust on all nodes of the GCO clusters:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC
# vcsat setuptrust -b <IP_of_Steward>:14149 -s high
- Set up trust on the Steward:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
# vcsat setuptrust -b <VIP_of_remote_cluster1>:14149 -s high
# vcsat setuptrust -b <VIP_of_remote_cluster2>:14149 -s high
- In both clusters, set the Stewards attribute to the IP address of the system running the Steward process.
For example:
cluster cluster1938 (
    UserNames = { admin = gNOgNInKOjOOmWOiNL }
    ClusterAddress = "10.182.147.19"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    Stewards = { "10.212.100.165", "10.212.101.162" }
    )
- On the system designated to run the Steward, start the Steward process:
# /opt/VRTSvcs/bin/steward -start -secure
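After both clusters are updated and the Steward is running, you can confirm the attribute value from any cluster node:
# haclus -value Stewards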
To stop the Steward process
- To stop the Steward process that is not configured in secure mode, open a new command window and run the following command:
# /opt/VRTSvcs/bin/steward -stop
- To stop the Steward process running in secure mode, open a new command window and run the following command:
# /opt/VRTSvcs/bin/steward -stop -secure