Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Configuring Global Cluster Membership (GCM)
With Global Cluster Membership (GCM), you can define the membership of clusters for disaster recovery. The GCM custom resource (CR) must be configured and applied on all clusters. When configured, the Global Cluster Membership forms a logical entity called the 'Global Cluster', with all the underlying clusters as 'Member Clusters'. Member clusters are OpenShift clusters that provide disaster recovery capabilities to application components. To provide DR, these member clusters:
- Send heartbeats to each other periodically.
- Exchange information such as state, configuration, and operations.
- Perform or participate in operations such as migration.
Complete the following steps:
Edit /YAML/DR/SampleGlobalClusterMembership.yaml as under:

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  # Local cluster name in the global membership
  localClusterName: <Local cluster where you want to apply this YAML>
  globalMemberClusters:
    # Cluster ID of each member of global cluster membership
    - clusterID: <A unique ID of the primary cluster>
      # Address used for communicating with peer cluster's DR Controller
      drControllerAddress: "<Load balancer IP address or haproxy of the local cluster>"
      # Port used for DR Controller
      drControllerPort: "<Load balancer port number>"
    - clusterID: <A unique ID of the secondary cluster>
      drControllerAddress: "<Load balancer IP address or haproxy of the DR site>"
      drControllerPort: "<Load balancer port number>"
  # If heartbeat with peer cluster is missed more than counterMissTolerance
  # times, then the cluster will be moved to FAULTED state
  counterMissTolerance: 5
  globalClusterOperation: "none"
  # Application metadata backup sync frequency to DR site(s) in minutes
  metadataBackupInterval: 15
  # Refresh data replication status after specified minutes
  datarepRefreshStatusFrequency: 10
  # Include cluster-scoped Custom Resource Definitions (CRDs)
  # in disaster recovery plan backup
  backupClusterScopeCRD: true
  # Maximum metadata backup copies stored per DR plan
  maximumMetadataCopies: 5
Note:
Do not enclose the parameter values in angle brackets (< >). For example, if 8334 is the Load balancer port number, enter drControllerPort: "8334" instead of drControllerPort: "<Load balancer port number>". localClusterName and clusterID can have a maximum of 20 characters.
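For reference, a completed spec for the primary cluster might look like the following. The cluster IDs (Clus1, Clus2), IP addresses, and port are hypothetical values used only for illustration; substitute the values for your environment.

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  # Hypothetical values for illustration only
  localClusterName: Clus1
  globalMemberClusters:
    - clusterID: Clus1
      drControllerAddress: "10.20.30.40"
      drControllerPort: "8334"
    - clusterID: Clus2
      drControllerAddress: "10.20.31.50"
      drControllerPort: "8334"
  counterMissTolerance: 5
  globalClusterOperation: "none"
  metadataBackupInterval: 15
  datarepRefreshStatusFrequency: 10
  backupClusterScopeCRD: true
  maximumMetadataCopies: 5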
Run the following command on the bastion node of the source cluster.
oc apply -f /YAML/DR/SampleGlobalClusterMembership.yaml
Edit another instance of /YAML/DR/SampleGlobalClusterMembership.yaml to add the DR site as under:

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  # Local cluster name in the global membership
  localClusterName: <DR site cluster name where you want to apply this YAML>
  globalMemberClusters:
    # Cluster ID of each member of global cluster membership
    - clusterID: <A unique ID of the primary cluster>
      # Address used for communicating with peer cluster's DR Controller
      drControllerAddress: "<Load balancer IP address or haproxy of the local cluster>"
      # Port used for DR Controller
      drControllerPort: "<Load balancer port number>"
    - clusterID: <A unique ID of the secondary cluster>
      drControllerAddress: "<Load balancer IP address or haproxy of the DR site>"
      drControllerPort: "<Load balancer port number>"
  # If heartbeat with peer cluster is missed more than counterMissTolerance
  # times, then the cluster will be moved to FAULTED state
  counterMissTolerance: 5
  globalClusterOperation: "none"
  # Application metadata backup sync frequency to DR site(s) in minutes
  metadataBackupInterval: 15
  # Refresh data replication status after specified minutes
  datarepRefreshStatusFrequency: 10
  # Include cluster-scoped Custom Resource Definitions (CRDs) in
  # disaster recovery plan backup
  backupClusterScopeCRD: true
  # Maximum metadata backup copies stored per DR plan
  maximumMetadataCopies: 5
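Note that in the two sample files, only localClusterName differs; the globalMemberClusters entries and the remaining fields must be identical on both clusters so that the membership definitions match (see the ADMIN_WAIT state described later in this topic). Continuing the hypothetical values from the earlier filled-in example, the DR site copy would differ only in this field:

spec:
  # On the DR site, only the local cluster name changes; all other fields stay identical
  localClusterName: Clus2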
Copy this file to the DR site, and run the following command on the bastion node of the DR site.
oc apply -f /YAML/DR/SampleGlobalClusterMembership.yaml
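To copy the file in the previous step, one option is scp. The following is a minimal sketch that assumes SSH access from the source bastion node to the DR site bastion node; the user name, host name, and destination path are placeholders.

# Copy the edited GCM definition to the bastion node of the DR site
scp /YAML/DR/SampleGlobalClusterMembership.yaml <user>@<dr-site-bastion>:/YAML/DR/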
Manually verify on all clusters that the GlobalClusterState is DISCOVER_WAIT by running oc get gcm.
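If you prefer to observe the state change instead of re-running the command, you can use the standard --watch flag of oc get; a brief sketch:

# Watch the GCM resource until the state transitions; press Ctrl+C to exit
oc get gcm --watch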
The various states are as follows:
- UNKNOWN: A transient default Global Cluster state. After the initial configuration and setup, the cluster state must transition to DISCOVER_WAIT. A prolonged UNKNOWN state indicates errors in the initial configuration or setup. Review the DR Controller log for the ongoing activities.
- DISCOVER_WAIT: Although the local cluster has a copy of the GCM and member cluster details, it is not certain whether that local copy is up-to-date. If the GCM and member cluster details are identical on all peer clusters, all clusters automatically transition to the RUNNING state. If the details are not identical, the cluster waits until you seed it by updating GlobalClusterOperation to localbuild (see the example command after this list). When a member cluster transitions to the RUNNING state, all peer clusters with identical membership also transition to the RUNNING state.
- ADMIN_WAIT: If the local membership definition does not match the peer cluster's membership definition, the clusters transition to this state. Update the membership on the peer clusters and ensure that it is identical. The peer clusters then transition to the RUNNING state.
- RUNNING: The cluster transitions to the RUNNING state when you seed the cluster membership by updating GlobalClusterOperation to localbuild. The cluster also transitions to the RUNNING state when its local copy of the membership matches the peer clusters.
- EXITING: You have initiated a DR Controller stop.
- EXITED: The DR Controller has stopped.
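If a cluster remains in DISCOVER_WAIT, you seed it by setting globalClusterOperation to localbuild. You can either edit and re-apply the YAML, or patch the custom resource in place. The following is a minimal sketch that assumes the field name shown in the sample spec above; adjust the resource name if yours differs.

# Seed the local cluster membership by setting globalClusterOperation to localbuild
oc patch gcm global-cluster-membership --type merge \
  -p '{"spec":{"globalClusterOperation":"localbuild"}}'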
To verify whether the Global Cluster is successfully created, run the following command on the bastion node.
oc get gcm
Review the cluster names, GlobalClusterState, and PeerLinkState in output similar to the following. GlobalClusterState must be RUNNING and PeerLinkState must be CONNECTED.
NAME                        CLUSTER NAME   CLUSTER STATE   PROTOCOL   PEER LINK STATE
global-cluster-membership   Clus1          RUNNING         10         {"Clus1":"CONNECTED","Clus2":"CONNECTED"}
Here, NAME is the name of the GlobalClusterMembership custom resource, CLUSTER NAME is the local cluster ID, and Clus1 and Clus2 are the cluster IDs that you defined for the global membership.
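If either value is not as expected, you can dump the complete custom resource, including its full status, by using standard oc syntax:

# Inspect the full GlobalClusterMembership resource, including status details
oc get gcm global-cluster-membership -o yaml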