InfoScale™ 9.0 Support for Containers - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale™ on OpenShift
- Installing Arctera InfoScale™ on Kubernetes
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing InfoScale DR on OpenShift
- Installing InfoScale DR on Kubernetes
- TECHNOLOGY PREVIEW: Disaster Recovery scenarios
- Configuring InfoScale
- Troubleshooting
Configuring Global Cluster Membership (GCM)
With Global Cluster Membership (GCM), you can define the membership of clusters for disaster recovery. The GCM CR must be configured and applied on all clusters. When configured, the Global Cluster Membership forms a logical notion called 'Global Cluster' with all underlying clusters as 'Member Clusters'. Member clusters are Kubernetes clusters that provide disaster recovery capabilities to application components. To provide DR, these member clusters:
- Send heartbeats to each other periodically.
- Exchange information such as state, configuration, and operations.
- Perform or participate in operations such as migration.
Complete the following steps
Edit
/YAML/DR/SampleGlobalClusterMembership.yaml
as under:

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  localClusterName: <Cluster for which you want to create a DR backup>
  globalMemberClusters:
    - clusterID: <ID of the cluster for which you want a DR backup>
      drControllerAddress: "<Load balancer IP address (haproxy) of the local cluster>"
      drControllerPort: "<Load balancer port number>"
    - clusterID: <ID of the cluster to be used for a backup>
      drControllerAddress: "<Load balancer IP address (haproxy) of the DR site>"
      drControllerPort: "<Load balancer port number>"
  # Required details if Velero is not installed in the "velero" namespace
  # and/or you need to set a specific user ID or fsGroup in the security
  # context.
  veleroConfig:
    # Namespace in which Velero is installed. This field is optional
    # if Velero is installed in the default "velero" namespace.
    veleroNamespace: "<Namespace where Velero is installed>"
    # User ID to enable volume mount.
    # This is to comply with the default security context constraint.
    # This field is optional for Kubernetes but required for OpenShift
    # if the default ID below needs to be changed.
    userID: 1000640000  # You can change the default value to a valid value for both the primary and DR clusters.
    # Supplemental group to enable volume mount.
    # This field is optional for Kubernetes but required for OpenShift
    # if the default ID below needs to be changed.
    FSGroup: 1000640000  # You can change the default value to a valid value for both the primary and DR clusters.
Note:
Do not enclose the parameter values in angle brackets (< >). For example, if 8334 is the load balancer port number, enter drControllerPort: "8334" instead of drControllerPort: "<Load balancer port number>". localClusterName and clusterID can have a maximum of 20 characters.
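For reference, the following is a filled-in sketch of the same CR. The cluster IDs (cluster1, cluster2), the load balancer addresses, and the port 8334 are hypothetical values used only for illustration; replace them with the values for your environment.

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  localClusterName: cluster1
  globalMemberClusters:
    # Local (primary) cluster - hypothetical ID and address
    - clusterID: cluster1
      drControllerAddress: "10.20.30.40"
      drControllerPort: "8334"
    # DR site cluster - hypothetical ID and address
    - clusterID: cluster2
      drControllerAddress: "10.20.30.50"
      drControllerPort: "8334"
  veleroConfig:
    veleroNamespace: "velero"
    userID: 1000640000
    FSGroup: 1000640000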
Run the following command on the master node of the source cluster.
kubectl apply -f /YAML/DR/SampleGlobalClusterMembership.yaml
Edit another instance of
/YAML/DR/SampleGlobalClusterMembership.yaml
to add the DR site as under:

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  localClusterName: <Cluster for which you want to create a DR backup>
  globalMemberClusters:
    - clusterID: <ID of the cluster for which you want a DR backup>
      drControllerAddress: "<Load balancer IP address (haproxy) of the local cluster>"
      drControllerPort: "<Load balancer port number>"
    - clusterID: <ID of the cluster to be used for a backup>
      drControllerAddress: "<Load balancer IP address (haproxy) of the DR site>"
      drControllerPort: "<Load balancer port number>"
  # Required details if Velero is not installed in the "velero" namespace
  # and/or you need to set a specific user ID or fsGroup in the security
  # context.
  veleroConfig:
    # Namespace in which Velero is installed. This field is optional
    # if Velero is installed in the default "velero" namespace.
    veleroNamespace: "<Namespace where Velero is installed>"
    # User ID to enable volume mount.
    # This is to comply with the default security context constraint.
    # This field is optional for Kubernetes but required for OpenShift
    # if the default ID below needs to be changed.
    userID: 1000640000  # You can change the default value to a valid value for both the primary and DR clusters.
    # Supplemental group to enable volume mount.
    # This field is optional for Kubernetes but required for OpenShift
    # if the default ID below needs to be changed.
    FSGroup: 1000640000  # You can change the default value to a valid value for both the primary and DR clusters.
Copy this file to the DR site and run the following command again on the master node of the DR site.
kubectl apply -f /YAML/DR/SampleGlobalClusterMembership.yaml
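On the DR site, the CR typically keeps the same globalMemberClusters list (the membership must be identical on all member clusters), while localClusterName identifies the DR cluster itself. The following is a minimal sketch, assuming the hypothetical clusters cluster1 (primary) and cluster2 (DR site) from the example above; treat the localClusterName assignment as an assumption to be adapted to your setup.

apiVersion: infoscale.veritas.com/v1
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
spec:
  # Assumption: on the DR site, the local cluster is the DR cluster.
  localClusterName: cluster2
  # The member list stays identical on both clusters.
  globalMemberClusters:
    - clusterID: cluster1
      drControllerAddress: "10.20.30.40"
      drControllerPort: "8334"
    - clusterID: cluster2
      drControllerAddress: "10.20.30.50"
      drControllerPort: "8334"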
Manually verify on all clusters that GLOBALCLUSTERSTATE is DISCOVER_WAIT by running the following command on the master node of each cluster.
kubectl get gcm
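Before the cluster is seeded, the output is expected to look similar to the following. The name and cluster ID shown here are the hypothetical values from the example above, and the exact columns may differ in your environment.

NAME                        LOCALCLUSTER   GLOBALCLUSTERSTATE   PEERLINKSTATE
global-cluster-membership   cluster1       DISCOVER_WAIT        ...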
The various states are as follows:

UNKNOWN
A transient default Global Cluster state. After the initial configuration and setup, the cluster state must transition to DISCOVER_WAIT. A prolonged UNKNOWN state indicates errors in the initial configuration or setup. Review the DR Controller log for the ongoing activities.

DISCOVER_WAIT
Although the local cluster has a copy of the GCM and the member cluster details, it is not certain whether that local copy is up to date. The cluster waits until you seed it by updating GlobalClusterOperation to localbuild. When a member cluster transitions to the RUNNING state, all peer clusters with identical membership also transition to the RUNNING state.

ADMIN_WAIT
If the local membership definition does not match a peer cluster's membership definition, the clusters transition to this state. Update the membership on the peer clusters and ensure that it is identical. The peer clusters then transition to the RUNNING state.

RUNNING
The cluster transitions to the RUNNING state when you seed the cluster membership by updating GlobalClusterOperation to localbuild. A cluster also transitions to the RUNNING state when its local copy of the membership matches the peer clusters'.

EXITING
You have initiated a DR Controller stop.

EXITED
The DR Controller has stopped.
DISCOVER_WAIT indicates that the cluster is initialized. You can now trigger localbuild. Verify the cluster membership details and initiate localbuild as under.
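To review the membership details, you can, for example, inspect the applied CR on the master node with a standard kubectl command (the CR name is the one used in the sample above):

kubectl get gcm global-cluster-membership -o yaml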
Run the following command on the master node of the primary/source cluster.
kubectl edit gcm global-cluster-membership
Update on the source cluster as under
globalClusterOperation: "localbuild"
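Alternatively, instead of the interactive edit, the same update can be made non-interactively with kubectl patch. This is a sketch that assumes globalClusterOperation is a field under spec of the GlobalClusterMembership CR:

kubectl patch gcm global-cluster-membership --type merge -p '{"spec":{"globalClusterOperation":"localbuild"}}'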
The cluster transitions into the RUNNING state and broadcasts its membership copy to all peer clusters. A peer cluster with the same membership also transitions into the RUNNING state, whereas a peer cluster with a different membership transitions into the ADMIN_WAIT state. Update Spec:GlobalMemberClusters to rectify any discrepancy.
To verify whether the Global Cluster is successfully created, run the following command on the master node.
kubectl get gcm
Review the cluster names, GLOBALCLUSTERSTATE, and PEERLINKSTATE in output similar to the following. GLOBALCLUSTERSTATE must be Running and PEERLINKSTATE must be Connected.
NAME                           LOCALCLUSTER               GLOBALCLUSTERSTATE   PEERLINKSTATE
<Name of the Global cluster>   <Cluster ID for back up>   Running              {"<Cluster ID for back up>":"Connected","<Cluster ID for backing up>":"Connected"}