Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Migrating applications to InfoScale
- Troubleshooting
Configuring Disaster Recovery Plan
With a Disaster Recovery Plan (DR Plan), you can enable disaster recovery for a particular namespace. For more granular control, you can selectively label components in the namespace and create a DR Plan with the namespace and labels. A DR Plan cannot span multiple namespaces. A DR Plan must be created only on the primary cluster; after it is created there, it is automatically created and synchronized on all peer clusters. Migration and other operations on the namespace can be triggered by updating certain attributes of the DR Plan.
To enable a non-kubeadmin user (any user other than infoscale-admin) to access custom resources on a secondary cluster after the Disaster Recovery Plan (DR Plan) custom resource is applied on the primary cluster, a role and role binding must be created. After a DR Plan custom resource is created on the primary cluster, the application namespace is automatically created on the secondary cluster. The role and role binding on the primary cluster can then be saved and applied on the secondary cluster in the same namespace.
Note:
Clusterwide (across namespaces) RBAC permissions are granted to the service account of the infoscale-dr-manager pod as required. As a Storage Administrator, ensure that a non-privileged user does not have kubectl exec permissions on the infoscale-dr-manager pod.
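For example, the following is a minimal sketch of such a role and role binding. The names drplan-reader and drplan-reader-binding, the user dev-user, and the mysql namespace are hypothetical placeholders; adjust the resources and verbs to match the access that the user actually needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  # Hypothetical role name; use any name that suits your environment
  name: drplan-reader
  # Application namespace that the DR Plan is created for
  namespace: mysql
rules:
  # Grant read access to the InfoScale DR custom resources
  - apiGroups: ["infoscale.veritas.com"]
    resources: ["disasterrecoveryplans"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: drplan-reader-binding
  namespace: mysql
subjects:
  # Hypothetical non-kubeadmin user
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: drplan-reader
  apiGroup: rbac.authorization.k8s.io
To save the role and role binding from the primary cluster and apply them on the secondary cluster in the same namespace, you can use the standard kubectl commands, for example:
kubectl get role drplan-reader -n mysql -o yaml > role.yaml
kubectl get rolebinding drplan-reader-binding -n mysql -o yaml > rolebinding.yaml
# On the secondary cluster, after the namespace is created automatically:
kubectl apply -f role.yaml -f rolebinding.yaml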
- Edit
/YAML/DR/SampleDisasterRecoveryPlan.yaml
as follows to create a DR Plan for the application components in a given namespace.
Note:
Ensure that this resource is applied in the same namespace as that of the data replication configuration resource.
apiVersion: infoscale.veritas.com/v1
kind: DisasterRecoveryPlan
metadata:
  name: <Name of the Disaster Recovery Plan>
  #namespace: <Enter the namespace>
spec:
  # Name of cluster that should be treated as primary for this DR plan
  primaryCluster: <ID of the cluster you want to back up>
  # (optional) Set force to True if peer cluster(s) is not reachable
  # and local cluster needs to perform takeover
  force: false
  # List of member cluster(s) where this DRPlan can failover.
  # Sequence of MemberCluster specified in this list denotes
  # relative preference of member cluster(s)
  # Must be subset of Global Cluster Membership
  preferredClusterList: ["<ID of the cluster you want to back up>",
                         "<ID of the cluster where you want to back up>"]
  # Kind of corrective action in case of disaster
  # default value will be "Manual" if not specified
  clusterFailOverPolicy: Manual
  # Specify namespace and optionally labels to decide what
  # all needs to be part of the disaster recovery plan
  selector:
    namespace: mysql
    #labels:
    #  - "app=db,env=dev"
  # (optional) Pointer to manage data replication
  #dataReplicationPointer: test-datareplication
  # (optional) Pointer to manage DNS endpoints
  #dnsPointer: test-dns
Note:
dataReplicationPointer is needed only if you have stateful applications that require data replication across peer clusters.
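If you uncomment the labels attribute under selector, only the components in the namespace that carry those labels become part of the DR Plan. As an illustrative example, assuming a Deployment named mysql in the mysql namespace, you can attach the sample labels shown above by using the standard kubectl label command:
kubectl label deployment mysql app=db env=dev -n mysql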
- Run the following command on the master node.
kubectl apply -f /YAML/DR/SampleDisasterRecoveryPlan.yaml
- Wait till the command runs successfully and the following message appears.
disasterrecoveryplan.infoscale.veritas.com/<Name of Disaster recovery plan> created
- Run the following command on the master node.
kubectl get drplan
- Review the output, which is similar to the following.
NAME                               PREFERREDCLUSTERLIST
<Name of Disaster recovery plan>   ("ID of the cluster you want to back up",
                                    "ID of the cluster where you want to back up")

SPEC.PRIMARYCLUSTER         STATUS.PRIMARYCLUSTER       DATAREPLICATION   DNS
ID of the current cluster   ID of the current cluster
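To inspect the detailed state of a DR Plan beyond this summary, you can use the standard kubectl describe command with the plan name that you specified in the metadata, for example:
kubectl describe drplan <Name of the Disaster Recovery Plan>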