Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Configuring Disaster Recovery Plan
With a Disaster Recovery Plan (DR Plan), you can enable disaster recovery for a particular namespace. For more granular control, you can selectively label components in the namespace and create a DR Plan with the namespace and labels. A DR Plan cannot span multiple namespaces. Create the DR Plan only on the primary cluster; after it is created there, it is automatically created and synchronized on all peer clusters. Migration and other operations on the namespace can be triggered by updating certain attributes of the DR Plan.
Note:
Cluster-wide (across namespaces) RBAC permissions are granted to the service account of the infoscale-dr-manager pod as required. As a Storage Administrator, ensure that non-privileged users do not have oc exec permissions on the infoscale-dr-manager pod.
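If you want the DR Plan to cover only specific components in the namespace, label those components before you create the plan so that the selector in the plan can match them. The commands below are a minimal sketch; the namespace sample, the label app=sise, and the object name sise are placeholders taken from the sample YAML and must be replaced with your own application objects.
# Label the application components that the DR Plan selector must match
oc label deployment sise app=sise -n sample
oc label service sise app=sise -n sample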
- Edit /YAML/DR/SampleDisasterRecoveryPlan.yaml as shown below to create a DR Plan for application components in a given namespace.

apiVersion: infoscale.veritas.com/v1
kind: DisasterRecoveryPlan
metadata:
  name: test-disaster-recovery-plan
spec:
  # Name of cluster that should be treated as primary for this DR plan
  primaryCluster: <ID of the cluster you want to back up>
  # (optional) Set Force To True If Peer Cluster(s) Is Not Reachable
  # And Local Cluster Needs To Perform Takeover
  force: false
  # List Of Member Cluster(s) Where This DRPlan Can FailOver
  # Sequence Of MemberCluster Specified In This List Denotes Relative
  # Preference Of Member Cluster(s)
  # Must Be Subset Of Global Cluster Membership
  preferredClusterList: ["<ID of the cluster you want to back up>",
                         "<ID of the cluster where you want to back up>"]
  # Kind Of Corrective Action In Case Of Disaster
  # default value will be "Manual" if not specified
  clusterFailOverPolicy: Manual
  # Specify Namespace And Optionally Labels to decide what all
  # needs to be part of the disaster recovery plan
  selector:
    namespace: sample
    labels:
      app: sise
  # (optional) Pointer To Manage Storage Replication
  dataReplicationPointer: test-datareplication
  # (optional) Pointer To Manage DNS Endpoints
  dnsPointer: test-dns
Note:
dataReplicationPointer is needed only if you have stateful applications that require data replication across peer clusters.
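Before you apply the plan, you may want to confirm which objects the selector matches and that the edited YAML is valid. The following checks are a sketch that assumes the sample namespace and the app=sise label from the sample YAML above.
# List the labeled objects in the namespace that the DR Plan selector will cover
oc get all -n sample -l app=sise
# Validate the edited YAML on the client side without creating the resource
oc apply -f /YAML/DR/SampleDisasterRecoveryPlan.yaml --dry-run=client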
- Run the following command on the bastion node
oc apply -f /YAML/DR/SampleDisasterRecoveryPlan.yaml
- Wait until the command runs successfully and the following message appears.
disasterrecoveryplan.infoscale.veritas.com/<Name of Disaster recovery plan> created
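Because the DR Plan is synchronized to all peer clusters automatically, you can optionally confirm that it has appeared on a peer. The command below is a sketch; <peer-cluster-context> is a placeholder for a kubeconfig context that points to the peer cluster.
# On a peer cluster, confirm that the DR Plan has been synchronized
oc --context <peer-cluster-context> get drplan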
- Run the following command on the bastion node
oc get drplan
- Review the output, which is similar to the following.
NAME                               PREFERREDCLUSTERLIST                                                                  SPEC.PRIMARYCLUSTER
<Name of Disaster recovery plan>   ("ID of the cluster you want to back up" "ID of cluster where you want to back up")  ID of the current cluster

STATUS.PRIMARYCLUSTER              DATAREPLICATION                                                                       DNS
ID of the current cluster
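To see the complete status of the DR Plan beyond these columns, you can inspect the resource directly. This is a generic sketch; test-disaster-recovery-plan is the plan name used in the sample YAML and should be replaced with your own plan name.
# Show detailed status and recent events for the DR Plan
oc describe drplan test-disaster-recovery-plan
# Dump the full resource, including all status fields
oc get drplan test-disaster-recovery-plan -o yaml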