Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux
Last Published: 2023-10-16
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
  - Installing InfoScale on a system with Internet connectivity
  - Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
  - Prerequisites
  - Tagging the InfoScale images on Kubernetes
  - Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
  - Dynamic provisioning
  - Snapshot provisioning (Creating volume snapshots)
  - Managing InfoScale volume snapshots with Velero
  - Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Configuring InfoScale DR Manager by using the web console
Complete the following steps.
1. Click Operators > Installed Operators. Select InfoScale DR Manager.
2. In the main menu, click Global Cluster Membership to create a cluster membership.
3. Click Create GlobalClusterMembership in the upper-right corner of the screen.
4. Assign a Name, and then click Global Member Clusters to enter the cluster details.
5. Enter the Cluster ID of the primary cluster and its IP address in DR Controller Address. Optionally, you can enter its port number in DR Controller Port.
Note:
The IP address and port number are generated by using a load balancer service. An example of the command to generate the IP address:

  oc -n infoscale-vtas expose deployment infoscale-dr-manager --name dr-lb-service --type LoadBalancer --protocol TCP --port 14155 --target-port 14155
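After the service is exposed, you can look up the address that the load balancer assigned. A minimal sketch, assuming the example service name dr-lb-service from the command above; the EXTERNAL-IP column of the standard output is the value to use for DR Controller Address:

  # Show the service, including its external IP and port mapping.
  oc -n infoscale-vtas get service dr-lb-service

  # Print only the assigned external IP (or hostname, depending on the provider).
  oc -n infoscale-vtas get service dr-lb-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'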
6. Enter the Cluster ID of the secondary cluster and its IP address in DR Controller Address. Optionally, you can enter its port number in DR Controller Port.
Note:
This is the IP address and port number generated by using a load balancer service on the peer cluster. Use the command mentioned in the note for step 5, running it on the peer cluster.
7. Enter the Cluster ID of the primary cluster in Local Cluster Name.
8. Click Create. Wait until the membership is created. The newly created cluster is listed under Global Cluster Membership.
9. Repeat steps 1 to 8 on the secondary cluster. In step 7, ensure that you enter the Cluster ID of the secondary cluster. After a successful configuration on the secondary cluster, the secondary cluster is the DR cluster.
10. Run the following command on the primary cluster to verify whether the Global Cluster Membership (GCM) is successfully configured:

  oc get gcm
An output similar to the following indicates a successful configuration:

  NAME                        CLUSTER NAME   CLUSTER STATE   PROTOCOL   PEER LINK STATE
  global-cluster-membership   Clus1          RUNNING         10         {"Clus1":"CONNECTED","Clus2":"CONNECTED"}
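These console steps create a GlobalClusterMembership custom resource. The following is an illustrative sketch of the equivalent YAML; all field names are inferred from the console labels in the steps above, not from the product schema, so verify them against your installed CRD (for example, with oc explain gcm.spec):

  # Illustrative sketch only: field names inferred from the console labels above.
  apiVersion: infoscale.veritas.com/v1    # assumed API group; confirm with 'oc api-resources'
  kind: GlobalClusterMembership
  metadata:
    name: global-cluster-membership
  spec:
    localClusterName: Clus1                 # Cluster ID of the local cluster
    globalMemberClusters:
      - clusterName: Clus1                  # primary cluster
        drControllerAddress: 203.0.113.10   # load-balancer IP (example value)
        drControllerPort: 14155
      - clusterName: Clus2                  # secondary cluster
        drControllerAddress: 203.0.113.20
        drControllerPort: 14155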
11. You can now configure Data Replication. Click Data Replication in the main menu.
12. Click Create DataReplication in the upper-right corner of the screen.
13. For the primary cluster, enter the Cluster Name, its IP address (this can be any available virtual IP address in the same subnet) in Local Host Address, and the corresponding netmask in Local Net Mask.
14. For the secondary cluster, enter its name in Cluster Name, its IP address (this can be any available virtual IP address in the same subnet) in Remote Host Address, and the corresponding netmask in Remote Netmask. Enter the network interface in Remote NIC.
15. Enter the Namespace for which you want to configure data replication.
16. In Local NIC, enter the network interface of the primary cluster.
17. Click Create. Wait until Data Replication is configured and gets listed.
18. Run the following command on the primary cluster to verify whether Data Replication is successfully configured:

  oc get datarep -o wide
19. In the command output, verify that the replication status is consistent,up-to-date.
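The console similarly creates a DataReplication custom resource. A rough sketch of the equivalent YAML follows; every key below is inferred from the console labels in steps 13 to 16 rather than from the product schema, so confirm the actual field names with oc explain datarep.spec before relying on it:

  # Illustrative sketch only: keys inferred from console labels, not the product schema.
  apiVersion: infoscale.veritas.com/v1   # assumed API group
  kind: DataReplication
  metadata:
    name: data-replication-example
  spec:
    clusterName: Clus1                # primary cluster
    localHostAddress: 203.0.113.30    # virtual IP in the local subnet (example value)
    localNetMask: 255.255.255.0
    localNIC: eth0
    namespace: my-app-ns              # namespace to replicate (example value)
    remoteClusterName: Clus2          # secondary cluster
    remoteHostAddress: 203.0.113.40
    remoteNetMask: 255.255.255.0
    remoteNIC: eth0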
20. Click Disaster Recovery Plan in the main menu to create a plan.
21. Click Create DisasterRecoveryPlan in the upper-right corner of the screen.
22. Assign a Name to this plan.
Note:
The names of the clusters and the Data Replication plan appear here.
23. Review the Primary Cluster. Enter the Namespace that you want to be a part of this plan.
24. Review the Data Replication Pointer and the Preferred Cluster List.
25. Click Create. Wait until the Disaster Recovery Plan is created and listed.
After these successful configurations, Disaster Recovery (DR) is ready.
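To confirm that all three objects exist, you can list them from the primary cluster. The gcm and datarep resource names come from the verification commands earlier in this procedure; the last resource name is assumed from the DisasterRecoveryPlan kind (Kubernetes accepts the lowercase kind name) and may also have a shorter alias in your installation:

  oc get gcm
  oc get datarep -o wide
  oc get disasterrecoveryplan   # assumed resource name, derived from the kind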