Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Installing InfoScale DR Manager by using YAML
This section describes how to install and configure Disaster Recovery for your InfoScale cluster by using YAML.
Note:
When you download, unzip, and untar YAML_8.0.220.tar.gz, all files required for installation are available.
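For reference, a minimal extraction sequence is similar to the following. This is a sketch that assumes the archive has already been downloaded to the current working directory on the bastion node.
# Extract the downloaded archive; this creates the YAML/ directory
# referenced by the commands in this section.
tar -xzf YAML_8.0.220.tar.gz
# Confirm that the DR installation files are present.
ls YAML/DR/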
Complete the following steps to install the InfoScale DR Manager on the source and the target DR cluster.
Creating application user role
- Run the following command on the bastion node.
oc apply -f YAML/DR/infoscale-dr-admin-role.yaml
- Copy the following content to YAML/DR/ClusterRoleBinding.yaml.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: infoscaledr-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: infoscaledr-admin-role
subjects:
- kind: User
  name: postgres-admin
  apiGroup: rbac.authorization.k8s.io
This is an example of a ClusterRoleBinding.
- Run the following command on the bastion node.
oc apply -f YAML/DR/ClusterRoleBinding.yaml
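Optionally, before proceeding, you can confirm that the role and binding exist. The following is a quick check using standard oc commands and the object names from the example above.
# Verify that the ClusterRole and ClusterRoleBinding were created.
oc get clusterrole infoscaledr-admin-role
oc get clusterrolebinding infoscaledr-role-binding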
For DR controller installation and Global Cluster Membership (GCM) configuration, you must use the kubeadmin role only. To configure the DR plan and data replication, you can use either this role or the kubeadmin role.
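For example, to later run the DR plan configuration steps as the application user bound above, log in to the cluster as that user first. The following is a sketch that assumes password-based authentication is configured for the user; the exact authentication method depends on your identity provider.
# Log in as the application user (prompts for the password).
oc login -u postgres-admin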
- Run the following command on the bastion node of each cluster.
oc apply -f YAML/DR/OpenShift/dro_deployment.yaml
- Wait until the command execution completes.
- Run the following command on the bastion node to verify that the deployment is successful.
oc -n infoscale-vtas get pods
See the Status in the output, similar to the following.
NAME                        READY   STATUS    RESTARTS   AGE
infoscale-dr-manager-xxxx   1/1     Running   0          114m
Status must change from ContainerCreating to Running.
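As an alternative to repeatedly checking the pod status, you can block until the deployment reports availability. The following is a minimal sketch using the standard oc wait command; the timeout value is an arbitrary choice.
# Wait up to 5 minutes for the DR manager deployment to become available.
oc -n infoscale-vtas wait --for=condition=Available deployment/infoscale-dr-manager --timeout=300s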
- Run the following commands to configure an ExternalIP for the DR pod.
Note:
Run these steps only if you want to use MetalLB as the load balancer. If you choose any other load balancer, refer to its documentation for installation and configuration.
oc -n infoscale-vtas expose deployment infoscale-dr-manager --name my-lb-service --type LoadBalancer --protocol TCP --port 14155 --target-port 14155
Here, the DR controller uses port 14155 internally to communicate across peer clusters. After a successful installation and configuration, you can verify by running the following command.
oc get svc my-lb-service
An output similar to the following indicates that installation and configuration are successful.
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)           AGE
my-lb-service   LoadBalancer   <IP address>   <IP address>   14155:14155/TCP   13h
Run this command on both the clusters and verify that installation and configuration are successful. Verify that the EXTERNAL-IP is accessible from one cluster to the other.
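One way to check that the EXTERNAL-IP is reachable from the peer cluster is a simple TCP connectivity test. The following is a sketch that assumes a netcat (nc) utility is available on the peer cluster's bastion node; replace <EXTERNAL-IP> with the address reported by oc get svc.
# From the bastion node of the peer cluster, test TCP reachability
# of the DR controller service on port 14155.
nc -zv <EXTERNAL-IP> 14155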