Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
If a node is removed from an Azure RedHat OpenShift (ARO) cluster, perform the following steps to remove that node from the InfoScale cluster as well.
Keep the names of the nodes to be removed and the IP addresses of the nodes to be added ready.
Note:
You can remove only one node at a time. To remove multiple nodes, repeat the following steps for each node.
- Log on to the infoscale-driver-container pods on all nodes other than the node to be removed, and run the following command on each.
gabconfig -m <Number of nodes minus 1>
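For example, if the InfoScale cluster currently has four nodes and one node is being removed, the expected membership is 4 - 1 = 3:
gabconfig -m 3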
- By using the Exec command, log on to one of the infoscale-driver-container pods other than the pod scheduled on the node you want to remove.
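A minimal sketch of logging on to such a pod, assuming the infoscale-vtas namespace used later in this procedure and a container named infoscale-driver-container; substitute the pod name, namespace, and container name from your own deployment:
oc exec -it <pod name> -n infoscale-vtas -c infoscale-driver-container -- /bin/bash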
- Run the following commands.
/opt/VRTSvcs/bin/haclus -value ReadOnly
If the output of the above command is 0, the configuration is already writable; skip the next command. If the output is 1, run /opt/VRTSvcs/bin/haconf -makerw to make the configuration writable (see the check sketched after this command list).
/opt/VRTSvcs/bin/hares -modify cvm_clus CVMNodeId -delete <name of the node to be removed>
/opt/VRTSvcs/bin/hagrp -modify RestSG AutoStartList -delete <name of the node to be removed>
/opt/VRTSvcs/bin/hagrp -modify RestSG SystemList -delete <name of the node to be removed>
/opt/VRTSvcs/bin/hagrp -modify cvm AutoStartList -delete <name of the node to be removed>
/opt/VRTSvcs/bin/hagrp -modify DISK_GROUP SystemList -delete <name of the node to be removed>
/opt/VRTSvcs/bin/hagrp -modify cvm SystemList -delete <name of the node to be removed>
/opt/VRTSvcs/bin/hasys -delete <name of the node to be removed>
/opt/VRTSvcs/bin/haconf -dump -makero
/opt/VRTSvcs/bin/haclus -value DumpingMembership
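As referenced earlier in this list, a minimal sketch of the read-only check, assuming the standard VCS convention that a ReadOnly value of 1 means the configuration is read-only:
# Make the VCS configuration writable only if it is currently read-only.
if [ "$(/opt/VRTSvcs/bin/haclus -value ReadOnly)" = "1" ]; then
    /opt/VRTSvcs/bin/haconf -makerw
fi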
- One by one, log on by using the Exec command to each of the infoscale-driver-container pods other than the pod scheduled on the node you want to remove, and run the following commands.
/opt/VRTS/bin/vxclustadm -m vcs reinit
Edit /etc/llthosts and delete the line for the node you want to remove.
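For reference, /etc/llthosts contains one line per node in the form <node ID> <node name>. A hypothetical three-node file is shown below; delete only the line that corresponds to the node being removed.
0 aronode-1
1 aronode-2
2 aronode-3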
- Run the following command.
oc edit infoscalecluster infoscalecluster-dev
Delete all information about the node you want to remove from clusterInfo.
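The exact layout depends on your deployment, but the node entries under clusterInfo typically resemble the following hypothetical excerpt (node names, IP addresses, and field names are illustrative; match them to your own custom resource). Remove the entire entry for the node being removed.
  clusterInfo:
    - nodeName: aronode-3
      ip: 10.20.30.13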
- Run the following commands to verify that the node is removed.
oc describe cm infoscalecluster-dev-configmap -n infoscale-vtas
oc describe infoscalecluster
- Run the following command on all nodes in the cluster to add a new node.
lltconfig -a set <ID of the new node> link0 <IP address of the new node>
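For example, to add a node with the hypothetical node ID 3 and IP address 10.20.30.14:
lltconfig -a set 3 link0 10.20.30.14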
- Update /YAML/OpenShift/cr.yaml with the new node information.
- Run the following command on the bastion node.
oc apply -f /YAML/OpenShift/cr.yaml
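As with the earlier clusterInfo excerpt, updating /YAML/OpenShift/cr.yaml typically means appending an entry for the new node under clusterInfo. A hypothetical excerpt (values and field names are illustrative; match them to your existing cr.yaml):
  clusterInfo:
    - nodeName: aronode-1
      ip: 10.20.30.11
    - nodeName: aronode-4
      ip: 10.20.30.14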