Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Adding nodes to an existing cluster
Complete the following steps to add nodes to an existing InfoScale cluster:
- Ensure that you add the worker nodes to the OCP cluster.
Note:
You must add all OpenShift worker nodes to the InfoScale cluster.
- Run the following command on the bastion node to check whether the newly added node is Ready.
oc get nodes -A
Review output similar to the following:
NAME                   STATUS   ROLES    AGE   VERSION
ocp-cp-1.lab.ocp.lan   Ready    master   54d   v1.22.1+d8c4430
ocp-cp-2.lab.ocp.lan   Ready    master   54d   v1.22.1+d8c4430
ocp-cp-3.lab.ocp.lan   Ready    master   54d   v1.22.1+d8c4430
ocp-w-1.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
ocp-w-2.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
ocp-w-3.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
ocp-w-4.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
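Optionally, to wait for a specific newly added node to become Ready before proceeding, you can use the standard oc wait command. The node name below (ocp-w-4.lab.ocp.lan) is only the example from the output above; substitute the name of your new node.
oc wait --for=condition=Ready node/ocp-w-4.lab.ocp.lan --timeout=300s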
- To add new nodes to an existing cluster, the cluster must be in the Running state. Run the following command on the bastion node to verify.
oc get infoscalecluster
Check the State in output similar to the following:
NAME                   NAMESPACE        VERSION   STATE     AGE
infoscalecluster-dev   infoscale-vtas   8.0.200   Running   1m15s
- Edit the clusterInfo section of the sample
/YAML/OpenShift/cr.yaml
to add information about the new nodes. In this example, worker-node-1 and worker-node-2 already exist and worker-node-3 is being added. A hypothetical filled-in entry for the new node is shown after the sample.
Note:
The number of IP addresses must be the same for all nodes.
apiVersion: infoscale.veritas.com/v1
kind: InfoScaleCluster
metadata:
  name: infoscalecluster-dev
spec:
  clusterID: <Optional- Enter an ID for this cluster>
  isSharedStorage: true
  clusterInfo:
    - nodeName: "worker-node-1"
      ip:
        - "<IP address of worker-node-1>"
    - nodeName: "worker-node-2"
      ip:
        - "<IP address of worker-node-2>"
    - nodeName: "worker-node-3"
      ip:
        - "<IP address of worker-node-3>"
      excludeDevice:
        - /dev/sdm
        - /dev/sdn
    .
    .
    YOU CAN ADD UP TO 16 NODES.
  fencingDevice: ["<Hardware path to the first fencing device>",
                  "<Hardware path to the second fencing device>",
                  "<Hardware path to the third fencing device>", ]
  encrypted: false
  sameEnckey: false
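For illustration only, a filled-in entry for the new node might look like the following. The IP address 10.20.30.43 is a hypothetical value; use the actual IP address of the node that you are adding.
    - nodeName: "worker-node-3"
      ip:
        - "10.20.30.43"   # hypothetical IP address; replace with the node's actual IP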
- Run the following command on the bastion node to initiate the add node workflow.
oc apply -f /YAML/OpenShift/cr.yaml
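Optionally, you can watch the resource so that state transitions (for example, Running to ProcessingAddNode and back) appear as they happen. The --watch flag is standard oc/kubectl behavior and is shown here only as a convenience.
oc get infoscalecluster -n infoscale-vtas --watch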
- You can run the following commands on the bastion node when node addition is in progress.
a. oc get infoscalecluster
Check the State in output similar to the following. ProcessingAddNode indicates that the node is being added.
NAME                   NAMESPACE        VERSION   STATE               AGE
infoscalecluster-dev   infoscale-vtas   8.0.200   ProcessingAddNode   26m
b. oc describe infoscalecluster -n infoscale-vtas
Output similar to the following indicates the cluster status during node addition. The cluster state is Degraded while node addition is in progress.
Cluster Name:  infoscalecluster-dev
Cluster Nodes:
  Exclude Device:
    /dev/sdm
    /dev/sdn
  Node Name:  worker-node-1
  Role:       Joined,Master
  Node Name:  worker-node-2
  Role:       Joined,Slave
  Node Name:  worker-node-3
  Role:       Out of Cluster
Cluster State:   Degraded
enableScsi3pr:   false
Images:
  Csi:
    Csi External Attacher Container:  csi-attacher:v3.1.0
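If node addition appears to stall, you can optionally review recent events in the InfoScale namespace. This is a generic oc/kubectl command, not an InfoScale-specific step.
oc get events -n infoscale-vtas --sort-by=.lastTimestamp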
- Run the following command on the bastion node to verify whether the pods are created successfully. It may take some time for the pods to be created.
oc get pods -n infoscale-vtas
Output similar to the following indicates a successful creation.
NAME                                                READY   STATUS    RESTARTS      AGE
infoscale-csi-controller-1234-0                     5/5     Running   0             19h
infoscale-csi-node-1234-gf9pf                       2/2     Running   0             19h
infoscale-csi-node-1234-gg5dq                       2/2     Running   0             18h
infoscale-csi-node-1234-nmt85                       2/2     Running   0             18h
infoscale-csi-node-1234-r6jv8                       2/2     Running   0             19h
infoscale-csi-node-1234-w5bln                       2/2     Running   2             19h
infoscale-fencing-controller-234-864468775c-4sbxw   1/1     Running   0             18h
infoscale-fencing-enabler-1234-8b65z                1/1     Running   0             19h
infoscale-fencing-enabler-1234-bkbbh                1/1     Running   3 (18h ago)   18h
infoscale-fencing-enabler-1234-jvzjk                1/1     Running   5 (18h ago)   18h
infoscale-fencing-enabler-1234-pxfmt                1/1     Running   4 (18h ago)   19h
infoscale-fencing-enabler-1234-qmjrv                1/1     Running   0             19h
infoscale-sds-1234-e383247e62b56585-2xxvh           1/1     Running   1             19h
infoscale-sds-1234-e383247e62b56585-cnvkg           1/1     Running   0             18h
infoscale-sds-1234-e383247e62b56585-l5z7m           1/1     Running   0             19h
infoscale-sds-1234-e383247e62b56585-xlkf8           1/1     Running   0             18h
infoscale-sds-1234-e383247e62b56585-zkpgt           1/1     Running   0             19h
infoscale-sds-operator-bb55cfc4d-pclt5              1/1     Running   0             18h
infoscale-licensing-operator-5fd897f68f-7p2f7       1/1     Running   0             18h
nfd-controller-manager-6bbf6df4d9-dbxgl             2/2     Running   2             20h
nfd-master-2h7x6                                    1/1     Running   0             19h
nfd-master-kclkq                                    1/1     Running   0             19h
nfd-master-npjzm                                    1/1     Running   0             19h
nfd-worker-8q4lz                                    1/1     Running   0             19h
nfd-worker-cvkqp                                    1/1     Running   0             19h
nfd-worker-js7tt                                    1/1     Running   1             19h
special-resource-controller-manager-86b6c7-wv2tc    2/2     Running   0             20h
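To confirm that InfoScale pods are scheduled on the newly added node, you can optionally filter the pod list by node name. The node name worker-node-3 is only the example used earlier; substitute the name of your new node.
oc get pods -n infoscale-vtas -o wide --field-selector spec.nodeName=worker-node-3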
- Run the following command on the bastion node to verify whether the cluster is 'Running'.
oc get infoscalecluster
Check the State in output similar to the following:
NAME                   NAMESPACE        VERSION   STATE     AGE
infoscalecluster-dev   infoscale-vtas   8.0.200   Running   1m15s
- Run the following command on the bastion node to verify whether the cluster is 'Healthy'.
oc describe infoscalecluster
Check the Cluster State in output similar to the following:
Status:
  Cluster Name:  infoscalecluster-dev
  Cluster Nodes:
    Node Name:  worker-node-1
    Role:       Joined,Master
    Node Name:  worker-node-2
    Role:       Joined,Slave
    Node Name:  worker-node-3
    Role:       Joined,Slave
  Cluster State:  Healthy
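If more than one InfoScaleCluster resource exists, you can optionally scope the command to the cluster name and namespace used in the examples above.
oc describe infoscalecluster infoscalecluster-dev -n infoscale-vtas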