Veritas InfoScale™ for Kubernetes Environments 8.0.210 - Linux
Adding nodes to an existing cluster
Complete the following steps to add nodes to an existing InfoScale cluster:
- Ensure that you add the worker nodes to the Kubernetes cluster.
Note:
You must add all Kubernetes worker nodes to the InfoScale cluster.
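How you add a worker to the Kubernetes cluster depends on how the cluster was provisioned. The following is only an illustrative sketch for a kubeadm-based cluster (an assumption; your distribution may use a different join mechanism):
# On the master (control-plane) node, print a join command with a fresh token.
kubeadm token create --print-join-command
# On the new worker node, run the printed kubeadm join command, for example:
# kubeadm join <control-plane-endpoint>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>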
- Run the following command on the master node to check whether the newly added node is Ready.
kubectl get nodes -A
Review output similar to the following:
NAME            STATUS   ROLES                  AGE    VERSION
worker-node-1   Ready    control-plane,master   222d   v1.21.0
worker-node-2   Ready    worker                 222d   v1.21.0
worker-node-3   Ready    worker                 222d   v1.21.0
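If the new node is not yet Ready, you can optionally wait for the condition instead of re-running the command. A minimal sketch, assuming the new node is worker-node-3:
# Block until worker-node-3 reports the Ready condition, or fail after 5 minutes.
kubectl wait --for=condition=Ready node/worker-node-3 --timeout=300s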
- To add new nodes to an existing cluster, the cluster must be in a running state. Run the following command on the master node to verify.
kubectl get infoscalecluster
Check the STATE column in output similar to the following:
NAME                   NAMESPACE        VERSION   STATE     AGE
infoscalecluster-dev   infoscale-vtas   8.0.210   Running   1m15s
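If the STATE is not Running, you can inspect the full custom resource for details before proceeding. A minimal sketch, assuming the resource and namespace shown above:
# Dump the complete InfoScaleCluster object, including its status section.
kubectl get infoscalecluster infoscalecluster-dev -n infoscale-vtas -o yaml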
- Edit the clusterInfo section of the sample /YAML/Kubernetes/cr.yaml to add information about the new nodes. In this example, worker-node-1 and worker-node-2 already exist, and worker-node-3 is being added.
Note:
If you specify IP addresses, the number of IP addresses for the new nodes must be the same as the number of IP addresses for the existing nodes.
apiVersion: infoscale.veritas.com/v1
kind: InfoScaleCluster
metadata:
  name: infoscalecluster-dev
spec:
  clusterID: <Optional- Enter an ID for this cluster>
  isSharedStorage: true
  clusterInfo:
    - nodeName: "worker-node-1"
      ip:
        - "<IP address of worker-node-1>"
    - nodeName: "worker-node-2"
      ip:
        - "<IP address of worker-node-2>"
    - nodeName: "worker-node-3"
      ip:
        - "<IP address of worker-node-3>"
      excludeDevice:
        - /dev/sdm
        - /dev/sdn
    .
    .
    .
    # You can add up to 16 nodes.
  fencingDevice: ["<Hardware path to the first fencing device>",
                  "<Hardware path to the second fencing device>",
                  "<Hardware path to the third fencing device>"]
  encrypted: false
  sameEnckey: false
  customImageRegistry: <Custom registry name / <IP address of the custom registry>:<port number>>
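Before you apply the changes, you can optionally validate the edited file against the API server without persisting anything. This is a generic Kubernetes check, not an InfoScale-specific step:
# Server-side dry run: the API server validates the object but does not update it.
kubectl apply --dry-run=server -f /YAML/Kubernetes/cr.yaml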
- Run the following command on the master node to initiate the add-node workflow.
kubectl apply -f /YAML/Kubernetes/cr.yaml
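You can optionally watch the custom resource so that state transitions (Running, ProcessingAddNode, and back to Running) are printed as they occur:
# Watch the InfoScaleCluster resource; press Ctrl+C to stop watching.
kubectl get infoscalecluster -n infoscale-vtas -w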
- You can run the following commands on the master node when node addition is in progress.
a. kubectl get infoscalecluster
See the STATE in the output as shown below. ProcessingAddNode indicates that the node is being added.
NAME                   NAMESPACE        VERSION   STATE               AGE
infoscalecluster-dev   infoscale-vtas   8.0.210   ProcessingAddNode   26m
b. kubectl describe infoscalecluster -n infoscale-vtas
Output similar to the following indicates the cluster status during node addition. The cluster is Degraded while node addition is in progress.
Cluster Name:   infoscalecluster-dev
Cluster Nodes:
  Exclude Device:
    /dev/sdm
    /dev/sdn
  Node Name:  worker-node-1
  Role:       Joined,Master
  Node Name:  worker-node-2
  Role:       Joined,Slave
  Node Name:  worker-node-3
  Role:       Out of Cluster
Cluster State:    Degraded
enableScsi3pr:    false
Images:
  Csi:
    Csi External Attacher Container:  csi-attacher:v3.1.0
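If node addition appears to be stalled, recent events in the InfoScale namespace often indicate the cause. A minimal sketch, assuming the infoscale-vtas namespace shown above:
# List events in the namespace, newest last.
kubectl get events -n infoscale-vtas --sort-by=.lastTimestamp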
- Run the following command on the master node to verify whether the pods are created successfully. It may take some time for the pods to be created.
kubectl get pods -n infoscale-vtas
Output similar to the following indicates a successful creation.
NAME                                                  READY   STATUS    RESTARTS   AGE
infoscale-csi-controller-35359-0                      5/5     Running   0          12d
infoscale-csi-node-35359-7rjv9                        2/2     Running   0          3d20h
infoscale-csi-node-35359-dlrxh                        2/2     Running   0          4d21h
infoscale-csi-node-35359-dmxwq                        2/2     Running   0          12d
infoscale-csi-node-35359-j9x7v                        2/2     Running   0          12d
infoscale-csi-node-35359-w6wf2                        2/2     Running   0          3d20h
infoscale-fencing-controller-35359-6cc6cd7b4d-l7jtc   1/1     Running   0          3d21h
infoscale-fencing-enabler-35359-9gkb4                 1/1     Running   0          12d
infoscale-fencing-enabler-35359-gwn7w                 1/1     Running   0          3d20h
infoscale-fencing-enabler-35359-jrf2l                 1/1     Running   0          12d
infoscale-fencing-enabler-35359-qhzdt                 1/1     Running   1          3d20h
infoscale-fencing-enabler-35359-zqdvj                 1/1     Running   1          4d21h
infoscale-sds-35359-ed05b7abb28053ad-7svqz            1/1     Running   0          13d
infoscale-sds-35359-ed05b7abb28053ad-c272q            1/1     Running   0          13d
infoscale-sds-35359-ed05b7abb28053ad-g4rbj            1/1     Running   0          4d21h
infoscale-sds-35359-ed05b7abb28053ad-hgf6h            1/1     Running   0          3d20h
infoscale-sds-35359-ed05b7abb28053ad-wk5ph            1/1     Running   0          3d20h
infoscale-sds-operator-7fb7cd57c-rskms                1/1     Running   0          3d20h
infoscale-licensing-operator-756c854fdb-xvdnr         1/1     Running   0          13d
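To confirm that InfoScale pods are scheduled on the newly added node, you can filter the pod list by node name. A minimal sketch, assuming the new node is worker-node-3:
# List only the pods that run on worker-node-3.
kubectl get pods -n infoscale-vtas -o wide --field-selector spec.nodeName=worker-node-3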
- Run the following command on the master node to verify whether the cluster is Running.
kubectl get infoscalecluster
Check the STATE column in output similar to the following:
NAME                   NAMESPACE        VERSION   STATE     AGE
infoscalecluster-dev   infoscale-vtas   8.0.210   Running   1m15s
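If you prefer to script this check, the following sketch polls the printed STATE column rather than a specific status field (the CRD status path is not documented here, so the table output is parsed instead):
# Poll every 30 seconds until the STATE column reports Running.
until kubectl get infoscalecluster -n infoscale-vtas --no-headers | grep -qw Running; do
  sleep 30
done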
- Run the following command on the master node to verify whether the cluster is 'Healthy'.
kubectl describe infoscalecluster
Check the Cluster State in output similar to the following:
Status:
  Cluster Name:  infoscalecluster-dev
  Cluster Nodes:
    Node Name:  worker-node-1
    Role:       Joined,Master
    Node Name:  worker-node-2
    Role:       Joined,Slave
    Node Name:  worker-node-3
    Role:       Joined,Slave
  Cluster State:  Healthy
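To extract only the node roles and the overall cluster state from the describe output, you can filter it with grep. A minimal sketch:
# Show only the node names, roles, and the overall cluster state.
kubectl describe infoscalecluster -n infoscale-vtas | grep -E 'Node Name|Role|Cluster State'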