Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
Adding nodes to an existing cluster
Complete the following steps to add nodes to an existing InfoScale cluster:
- Ensure that the new worker nodes have been added to the OCP cluster.
- Run the following command on the bastion node to check whether the newly added nodes are Ready.
oc get nodes
Review output similar to the following:
NAME                   STATUS   ROLES    AGE   VERSION
ocp-cp-1.lab.ocp.lan   Ready    master   54d   v1.22.1+d8c4430
ocp-cp-2.lab.ocp.lan   Ready    master   54d   v1.22.1+d8c4430
ocp-cp-3.lab.ocp.lan   Ready    master   54d   v1.22.1+d8c4430
ocp-w-1.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
ocp-w-2.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
ocp-w-3.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
ocp-w-4.lab.ocp.lan    Ready    worker   54d   v1.22.1+d8c4430
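When the cluster has many nodes, scanning the STATUS column by hand is error prone. The following filter is an illustrative sketch (not part of the product) that lists any node that is not Ready; an empty result means every node, including the newly added ones, has joined and is schedulable.

```shell
#!/bin/sh
# Print the name of every node whose STATUS column is not "Ready".
# Feed it the output of: oc get nodes --no-headers
not_ready() {
    awk '$2 != "Ready" { print $1 }'
}

# Usage (on the bastion node):
#   oc get nodes --no-headers | not_ready
```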
- To add new nodes to an existing cluster, the cluster must be in a running state. Run the following command on the bastion node to verify.
oc get infoscalecluster -A
See the State in the output similar to the following:
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE     AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   Running   25h
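This precondition can also be checked in a script. The helper below is a sketch of our own (the STATE value is the 5th column of the `oc get infoscalecluster -A --no-headers` output shown above); it succeeds only when every listed cluster is Running.

```shell
#!/bin/sh
# Succeed (exit 0) only if every InfoScale cluster is in the Running
# state; STATE is the 5th column of the no-headers listing.
all_running() {
    awk '$5 != "Running" { bad = 1 } END { exit bad }'
}

# Usage:
#   oc get infoscalecluster -A --no-headers | all_running \
#       || echo "Cluster is not Running; do not start the add-node workflow yet."
```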
- Edit the clusterInfo section of the sample /YAML/OpenShift/cr.yaml to add information about the new nodes. In this example, worker-node-1 and worker-node-2 already exist; worker-node-3 is being added.
Note:
The number of IP addresses must be the same for all nodes.
metadata:
  name: <Assign a name to this cluster>
  namespace: <The namespace where you want to create this cluster>
spec:
  clusterID: <Optional - Enter an ID for this cluster. The ID can be any number between 1 and 65535>
  isSharedStorage: true
  - nodeName: <Name of the first node>
    ip:
      - <Optional - First IP address of the first node>
      - <Optional - Second IP address of the first node>
    excludeDevice:
      - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
  - nodeName: <Name of the second node>
    ip:
      - <Optional - First IP address of the second node>
      - <Optional - Second IP address of the second node>
    excludeDevice:
      - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
  - nodeName: <Name of the third node>
    ip:
      - <Optional - First IP address of the third node>
      - <Optional - Second IP address of the third node>
    excludeDevice:
      - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
  # ... You can add up to 16 nodes.
  fencingDevice: ["<Hardware path to the first fencing device>",
                  "<Hardware path to the second fencing device>",
                  "<Hardware path to the third fencing device>", ]
  enableScsi3pr: <Enter True to enable SCSI3 persistent reservation>
  encrypted: false
  sameEnckey: false
- Run the following command on the bastion node to initiate the add-node workflow.
oc apply -f /YAML/OpenShift/cr.yaml
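Because the Note above requires every node entry to list the same number of IP addresses, a rough pre-flight check of cr.yaml can catch mismatches before you apply it. The sketch below is our own and assumes IP entries appear as `- <dotted address>` list items under each nodeName block, as in the sample CR.

```shell
#!/bin/sh
# Print the number of IP list entries found under each nodeName block
# in cr.yaml. If the printed counts are not all identical, the CR
# violates the equal-IP-count requirement.
ip_counts() {
    awk '/- nodeName:/ { if (seen) print cnt; seen = 1; cnt = 0 }
         /- [0-9]+\./  { cnt++ }
         END           { if (seen) print cnt }'
}

# Usage:
#   ip_counts < /YAML/OpenShift/cr.yaml | sort -u
#   (a single distinct value means the counts match)
```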
- You can run the following commands on the bastion node when node addition is in progress.
a. oc get infoscalecluster -A
See the State in the output similar to the following. ProcessingAddNode indicates that the node is being added.
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE               AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   ProcessingAddNode   25h
b. oc describe infoscalecluster -n <Namespace>
Output similar to the following indicates the cluster status during node addition. The cluster is Degraded while node addition is in progress.
Cluster Name: <Name of the cluster>
Cluster Nodes:
  Exclude Device:
    <Excluded device path 1>
    <Excluded device path 2>
  Node Name: worker-node-1
  Role: Joined,Master
  Node Name: worker-node-2
  Role: Joined,Slave
  Node Name: worker-node-3
  Role: Out of Cluster
Cluster State: Degraded
enableScsi3pr: false
Images:
  Csi:
    Csi External Attacher Container: csi-attacher:v3.1.0
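The Role lines in the describe output show which nodes have joined so far. Counting the remaining "Out of Cluster" entries can be scripted; the filter below is illustrative, not product tooling.

```shell
#!/bin/sh
# Count nodes still outside the cluster in the output of:
#   oc describe infoscalecluster -n <Namespace>
# A count of 0 means the add-node workflow has joined every node.
pending_nodes() {
    grep -c 'Out of Cluster'
}

# Usage:
#   oc describe infoscalecluster -n <Namespace> | pending_nodes
```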
- Run the following command on the bastion node to verify that the pods are created successfully. It may take some time for the pods to be created.
oc get pods -n <Namespace>
Output similar to the following indicates a successful creation.
NAME                                                READY   STATUS    RESTARTS      AGE
infoscale-csi-controller-1234-0                     5/5     Running   0             19h
infoscale-csi-node-1234-gf9pf                       2/2     Running   0             19h
infoscale-csi-node-1234-gg5dq                       2/2     Running   0             18h
infoscale-csi-node-1234-nmt85                       2/2     Running   0             18h
infoscale-csi-node-1234-r6jv8                       2/2     Running   0             19h
infoscale-csi-node-1234-w5bln                       2/2     Running   2             19h
infoscale-fencing-controller-234-864468775c-4sbxw   1/1     Running   0             18h
infoscale-fencing-enabler-1234-8b65z                1/1     Running   0             19h
infoscale-fencing-enabler-1234-bkbbh                1/1     Running   3 (18h ago)   18h
infoscale-fencing-enabler-1234-jvzjk                1/1     Running   5 (18h ago)   18h
infoscale-fencing-enabler-1234-pxfmt                1/1     Running   4 (18h ago)   19h
infoscale-fencing-enabler-1234-qmjrv                1/1     Running   0             19h
infoscale-sds-1234-e383247e62b56585-2xxvh           1/1     Running   1             19h
infoscale-sds-1234-e383247e62b56585-cnvkg           1/1     Running   0             18h
infoscale-sds-1234-e383247e62b56585-l5z7m           1/1     Running   0             19h
infoscale-sds-1234-e383247e62b56585-xlkf8           1/1     Running   0             18h
infoscale-sds-1234-e383247e62b56585-zkpgt           1/1     Running   0             19h
infoscale-sds-operator-bb55cfc4d-pclt5              1/1     Running   0             18h
infoscale-licensing-operator-5fd897f68f-7p2f7       1/1     Running   0             18h
nfd-controller-manager-6bbf6df4d9-dbxgl             2/2     Running   2             20h
nfd-master-2h7x6                                    1/1     Running   0             19h
nfd-master-kclkq                                    1/1     Running   0             19h
nfd-master-npjzm                                    1/1     Running   0             19h
nfd-worker-8q4lz                                    1/1     Running   0             19h
nfd-worker-cvkqp                                    1/1     Running   0             19h
nfd-worker-js7tt                                    1/1     Running   1             19h
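Rather than rereading this listing while pods come up, the STATUS column can be filtered. The helper below is a sketch; the column positions match the `oc get pods -n <Namespace> --no-headers` output.

```shell
#!/bin/sh
# Print pods whose STATUS (column 3 of the no-headers listing) is not
# "Running". Newly created pods may show ContainerCreating for a
# while; rerun until the output is empty.
pods_not_running() {
    awk '$3 != "Running" { print $1 }'
}

# Usage:
#   oc get pods -n <Namespace> --no-headers | pods_not_running
```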
- Run the following command on the bastion node to verify that the cluster is 'Running'.
oc get infoscalecluster -A
See the State in the output similar to the following:
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE     AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   Running   25h
- Run the following command on the bastion node to verify whether the cluster is 'Healthy'.
oc describe infoscalecluster <Name of the cluster> -n <Namespace>
Check the Cluster State in the output similar to the following:
Status:
  Cluster Name: <Name of the cluster>
  Cluster Nodes:
    Node Name: worker-node-1
    Role: Joined,Master
    Node Name: worker-node-2
    Role: Joined,Slave
    Node Name: worker-node-3
    Role: Joined,Slave
  Cluster State: Healthy
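To wait for the Healthy state instead of rechecking manually, the Cluster State field can be extracted and polled. The loop below is a sketch of our own, not product tooling; it assumes the field layout of the describe output shown above.

```shell
#!/bin/sh
# Extract the value of the "Cluster State" field from the output of:
#   oc describe infoscalecluster <Name of the cluster> -n <Namespace>
cluster_state() {
    awk -F': *' '/^ *Cluster State:/ { print $2; exit }'
}

# Illustrative poll: recheck every 30 seconds until the cluster is Healthy.
#   until oc describe infoscalecluster <Name of the cluster> -n <Namespace> \
#         | cluster_state | grep -qx Healthy; do
#       sleep 30
#   done
```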