Adding nodes to an existing cluster
Complete the following steps to add nodes to an existing InfoScale cluster:
- Ensure that you add the worker nodes to the Kubernetes cluster.
- Run the following command on the master node to check whether the newly added node is Ready.
kubectl get nodes
Review output similar to the following:
NAME            STATUS   ROLES                  AGE    VERSION
worker-node-1   Ready    control-plane,master   222d   v1.21.0
worker-node-2   Ready    worker                 222d   v1.21.0
worker-node-3   Ready    worker                 222d   v1.21.0
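To check only the newly added node, or to wait until it reports Ready, you can also use standard kubectl options. The node name worker-node-3 below is the example node used in this procedure; substitute your own node name.
kubectl get nodes worker-node-3
kubectl wait --for=condition=Ready node/worker-node-3 --timeout=300s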
- To add new nodes to an existing cluster, the cluster must be in a running state. Run the following command on the master node to verify.
kubectl get infoscalecluster -A
See the State in the output, similar to the following:
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE     AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   Running   25h
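If you prefer to wait for the Running state from a script, a simple polling loop over the same command is one option. This is only a sketch; the 10-second interval is arbitrary and the grep pattern matches the STATE column shown above.
until kubectl get infoscalecluster -A | grep -q Running; do sleep 10; done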
- Edit the clusterInfo section of the sample /YAML/Kubernetes/cr.yaml to add information about the new nodes. In this example, worker-node-1 and worker-node-2 already exist; worker-node-3 is being added.
Note:
If you specify IP addresses, the number of IP addresses for the new nodes must be the same as the number of IP addresses for the existing nodes.
metadata:
  name: <Assign a name to this cluster>
  namespace: <The namespace where you want to create this cluster>
spec:
  clusterID: <Optional - Enter an ID for this cluster. The ID can be any number between 1 and 65535>
  isSharedStorage: true
  clusterInfo:
    - nodeName: <Name of the first node>
      ip:
        - <Optional - First IP address of the first node>
        - <Optional - Second IP address of the first node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group>
    - nodeName: <Name of the second node>
      ip:
        - <Optional - First IP address of the second node>
        - <Optional - Second IP address of the second node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group>
    - nodeName: <Name of the third node>
      ip:
        - <Optional - First IP address of the third node>
        - <Optional - Second IP address of the third node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group>
    # ... YOU CAN ADD UP TO 16 NODES.
  enableScsi3pr: <Enter true to enable SCSI3 persistent reservation>
  fencingDevice: ["<Hardware path to the first fencing device>",
                  "<Hardware path to the second fencing device>",
                  "<Hardware path to the third fencing device>"]
  encrypted: false
  sameEnckey: false
  customImageRegistry: <Custom registry name / <IP address of the custom registry>:<port number>>
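For reference, a filled-in clusterInfo section for this example might look like the following. The IP addresses and the device path are placeholder values for illustration only; note that the new node (worker-node-3) specifies the same number of IP addresses as the existing nodes.
clusterInfo:
  - nodeName: worker-node-1
    ip:
      - 192.168.10.11   # example IP address only
      - 192.168.20.11   # example IP address only
  - nodeName: worker-node-2
    ip:
      - 192.168.10.12   # example IP address only
      - 192.168.20.12   # example IP address only
  - nodeName: worker-node-3   # the node being added
    ip:
      - 192.168.10.13   # example IP address only
      - 192.168.20.13   # example IP address only
    excludeDevice:
      - /dev/sdx        # example device path only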
- Run the following command on the master node to initiate the add-node workflow.
kubectl apply -f /YAML/Kubernetes/cr.yaml
- While node addition is in progress, you can run the following commands on the master node to monitor progress.
a. kubectl get infoscalecluster -A
See the State in the output, as shown below. ProcessingAddNode indicates that the node is being added.
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE               AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   ProcessingAddNode   25h
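Instead of re-running the command, you can also watch the state change in place with the standard --watch option:
kubectl get infoscalecluster -A --watch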
b. kubectl describe infoscalecluster -n <Namespace>
Output similar to the following indicates the cluster status during node addition. The cluster state is Degraded while node addition is in progress.
Cluster Name: infoscalecluster-dev
Cluster Nodes:
  Exclude Device:
    <Excluded device path 1>
    <Excluded device path 2>
  Node Name: worker-node-1
  Role:      Joined,Master
  Node Name: worker-node-2
  Role:      Joined,Slave
  Node Name: worker-node-3
  Role:      Out of Cluster
Cluster State: Degraded
enableScsi3pr: false
Images:
  Csi:
    Csi External Attacher Container: csi-attacher:v3.1.0
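If the cluster remains Degraded for longer than expected, the recent events in the InfoScale namespace often indicate why. This uses only standard kubectl options; infoscale-vtas is the namespace used in the pod listing later in this procedure.
kubectl get events -n infoscale-vtas --sort-by=.lastTimestamp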
- Run the following command on the master node to verify whether the pods are created successfully. It may take some time for the pods to be created.
kubectl get pods -n infoscale-vtas
Output similar to the following indicates a successful creation.
NAME                                                  READY   STATUS    RESTARTS   AGE
infoscale-csi-controller-35359-0                      5/5     Running   0          12d
infoscale-csi-node-35359-7rjv9                        2/2     Running   0          3d20h
infoscale-csi-node-35359-dlrxh                        2/2     Running   0          4d21h
infoscale-csi-node-35359-dmxwq                        2/2     Running   0          12d
infoscale-csi-node-35359-j9x7v                        2/2     Running   0          12d
infoscale-csi-node-35359-w6wf2                        2/2     Running   0          3d20h
infoscale-fencing-controller-35359-6cc6cd7b4d-l7jtc   1/1     Running   0          3d21h
infoscale-fencing-enabler-35359-9gkb4                 1/1     Running   0          12d
infoscale-fencing-enabler-35359-gwn7w                 1/1     Running   0          3d20h
infoscale-fencing-enabler-35359-jrf2l                 1/1     Running   0          12d
infoscale-fencing-enabler-35359-qhzdt                 1/1     Running   1          3d20h
infoscale-fencing-enabler-35359-zqdvj                 1/1     Running   1          4d21h
infoscale-sds-35359-ed05b7abb28053ad-7svqz            1/1     Running   0          13d
infoscale-sds-35359-ed05b7abb28053ad-c272q            1/1     Running   0          13d
infoscale-sds-35359-ed05b7abb28053ad-g4rbj            1/1     Running   0          4d21h
infoscale-sds-35359-ed05b7abb28053ad-hgf6h            1/1     Running   0          3d20h
infoscale-sds-35359-ed05b7abb28053ad-wk5ph            1/1     Running   0          3d20h
infoscale-sds-operator-7fb7cd57c-rskms                1/1     Running   0          3d20h
infoscale-licensing-operator-756c854fdb-xvdnr         1/1     Running   0          13d
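To confirm that the expected pods are scheduled on the newly added node, you can filter the same listing by node name. The node name worker-node-3 is the example node used in this procedure.
kubectl get pods -n infoscale-vtas -o wide --field-selector spec.nodeName=worker-node-3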
- Run the following command on the master node to verify that the cluster is in the Running state.
kubectl get infoscalecluster -A
See the State in the output, similar to the following:
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE     AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   Running   25h
- Run the following command on the master node to verify whether the cluster is in the Healthy state.
kubectl describe infoscalecluster <Cluster Name> -n <Namespace>
Check the Cluster State in the output, similar to the following:
Status:
  Cluster Name: <Cluster Name>
  Cluster Nodes:
    Node Name: worker-node-1
    Role:      Joined,Master
    Node Name: worker-node-2
    Role:      Joined,Slave
    Node Name: worker-node-3
    Role:      Joined,Slave
  Cluster State: Healthy
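If you only need the cluster state, for example in a script, you can filter the describe output; the grep pattern below simply matches the Cluster State line shown above.
kubectl describe infoscalecluster <Cluster Name> -n <Namespace> | grep "Cluster State"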