Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
Configuring cluster
Complete the following steps for each cluster.
Note:
Ensure that you add a Kubernetes node only to a single InfoScale cluster.
Edit the clusterInfo section of the sample /YAML/Kubernetes/cr.yaml for InfoScale specifications as under:
metadata:
  name: <Assign a name to this cluster>
  namespace: <The namespace where you want to create this cluster>
spec:
  clusterID: <Optional - Enter an ID for this cluster. The ID can be any number between 1 and 65535>
  isSharedStorage: true
  clusterInfo:
    - nodeName: <Name of the first node>
      ip:
        - <Optional - First IP address of the first node>
        - <Optional - Second IP address of the first node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
    - nodeName: <Name of the second node>
      ip:
        - <Optional - First IP address of the second node>
        - <Optional - Second IP address of the second node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
    - nodeName: <Name of the third node>
      ip:
        - <Optional - First IP address of the third node>
        - <Optional - Second IP address of the third node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
    # You can add up to 16 nodes.
  enableScsi3pr: <Enter True to enable SCSI3 persistent reservation>
  fencingDevice: ["<Hardware path to the first fencing device>", "<Hardware path to the second fencing device>", "<Hardware path to the third fencing device>"]
  encrypted: false
  sameEnckey: false
  customImageRegistry: <Custom registry name / <IP address of the custom registry>:<port number>>
Note:
Do not enclose parameter values in angle brackets (<>). For example, if Primarynode is the name of the first node, then for nodeName: <Name of the first node>, enter nodeName: Primarynode. InfoScale on Kubernetes is a keyless deployment.
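For reference, a clusterInfo section with the placeholder values filled in might look like the following sketch. The cluster name, namespace, cluster ID, node names, IP addresses, and the excluded device path shown here are hypothetical; replace them with the values for your own cluster.
# Hypothetical example values only; substitute the values for your environment.
metadata:
  name: infoscale-cluster1
  namespace: infoscale-test
spec:
  clusterID: 100
  isSharedStorage: true
  clusterInfo:
    - nodeName: worker-node1
      ip:
        - 10.20.30.41
      excludeDevice:
        - /dev/sdb
    - nodeName: worker-node2
      ip:
        - 10.20.30.42
    - nodeName: worker-node3
      ip:
        - 10.20.30.43
  encrypted: false
  sameEnckey: false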
You can choose to rename cr.yaml. If you rename the file, ensure that you use that name in the next step.
Note:
Veritas recommends renaming cr.yaml and maintaining a custom resource for each cluster. The renamed cr.yaml is used to add more nodes to that InfoScale cluster.
Run the following command on the master node.
kubectl create -f /YAML/Kubernetes/cr.yaml
Run the following command on the master node to find the name and namespace of the cluster.
kubectl get infoscalecluster -A
Use the namespace from an output similar to the following:
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE     AGE
.
.
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   Running   25h
.
.
Run the following command on the master node to verify whether the pods are created successfully.
kubectl get pods -n infoscale-vtas
An output similar to the following indicates a successful creation of the pods:
NAME                                                  READY   STATUS    RESTARTS   AGE
infoscale-csi-controller-35359-0                      5/5     Running   0          12d
infoscale-csi-node-35359-7rjv9                        2/2     Running   0          3d20h
infoscale-csi-node-35359-dlrxh                        2/2     Running   0          4d21h
infoscale-csi-node-35359-dmxwq                        2/2     Running   0          12d
infoscale-csi-node-35359-j9x7v                        2/2     Running   0          12d
infoscale-csi-node-35359-w6wf2                        2/2     Running   0          3d20h
infoscale-fencing-controller-35359-6cc6cd7b4d-l7jtc   1/1     Running   0          3d21h
infoscale-fencing-enabler-35359-9gkb4                 1/1     Running   0          12d
infoscale-fencing-enabler-35359-gwn7w                 1/1     Running   0          3d20h
infoscale-fencing-enabler-35359-jrf2l                 1/1     Running   0          12d
infoscale-fencing-enabler-35359-qhzdt                 1/1     Running   1          3d20h
infoscale-fencing-enabler-35359-zqdvj                 1/1     Running   1          4d21h
infoscale-sds-35359-ed05b7abb28053ad-7svqz            1/1     Running   0          13d
infoscale-sds-35359-ed05b7abb28053ad-c272q            1/1     Running   0          13d
infoscale-sds-35359-ed05b7abb28053ad-g4rbj            1/1     Running   0          4d21h
infoscale-sds-35359-ed05b7abb28053ad-hgf6h            1/1     Running   0          3d20h
infoscale-sds-35359-ed05b7abb28053ad-wk5ph            1/1     Running   0          3d20h
infoscale-sds-operator-7fb7cd57c-rskms                1/1     Running   0          3d20h
infoscale-licensing-operator-756c854fdb-xvdnr         1/1     Running   0          13d
InfoScale SDS pods are created in the namespace that you specify in cr.yaml. Fencing and CSI pods are created in the operator namespace.
After a successful InfoScale deployment, a disk group is automatically created. You can now create Persistent Volumes/Persistent Volume Claims (PV/PVC) by using the corresponding storage class.
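For illustration only, a PVC that requests storage from InfoScale might look like the following sketch. The storage class name (csi-infoscale-sc), the PVC name, the namespace, and the requested size are assumptions for this example; use the storage class that is defined in your deployment.
# pvc.yaml - hypothetical names and size; replace with values from your environment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  namespace: infoscale-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-infoscale-sc
You can then create the claim with kubectl create -f pvc.yaml and verify that it is bound with kubectl get pvc -n infoscale-test.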