Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Migrating applications to InfoScale
- Troubleshooting
Configuring cluster
Complete the following steps for each cluster.
Note:
Ensure that you add an OpenShift node only to a single InfoScale cluster.
Edit the clusterInfo section of the sample
/YAML/OpenShift/cr.yaml
to specify the InfoScale cluster parameters, as shown below:
metadata:
  name: <Assign a name to this cluster>
  namespace: <The namespace where you want to create this cluster>
spec:
  clusterID: <Optional - Enter an ID for this cluster. The ID can be any number between 1 and 65535>
  isSharedStorage: true
  clusterInfo:
    - nodeName: <Name of the first node>
      ip:
        - <Optional - First IP address of the first node>
        - <Optional - Second IP address of the first node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
    - nodeName: <Name of the second node>
      ip:
        - <Optional - First IP address of the second node>
        - <Optional - Second IP address of the second node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
    - nodeName: <Name of the third node>
      ip:
        - <Optional - First IP address of the third node>
        - <Optional - Second IP address of the third node>
      excludeDevice:
        - <Optional - Device path of the disk on the node that you want to exclude from the InfoScale disk group.>
    .
    .
    .
    (You can add up to 16 nodes.)
  fencingDevice: ["<Hardware path to the first fencing device>",
                  "<Hardware path to the second fencing device>",
                  "<Hardware path to the third fencing device>"]
  enableScsi3pr: <Enter True to enable SCSI3 persistent reservation>
  encrypted: false
  sameEnckey: false
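As an illustration, a minimal filled-in clusterInfo section might look like the following. The cluster name, namespace, node names, and IP addresses here are hypothetical placeholders, not values from your environment; substitute the names and addresses of your own OpenShift nodes, and keep any optional fields you do not need omitted.

```yaml
metadata:
  name: demo-cluster          # hypothetical cluster name
  namespace: infoscale-vtas   # hypothetical namespace
spec:
  clusterID: 100              # optional; any number between 1 and 65535
  isSharedStorage: true
  clusterInfo:
    - nodeName: worker-node-1 # hypothetical node name
      ip:
        - 10.20.30.41         # hypothetical IP address
    - nodeName: worker-node-2
      ip:
        - 10.20.30.42
    - nodeName: worker-node-3
      ip:
        - 10.20.30.43
  encrypted: false
  sameEnckey: false
```

Note that the values are written without angle brackets, as described in the note that follows.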
Note:
Do not enclose parameter values in angle brackets (<>). For example, if Primarynode is the name of the first node, then for nodeName: <Name of the first node>, enter nodeName: Primarynode.
InfoScale on OpenShift is a keyless deployment.
You can choose to rename
cr.yaml
. If you rename the file, ensure that you use that name in the next step.
Note:
Veritas recommends renaming
cr.yaml
and maintaining a custom resource file for each cluster. The renamed cr.yaml
is used to add more nodes to that InfoScale cluster.
Run the following command on the bastion node.
oc create -f /YAML/OpenShift/cr.yaml
Run the following command on the bastion node to retrieve the name and namespace of the cluster.
oc get infoscalecluster -A
Note the namespace from the output, which is similar to the following.
NAMESPACE     NAME                              VERSION   CLUSTERID      STATE     AGE
<Namespace>   <Name of the InfoScale cluster>   8.0.300   <Cluster ID>   Running   25h
Run the following command on the bastion node to verify whether the pods are created successfully. Run the command for the infoscale-vtas namespace as well.
oc get pods -n <Namespace>
An output similar to the following indicates that the pods were created successfully.
NAME                                                 READY   STATUS    RESTARTS      AGE
infoscale-csi-controller-1234-0                      5/5     Running   0             19h
infoscale-csi-node-1234-gf9pf                        2/2     Running   0             19h
infoscale-csi-node-1234-gg5dq                        2/2     Running   0             18h
infoscale-csi-node-1234-nmt85                        2/2     Running   0             18h
infoscale-csi-node-1234-r6jv8                        2/2     Running   0             19h
infoscale-csi-node-1234-w5bln                        2/2     Running   2             19h
infoscale-fencing-controller-1234-864468775c-4sbxw   1/1     Running   0             18h
infoscale-fencing-enabler-1234-8b65z                 1/1     Running   0             19h
infoscale-fencing-enabler-1234-bkbbh                 1/1     Running   3 (18h ago)   18h
infoscale-fencing-enabler-1234-jvzjk                 1/1     Running   5 (18h ago)   18h
infoscale-fencing-enabler-1234-pxfmt                 1/1     Running   4 (18h ago)   19h
infoscale-fencing-enabler-1234-qmjrv                 1/1     Running   0             19h
infoscale-sds-1234-e383247e62b56585-2xxvh            1/1     Running   1             19h
infoscale-sds-1234-e383247e62b56585-cnvkg            1/1     Running   0             18h
infoscale-sds-1234-e383247e62b56585-l5z7m            1/1     Running   0             19h
infoscale-sds-1234-e383247e62b56585-xlkf8            1/1     Running   0             18h
infoscale-sds-1234-e383247e62b56585-zkpgt            1/1     Running   0             19h
infoscale-sds-operator-bb55cfc4d-pclt5               1/1     Running   0             18h
infoscale-licensing-operator-5fd897f68f-7p2f7        1/1     Running   0             18h
nfd-controller-manager-6bbf6df4d9-dbxgl              2/2     Running   2             20h
nfd-master-2h7x6                                     1/1     Running   0             19h
nfd-master-kclkq                                     1/1     Running   0             19h
nfd-master-npjzm                                     1/1     Running   0             19h
nfd-worker-8q4lz                                     1/1     Running   0             19h
nfd-worker-cvkqp                                     1/1     Running   0             19h
nfd-worker-js7tt                                     1/1     Running   1             19h
InfoScale SDS pods are created in the namespace that you specify in cr.yaml. Fencing and CSI pods are created in the operator namespace.
After a successful InfoScale deployment, a disk group is automatically created. You can now create Persistent Volumes / Persistent Volume Claims (PV/PVC) by using the corresponding storage class.
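As a sketch, a PVC that provisions storage from the automatically created disk group might look like the following. The claim name, namespace, storage class name, and size are hypothetical placeholders; substitute the storage class name defined for your InfoScale deployment.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                  # hypothetical claim name
  namespace: infoscale-vtas       # hypothetical namespace
spec:
  storageClassName: infoscale-storage-class   # placeholder; use your InfoScale storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                # hypothetical size
```

Applying this manifest with oc create -f triggers dynamic provisioning through the InfoScale CSI driver, as described in the "Dynamic provisioning" section.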