Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
  - Installing InfoScale on a system with Internet connectivity
  - Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
  - Prerequisites
  - Tagging the InfoScale images on Kubernetes
  - Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
  - Dynamic provisioning
  - Snapshot provisioning (Creating volume snapshots)
  - Managing InfoScale volume snapshots with Velero
  - Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Additional Prerequisites for Azure Red Hat OpenShift (ARO)
In an Azure Red Hat OpenShift (ARO) environment, for InfoScale to create and manage PVCs, disks must be assigned to the worker nodes before an InfoScale cluster can be created. Perform the following steps.
Copy the following content into storage-provisioning.yaml.

{{- if eq .Values.runArgs.platform "cloud-platform" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: storage
  labels:
    app: infoscale
spec:
  podManagementPolicy: "Parallel"
  serviceName: "provisioner"
  replicas: <number of nodes>
  selector:
    matchLabels:
      app: infoscale
  template:
    metadata:
      labels:
        app: infoscale
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - infoscale
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - infoscale
      containers:
        - image: busybox
          name: busybox
          command: ['sh', '-c', '--']
          args: ["while true; do sleep 300; done;"]
          volumeDevices:
            - devicePath: "/var/"
              name: infoscale-provision-pvc
  volumeClaimTemplates:
    - metadata:
        name: infoscale-provision-pvc
        namespace: infoscale-vtas
      spec:
        storageClassName: <Name of the storage class>
        volumeMode: Block
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <Size of the disk you want to configure under Infoscale>
---
{{- end }}
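Before the manifest is applied, the angle-bracket placeholders must be replaced with values for your environment. The substitution can be sketched with sed as follows; the node count (5), storage class name (managed-premium), and disk size (100Gi) are sample values only, and a short stand-in file is generated here so the sketch is self-contained — in practice you would run only the sed command against the full storage-provisioning.yaml from the previous step.

```shell
# Stand-in for storage-provisioning.yaml, reduced to the lines that
# carry placeholders (illustration only).
cat > storage-provisioning.yaml <<'EOF'
  replicas: <number of nodes>
        storageClassName: <Name of the storage class>
            storage: <Size of the disk you want to configure under Infoscale>
EOF

# Substitute the placeholders in place (sample values; adjust to
# your environment).
sed -i \
  -e 's/<number of nodes>/5/' \
  -e 's/<Name of the storage class>/managed-premium/' \
  -e 's/<Size of the disk you want to configure under Infoscale>/100Gi/' \
  storage-provisioning.yaml

cat storage-provisioning.yaml
```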
Run oc apply -f storage-provisioning.yaml to apply the file for all worker nodes.
To verify whether the PVC creation is successful, run oc get pvc -n infoscale-vtas on the bastion node.
To verify whether the selector is being used, run oc get statefulset -n infoscale-vtas on the bastion node.
Run oc get po -n infoscale-vtas on the bastion node. Output similar to the following indicates that the pods were created successfully.

default   storage-0   1/1   Running   0   22h
default   storage-1   1/1   Running   0   22h
default   storage-2   1/1   Running   0   22h
default   storage-3   1/1   Running   0   22h
default   storage-4   1/1   Running   0   22h
Be ready with the temporary storage path on each node of the cluster. Ensure that you specify this path as excludeDevice while configuring cr.yaml. If you do not specify this path, the temporary storage is consumed. As this storage is not persistent, this might lead to data loss.
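As an illustration, the exclusion might look like the following fragment of cr.yaml. The surrounding structure shown here is a sketch only — consult the sample cr.yaml shipped with InfoScale for the exact schema — and worker-0 and /dev/sdb are hypothetical node and temporary-storage device names.

```yaml
# Illustrative fragment only; verify field names against the sample
# cr.yaml shipped with the product. /dev/sdb stands in for the
# temporary-storage device path on this node.
spec:
  clusterInfo:
    - nodeName: worker-0
      excludeDevice:
        - /dev/sdb
```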