Arctera InfoScale™ for Kubernetes 8.0.400 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Arctera InfoScale on Kubernetes
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Installing Arctera InfoScale on RKE2
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
Additional prerequisites for Azure Red Hat OpenShift (ARO)
In an Azure Red Hat OpenShift (ARO) environment, for InfoScale to create and manage PVCs, disks must be assigned to the worker nodes so that an InfoScale cluster can be created. Perform the following steps.
Copy the following content into storage-provisioning.yaml for every cluster on which you want to install.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: storage
  labels:
    app: infoscale
spec:
  podManagementPolicy: "Parallel"
  serviceName: "provisioner"
  replicas: <number of nodes>
  selector:
    matchLabels:
      app: infoscale
  template:
    metadata:
      labels:
        app: infoscale
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - infoscale
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - infoscale
      containers:
      - image: busybox
        name: busybox
        command: ['sh', '-c', '--']
        args: ["while true; do sleep 300; done;"]
        volumeDevices:
        - devicePath: "/var/"
          name: infoscale-provision-pvc
  volumeClaimTemplates:
  - metadata:
      name: infoscale-provision-pvc
      namespace: <InfoScale namespace>
    spec:
      storageClassName: <Name of the storage class>
      volumeMode: Block
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: <Size of the disk you want to configure under InfoScale>
Run oc apply -f storage-provisioning.yaml to apply the file for all worker nodes.
To verify whether the PVC creation is successful, run oc get pvc -n infoscale-vtas on the bastion node.
To verify whether the selector is being used, run oc get statefulset -n <InfoScale namespace> on the bastion node.
Run oc get po -n infoscale-vtas on the bastion node. Output similar to the following indicates that the pods are created successfully.

NAMESPACE   NAME        READY   STATUS    RESTARTS   AGE
default     storage-0   1/1     Running   0          22h
default     storage-1   1/1     Running   0          22h
default     storage-2   1/1     Running   0          22h
default     storage-3   1/1     Running   0          22h
default     storage-4   1/1     Running   0          22h
Be ready with the temporary storage path on each node of the cluster. Ensure that you specify this path as excludeDevice while configuring cr.yaml. If you do not specify this path, the temporary storage is consumed. Because this storage is not persistent, it might lead to data loss. Ensure that you do not power off the Azure Red Hat OpenShift (ARO) cluster after provisioning storage.
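As an illustration only, an excludeDevice entry in cr.yaml might look like the following sketch. The device path /dev/sdb is a hypothetical placeholder for a node's temporary storage path, and the exact cr.yaml schema depends on your InfoScale release; consult the "Configuring InfoScale" section for the authoritative structure.

```yaml
# Illustrative fragment only -- not a complete cr.yaml.
# /dev/sdb below is a hypothetical example of the temporary (ephemeral)
# storage device on each node; substitute the actual path for your nodes.
spec:
  excludeDevice:
    - /dev/sdb
```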
Note:
Auto scaling is not supported on Azure Red Hat OpenShift (ARO) clusters.