Veritas InfoScale™ for Kubernetes Environments 8.0.100 - Linux
Last Published:
2022-07-11
Product(s):
InfoScale & Storage Foundation (8.0.100)
Additional Prerequisites for Azure Red Hat OpenShift (ARO)
In an Azure Red Hat OpenShift (ARO) environment, before InfoScale can create and manage PVCs, disks must be assigned to the worker nodes so that an InfoScale cluster can be created. Perform the following steps.
Copy the following content into infoscale-pvc-init.yaml. Create one such file for each worker node.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <Name of the PVC>
  namespace: infoscale-vtas
spec:
  storageClassName: managed-premium
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <Size of the disk you want to configure under InfoScale>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <Name of the deployment>
  labels:
    app: infoscale
spec:
  selector:
    matchLabels:
      app: infoscale
  template:
    metadata:
      labels:
        app: infoscale
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - <Name of the worker node on which you are applying the YAML>
      containers:
        - image: busybox
          name: busybox
          command: ['sh', '-c', '--']
          args: ["while true; do sleep 300; done;"]
          volumeDevices:
            - devicePath: "/var/"
              name: <Volume name>
      volumes:
        - name: <Volume name>
          persistentVolumeClaim:
            claimName: <Name of the PVC>
---
Run oc apply -f infoscale-pvc-init.yaml to apply the file for every worker node.
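The generate-and-apply steps above can be sketched as a small shell loop. Everything specific in this sketch is an assumption for illustration only: the template file name, the PVC_NAME and NODE_NAME placeholder tokens, and the example worker-node names are not part of the product; only oc apply -f comes from the procedure itself.

```shell
#!/bin/sh
# Illustrative sketch: stamp out one infoscale-pvc-init manifest per worker
# node and apply it. PVC_NAME and NODE_NAME are hypothetical placeholder
# tokens; in practice, start from the full PVC + Deployment manifest shown
# above and replace each <...> field with such a token.
cat > infoscale-pvc-init.template.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: PVC_NAME
  namespace: infoscale-vtas
spec:
  storageClassName: managed-premium
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 33Gi
EOF

i=1
for node in worker-0 worker-1 worker-2; do   # example node names only
  sed -e "s/PVC_NAME/infoscale-pvc-${i}/g" \
      -e "s/NODE_NAME/${node}/g" \
      infoscale-pvc-init.template.yaml > "infoscale-pvc-init-${node}.yaml"
  # Apply only when the oc CLI is actually present on this host.
  if command -v oc >/dev/null 2>&1; then
    oc apply -f "infoscale-pvc-init-${node}.yaml"
  fi
  i=$((i + 1))
done
```

On a live cluster, the worker-node list could instead come from oc get nodes -l node-role.kubernetes.io/worker -o name.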
To verify whether the PVC creation is successful, run oc get pvc -n infoscale-vtas on the bastion node.
An output similar to the following indicates a successful creation.
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
infoscale-pvc-1   Bound    pvc-8114beda-56ac-4559-9753-888dcdacae89   33Gi       RWO            managed-premium   3d19h
infoscale-pvc-2   Bound    pvc-5de16eca-b5a7-4f7d-a02c-1cc68ea50f1e   33Gi       RWO            managed-premium   3d19h
infoscale-pvc-3   Bound    pvc-687ab26c-a0b8-4628-8644-93a779d80552   33Gi       RWO            managed-premium   3d19h
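Rather than eyeballing the STATUS column, a short script can scan it mechanically. The sample lines below merely stand in for real oc get pvc -n infoscale-vtas --no-headers output, and the awk filter is an illustration, not a documented tool.

```shell
#!/bin/sh
# Illustrative check: report any PVC whose STATUS column is not "Bound".
# The sample file mimics `oc get pvc -n infoscale-vtas --no-headers` output.
cat > pvc-status.txt <<'EOF'
infoscale-pvc-1   Bound   pvc-8114beda-56ac-4559-9753-888dcdacae89   33Gi   RWO   managed-premium   3d19h
infoscale-pvc-2   Bound   pvc-5de16eca-b5a7-4f7d-a02c-1cc68ea50f1e   33Gi   RWO   managed-premium   3d19h
infoscale-pvc-3   Bound   pvc-687ab26c-a0b8-4628-8644-93a779d80552   33Gi   RWO   managed-premium   3d19h
EOF
# Print every non-Bound PVC and exit nonzero if any is found.
awk '$2 != "Bound" { print "Not bound: " $1; bad = 1 } END { exit bad }' pvc-status.txt \
  && echo "All PVCs are Bound"
```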
To verify deployment, run oc get deployment --selector app=infoscale on the bastion node.
Review output similar to the following.
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
infoscale-init-1   1/1     1            1           3d19h
infoscale-init-2   1/1     1            1           3d19h
infoscale-init-3   1/1     1            1           3d19h
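The READY column of the deployment listing can be checked the same way. The sample lines mimic oc get deployment --selector app=infoscale --no-headers output; the filter itself is only a sketch.

```shell
#!/bin/sh
# Illustrative check: flag any deployment whose READY column is not 1/1.
# Sample data mimics `oc get deployment --selector app=infoscale --no-headers`.
cat > deploy-status.txt <<'EOF'
infoscale-init-1   1/1   1   1   3d19h
infoscale-init-2   1/1   1   1   3d19h
infoscale-init-3   1/1   1   1   3d19h
EOF
# Print every not-ready deployment and exit nonzero if any is found.
awk '$2 != "1/1" { print "Not ready: " $1; bad = 1 } END { exit bad }' deploy-status.txt \
  && echo "All InfoScale init deployments are ready"
```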