Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
CSI plugin deployment
CSI (Container Storage Interface) is a standardized mechanism for Container Orchestrators (COs) to expose arbitrary storage systems to their containerized workloads. The InfoScale CSI plugin provides persistent storage to OpenShift or Kubernetes. InfoScale CSI supports creation of storage classes for high availability, performance, and capacity, and also supports online expansion of capacity as well as snapshot and clone functionality.
InfoScale CSI is automatically deployed while installing InfoScale on OpenShift or Kubernetes.
After you download, unzip, and untar YAML_8.0.200.tar.gz, a folder /YAML/Common-CSI-yamls is automatically created. Within /YAML/Common-CSI-yamls, the following subfolders are created and the files listed are saved.
Note:
The commands listed in this chapter are applicable to OpenShift. If you are on Kubernetes, replace oc with kubectl.
-- dynamic-provisioning
   -- csi-dynamic-pvc.yaml
   -- csi-dynamic-snapshot-restore.yaml
   -- csi-dynamic-snapshot.yaml
   -- csi-dynamic-volume-clone.yaml
   -- csi-pod.yaml
   -- csi-dynamic-block-pvc.yaml
   -- csi-block-pod.yaml
-- snapshot-class-templates
   -- csi-infoscale-snapclass.yaml
-- static-provisioning
   -- csi-pod.yaml
   -- csi-static-pvc.yaml
   -- csi-static-pv.yaml
   -- csi-static-snapshot-content.yaml
   -- csi-static-snapshot.yaml
   -- csi-static-block-pv.yaml
   -- csi-static-block-pvc.yaml
   -- csi-static-block-pod.yaml
-- storage-class-templates
   -- csi-infoscale-performance-sc.yaml
   -- csi-infoscale-resiliency-sc.yaml
   -- csi-infoscale-sc.yaml
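For orientation, a minimal StorageClass for the InfoScale CSI driver might look like the following sketch. The provisioner value and the class name used here are assumptions for illustration; treat the shipped csi-infoscale-sc.yaml in the storage-class-templates folder as the authoritative template.

```yaml
# Illustrative sketch only -- verify all values against the shipped
# csi-infoscale-sc.yaml template.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-infoscale-sc          # assumed name, for illustration
provisioner: org.veritas.infoscale  # assumed driver name; check the shipped YAML
reclaimPolicy: Delete
allowVolumeExpansion: true        # InfoScale CSI supports online capacity expansion
```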
After CSI deployment is complete, you can create YAML files specific to your requirements and use these for:
Dynamic provisioning of volumes
Static provisioning of volumes
Snapshot provisioning (Creating volume snapshots)
Creating volume clones
Enabling raw block volume support
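As an illustration of the snapshot use case above, a VolumeSnapshot request can be sketched as follows. The snapshot class and claim names are hypothetical; csi-infoscale-snapclass.yaml in the snapshot-class-templates folder is the shipped template for the snapshot class.

```yaml
# Illustrative sketch only -- names are hypothetical; see the shipped
# csi-dynamic-snapshot.yaml and csi-infoscale-snapclass.yaml templates.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot                              # hypothetical name
spec:
  volumeSnapshotClassName: csi-infoscale-snapclass # assumed class name
  source:
    persistentVolumeClaimName: csi-infoscale-pvc   # hypothetical existing PVC
```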
InfoScale CSI supports static and dynamic provisioning of volumes on shared storage as well as shared nothing storage (Flexible Storage Sharing, FSS).
Note:
Only one disk group, vrts_kube_dg_<cluster id>, is supported for all CSI operations, and the same disk group is used throughout the CSI plugin lifecycle. The disk group is created automatically during cluster creation by using disks that are not under any other file system or logical volume manager.
An application container requests the required storage through a Persistent Volume Claim (PVC). The PVC uses the storage class to identify and provision a Persistent Volume that belongs to that storage class. After the volume is created, a Persistent Volume object is created and bound to the PVC, and persistent storage is made available to the application.
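The claim-and-bind flow described above can be sketched as a PVC and a pod that consumes it. All names here are hypothetical, and the storage class name is assumed; the shipped csi-dynamic-pvc.yaml and csi-pod.yaml in the dynamic-provisioning folder are the authoritative templates.

```yaml
# Illustrative sketch only -- compare with the shipped csi-dynamic-pvc.yaml
# and csi-pod.yaml templates.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-infoscale-pvc             # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-infoscale-sc  # assumed InfoScale storage class name
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                      # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: csi-infoscale-pvc  # binds the pod to the claim above
```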
While provisioning volumes, the InfoScale CSI plugin supports the following access modes that determine how the volumes can be mounted:
ReadWriteOnce (RWO) -- the volume can be mounted as read-write by a single node.
ReadOnlyMany (ROX) -- the volume can be mounted read-only by many nodes.
ReadWriteMany (RWX) -- the volume can be mounted as read-write by many nodes.
Note:
The access mode in a Persistent Volume Claim is enforced per node, not per pod. For example, a PVC with RWO mode does not prevent mounting the same volume in more than one pod on the same node.