Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
Static provisioning
Static provisioning lets cluster administrators make existing persistent storage objects available to a cluster. You can statically provision a volume over shared (CVM) storage or shared nothing (FSS) storage. To use static provisioning, you must know the details of the storage object, its supported configurations, and its mount options. To make existing storage available to a cluster user, you must manually create a Persistent Volume and a Persistent Volume Claim before referencing the storage in a pod.
Note:
If you want to use file system-based persistent volumes, ensure that the Veritas File System exists before provisioning the volumes. If the Veritas File System does not exist, you must create it manually by running the mkfs command from the InfoScale SDS pod.
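The note above can be sketched as a command. This is an illustrative sketch, not output from the product documentation: the namespace, pod, disk group, and volume names are placeholders borrowed from the examples later in this section, and the `/dev/vx/rdsk/<diskgroup>/<volume>` device path follows the standard VxVM naming convention. Substitute your own values.

```shell
# Placeholders -- substitute values from your own cluster.
NS=infoscale-vtas                   # InfoScale namespace
SDS_POD=infoscale-sds-rhel8-bwvwb   # an InfoScale SDS pod
DG=vrts_kube_dg                     # disk group
VOL=testVol                         # pre-provisioned VxVM volume

# VxVM exposes the raw device as /dev/vx/rdsk/<diskgroup>/<volume>.
DEV="/dev/vx/rdsk/$DG/$VOL"

# Create a VxFS file system on the volume from inside the SDS pod
# (use kubectl instead of oc on Kubernetes).
oc exec -ti -n "$NS" "$SDS_POD" -- mkfs -t vxfs "$DEV"
```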
Creating a statically provisioned volume
- Create a Storage Class by using the csi-infoscale-sc.yaml file, which is as under.
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-infoscale-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: org.veritas.infoscale
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  fstype: vxfs
  # (optional) Specifies a volume layout type.
  # Supported layouts: stripe, mirror, stripe-mirror, mirror-stripe,
  # concat, concat-mirror, mirror-concat
  # If omitted, InfoScale internally chooses the best suited layout
  # based on the environment.
  # layout: "mirror"
  # (optional) Specifies the number of disk or host failures a storage
  # object can tolerate.
  # faultTolerance: "1"
  # (optional) Specifies the number of stripe columns to use when
  # creating a striped volume.
  # nstripe: "3"
  # (optional) Specifies the stripe unit size to use for striped volume.
  # stripeUnit: "64k"
  # (optional) Selects disks with the specified media type.
  # All disks with the given media type are selected for volume creation.
  # Supported values: hdd, ssd
  # mediaType: "hdd"
  # (optional) Specifies whether to store encrypted data on disks or not.
  # Valid values are true or false.
  # encryption: "false"
  # (optional) Specifies how to initialize a new volume.
  # Valid values are "active", "zero", and "sync".
  # initType: "active"
Note:
The supported initType values are "sync", "active", or "zero".
Run oc/kubectl create -f csi-infoscale-sc.yaml
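After the create command succeeds, you can confirm that the Storage Class is registered. This is a generic Kubernetes verification sketch, not a step from the product documentation; the output columns vary by cluster version.

```shell
# List the Storage Class created above (use kubectl on Kubernetes).
SC_NAME=csi-infoscale-sc
oc get sc "$SC_NAME"
# The PROVISIONER column should show org.veritas.infoscale.
```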
- You must be ready with the VxVM volume name to define the Persistent Volume object.
Run oc/kubectl exec -ti -n <namespace> <InfoScale SDS> -- <cmd> to list Volumes from the InfoScale SDS pod.
An example of this command is oc/kubectl exec -ti -n infoscale-vtas infoscale-sds-rhel8-bwvwb -- vxprint -g vrts_kube_dg -vuh | grep -w fsgen
- In the csi-static-pv-ocp.yaml or csi-static-pv-k8s.yaml, define the Persistent Volume object and specify the existing VxVM volume name in the volumeHandle attribute.
csi-static-pv-<ocp/k8s>.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-infoscale-pv
  annotations:
    pv.kubernetes.io/provisioned-by: org.veritas.infoscale
spec:
  storageClassName: csi-infoscale-sc
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: org.veritas.infoscale
    # Provide the pre-provisioned InfoScale volume name.
    volumeHandle: clust_<cluster_id>/vrts_kube_dg-<cluster_id>/testVol
    fsType: vxfs
Note:
Here, <cluster_id> is the cluster ID that you specify while configuring cr.yaml.
- Add the following to csi-static-pv-ocp.yaml or csi-static-pv-k8s.yaml for node affinity.
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: topology.org.veritas.infoscale/cluster
            operator: In
            values:
              - <Name of the InfoScale cluster>
Note:
You can skip this step and instead add nodeSelector: topology.org.veritas.infoscale/cluster: "new1" to csi-pod.yaml.
- Create a Persistent Volume by using the yaml.
oc create -f csi-static-pv-ocp.yaml
or
kubectl create -f csi-static-pv-k8s.yaml
- Define the Persistent Volume Claim (PVC) with the appropriate access mode and storage capacity. The requested storage must not exceed the capacity of the Persistent Volume (2Gi in this example); otherwise the claim cannot bind.
csi-static-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-infoscale-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-infoscale-sc
- Create a Persistent Volume Claim by using the yaml. This PVC automatically gets bound with the newly created PV.
oc/kubectl create -f csi-static-pvc.yaml
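You can verify the binding described in the previous step with a standard Kubernetes query. This sketch is not from the product documentation; it assumes the PV and PVC names used in the examples above.

```shell
# Confirm that the claim is bound to the statically created PV
# (use kubectl on Kubernetes).
PVC=csi-infoscale-pvc
oc get pvc "$PVC"
# The STATUS column should show Bound, and the VOLUME column
# should show csi-infoscale-pv.
```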
- Update the application yaml file (mysql-deployment.yaml) and specify the Persistent Volume Claim name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: csi-infoscale-pvc
- Create the application pod.
oc/kubectl create -f mysql-deployment.yaml
- Check that the existing data is available on the persistent volume. Run the following commands:
oc/kubectl get pods | grep mysql
oc/kubectl exec -it mysql-deployment-<id> -- mysql -uroot -proot123
Enabling raw block support with static provisioning
- Run oc/kubectl exec -ti -n <namespace> <InfoScale SDS> -- <cmd> to list volumes from the InfoScale SDS pod. Note the names of the volumes that you want to use.
- Update volumeHandle in csi-static-block-pv-<ocp/k8s>.yaml as under.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-infoscale-block-pv
  annotations:
    pv.kubernetes.io/provisioned-by: org.veritas.infoscale
spec:
  volumeMode: Block
  storageClassName: csi-infoscale-sc
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: org.veritas.infoscale
    # Provide the pre-provisioned InfoScale volume name.
    volumeHandle: clust_<cluster_id>/vrts_kube_dg-<cluster_id>/testVol
    volumeAttributes:
      volumePath: "/dev/blk"
Note:
Here, <cluster_id> is the cluster ID that you specify while configuring cr.yaml.
- Run oc create -f csi-static-block-pv-ocp.yaml or kubectl create -f csi-static-block-pv-k8s.yaml to apply the yaml.
- Define the Persistent Volume Claim (PVC) with the appropriate access mode and storage capacity in csi-static-block-pvc.yaml as under.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-infoscale-block-pvc
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-infoscale-sc
- Add the following to csi-static-block-pv-<ocp/k8s>.yaml for node affinity.
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: topology.org.veritas.infoscale/cluster
            operator: In
            values:
              - <Name of the InfoScale cluster>
Note:
You can skip this step and instead add nodeSelector: topology.org.veritas.infoscale/cluster: "new1" to csi-pod.yaml.
- Run oc/kubectl create -f csi-static-block-pvc.yaml to apply the yaml.
- Update the Persistent Volume Claim in csi-static-block-pod.yaml as under.
---
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: redis
      imagePullPolicy: IfNotPresent
      volumeDevices:
        - devicePath: "/dev/blk"
          name: vol1
  volumes:
    - name: vol1
      persistentVolumeClaim:
        claimName: csi-infoscale-block-pvc
- Run oc create -f csi-static-block-pod.yaml to create the application pod.
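Once the pod is running, you can confirm that the raw block volume is attached at the devicePath from the pod spec. This is a generic verification sketch, not a step from the product documentation; it assumes the pod name and device path used in the example above.

```shell
# Check the raw block device inside the application pod
# (use kubectl on Kubernetes).
POD=redis
DEVPATH=/dev/blk
oc exec -ti "$POD" -- ls -l "$DEVPATH"
# The listing should show a block device ("b" as the first character
# of the mode string), not a regular file or directory.
```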