Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    7. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    4. Applying licenses
    5. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    8. Installing InfoScale by using the plugin
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to a new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Static provisioning

Use static provisioning when you want to make existing persistent storage objects available to the cluster. You can statically provision a volume over shared storage (CVM) or shared-nothing (FSS) storage.

Static provisioning lets cluster administrators make existing storage objects available to a cluster. To use static provisioning, you must know the details of the storage object, its supported configurations, and its mount options. To make existing storage available to a cluster user, you must manually create a Persistent Volume and a Persistent Volume Claim before referencing the storage in a pod.

Note:

If you want to use file system persistent volumes, ensure that the Veritas File System (VxFS) is created before you provision the volumes. If the file system does not exist, create it manually by running the mkfs command from the InfoScale driver container.
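
For reference, the following is a minimal sketch of creating a VxFS file system on an existing VxVM volume from a shell in the driver container. The namespace, pod, disk group, and volume names are placeholders based on the examples used elsewhere in this section; substitute the values from your deployment.

    # Open a shell in the InfoScale driver container (pod name is an example).
    oc exec -ti -n infoscale-vtas infoscale-vtas-driver-container-rhel8-bwvwb -- bash

    # Inside the container, create a VxFS file system on the existing VxVM volume.
    # Replace vrts_kube_dg and <volume_name> with your disk group and volume names.
    mkfs -t vxfs /dev/vx/rdsk/vrts_kube_dg/<volume_name>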

Setting up static provisioning

  1. Create a StorageClass by using the csi-infoscale-sc.yaml file, as shown in the following example.
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-infoscale-sc
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: org.veritas.infoscale
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    parameters:
      fstype: vxfs

      # (optional) Specifies a volume layout type.
      # Supported layouts: stripe, mirror, stripe-mirror, mirror-stripe,
      #   concat, concat-mirror, mirror-concat
      # If omitted, InfoScale internally chooses the best suited layout
      #   based on the environment.
      # layout: "mirror"

      # (optional) Specifies the number of disk or host failures a storage
      #   object can tolerate.
      # faultTolerance: "1"

      # (optional) Specifies the number of stripe columns to use when
      #   creating a striped volume.
      # nstripe: "3"

      # (optional) Specifies the stripe unit size to use
      #   for a striped volume.
      # stripeUnit: "64k"

      # (optional) Specifies disks with the specified media type.
      #   All disks with the given mediatype are selected for volume creation.
      # Supported values: hdd, ssd
      # mediaType: "hdd"

      # (optional) Specifies whether to store encrypted data on disks or not.
      # Valid values are true or false
      # encryption: "false"

      # (optional) Specifies how to initialize a new volume.
      # Valid values are "active", "zero" and "sync"
      # initType: "active"

    Note:

    The supported initType values are "sync", "active", or "zero".

    Run oc create -f csi-infoscale-sc.yaml

  2. Keep the name of the existing VxVM volume ready; you need it to define the Persistent Volume object.

    Run oc exec -ti -n <namespace> <driver-container> -- <cmd> to list the volumes from the InfoScale driver container.

    For example: oc exec -ti -n infoscale-vtas infoscale-vtas-driver-container-rhel8-bwvwb -- vxprint -g vrts_kube_dg -vuh | grep -w fsgen

  3. In the csi-static-pv.yaml, define the Persistent Volume object and specify the existing VxVM volume name in the volumeHandle attribute.
    csi-static-pv.yaml
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: csi-infoscale-pv
      annotations:
        pv.kubernetes.io/provisioned-by: org.veritas.infoscale
    spec:
      storageClassName: csi-infoscale-sc
      persistentVolumeReclaimPolicy: Delete
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      csi:
        driver: org.veritas.infoscale
        # Please provide pre-provisioned Infoscale volume name.
        volumeHandle: <existing_VxVM_volume_name>
        fsType: vxfs
    
  4. Create a Persistent Volume using the yaml.
    oc create -f csi-static-pv.yaml
  5. Define the Persistent Volume Claim (PVC) with appropriate access mode and storage capacity.
    csi-static-pvc.yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-infoscale-pvc
    spec:
      accessModes:
       - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: csi-infoscale-sc
    
  6. Create the Persistent Volume Claim by using the yaml. The PVC is automatically bound to the newly created PV.
    oc create -f csi-static-pvc.yaml
  7. Update the application yaml file (mysql-deployment.yaml) and specify the Persistent Volume Claim name.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql-deployment
      labels:
        app: mysql
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:latest
              ports:
                - containerPort: 3306
              volumeMounts:
                - mountPath: "/var/lib/mysql"
                  name: mysql-data
              env:
                - name: MYSQL_ROOT_PASSWORD
                  value: root123
          volumes:
            - name: mysql-data
              persistentVolumeClaim:
                claimName: csi-infoscale-pvc
    
  8. Create the application pod.
    oc create -f mysql-deployment.yaml
  9. Check that the existing data is available on the persistent volume. Run the following commands:

    oc get pods | grep mysql

    oc exec -it mysql-deployment-<id> -- mysql -uroot -proot123
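
Optionally, you can verify that the statically provisioned objects are bound and in use. The following commands are a sketch that reuses the object names from the preceding examples; adjust them to your environment.

    # Verify that the StorageClass, PV, and PVC exist and that the PVC is Bound.
    oc get sc csi-infoscale-sc
    oc get pv csi-infoscale-pv
    oc get pvc csi-infoscale-pvc

    # Confirm that the MySQL pod mounts the claim.
    oc describe pod -l app=mysql | grep -A 2 ClaimName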

Enabling raw block support with static provisioning

  1. Run oc exec -ti -n <namespace> <driver-container> -- <cmd> to list the volumes from the InfoScale driver container. Keep the volume names ready.
  2. Update volumeHandle in csi-static-block-pv.yaml as follows.
    --- 
    apiVersion: v1 
    kind: PersistentVolume 
    metadata: 
      name: csi-infoscale-block-pv 
      annotations: 
        pv.kubernetes.io/provisioned-by: org.veritas.infoscale 
    spec: 
      volumeMode: Block 
      storageClassName: csi-infoscale-sc 
      persistentVolumeReclaimPolicy: Delete 
      capacity: 
        storage: 5Gi 
      accessModes: 
        - ReadWriteOnce 
      csi: 
        driver: org.veritas.infoscale 
        # Please provide pre-provisioned Infoscale volume name. 
        volumeHandle: <Name of the Volume>
        volumeAttributes: 
          volumePath: "/dev/blk" 
  3. Run oc create -f csi-static-block-pv.yaml to apply the yaml.
  4. Define the Persistent Volume Claim (PVC) with the appropriate access mode and storage capacity in csi-static-block-pvc.yaml as follows.
    --- 
    apiVersion: v1 
    kind: PersistentVolumeClaim 
    metadata: 
      name: csi-infoscale-block-pvc 
    spec: 
      volumeMode: Block 
      accessModes: 
       - ReadWriteOnce 
      resources: 
        requests: 
          storage: 5Gi 
      storageClassName: csi-infoscale-sc 
  5. Run oc create -f csi-static-block-pvc.yaml to apply the yaml.
  6. Update csi-static-block-pod.yaml with the Persistent Volume Claim name as follows.
    --- 
    apiVersion: v1 
    kind: Pod 
    metadata: 
      name: redis 
      labels: 
        name: redis 
    spec: 
      containers: 
        - name: redis 
          image: redis 
          imagePullPolicy: IfNotPresent 
          volumeDevices: 
            - devicePath: "/dev/blk" 
              name: vol1 
      volumes: 
        - name: vol1 
          persistentVolumeClaim: 
            claimName: csi-infoscale-block-pvc 
  7. Run oc create -f csi-static-block-pod.yaml to create the application pod.
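
Optionally, you can confirm that the raw block device is exposed inside the pod. The following commands are a sketch that reuses the object names from the preceding examples; adjust them to your environment.

    # Verify that the block PVC is bound and the pod is running.
    oc get pvc csi-infoscale-block-pvc
    oc get pod redis

    # The raw block device appears at the devicePath specified in the pod spec.
    oc exec -it redis -- ls -l /dev/blk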