Veritas InfoScale™ for Kubernetes Environments 8.0.100 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.100)
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    5. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML
      2. Prerequisites to install by using OLM
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Applying licenses
    5. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    6. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    7. Undeploying and uninstalling InfoScale
  6. Tech Preview: Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
  7. Tech Preview: Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional steps for Azure RedHat OpenShift (ARO) environment
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Configuring DNS
      4. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
  13. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Static provisioning

You can use static provisioning to make existing persistent storage objects available to the cluster. You can statically provision a volume over shared storage (CVM) or shared-nothing (FSS) storage.

Static provisioning lets cluster administrators expose existing storage objects to a cluster. To use static provisioning, you must know the details of the storage object, its supported configurations, and its mount options. To make existing storage available to a cluster user, you must manually create a Persistent Volume and a Persistent Volume Claim before referencing the storage in a pod.

Note:

You must ensure that the VxFS file system is created before you provision the volumes statically. If the VxFS file system does not exist, you must create it manually by using the mkfs command from the InfoScale driver container.
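For example, assuming the disk group and volume already exist, the file system can be created from the driver container as follows. The namespace, pod, disk group, and volume names below are placeholders; substitute your own values:

```shell
# Placeholder names -- substitute your own namespace, driver pod,
# disk group, and VxVM volume.
NS=infoscale-vtas
POD=infoscale-vtas-driver-container-rhel8-bwvwb
DG=vrts_kube_dg
VOL=pvcvol1

# VxFS is created on the volume's VxVM raw device path: /dev/vx/rdsk/<dg>/<vol>.
CMD="mkfs -t vxfs /dev/vx/rdsk/${DG}/${VOL}"

# Print the full command; remove the `echo` to run it against a live cluster.
echo "oc exec -ti -n ${NS} ${POD} -- ${CMD}"
```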

Provisioning storage statically

  1. Create a Storage Class by using the following csi-infoscale-sc.yaml file.
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-infoscale-sc
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: org.veritas.infoscale
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    parameters:
      fstype: vxfs
    
      # (optional) Specifies a volume layout type.
      # Supported layouts: stripe, mirror, stripe-mirror, mirror-stripe, 
      #                            concat, concat-mirror, mirror-concat
      # If omitted, InfoScale internally chooses the best suited layout
      #                                  based on the environment.
      # layout: "mirror"
      #
      # (optional) Specifies the number of disk or host failures a 
      #                                     storage object can tolerate.
      # faultTolerance: "1"
      #
      # (optional) Specifies the number of stripe columns to use when
      #                                      creating a striped volume.
      # nstripe: "3"
    
      # (optional) Specifies the stripe unit size to use for striped 
      #                                       volume. 
      # stripeUnit: "64k"
      #
      # (optional) Specifies disks with the specified media type. All
      # disks with the given mediatype are selected for volume creation.
      # Supported values: hdd, ssd
      # mediaType: "hdd"
    

    Run oc create -f csi-infoscale-sc.yaml

  2. Keep the name of the existing VxVM volume ready; you need it to define the Persistent Volume object.

    Run oc exec -ti -n <namespace> <driver-container> -- <cmd> to list Volumes from the InfoScale Driver Container.

    An example of this command is oc exec -ti -n infoscale-vtas infoscale-vtas-driver-container-rhel8-bwvwb -- vxprint -g vrts_kube_dg -vuh | grep -w fsgen
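If you want only the volume names from that listing, you can filter the output further. The sample records below merely mimic the vxprint -vuh layout (this layout is an assumption, not captured from a live system); real output comes from the driver container as shown above:

```shell
# Illustrative sample of `vxprint -g <dg> -vuh` volume records (layout assumed);
# on a real cluster, this text comes from the driver container via `oc exec`.
vxprint_sample='v  pvcvol1      fsgen        ENABLED  4194304  -        ACTIVE   -       -
v  pvcvol2      fsgen        ENABLED  8388608  -        ACTIVE   -       -'

# Keep only volume records (type "v" with the fsgen usage type) and print names.
echo "$vxprint_sample" | awk '$1 == "v" && $3 == "fsgen" {print $2}'
```

Any of the printed names can then be used as the volumeHandle value in the next step.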

  3. In the csi-static-pv.yaml, define the Persistent Volume object and specify the existing VxVM volume name in the volumeHandle attribute.
    csi-static-pv.yaml
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: csi-infoscale-pv
      annotations:
        pv.kubernetes.io/provisioned-by: org.veritas.infoscale
    spec:
      storageClassName: csi-infoscale-sc
      persistentVolumeReclaimPolicy: Delete
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      csi:
        driver: org.veritas.infoscale
        # Please provide pre-provisioned Infoscale volume name.
        volumeHandle: <existing_VxVM_volume_name>
        fsType: vxfs
    
  4. Create a Persistent Volume using the yaml.
    oc create -f csi-static-pv.yaml
  5. Define the Persistent Volume Claim (PVC) with appropriate access mode and storage capacity.
    csi-static-pvc.yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-infoscale-pvc
    spec:
      accessModes:
       - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: csi-infoscale-sc
    
  6. Create a Persistent Volume Claim by using the yaml file. The PVC is automatically bound to the newly created PV.
    oc create -f csi-static-pvc.yaml
  7. Update the application yaml file (mysql-deployment.yaml) and specify the Persistent Volume Claim name.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql-deployment
      labels:
        app: mysql
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:latest
              ports:
                - containerPort: 3306
              volumeMounts:
                - mountPath: "/var/lib/mysql"
                  name: mysql-data
              env:
                - name: MYSQL_ROOT_PASSWORD
                  value: root123
          volumes:
            - name: mysql-data
              persistentVolumeClaim:
                claimName: csi-infoscale-pvc
    
  8. Create the application pod.
    oc create -f mysql-deployment.yaml
  9. Verify that the existing data is available on the persistent volume. Run the following commands:

    oc get pods | grep mysql

    oc exec -it mysql-deployment-<id> -- mysql -uroot -proot123