Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    7. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    4. Applying licenses
    5. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    8. Installing InfoScale by using the plugin
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Additional requirements for replication on Cloud

If any of your sites is on the cloud, you must configure a load balancer service. Ensure that you have specified the cloud vendor for cloudVendor or remoteCloudVendor while configuring data replication.

A load balancer can be used for both managed and non-managed clouds. The lbEnabled and remoteLbEnabled attributes are set in the datareplication yaml. The default value of each attribute is false; set an attribute to true only when network traffic to that site goes over a load balancer. For example, if the primary site is on premises and the secondary site is on AKS with a load balancer at the front end, lbEnabled must be set to false and remoteLbEnabled must be set to true. In this configuration, the Virtual IP address of the load balancer must be provided as the HostAddress (local and/or remote) in the datareplication yaml. As a prerequisite for this feature, the load balancer Kubernetes service must have the following selector in its spec - cvmaster: "true". The sample file below is an example of such a load balancer service.
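For the example above (primary site on premises, secondary site on AKS behind a load balancer), the relevant portion of the datareplication yaml might look like the following sketch. The attribute names lbEnabled, remoteLbEnabled, cloudVendor, and remoteCloudVendor are taken from this section; the exact spelling of the host-address field and the surrounding structure of the DataReplication custom resource are assumptions here - refer to the Configuring Data Replication topic for the authoritative schema.

```yaml
# Sketch only; field placement and the host-address field name are assumptions.
spec:
  cloudVendor: <cloud vendor of the local site>        # local (primary) site is on premises
  remoteCloudVendor: <cloud vendor of the remote site> # remote (secondary) site is on AKS
  lbEnabled: false        # no load balancer in front of the local site
  remoteLbEnabled: true   # traffic to the remote site goes over a load balancer
  remoteHostAddress: <Virtual IP address of the remote load balancer>
```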

Note:

The TCP/UDP ports that Veritas Volume Replicator (VVR) uses must be open on all worker nodes of the cluster to enable communication between primary and secondary site. See Choosing the network ports used by VVR on the Veritas support portal.
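As a quick sanity check, TCP reachability of the VVR ports used in this procedure can be probed from another node with a shell one-liner. This is a sketch, not part of the documented procedure: WORKER_HOST is a placeholder for one of your worker nodes, and UDP reachability cannot be verified this way - confirm UDP through your firewall rules instead.

```shell
# Probe TCP reachability of the VVR ports (4145, 8199, 8989) on a worker node.
# WORKER_HOST is a hypothetical placeholder; set it to a real node address.
WORKER_HOST="${WORKER_HOST:-worker-node-1}"
results=""
for port in 4145 8199 8989; do
  # bash's /dev/tcp pseudo-device attempts a TCP connection; timeout caps the wait.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${WORKER_HOST}/${port}" 2>/dev/null; then
    status="reachable"
  else
    status="NOT reachable"
  fi
  results="${results}${port}:${status}"$'\n'
  echo "TCP port ${port} on ${WORKER_HOST}: ${status}"
done
```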

Note:

Perform the following steps only if you want to use a load balancer for data replication.

  1. Copy the following content into a yaml file and save the file as /YAML/DR/vvr-lb-svc.yaml.
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      name: vvr-lb-svc
      namespace: infoscale-vtas
    spec:
      allocateLoadBalancerNodePorts: true
      externalTrafficPolicy: Cluster
      internalTrafficPolicy: Cluster
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: tcpportone
        port: 4145
        protocol: TCP
        targetPort: 4145
      - name: tcpporttwo
        port: 8199
        protocol: TCP
        targetPort: 8199
      - name: tcpportthree
        port: 8989
        protocol: TCP
        targetPort: 8989
      selector:
        cvmaster: "true"
      sessionAffinity: None
      type: LoadBalancer
  2. Run the following command on the master node.

    kubectl apply -f /YAML/DR/vvr-lb-svc.yaml

  3. Run the following command on the master node.

    kubectl get svc vvr-lb-svc  -n infoscale-vtas

    The Load Balancer IP address is returned in the output.

  4. Update the <Load Balancer IP address> placeholder with the IP address returned in the previous step, copy the following content into a yaml file, and save the file as /YAML/DR/loadbalancer.yaml. Veritas Volume Replicator (VVR) requires both TCP and UDP on these ports. Hence, a load balancer service with mixed protocol support (TCP and UDP) is needed.
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "worker"
      name: vvr-lb-svc
      namespace: infoscale-vtas
    spec:
      loadBalancerIP: <Load Balancer IP address>
      allocateLoadBalancerNodePorts: true
      externalTrafficPolicy: Cluster
      internalTrafficPolicy: Cluster
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: udpportone
        port: 4145
        protocol: UDP
        targetPort: 4145
      - name: udpporttwo
        port: 8199
        protocol: UDP
        targetPort: 8199
      - name: udpportthree
        port: 8989
        protocol: UDP
        targetPort: 8989
      selector:
        cvmaster: "true"
      sessionAffinity: None
      type: LoadBalancer
  5. Run the following command on the master node.

    kubectl apply -f /YAML/DR/loadbalancer.yaml
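The Load Balancer IP address printed in step 3 can also be extracted non-interactively when scripting these steps. The following sketch is self-contained so it can be tried without a cluster: the tabular text is hypothetical sample output of the step 3 command, and the IP and port values in it are placeholders.

```shell
# Hypothetical sample of `kubectl get svc vvr-lb-svc -n infoscale-vtas` output;
# on a live cluster, pipe the real command's output through the same awk filter.
sample_output='NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
vvr-lb-svc   LoadBalancer   10.0.181.23   20.81.45.7    4145:30218/TCP   5m'

# EXTERNAL-IP is the 4th whitespace-separated column of the service row.
lb_ip=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $4}')
echo "$lb_ip"

# On a live cluster, the same value is available directly via jsonpath:
#   kubectl get svc vvr-lb-svc -n infoscale-vtas \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```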