Veritas InfoScale™ for Kubernetes Environments 8.0.200 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.200)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Applying licenses
    6. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    7. Considerations for configuring cluster or adding nodes to an existing cluster
    8. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    9. Installing InfoScale by using the plugin
    10. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Configuring Global Cluster Membership (GCM)

With Global Cluster Membership (GCM), you can define the membership of clusters for disaster recovery. The GCM CR must be configured and applied on all clusters. When configured, the Global Cluster Membership forms a logical entity called a 'Global Cluster', with all underlying clusters as 'Member Clusters'. Member clusters are OpenShift clusters that provide disaster recovery capabilities to application components. To provide DR, these member clusters

  1. Send heartbeats to each other periodically.

  2. Exchange information such as state, configuration, and operations.

  3. Perform or participate in operations such as migration.

Complete the following steps:

  1. Edit /YAML/DR/SampleGlobalClusterMembership.yaml as follows:

    apiVersion: infoscale.veritas.com/v1
    kind: GlobalClusterMembership
    metadata:
      name: global-cluster-membership
    spec:
      # Local cluster name in the global membership
      localClusterName: <Local cluster where you want to apply this YAML>
      globalMemberClusters:
        # Cluster ID of each member of the global cluster membership
        - clusterID: <A unique ID of the primary cluster>
          # Address used for communicating with the peer cluster's DR Controller
          drControllerAddress: "<Load balancer IP address or haproxy of the local cluster>"
          # Port used for the DR Controller
          drControllerPort: "<Load balancer port number>"
        - clusterID: <A unique ID of the secondary cluster>
          drControllerAddress: "<Load balancer IP address or haproxy of the DR site>"
          drControllerPort: "<Load balancer port number>"
      # If the heartbeat with a peer cluster is missed more than counterMissTolerance
      # times, the cluster is moved to the FAULTED state
      counterMissTolerance: 5
      globalClusterOperation: "none"
      # Application metadata backup sync frequency to the DR site(s) in minutes
      metadataBackupInterval: 15
      # Refresh the data replication status after the specified number of minutes
      datarepRefreshStatusFrequency: 10
      # Include cluster-scoped Custom Resource Definitions (CRDs)
      # in the disaster recovery plan backup
      backupClusterScopeCRD: true
      # Maximum number of metadata backup copies stored per DR plan
      maximumMetadataCopies: 5
    

    Note:

    Do not enclose the parameter values in angle brackets (< >). For example, if 8334 is the load balancer port number, enter drControllerPort: "8334" instead of drControllerPort: "<Load balancer port number>". localClusterName and clusterID can have a maximum of 20 characters. A filled-in example follows.
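
    For reference, a completed specification might look like the following sketch. The cluster IDs Clus1 and Clus2 match the sample output later in this procedure; the IP addresses and the port 8334 are illustrative values only, so substitute the details of your own clusters.

    apiVersion: infoscale.veritas.com/v1
    kind: GlobalClusterMembership
    metadata:
      name: global-cluster-membership
    spec:
      localClusterName: Clus1
      globalMemberClusters:
        - clusterID: Clus1
          # Illustrative address and port; use your own load balancer values
          drControllerAddress: "10.20.30.40"
          drControllerPort: "8334"
        - clusterID: Clus2
          drControllerAddress: "10.20.30.50"
          drControllerPort: "8334"
      counterMissTolerance: 5
      globalClusterOperation: "none"
      metadataBackupInterval: 15
      datarepRefreshStatusFrequency: 10
      backupClusterScopeCRD: true
      maximumMetadataCopies: 5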

  2. Run the following command on the bastion node of the source cluster.

    oc apply -f /YAML/DR/SampleGlobalClusterMembership.yaml

  3. Edit another copy of /YAML/DR/SampleGlobalClusterMembership.yaml for the DR site as follows:

    apiVersion: infoscale.veritas.com/v1
    kind: GlobalClusterMembership
    metadata:
      name: global-cluster-membership
    spec:
      # Local cluster name in the global membership
      localClusterName: <DR site cluster name where you want to apply this YAML>
      globalMemberClusters:
        # Cluster ID of each member of the global cluster membership
        - clusterID: <A unique ID of the primary cluster>
          # Address used for communicating with the peer cluster's DR Controller
          drControllerAddress: "<Load balancer IP address or haproxy of the local cluster>"
          # Port used for the DR Controller
          drControllerPort: "<Load balancer port number>"
        - clusterID: <A unique ID of the secondary cluster>
          drControllerAddress: "<Load balancer IP address or haproxy of the DR site>"
          drControllerPort: "<Load balancer port number>"
      # If the heartbeat with a peer cluster is missed more than counterMissTolerance
      # times, the cluster is moved to the FAULTED state
      counterMissTolerance: 5
      globalClusterOperation: "none"
      # Application metadata backup sync frequency to the DR site(s) in minutes
      metadataBackupInterval: 15
      # Refresh the data replication status after the specified number of minutes
      datarepRefreshStatusFrequency: 10
      # Include cluster-scoped Custom Resource Definitions (CRDs)
      # in the disaster recovery plan backup
      backupClusterScopeCRD: true
      # Maximum number of metadata backup copies stored per DR plan
      maximumMetadataCopies: 5
    
  4. Copy this file to the DR site and run the following command on the bastion node of the DR site.

    oc apply -f /YAML/DR/SampleGlobalClusterMembership.yaml

  5. Manually verify on all clusters whether the GLOBALCLUSTERSTATE is DISCOVER_WAIT by running oc get gcm.

    The possible states are:

    UNKNOWN
      A transient default Global Cluster state. After the initial configuration, the cluster state must transition to DISCOVER_WAIT. A prolonged UNKNOWN state indicates errors in the initial configuration; review the DR Controller log for the ongoing activities.

    DISCOVER_WAIT
      The local cluster has a copy of the GCM and member cluster details, but it is not yet certain whether that copy is up to date. If the GCM and member cluster details are identical on all peer clusters, all clusters automatically transition to the RUNNING state. If the details are not identical, the cluster waits until you seed it by updating globalClusterOperation to localbuild. When a member cluster transitions to the RUNNING state, all peer clusters with identical membership also transition to RUNNING.

    ADMIN_WAIT
      Clusters transition to this state if the local membership definition does not match a peer cluster's membership definition. Update the membership on the peer clusters so that it is identical; the peer clusters then transition to the RUNNING state.

    RUNNING
      A cluster transitions to the RUNNING state when you seed the cluster membership by updating globalClusterOperation to localbuild, or when its local copy of the membership matches the peer clusters.

    EXITING
      You have initiated a DR Controller stop.

    EXITED
      The DR Controller has stopped.
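
    If the clusters remain in DISCOVER_WAIT because the membership details are not identical, you can seed the membership from the local cluster by setting globalClusterOperation to localbuild, as described above. One way to do this is the following patch command (a sketch; you can also edit the GCM YAML and re-apply it):

    oc patch gcm global-cluster-membership --type merge \
      -p '{"spec":{"globalClusterOperation":"localbuild"}}'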

  6. To verify whether the Global Cluster is successfully created, run the following command on the bastion node.

    oc get gcm

  7. Review the cluster names, GlobalClusterState, and PeerLinkState in output similar to the following. GlobalClusterState must be RUNNING and PeerLinkState must be CONNECTED.

    NAME                        CLUSTER NAME   CLUSTER STATE   PROTOCOL   PEER LINK STATE
    global-cluster-membership   Clus1          RUNNING         10         {"Clus1":"CONNECTED","Clus2":"CONNECTED"}

    Here, NAME is the name of the GlobalClusterMembership custom resource, CLUSTER NAME is the local cluster ID, and Clus1 and Clus2 are the cluster IDs that you defined for the global membership.
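
    To watch the state change from DISCOVER_WAIT to RUNNING instead of re-running the command manually, you can add the --watch flag to oc get; the following command is only a convenience.

    oc get gcm --watch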