InfoScale™ 9.0 Support for Containers - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Arctera InfoScale™ on OpenShift
    1. Introduction
    2. Prerequisites
    3. Installing InfoScale on a system with Internet connectivity
      1. Using web console of OperatorHub
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML.tar
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    4. Installing InfoScale in an air gapped system
      1. Configuring cluster
      2. Adding nodes to an existing cluster
      3. Undeploying and uninstalling InfoScale
  5. Installing Arctera InfoScale™ on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    6. Undeploying and uninstalling InfoScale
  6. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  7. Installing InfoScale DR on OpenShift
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  8. Installing InfoScale DR on Kubernetes
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  9. TECHNOLOGY PREVIEW: Disaster Recovery scenarios
    1. Migration
  10. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
  11. Troubleshooting
    1. Known Issues
    2. Limitations

Configuring Global Cluster Membership (GCM)

With Global Cluster Membership (GCM), you can define the membership of clusters for disaster recovery. The GCM CR must be configured and applied on all clusters. When configured, the Global Cluster Membership forms a logical entity called a 'Global Cluster', with all underlying clusters as 'Member Clusters'. Member clusters are OpenShift clusters that provide disaster recovery capabilities to application components. To provide DR, these member clusters

  1. Send heartbeats to each other periodically.

  2. Exchange information such as state, configuration, and operations.

  3. Perform or participate in operations such as migration.

Complete the following steps

  1. Edit /YAML/DR/SampleGlobalClusterMembership.yaml as under

    apiVersion: infoscale.veritas.com/v1
    kind: GlobalClusterMembership
    metadata:
      name: global-cluster-membership
    spec:
      localClusterName: <Cluster for which you want to create a DR backup>
      globalMemberClusters:
        - clusterID: <ID of the cluster for which you want a DR backup>
          drControllerAddress: "<Load balancer IP address (haproxy) of the local cluster>"
          drControllerPort: "<Load balancer port number>"
        - clusterID: <ID of the Cluster to be used for a backup>
          drControllerAddress: "<Load balancer IP address (haproxy) of the DR site>"
          drControllerPort: "<Load balancer port number>"
      # Required details if velero is not installed in "velero" namespace
      # and/or user needs to set a specific User ID, fsGroup in security
      # context
      veleroConfig:
        # Specify namespace in which velero is installed. This field is
        # optional if velero is installed in the default "velero" namespace.
        veleroNamespace: "<Namespace where Velero is installed>"

        # User id to enable volume mount
        # This is to comply with default security context constraint.
        # This field is optional for Kubernetes but required for OpenShift
        # if default ID below needs to be changed.
        # You can change the default value to a valid value for
        # both Primary and DR clusters.
        userID: 1000640000

        # Supplemental group to enable volume mount.
        # This field is optional for Kubernetes but required for OpenShift
        # if default ID below needs to be changed.
        # You can change the default value to a valid value for
        # both Primary and DR clusters.
        FSGroup: 1000640000

    Note:

    Do not enclose the parameter values in angle brackets (< >). For example, if 8334 is the load balancer port number, enter drControllerPort: "8334" instead of drControllerPort: "<Load balancer port number>". localClusterName and clusterID can have a maximum of 20 characters.
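
    For reference, the following is a minimal filled-in sketch of the same CR. The cluster names (primary-cluster, dr-cluster), the IP addresses, and the port shown here are hypothetical values for illustration only; substitute values that are valid for your own clusters.

    apiVersion: infoscale.veritas.com/v1
    kind: GlobalClusterMembership
    metadata:
      name: global-cluster-membership
    spec:
      localClusterName: primary-cluster
      globalMemberClusters:
        - clusterID: primary-cluster
          drControllerAddress: "10.20.30.40"
          drControllerPort: "8334"
        - clusterID: dr-cluster
          drControllerAddress: "10.20.30.50"
          drControllerPort: "8334"
      veleroConfig:
        veleroNamespace: "velero"
        userID: 1000640000
        FSGroup: 1000640000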

  2. Run the following command on the bastion node of the source cluster.

    oc apply -f /YAML/DR/SampleGlobalClusterMembership.yaml

  3. Edit another instance of /YAML/DR/SampleGlobalClusterMembership.yaml to add the DR site as under

    apiVersion: infoscale.veritas.com/v1
    kind: GlobalClusterMembership
    metadata:
      name: global-cluster-membership
    spec:
      localClusterName: <Cluster for which you want to create a DR backup>
      globalMemberClusters:
        - clusterID: <ID of the cluster for which you want a DR backup>
          drControllerAddress: "<Load balancer IP address (haproxy) of the local cluster>"
          drControllerPort: "<Load balancer port number>"
        - clusterID: <ID of the Cluster to be used for a backup>
          drControllerAddress: "<Load balancer IP address (haproxy) of the DR site>"
          drControllerPort: "<Load balancer port number>"
      # Required details if velero is not installed in "velero" namespace
      # and/or user needs to set a specific User ID, fsGroup in security
      # context
      veleroConfig:
        # Specify namespace in which velero is installed. This field is
        # optional if velero is installed in the default "velero" namespace.
        veleroNamespace: "<Namespace where Velero is installed>"

        # User id to enable volume mount
        # This is to comply with default security context constraint.
        # This field is optional for Kubernetes but required for OpenShift
        # if default ID below needs to be changed.
        # You can change the default value to a valid value for
        # both Primary and DR clusters.
        userID: 1000640000

        # Supplemental group to enable volume mount.
        # This field is optional for Kubernetes but required for OpenShift
        # if default ID below needs to be changed.
        # You can change the default value to a valid value for
        # both Primary and DR clusters.
        FSGroup: 1000640000
  4. Copy this file to the DR site and run the following command on the bastion node of the DR site.

    oc apply -f /YAML/DR/SampleGlobalClusterMembership.yaml
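
    As a hedged reading of this procedure, the copy applied on the DR site is expected to differ from the source copy only in localClusterName, on the assumption that localClusterName identifies the cluster on which the CR is applied; the globalMemberClusters list must remain identical on both clusters, or the clusters transition to the ADMIN_WAIT state (see the state descriptions below). A sketch with the hypothetical names used earlier:

    # On the DR site (hypothetical example); globalMemberClusters stays
    # identical to the list applied on the source cluster.
    spec:
      localClusterName: dr-cluster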

  5. Manually verify on all clusters whether the GLOBALCLUSTERSTATE is DISCOVER_WAIT by running oc get gcm.

    The various states are as follows:

    UNKNOWN
      A transient default Global Cluster state. After the initial configuration/setup, the cluster state must transition to DISCOVER_WAIT. A prolonged UNKNOWN state indicates errors in the initial configuration/setup. Review the DR Controller log for the ongoing activities.

    DISCOVER_WAIT
      Although the local cluster has a copy of the GCM and member cluster details, it is not certain whether the local copy is up to date. The cluster waits until you seed it by updating GlobalClusterOperation to localbuild. When a member cluster transitions to the RUNNING state, all peer clusters with identical membership also transition to the RUNNING state.

    ADMIN_WAIT
      If the local membership definition does not match a peer cluster's membership definition, the clusters transition to this state. Update the membership on the peer clusters and ensure that it is identical. The peer clusters then transition to the RUNNING state.

    RUNNING
      The cluster transitions to the RUNNING state when you seed the cluster membership by updating GlobalClusterOperation to localbuild. The cluster also transitions to the RUNNING state when its local copy of the membership matches that of the peer clusters.

    EXITING
      You have initiated a DR Controller stop.

    EXITED
      The DR Controller has stopped.

    DISCOVER_WAIT indicates that the cluster is initialized. You can now trigger localbuild. Verify the cluster membership details and initiate localbuild as under.
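
    For example, on a freshly configured cluster the oc get gcm output may look similar to the following. The cluster names are the hypothetical ones used earlier, only the relevant columns are shown, and the exact casing of the state value can differ; the point is that GLOBALCLUSTERSTATE reports DISCOVER_WAIT until you seed the cluster.

    NAME                        LOCALCLUSTER      GLOBALCLUSTERSTATE
    global-cluster-membership   primary-cluster   DISCOVER_WAIT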

  6. Run the following command on the bastion node of the primary/source cluster.

    oc edit gcm global-cluster-membership

  7. Update on the source cluster as under

    globalClusterOperation: "localbuild"
    

    The cluster transitions into the RUNNING state and broadcasts its membership copy to all peer clusters. A peer cluster with the same membership also transitions into the RUNNING state, whereas a peer cluster with a different membership transitions into the ADMIN_WAIT state. Update globalMemberClusters in the spec to rectify any discrepancy.
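
    As a reference, the following fragment is a sketch of how the edited CR may look after this change. It assumes that globalClusterOperation is a field under spec, alongside localClusterName and globalMemberClusters; verify the placement in your CR before saving.

    spec:
      globalClusterOperation: "localbuild"
      # localClusterName and globalMemberClusters remain as configured earlier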

  8. To verify whether the Global Cluster is successfully created, run the following command on the bastion node.

    oc get gcm

  9. Review the cluster names, GLOBALCLUSTERSTATE, and PEERLINKSTATE in output similar to the following. GLOBALCLUSTERSTATE must be Running and PEERLINKSTATE must be Connected.

    NAME                           LOCALCLUSTER               GLOBALCLUSTERSTATE   PEERLINKSTATE
    <Name of the Global cluster>   <Cluster ID for back up>   Running              {"<Cluster ID for back up>":"Connected","<Cluster ID for backing up>":"Connected"}
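
    With the hypothetical names used in the earlier sketch (primary-cluster as the source cluster and dr-cluster as the DR site), the output on the source cluster may look similar to the following; the actual name, cluster IDs, and column spacing depend on your configuration.

    NAME                        LOCALCLUSTER      GLOBALCLUSTERSTATE   PEERLINKSTATE
    global-cluster-membership   primary-cluster   Running              {"primary-cluster":"Connected","dr-cluster":"Connected"}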