InfoScale™ 9.0 Support for Containers - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Arctera InfoScale™ on OpenShift
    1. Introduction
    2. Prerequisites
    3. Installing InfoScale on a system with Internet connectivity
      1. Using web console of OperatorHub
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML.tar
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    4. Installing InfoScale in an air gapped system
      1. Configuring cluster
      2. Adding nodes to an existing cluster
      3. Undeploying and uninstalling InfoScale
  5. Installing Arctera InfoScale™ on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    6. Undeploying and uninstalling InfoScale
  6. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  7. Installing InfoScale DR on OpenShift
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  8. Installing InfoScale DR on Kubernetes
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  9. TECHNOLOGY PREVIEW: Disaster Recovery scenarios
    1. Migration
  10. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
  11. Troubleshooting
    1. Known Issues
    2. Limitations

Configuring Data Replication

Using the Data Replication custom resource, you can configure replication for persistent data (PVs and PVCs) associated with application components in a namespace. A custom resource created on one cluster is automatically synchronized to all peer clusters, so this CR needs to be configured on the primary cluster only. After the CR is configured, replication is set up. Arctera Volume Replicator (VVR) performs the replication. You can check the status of the underlying replication and perform operations such as stop, pause, resume, and migrate on the data replication.
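
For example, after a Data Replication CR is applied, you can pause or resume the underlying replication by updating the replicationState field under remoteClusterDetails (described in the templates later in this section). A minimal sketch, assuming a CR named app-datarep:

    # Open the Data Replication CR for editing on the primary cluster
    kubectl edit datareplications.infoscale.veritas.com app-datarep

    # In the editor, update the field under remoteClusterDetails, for example:
    #   replicationState: pause    # set to resume to continue replication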

You must also configure Data Replication custom resources for Velero. Velero is used to capture application metadata on the primary cluster and restore it on the DR cluster by using VVR. To configure Velero, you must apply the CR on both clusters.

Note:

You must configure at least three CR files: one for Velero replication from the primary to the DR, one for Velero replication from the DR to the primary, and one per application/namespace that you want to replicate.
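
For example, you can keep one copy of the sample file for each CR. The file names below are only illustrative:

    cp /YAML/DR/SampleDataReplication.yaml velero-primary-to-dr.yaml
    cp /YAML/DR/SampleDataReplication.yaml velero-dr-to-primary.yaml
    cp /YAML/DR/SampleDataReplication.yaml app-prod-datarep.yaml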

Complete the following steps

  1. Edit /YAML/DR/SampleDataReplication.yaml to configure Velero replication from the primary to the DR as follows.

    apiVersion: infoscale.veritas.com/v1
    kind: DataReplication
    metadata:
      name: <Name for Data replication>
    spec:
      localHostAddress: <Virtual IP address to configure VVR>
      localNetMask: <Corresponding netmask to configure VVR>
      localNICMap: <corresponding network interface to configure VVR>
        "host1" : "eth0"
        "host2" : "eth0"
        "host3" : "eth0"
        "host4" : "eth1"
      selector:
        namespace: <namespace where velero is installed, same
                    as specified in GCM>
        labels:
          component: minio-infoscale-dr-bkp
      currentPrimary: <Current primary cluster name -
                       Name of the cluster you want to back up>
      remoteClusterDetails:
        - clusterName: <ID of the Cluster to be used for a backup>
          remoteHostAddress: <Virtual IP address for VVR configuration of
                              this cluster>
          remoteNetMask: <Netmask of this cluster>
          remoteNICMap: <Network interface of this cluster>
            "host5" : "eth1"
            "host6" : "eth0"
            "host7" : "eth0"
            "host8" : "eth1"
          replicationType: sync
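
    For illustration only, a filled-in version of this template might look like the following. All names, IP addresses, netmasks, and interface maps below are hypothetical; replace them with the values for your environment.

    apiVersion: infoscale.veritas.com/v1
    kind: DataReplication
    metadata:
      name: velero-primary-to-dr
    spec:
      localHostAddress: 10.10.10.100
      localNetMask: 255.255.255.0
      localNICMap:
        "host1" : "eth0"
        "host2" : "eth0"
      selector:
        namespace: velero
        labels:
          component: minio-infoscale-dr-bkp
      currentPrimary: cluster1
      remoteClusterDetails:
        - clusterName: cluster2
          remoteHostAddress: 10.20.20.100
          remoteNetMask: 255.255.255.0
          remoteNICMap:
            "host5" : "eth1"
            "host6" : "eth0"
          replicationType: sync
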
  2. Run the following command on the master node

    kubectl apply -f /YAML/DR/SampleDataReplication.yaml
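
    The command prints a confirmation similar to the following; the CR name shown is the hypothetical one used in the earlier illustration.

    datareplication.infoscale.veritas.com/velero-primary-to-dr created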

  3. Similarly, copy SampleDataReplication.yaml and edit the copy to update currentPrimary and the local/remote cluster details appropriately. Apply the file to configure metadata replication from the DR site to the primary.

  4. Run the following command on the master node to verify whether data replication is set up on both clusters.

    kubectl get datarep
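
    With the hypothetical CR names from the earlier illustration, output similar to the following on each cluster indicates that both Velero replication directions are configured. The RVGNAME column reports the underlying replicated volume group and varies by deployment.

    NAME                    SPECCURRENTPRIMARY   STATUSCURRENTPRIMARY   RVGNAME
    velero-primary-to-dr    cluster1             cluster1               <RVG name>
    velero-dr-to-primary    cluster2             cluster2               <RVG name>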

  5. Edit another copy of /YAML/DR/SampleDataReplication.yaml on the primary cluster as follows to replicate the persistent data (PVs and PVCs) associated with application components in the specified namespace and labels.

    apiVersion: infoscale.veritas.com/v1
    kind: DataReplication
    metadata:
      name: <Name for Data replication>
    spec:
      # Virtual IP address to configure VVR
      localHostAddress:  <Virtual IP address to configure VVR>
      # Corresponding netmask to configure VVR
      localNetMask: <Corresponding netmask to configure VVR>
      # Corresponding network interface map (hostname and NIC name map) 
      # to configure VVR
      localNICMap: <corresponding network interface to configure VVR>
        "host1" : "eth0"
        "host2" : "eth0"
        "host3" : "eth0"
        "host4" : "eth1"
      # Namespace and optionally labels for which you 
      # want to configure data replication
      selector:
        namespace: prod
        labels:
          env: prod
      # Current primary cluster name - Name of the cluster you want 
      # to back up
      currentPrimary: <Current primary cluster name - 
                              Name of the cluster you want to back up>
      # (optional) In case of takeover operation, specify force to 
      # true along with
      # the updated currentPrimary value. In case of migrate operation,
      # force should be specified as false and only currentPrimary 
      # needs to be updated.
      #force: false
    
      # Secondary cluster details
      remoteClusterDetails:
          # ID of the Cluster to be used for a backup
        - clusterName: <ID of the Cluster to be used for a backup>
          # Virtual IP address for VVR configuration of this cluster
          remoteHostAddress: <Virtual IP address for 
                               VVR configuration of this cluster>
          # Corresponding netmask of this cluster
          remoteNetMask: <Netmask of this cluster>
          # Corresponding Network interface map of this cluster
          remoteNICMap: <Network interface of this cluster>
            "host5" : "eth1"
            "host6" : "eth0"
            "host7" : "eth0"
            "host8" : "eth1"
          # (optional) replication type can be sync or async.
          # default value will be async if not specified.
          #replicationType: async
           
          # (optional) replicationState can have values start, stop,
          #  pause and resume.
          # This field can be updated to start/stop/pause/resume
          #  replication.
          # Default value will be set to start during initial
          # configuration.
          #replicationState: start
    
          # (optional) network transport protocol can be TCP or UDP.
          # Default value will be set to TCP during initial configuration and
          # can be later changed to UDP. 
          #networkTransportProtocol: TCP
    
          # (optional) By default, it will be set to N/A during
          # initial configuration, which means the available bandwidth 
          #      will be used.
          # It can be later changed to set the maximum network bandwidth 
          # (in bits per second).
          #bandwidthLimit: N/A
    
          # (optional) Supported values for latency protection are: fail,
          #   disable and override.
          # By default it will be set to disable during initial configuration 
          # and can be changed later.
          #latencyProtection: disable
    
          # (optional) Supported values for log (SRL) protection are: autodcm,
          # dcm, fail, disable and override.
          # By default it will be set to autodcm during initial configuration 
          #  and can be changed later.
          #logProtection: autodcm
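
    For reference, a remoteClusterDetails entry with the optional fields uncommented might look like the following sketch. The cluster name, IP address, netmask, and interface map are hypothetical; the option values are chosen from the supported values listed in the comments above.

      remoteClusterDetails:
        - clusterName: cluster2
          remoteHostAddress: 10.20.20.101
          remoteNetMask: 255.255.255.0
          remoteNICMap:
            "host5" : "eth1"
            "host6" : "eth0"
          replicationType: async
          replicationState: start
          networkTransportProtocol: UDP
          latencyProtection: fail
          logProtection: dcm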
    

    Note:

    Ensure that the current primary cluster name you enter here is the same as the one you plan to specify in DisasterRecoveryPlan.yaml. For every Disaster Recovery Plan, you must create a separate Data Replication CR. Ensure that the namespace and labels in a Disaster Recovery Plan and its corresponding Data Replication CR are identical.

  6. Run the following command on the master node

    kubectl apply -f /YAML/DR/SampleDataReplication.yaml

  7. After these commands are executed, run the following command on the master node

    kubectl get datarep

  8. Review the output similar to the following

    NAME                          SPECCURRENTPRIMARY                STATUSCURRENTPRIMARY                   RVGNAME
    <Name for Data replication>   <ID of the cluster to back up>    <ID of the current working cluster>

  9. Wait for the initial synchronization of the application Persistent Volumes to complete on the DR site. Run the following command on the master node of the DR site.

    kubectl describe datareplications.infoscale.veritas.com <Data rep name for the application>

    Review the status in output similar to the following. The Data Status must be consistent,up-to-date.

    Spec:
    ..
    ..
    Status:
    ..
    ..
       Primary Status:
       ..
       ..
       Secondary Status:
       ..
       Data Status:  consistent,up-to-date
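
    To track the initial synchronization, you can optionally poll this field until it reports consistent,up-to-date. A minimal sketch; substitute the name of your Data Replication CR.

    # Re-run the describe command every 30 seconds and show only the Data Status line
    watch -n 30 "kubectl describe datareplications.infoscale.veritas.com \
      <Data rep name for the application> | grep 'Data Status'"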