InfoScale™ 9.0 Support for Containers - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Arctera InfoScale™ on OpenShift
    1. Introduction
    2. Prerequisites
    3. Installing InfoScale on a system with Internet connectivity
      1. Using web console of OperatorHub
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML.tar
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    4. Installing InfoScale in an air gapped system
      1. Configuring cluster
      2. Adding nodes to an existing cluster
      3. Undeploying and uninstalling InfoScale
  5. Installing Arctera InfoScale™ on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    6. Undeploying and uninstalling InfoScale
  6. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  7. Installing InfoScale DR on OpenShift
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  8. Installing InfoScale DR on Kubernetes
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  9. TECHNOLOGY PREVIEW: Disaster Recovery scenarios
    1. Migration
  10. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
  11. Troubleshooting
    1. Known Issues
    2. Limitations

Configuring Disaster Recovery Plan

With a Disaster Recovery Plan (DR Plan), you can enable disaster recovery for a particular namespace. For more granular control, you can selectively label components in the namespace and create a DR Plan with both the namespace and the labels. A DR Plan cannot span multiple namespaces. Create a DR Plan only on the primary cluster; after it is created there, it is automatically created and synchronized on all peer clusters. Migration and other operations on the namespace can be triggered by updating certain attributes of the DR Plan.
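
For example, to place only selected components under the plan, label them so that they match the DR Plan selector. The following fragment is a minimal sketch, assuming a hypothetical Deployment in the `sample` namespace; the Deployment name is illustrative, and only the label must match the selector used later in this procedure (`app: sise`).

```yaml
# Hypothetical Deployment fragment (not part of the product samples).
# The namespace and the app label must match the DR Plan selector:
#   selector:
#     namespace: sample
#     labels:
#       app: sise
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sise-app        # hypothetical name
  namespace: sample
  labels:
    app: sise           # matched by the DR Plan selector
```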

  1. Edit /YAML/DR/SampleDisasterRecoveryPlan.yaml as follows to create a DR Plan for the application components in a given namespace.
    apiVersion: infoscale.veritas.com/v1
    kind: DisasterRecoveryPlan
    metadata:
      name: test-disaster-recovery-plan
    spec:
      # Name of the cluster that is treated as primary for this DR Plan
      primaryCluster: <ID of the cluster you want to back up>
      # (optional) Set force to true if the peer cluster(s) is not reachable
      # and the local cluster needs to perform a takeover
      force: false
      # List of member cluster(s) where this DR Plan can fail over.
      # The sequence of member clusters in this list denotes the relative
      # preference of the member cluster(s).
      # Must be a subset of the Global Cluster Membership.
      preferredClusterList: ["<ID of the cluster you want to back up>",
                        "<ID of the cluster where you want to back up>"]
      # Kind of corrective action in case of a disaster.
      # Defaults to "Manual" if not specified.
      clusterFailOverPolicy: Manual
      # Specify the namespace and, optionally, labels to decide what
      # needs to be part of the Disaster Recovery Plan
      selector:
        namespace: sample
        labels:
          app: sise
      # (optional) Pointer to manage storage replication
      dataReplicationPointer: test-datareplication
      # (optional) Pointer to manage DNS endpoints
      dnsPointer: test-dns

    Note:

    If you are configuring multiple Disaster Recovery Plans, ensure that no two plan names have identical first 24 characters. dataReplicationPointer is needed only if you have stateful applications that require data replication across peer clusters.
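
    To illustrate the naming rule, the following sketch compares two hypothetical plan names; these particular names conflict because they differ only after the 24th character.

```shell
# Hypothetical plan names used only to illustrate the naming rule
plan_a="dr-plan-sample-namespace-east"
plan_b="dr-plan-sample-namespace-west"

# Compare the first 24 characters of each name
if [ "${plan_a:0:24}" = "${plan_b:0:24}" ]; then
  echo "conflict: first 24 characters are identical"
else
  echo "ok: names differ within the first 24 characters"
fi
# prints "conflict: first 24 characters are identical"
```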

  2. Run the following command on the bastion node.

    oc apply -f /YAML/DR/SampleDisasterRecoveryPlan.yaml

  3. Wait until the command completes successfully and the following message appears.
    disasterrecoveryplan.infoscale.veritas.com/
                      <Name of Disaster recovery plan> created
  4. Run the following command on the bastion node.

    oc get drplan

  5. Review the output, which is similar to the following.

    NAME                     PREFERREDCLUSTERLIST                                 SPEC.PRIMARYCLUSTER
    <Name of the Disaster    ["<ID of the cluster you want to back up>",          <ID of the cluster
    Recovery Plan>            "<ID of the cluster where you want to back up>"]     you want to back up>

    STATUS.PRIMARYCLUSTER    DATAREPLICATION       DNS
    <ID of the current       <ID of the current
    cluster>                  cluster>
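
As the comments in the sample file note, a takeover is requested through the plan's attributes when the peer cluster(s) is unreachable. The fragment below is a hedged sketch only, not a documented procedure: it assumes that a takeover is initiated by updating spec.primaryCluster and force in the DR Plan and reapplying it with oc apply; all values are placeholders.

```yaml
# Hedged sketch, assuming the documented force attribute:
# per the sample's comments, force: true lets the local cluster
# perform a takeover when the peer cluster(s) is not reachable.
# Reapply the updated plan, for example:
#   oc apply -f /YAML/DR/SampleDisasterRecoveryPlan.yaml
spec:
  primaryCluster: <ID of the local cluster>   # placeholder
  force: true
```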