InfoScale™ 9.0 Support for Containers - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Arctera InfoScale™ on OpenShift
    1. Introduction
    2. Prerequisites
    3. Installing InfoScale on a system with Internet connectivity
      1. Using web console of OperatorHub
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML.tar
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    4. Installing InfoScale in an air gapped system
      1. Configuring cluster
      2. Adding nodes to an existing cluster
      3. Undeploying and uninstalling InfoScale
  5. Installing Arctera InfoScale™ on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading sidecar images
    5. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    6. Undeploying and uninstalling InfoScale
  6. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  7. Installing InfoScale DR on OpenShift
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  8. Installing InfoScale DR on Kubernetes
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  9. TECHNOLOGY PREVIEW: Disaster Recovery scenarios
    1. Migration
  10. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
  11. Troubleshooting
    1. Known Issues
    2. Limitations

Configuring cluster

After successfully installing the InfoScale operator, you can create a cluster.

  1. Edit the clusterInfo section of the sample /YAML/OpenShift/cr.yaml with your InfoScale specifications, as follows.

    Note:

    You can specify up to 16 worker nodes in cr.yaml. Although the cluster can be configured with a single Network Interface Card, Arctera recommends a minimum of two physical links for performance and High Availability (HA). The number of network links must be the same on all nodes. Optionally, you can enter node-level IP addresses. If IP addresses are not provided, the IP addresses of the OpenShift cluster nodes are used.

      
    spec:
      clusterID: <Optional - Enter an ID for this cluster>
      clusterInfo:
        - nodeName: <Name of the first node>
          ip:
            - <Optional - First IP address of the first node>
            - <Optional - Second IP address of the first node>
          excludeDevice:
            - <Optional - Device path of the disk on the node that you
              want to exclude from the InfoScale disk group>
        - nodeName: <Name of the second node>
          ip:
            - <Optional - First IP address of the second node>
            - <Optional - Second IP address of the second node>
          excludeDevice:
            - <Optional - Device path of the disk on the node that you
              want to exclude from the InfoScale disk group>
        - nodeName: <Name of the third node>
          ip:
            - <Optional - First IP address of the third node>
            - <Optional - Second IP address of the third node>
          excludeDevice:
            - <Optional - Device path of the disk on the node that you
              want to exclude from the InfoScale disk group>
        ...

    You can add up to 16 nodes.

    Note:

    Do not enclose parameter values in angle brackets (<>). For example, if Primarynode is the name of the first node, then for nodeName: <Name of the first node>, enter nodeName: Primarynode. InfoScale on OpenShift is a keyless deployment.
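    For illustration, a completed clusterInfo section might look like the following. The node names, IP addresses, and device path below are hypothetical example values, not defaults; substitute the values for your environment.

    ```yaml
    spec:
      clusterID: "1234"
      clusterInfo:
        - nodeName: worker-1            # hypothetical node name
          ip:
            - 192.168.10.11             # first private network link
            - 192.168.20.11             # second private network link
          excludeDevice:
            - /dev/sdd                  # disk to keep out of the InfoScale disk group
        - nodeName: worker-2
          ip:
            - 192.168.10.12
            - 192.168.20.12
          excludeDevice:
            - /dev/sdd
    ```

    Note that each node lists the same number of IP addresses, one per private network link, which satisfies the requirement that the number of network links be the same on all nodes.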

  2. Run the following command on the bastion node.

    oc create -f /YAML/OpenShift/cr.yaml

  3. Run the following command on the bastion node to find the name and namespace of the cluster.

    oc get infoscalecluster

    Use the namespace from the output, which is similar to the following.

    NAME                 NAMESPACE        VERSION   STATE     AGE
    infoscalecluster-dev infoscale-vtas   8.0.100   Running   1m15s
  4. Run the following command on the bastion node to verify whether the pods are created successfully.

    oc get pods -n infoscale-vtas

    An output similar to the following indicates that the pods were created successfully.

    NAME                                                READY STATUS  RESTARTS     AGE
    infoscale-csi-controller-1234-0                     5/5   Running 0            19h
    infoscale-csi-node-1234-gf9pf                       2/2   Running 0            19h
    infoscale-csi-node-1234-gg5dq                       2/2   Running 0            18h
    infoscale-csi-node-1234-nmt85                       2/2   Running 0            18h
    infoscale-csi-node-1234-r6jv8                       2/2   Running 0            19h
    infoscale-csi-node-1234-w5bln                       2/2   Running 2            19h
    infoscale-fencing-controller-1234-864468775c-4sbxw  1/1   Running 0            18h
    infoscale-fencing-enabler-1234-8b65z                1/1   Running 0            19h
    infoscale-fencing-enabler-1234-bkbbh                1/1   Running 3 (18h ago)  18h
    infoscale-fencing-enabler-1234-jvzjk                1/1   Running 5 (18h ago)  18h
    infoscale-fencing-enabler-1234-pxfmt                1/1   Running 4 (18h ago)  19h
    infoscale-fencing-enabler-1234-qmjrv                1/1   Running 0            19h
    infoscale-sds-1234-e383247e62b56585-2xxvh           1/1   Running 1            19h
    infoscale-sds-1234-e383247e62b56585-cnvkg           1/1   Running 0            18h
    infoscale-sds-1234-e383247e62b56585-l5z7m           1/1   Running 0            19h
    infoscale-sds-1234-e383247e62b56585-xlkf8           1/1   Running 0            18h
    infoscale-sds-1234-e383247e62b56585-zkpgt           1/1   Running 0            19h
    infoscale-sds-operator-bb55cfc4d-pclt5              1/1   Running 0            18h
    licensing-operator-5fd897f68f-7p2f7                 1/1   Running 0            18h
    nfd-controller-manager-6bbf6df4d9-dbxgl             2/2   Running 2            20h
    nfd-master-2h7x6                                    1/1   Running 0            19h
    nfd-master-kclkq                                    1/1   Running 0            19h
    nfd-master-npjzm                                    1/1   Running 0            19h
    nfd-worker-8q4lz                                    1/1   Running 0            19h
    nfd-worker-cvkqp                                    1/1   Running 0            19h
    nfd-worker-js7tt                                    1/1   Running 1            19h
    special-resource-controller-manager-86b6c7-wv2tc    2/2   Running 0            20h

After a successful InfoScale deployment, a disk group is automatically created. You can now create Persistent Volumes and Persistent Volume Claims (PVs/PVCs) by using the corresponding storage class.
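For example, you can request storage by creating a PVC that references the InfoScale storage class and applying it with oc create -f, as in the earlier steps. The PVC name, namespace, size, and storage class name (csi-infoscale-sc) below are hypothetical values; use the storage class name reported by oc get storageclass on your cluster.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                  # hypothetical PVC name
  namespace: infoscale-vtas       # namespace from 'oc get infoscalecluster'
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                # hypothetical size
  storageClassName: csi-infoscale-sc   # hypothetical InfoScale storage class name
```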