InfoScale™ 9.0 Support for Containers - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Arctera InfoScale™ on OpenShift
    1. Introduction
    2. Prerequisites
    3. Installing InfoScale on a system with Internet connectivity
      1. Using web console of OperatorHub
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML.tar
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    4. Installing InfoScale in an air gapped system
      1. Configuring cluster
      2. Adding nodes to an existing cluster
      3. Undeploying and uninstalling InfoScale
  5. Installing Arctera InfoScale™ on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    6. Undeploying and uninstalling InfoScale
  6. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  7. Installing InfoScale DR on OpenShift
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  8. Installing InfoScale DR on Kubernetes
    1. Introduction
    2. Prerequisites
    3. External dependencies
    4. Installing InfoScale DR
      1. Configuring DR Operator
      2. Configuring Global Cluster Membership (GCM)
      3. Configuring Data Replication
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  9. TECHNOLOGY PREVIEW: Disaster Recovery scenarios
    1. Migration
  10. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
  11. Troubleshooting
    1. Known Issues
    2. Limitations

Installing InfoScale in an air gapped system

An air gapped system is not connected to the Internet. You must therefore prepare the system before you install InfoScale.

Before installing InfoScale on an air gapped system, mirror the Node Feature Discovery (NFD) operator catalog. You can perform the mirroring and the NFD installation from any OpenShift cluster node that has Internet connectivity and is also connected to the air gapped system.

Note:

In the following steps, ${JUMP_HOST}:5000 refers to the custom registry on the same network as the air gapped system. JUMP_HOST is a system that is connected to the Internet and has a container image registry set up. 5000 is an indicative port number.
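If a registry is not already set up on JUMP_HOST, a minimal sketch of running one with podman is shown below. This is for illustration only and assumes a throwaway, unauthenticated registry; use your organization's registry and security settings in production.

    # Illustrative only: run a basic local registry on port 5000 on the jump host
    podman run -d --name mirror-registry -p 5000:5000 \
      -v /opt/registry/data:/var/lib/registry:z \
      docker.io/library/registry:2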

Mirroring the Node Feature Discovery (NFD) operator catalog

  1. Run the following command on the bastion node to authenticate with registry.redhat.io and your custom registry.

    export REGISTRY_AUTH_FILE=<path_to_pull_secret>/pull-secret.json

  2. Run the following command on the bastion node to set the JUMP_HOST environment variable.

    export JUMP_HOST="<IP address of custom registry>"

  3. Run the following command on the bastion node to disable the sources for the default catalogs.

    oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

  4. Run the following command on the bastion node to prune the source index so that it retains only the nfd package.

    opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.9 -p nfd -t ${JUMP_HOST}:5000/catalog/redhat-operator-index:v4.9

  5. Run the following command on the bastion node to push the Node Feature Discovery Operator index image to your custom registry.

    podman push ${JUMP_HOST}:5000/catalog/redhat-operator-index:v4.9

  6. Run the following command on the bastion node to mirror the Node Feature Discovery Operator.

    oc adm catalog mirror \
      --insecure=true \
      --index-filter-by-os='linux/amd64' \
      -a ${REGISTRY_AUTH_FILE} \
      ${JUMP_HOST}:5000/catalog/redhat-operator-index:v4.9 \
      ${JUMP_HOST}:5000/operators

  7. Inspect the manifests directory that is generated in your current directory. The directory name has the format manifests-<index_image_name>-<random_number>; for example, manifests-redhat-operator-index-1638334101.
  8. Run the following command on the bastion node to create the ImageContentSourcePolicy (ICSP) object by specifying imageContentSourcePolicy.yaml in your manifests directory.

    oc create -f <path to the manifests directory for your mirrored content>/imageContentSourcePolicy.yaml

  9. Run the following command on the bastion node to mirror the images listed in mapping.txt, using REGISTRY_AUTH_FILE for authentication.

    oc image mirror -f <path/to/manifests/dir>/mapping.txt -a ${REGISTRY_AUTH_FILE} --insecure

  10. Copy the following content and save it as catalogSource_redhat_operator.yaml.
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: redhat-operator-index
      namespace: openshift-marketplace
    spec:
      image: ${JUMP_HOST}:5000/operators/catalog-redhat-operator-index:v4.9
      sourceType: grpc
      displayName: My Operator Catalog
      publisher: <publisher_name>
      updateStrategy:
        registryPoll:
          interval: 30m
    
  11. Run the following command on the bastion node to create the CatalogSource object.

    oc apply -f catalogSource_redhat_operator.yaml

  12. Run the following command on the bastion node to check the status of pods.

    oc get pods -n openshift-marketplace

    Review the output. The status of the pods must be Running.

    NAME                                      READY   STATUS    RESTARTS   AGE
    certified-operator-index-bq7bt            1/1     Running   0          17h
    marketplace-operator-d6985d479bc-7zbckj   1/1     Running   0          23d
    redhat-operator-index-785tv               1/1     Running   0          17h
    
  13. Run the following command on the bastion node to check the package manifest.

    oc get packagemanifest -n openshift-marketplace

    Review output similar to the following.

    NAME                       DISPLAY              TYPE PUBLISHER      AGE
    certified-operator-index   Openshift Telco Docs grpc Openshift Docs 20h
    redhat-operator-index      Openshift Telco Docs grpc Openshift Docs 20h
    
  14. Run the following commands on the bastion node to check the catalog source.

    oc get catalogsource -n openshift-marketplace

    oc get pods -n openshift-marketplace

  15. Log in to the OCP web console and click Operators > OperatorHub. The mirrored operator must be listed here.
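Optionally, you can confirm that the mirrored repositories reached the custom registry by querying the registry catalog directly. The following is a sketch that assumes the registry exposes the Docker Registry HTTP API v2 and allows unauthenticated catalog queries.

    # Illustrative check; add -u <user>:<password> if your registry requires authentication
    curl -k https://${JUMP_HOST}:5000/v2/_catalog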

Installing Node Feature Discovery (NFD) Operator

  1. Connect to the OpenShift console.
  2. In the left frame, click Operators > OperatorHub. You can select and install the operator here.
  3. In Filter by keyword, enter Node Feature Discovery. Node Feature Discovery is listed.

    Note:

    If the Operator is already installed, it is indicated. See the last step to apply Cert-Manager.

  4. Select the Node Feature Discovery Operator and follow onscreen instructions to install.
  5. After a successful installation, Node Feature Discovery is listed under Operators > Installed Operators in the left frame.
  6. In Node Feature Discovery, a box appears under Provided APIs.
  7. Click Create instance. Edit the values of the NodeFeatureDiscovery CR.
  8. Click Create.
  9. To verify whether the installation is successful and to check the status of the NFD instances on each node, run the following command on the bastion node.

    oc get pods -A | grep nfd

    Review sample output similar to the following. Pods with the nfd- prefix belong to the NFD Operator.

    openshift-operators   nfd-master-4hqbq    1/1   Running     0   62m
    openshift-operators   nfd-master-brt9f    1/1   Running     0   62m
    openshift-operators   nfd-master-pplqr    1/1   Running     0   62m
    openshift-operators   nfd-operator-59454bd5c9-gf6h7 1/1  Running 0 5d2h
    openshift-operators   nfd-worker-8l6wh    1/1   Running     0   62m
    openshift-operators   nfd-worker-bngbq    1/1   Running     0   62m
    openshift-operators   nfd-worker-d5btm    1/1   Running     0   62m
    openshift-operators   nfd-worker-hx6xl    1/1   Running     0   62m
    

Note:

You can refer to the OpenShift documentation for Node Feature Discovery.
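As an additional check, you can verify that NFD has started labeling the worker nodes. The following is a sketch; the exact set of feature labels varies with the node hardware.

    # NFD applies labels with the feature.node.kubernetes.io prefix
    oc describe node <worker node name> | grep feature.node.kubernetes.io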

Installing cert-manager

  1. Pull the following images
    • quay.io/jetstack/cert-manager-cainjector:v1.6.1

    • quay.io/jetstack/cert-manager-controller:v1.6.1

    • quay.io/jetstack/cert-manager-webhook:v1.6.1

  2. Tag and push the images to the Custom registry at <IP address of custom registry>/veritas/.
  3. Edit /YAML/OpenShift/air-gapped-systems/cert-manager.yaml as follows:
    • Replace image: 192.168.1.21/veritas/cert-manager-cainjector:v1.6.1 with image: <IP address of custom registry>/veritas/cert-manager-cainjector:v1.6.1.

    • Replace image: 192.168.1.21/veritas/cert-manager-controller:v1.6.1 with image: <IP address of custom registry>/veritas/cert-manager-controller:v1.6.1.

    • Replace image: 192.168.1.21/veritas/cert-manager-webhook:v1.6.1 with image: <IP address of custom registry>/veritas/cert-manager-webhook:v1.6.1.

  4. Run the following command on the bastion node to install cert-manager.

    oc apply -f /YAML/OpenShift/air-gapped-systems/cert-manager.yaml

  5. Run the following command on the bastion node to check the status of pods.

    oc get all -n cert-manager

    Status similar to the following indicates a successful installation.

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/cert-manager-5986867bb9-v95t7             1/1     Running   0          56s
    pod/cert-manager-cainjector-b475c485b-bxj89   1/1     Running   0          56s
    pod/cert-manager-webhook-55b6c54579-95gcw     1/1     Running   0          56s

    NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/cert-manager           ClusterIP   172.30.72.54    <none>        9402/TCP   57s
    service/cert-manager-webhook   ClusterIP   172.30.180.10   <none>        443/TCP    57s

    NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/cert-manager              1/1     1            1           57s
    deployment.apps/cert-manager-cainjector   1/1     1            1           57s
    deployment.apps/cert-manager-webhook      1/1     1            1           57s

    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/cert-manager-5986867bb9             1         1         1       56s
    replicaset.apps/cert-manager-cainjector-b475c485b   1         1         1       56s
    replicaset.apps/cert-manager-webhook-55b6c54579     1         1         1       56s
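Optionally, before you proceed, you can wait for the cert-manager deployments to report Available. The following is a sketch; the timeout value is an arbitrary example.

    oc wait --for=condition=Available deployment --all -n cert-manager --timeout=180s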

You must install the Special Resource Operator (SRO) before you install Arctera InfoScale™. After the SRO is installed, the system is ready for the Arctera InfoScale™ installation.

Installing Special Resource Operator (SRO) and InfoScale Operator

  1. Download YAML.tar from the Arctera Download Center.
  2. Untar YAML.tar.

    After you untar YAML.tar, the folders /YAML/OpenShift/, /YAML/OpenShift/air-gapped-systems, /YAML/DR, and /YAML/Kubernetes are created. Each folder contains the files required for installation.

  3. On the bastion node -
    • Download registry.redhat.io/openshift4/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream, then tag and push it to the custom registry as <IP address of custom registry>/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream.

    • Download registry.redhat.io/openshift4/ose-kube-rbac-proxy, then tag and push it to the custom registry as <IP address of custom registry>/ose-kube-rbac-proxy:v4.9.

    • Edit /YAML/OpenShift/air-gapped-systems/sro.yaml as follows:

      Replace image: 192.168.1.21/veritas/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream with image: <IP address of custom registry>/special-resource-rhel8-operator:v4.9.0-202111161916.p0.gf6ed01a.assembly.stream.

      Replace image: 192.168.1.21/veritas/ose-kube-rbac-proxy:v4.9 with image: <IP address of custom registry>/ose-kube-rbac-proxy:v4.9.

    • Run the following command.

      oc create -f /YAML/OpenShift/air-gapped-systems/sro.yaml

    • Run oc create -f /YAML/OpenShift/air-gapped-systems/sr.yaml to create the Special Resource.

  4. Run the following commands and review the output to verify whether the SR creation and the SRO installation are successful.
    • oc get pods -n openshift-special-resource-operator

      Output similar to the following indicates a successful installation.

      NAME                                                    READY   STATUS    RESTARTS   AGE
      special-resource-controller-manager-66c8fc64b5-9wv6l    1/1     Running   0          2m35s
      

      Note:

      The name in the output here is used in the following command.

    • oc logs special-resource-controller-manager-66c8fc64b5-9wv6l -n openshift-special-resource-operator -c manager

      Output similar to the following indicates a successful installation.

      <timestamp>    INFO    status    RECONCILE SUCCESS: Reconcile
    • oc get SpecialResource

      Output similar to the following indicates a successful installation.

      NAME                        AGE
      special-resource-preamble   2m24s
      

Applying Licenses

  1. Run oc create -f /YAML/OpenShift/air-gapped-systems/lico.yaml on the bastion node.
  2. Run oc get pods -n infoscale-vtas|grep -i licensing on the bastion node to verify whether lico.yaml is successfully applied.

    An output similar to the following indicates that lico.yaml is successfully applied.

    NAME                                  READY STATUS  RESTARTS AGE
    licensing-operator-fbd8c7dc4-rcfz5    1/1   Running 0        2m
    
    
  3. After lico.yaml is successfully applied, licensing endpoints must be available.

    Run oc describe service/lico-webhook-service -n infoscale-vtas|grep Endpoints on the master node and review the output.

  4. Run the command again until you get an output in the following format.
    Endpoints: <IP address of the endpoint>:<Port number>
  5. Edit /YAML/OpenShift/air-gapped-systems/license_cr.yaml to set the license edition. The default license edition is Developer; you can change licenseEdition as needed. If you want to configure Disaster Recovery (DR), the license edition must be Trialware or SubscriptionEnterprise.
    apiVersion: vlic.veritas.com/v1
    kind: License
    metadata:
      name: license-dev
    spec:
      # valid licenseEdition values are Developer, Trialware,
      # SubscriptionStorage or SubscriptionEnterprise
      licenseEdition: "Developer"
    
  6. Run oc create -f /YAML/OpenShift/air-gapped-systems/license_cr.yaml on the bastion node.
  7. Run oc get licenses on the bastion node to verify whether licenses have been successfully applied.

    An output similar to the following indicates that license_cr.yaml is successfully applied.

    NAME          NAMESPACE   LICENSE-EDITION   AGE
    license                   DEVELOPER         27s
    
    

All information about the worker nodes must be added to the cr.yaml file. All worker nodes become part of the InfoScale cluster after cr.yaml is applied. After you download and untar YAML.tar, all files required for installation are available.
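To gather the worker node details that go into cr.yaml, you can list the nodes and their addresses as shown in the following sketch; the exact fields that cr.yaml requires depend on the template shipped in YAML.tar.

    # List worker nodes with their internal IP addresses and OS details
    oc get nodes -o wide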

Note:

You must download the images required for installation from the Red Hat registry and push them to the custom registry.

Optionally, configure a new user - infoscale-admin - associated with a Role-Based Access Control (RBAC) clusterrole defined in infoscale-admin-role.yaml, to deploy InfoScale and its dependent components. When configured, the infoscale-admin user has cluster-wide access to only those resources that are needed to deploy InfoScale and its dependent components, such as SRO, NFD, and Cert-Manager, in the desired namespaces.

To provide a secure and isolated environment for the InfoScale deployment and associated resources, protect the namespace associated with these resources from access by all other users (except the cluster super user) by implementing appropriate RBAC.

Run the following commands on the bastion node to create the infoscale-admin user, create a new project, and assign a role or clusterrole to infoscale-admin. You must be logged in as a super user.

  1. oc new-project <New Project name>

    A new project is created for InfoScale deployment.

  2. oc adm policy add-role-to-user admin infoscale-admin

    The following output indicates that administrator privileges are assigned to the new user - infoscale-admin - within the new project.

    clusterrole.rbac.authorization.k8s.io/admin added: "infoscale-admin"
  3. oc apply -f /YAML/OpenShift/air-gapped-systems/infoscale-admin-role.yaml

    The following output indicates that a clusterrole is created.

    clusterrole.rbac.authorization.k8s.io/infoscale-admin-role created
  4. oc adm policy add-cluster-role-to-user infoscale-admin-role infoscale-admin

    The following output indicates that the clusterrole is associated with infoscale-admin.

    clusterrole.rbac.authorization.k8s.io/infoscale-admin-role added: "infoscale-admin"
    

You must perform all installation activities by logging in as infoscale-admin.
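For example, a login of the following form can be used before you run the remaining steps. This is a sketch; it assumes that an identity provider with password authentication is configured for infoscale-admin.

    oc login https://<API server URL>:6443 -u infoscale-admin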

Download the following images -

  • registry.connect.redhat.com/veritas-technologies/infoscale-sds-operator:8.0.100-rhel8

  • registry.connect.redhat.com/veritas-technologies/infoscale-fencing:2.0.0.0000-rhel8

  • registry.connect.redhat.com/veritas-technologies/infoscale-csi:2.0.0.0000-rhel8

  • registry.connect.redhat.com/veritas-technologies/infoscale-licensing:8.0.100-rhel8

  • registry.connect.redhat.com/veritas-technologies/infoscale-dr-operator:1.0.0.0000-rhel8

  • registry.connect.redhat.com/veritas-technologies/infoscale:8.0.100-rhel8.4-<kernel release version>, where <kernel release version> is the output of uname -r on the worker node.

  • registry.redhat.io/openshift4/ose-csi-driver-registrar:v4.3

  • registry.redhat.io/openshift4/ose-csi-external-provisioner-rhel8:v4.7

  • registry.redhat.io/openshift4/ose-csi-external-attacher:v4.7

  • registry.redhat.io/openshift4/ose-csi-external-resizer-rhel8:v4.7

  • registry.redhat.io/openshift4/ose-csi-external-snapshotter-rhel8:v4.7

  • docker.io/kvaps/kube-fencing-switcher:v2.1.0

  • docker.io/kube-fencing-controller:v2.1.0

After you download the images, tag them and push them to the custom registry.
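For example, the tag-and-push sequence for one of the images is sketched below; repeat it for each image in the list. The target path in the custom registry is an assumption based on the iso.yaml example that follows.

    podman pull registry.connect.redhat.com/veritas-technologies/infoscale-csi:2.0.0.0000-rhel8
    podman tag registry.connect.redhat.com/veritas-technologies/infoscale-csi:2.0.0.0000-rhel8 \
      <IP address of custom registry>/infoscale-csi:2.0.0.0000-rhel8
    podman push <IP address of custom registry>/infoscale-csi:2.0.0.0000-rhel8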

  1. Edit /YAML/OpenShift/air-gapped-systems/iso.yaml as follows:

    Replace image: 192.168.1.21/veritas/infoscale-sds-operator:8.0.100-rhel8 with image: <IP address of custom registry>/infoscale-sds-operator:8.0.100-rhel8.

  2. Run the following command on the bastion node to install Arctera InfoScale™.

    oc create -f /YAML/OpenShift/air-gapped-systems/iso.yaml

  3. Run the following command on the bastion node to verify whether the installation is successful.

    oc get pods -n infoscale-vtas|grep infoscale-sds-operator

    An output similar to the following indicates a successful installation. READY 1/1 indicates that Storage cluster resources can be created.

    NAME                                   READY STATUS  RESTARTS AGE
    infoscale-sds-operator-bb55cfc4d-pclt5 1/1   Running 0        20h