Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.300)
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Creating multiple InfoScale clusters
    6. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    7. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    8. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Downloading Installer
    4. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    5. Applying licenses
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Creating multiple InfoScale clusters
    8. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Renewing with an external CA certificate
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Renewing with an external CA certificate
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
    13. Creating ephemeral volumes
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
    3. Monitoring InfoScale
    4. Configuring Alerts for monitoring InfoScale
    5. Draining InfoScale nodes
    6. Using InfoScale toolset
  14. Migrating applications to InfoScale
    1. Migrating applications to InfoScale from earlier versions
  15. Troubleshooting
    1. Adding a SORT data collector utility
    2. Collecting logs by using SORT Data Collector
    3. Approving certificate signing requests (csr) for OpenShift
    4. Cert Renewal related
    5. Known Issues
    6. Limitations

Adding nodes to an existing cluster

Complete the following steps to add nodes to an existing InfoScale cluster:

  1. Ensure that you add the worker nodes to the Kubernetes cluster.
  2. Run the following command on the master node to check whether the newly added node is Ready.

    kubectl get nodes

    Review output similar to the following:

    NAME           STATUS  ROLES                 AGE   VERSION
    worker-node-1  Ready   control-plane,master  222d  v1.21.0
    worker-node-2  Ready   worker                222d  v1.21.0
    worker-node-3  Ready   worker                222d  v1.21.0
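    If you want to script this readiness check, you can parse the kubectl get nodes output. The sketch below embeds the sample output shown above for illustration; against a live cluster you would assign output="$(kubectl get nodes)" instead.

    ```shell
    # Sample 'kubectl get nodes' output; in practice use:
    #   output="$(kubectl get nodes)"
    output='NAME           STATUS  ROLES                 AGE   VERSION
    worker-node-1  Ready   control-plane,master  222d  v1.21.0
    worker-node-2  Ready   worker                222d  v1.21.0
    worker-node-3  Ready   worker                222d  v1.21.0'

    # Count nodes whose STATUS column is not Ready; 0 means all nodes are Ready.
    not_ready=$(echo "$output" | awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }')
    echo "nodes not ready: $not_ready"
    ```

    A result of 0 confirms that the newly added node has joined the Kubernetes cluster and is Ready.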
    
    
  3. To add new nodes to an existing cluster, the cluster must be in a running state. Run the following command on the master node to verify.

    kubectl get infoscalecluster -A

    See the STATE in output similar to the following:

    NAMESPACE    NAME                             VERSION  CLUSTERID     STATE    AGE
    <Namespace>  <Name of the InfoScale cluster>  8.0.300  <Cluster ID>  Running  25h
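    You can also extract the STATE column from a script, for example to poll until the cluster is Running before proceeding. This is a sketch based on the column layout shown above; the cluster name my-cluster and the sample row are illustrative only.

    ```shell
    # Print the STATE column (5th field) for a named InfoScale cluster,
    # given the output of 'kubectl get infoscalecluster -A'.
    cluster_state() {
      # $1 = command output, $2 = cluster name
      echo "$1" | awk -v name="$2" '$2 == name { print $5 }'
    }

    # Sample output; in practice: out="$(kubectl get infoscalecluster -A)"
    out='NAMESPACE    NAME         VERSION  CLUSTERID  STATE    AGE
    infoscale    my-cluster   8.0.300  1234       Running  25h'

    state=$(cluster_state "$out" my-cluster)
    echo "state: $state"

    # Against a live cluster, a polling loop could look like:
    #   until [ "$(cluster_state "$(kubectl get infoscalecluster -A)" my-cluster)" = "Running" ]; do
    #     sleep 10
    #   done
    ```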
    
  4. Edit the clusterInfo section of the sample /YAML/Kubernetes/cr.yaml file to add information about the new nodes.

    In this example, worker-node-1 and worker-node-2 exist. worker-node-3 is being added.

    Note:

    If you specify IP addresses, the number of IP addresses for the new nodes must be the same as the number of IP addresses for the existing nodes.

    metadata:
      name: <Assign a name to this cluster>
      namespace: <The namespace where you want to create this cluster>
    spec:
      clusterID: <Optional - Enter an ID for this cluster.
                  The ID can be any number between 1 and 65535>
      isSharedStorage: true
      clusterInfo:
        - nodeName: <Name of the first node>
          ip:
            - <Optional - First IP address of the first node>
            - <Optional - Second IP address of the first node>
          excludeDevice:
            - <Optional - Device path of the disk on the node that you
               want to exclude from the InfoScale disk group>
        - nodeName: <Name of the second node>
          ip:
            - <Optional - First IP address of the second node>
            - <Optional - Second IP address of the second node>
          excludeDevice:
            - <Optional - Device path of the disk on the node that you
               want to exclude from the InfoScale disk group>
        - nodeName: <Name of the third node>
          ip:
            - <Optional - First IP address of the third node>
            - <Optional - Second IP address of the third node>
          excludeDevice:
            - <Optional - Device path of the disk on the node that you
               want to exclude from the InfoScale disk group>
        # ... You can add up to 16 nodes.
      enableScsi3pr: <Enter true to enable SCSI3 persistent reservation>
      fencingDevice: ["<Hardware path to the first fencing device>",
                      "<Hardware path to the second fencing device>",
                      "<Hardware path to the third fencing device>"]
      encrypted: false
      sameEnckey: false
      customImageRegistry: <Custom registry name /
                           <IP address of the custom registry>:<port number>>
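    For reference, a filled-in clusterInfo fragment for the three-node example in this procedure might look as follows. The node names match this example; the cluster ID and IP addresses are illustrative only. Note that every node lists the same number of IP addresses, as the Note above requires.

    ```yaml
    spec:
      clusterID: 1001
      isSharedStorage: true
      clusterInfo:
        - nodeName: worker-node-1
          ip:
            - 192.168.10.11
            - 192.168.20.11
        - nodeName: worker-node-2
          ip:
            - 192.168.10.12
            - 192.168.20.12
        - nodeName: worker-node-3   # the node being added
          ip:
            - 192.168.10.13
            - 192.168.20.13
    ```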
     
  5. Run the following command on the master node to initiate the add node workflow.

    kubectl apply -f /YAML/Kubernetes/cr.yaml

  6. While node addition is in progress, you can run the following commands on the master node to monitor it.

    a. kubectl get infoscalecluster -A

    See the STATE in output similar to the following. ProcessingAddNode indicates that the node is being added.

    NAMESPACE    NAME                             VERSION  CLUSTERID     STATE              AGE
    <Namespace>  <Name of the InfoScale cluster>  8.0.300  <Cluster ID>  ProcessingAddNode  25h

    b. kubectl describe infoscalecluster -n <Namespace>

    Output similar to the following indicates the cluster status during node addition. The cluster state is Degraded while node addition is in progress.

    Cluster Name:  infoscalecluster-dev
      Cluster Nodes:
        Exclude Device:
          <Excluded device path 1>
          <Excluded device path 2>
        Node Name:  worker-node-1
        Role:       Joined,Master
        Node Name:  worker-node-2
        Role:       Joined,Slave
        Node Name:  worker-node-3
        Role:       Out of Cluster
      Cluster State:  Degraded
      enableScsi3pr:  false
      Images:
        Csi:
          Csi External Attacher Container:  csi-attacher:v3.1.0
    
  7. Run the following command on the master node to verify that the pods are created successfully. It might take some time for the pods to be created.

    kubectl get pods -n infoscale-vtas

    Output similar to the following indicates a successful creation.

    NAME                                                 READY  STATUS   RESTARTS  AGE
    infoscale-csi-controller-35359-0                     5/5    Running  0         12d
    infoscale-csi-node-35359-7rjv9                       2/2    Running  0         3d20h
    infoscale-csi-node-35359-dlrxh                       2/2    Running  0         4d21h
    infoscale-csi-node-35359-dmxwq                       2/2    Running  0         12d
    infoscale-csi-node-35359-j9x7v                       2/2    Running  0         12d
    infoscale-csi-node-35359-w6wf2                       2/2    Running  0         3d20h
    infoscale-fencing-controller-35359-6cc6cd7b4d-l7jtc  1/1    Running  0         3d21h
    infoscale-fencing-enabler-35359-9gkb4                1/1    Running  0         12d
    infoscale-fencing-enabler-35359-gwn7w                1/1    Running  0         3d20h
    infoscale-fencing-enabler-35359-jrf2l                1/1    Running  0         12d
    infoscale-fencing-enabler-35359-qhzdt                1/1    Running  1         3d20h
    infoscale-fencing-enabler-35359-zqdvj                1/1    Running  1         4d21h
    infoscale-sds-35359-ed05b7abb28053ad-7svqz           1/1    Running  0         13d
    infoscale-sds-35359-ed05b7abb28053ad-c272q           1/1    Running  0         13d
    infoscale-sds-35359-ed05b7abb28053ad-g4rbj           1/1    Running  0         4d21h
    infoscale-sds-35359-ed05b7abb28053ad-hgf6h           1/1    Running  0         3d20h
    infoscale-sds-35359-ed05b7abb28053ad-wk5ph           1/1    Running  0         3d20h
    infoscale-sds-operator-7fb7cd57c-rskms               1/1    Running  0         3d20h
    infoscale-licensing-operator-756c854fdb-xvdnr        1/1    Running  0         13d
  8. Run the following command on the master node to verify that the cluster is 'Running'.

    kubectl get infoscalecluster -A

    See the STATE in output similar to the following:

    NAMESPACE    NAME                             VERSION  CLUSTERID     STATE    AGE
    <Namespace>  <Name of the InfoScale cluster>  8.0.300  <Cluster ID>  Running  25h
    
  9. Run the following command on the master node to verify whether the cluster is 'Healthy'.

    kubectl describe infoscalecluster <Cluster Name> -n <Namespace>

    Check the Cluster State in output similar to the following:

    Status:
      Cluster Name:  <Cluster Name>
      Cluster Nodes:
        Node Name:    worker-node-1
        Role:         Joined,Master
        Node Name:    worker-node-2
        Role:         Joined,Slave
        Node Name:    worker-node-3
        Role:         Joined,Slave
      Cluster State:  Healthy
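
This final health check can also be scripted by extracting the Cluster State field from the describe output. The sketch below runs against embedded sample text; the cluster name my-cluster is illustrative, and against a live cluster you would assign desc="$(kubectl describe infoscalecluster <Cluster Name> -n <Namespace>)".

```shell
# Sample describe output; replace with the live kubectl describe call.
desc='Status:
  Cluster Name:  my-cluster
  Cluster State:  Healthy'

# Split each line on a colon followed by spaces and print the value
# of the Cluster State field.
state=$(echo "$desc" | awk -F':[ ]+' '/Cluster State/ { print $2 }')
echo "cluster state: $state"
```

A value of Healthy confirms that all nodes, including the newly added one, have joined the InfoScale cluster.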