Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air-gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    7. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    4. Applying licenses
    5. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    8. Installing InfoScale by using the plugin
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Installing InfoScale by using the plugin

On an already deployed Kubernetes cluster with storage provisioned, you can download the InfoScale installers and install InfoScale. The download includes a plugin with which you can deploy the mandatory operators and then configure an InfoScale cluster.

As a prerequisite, git must be installed on the system. Command autocompletion is supported for the bash, fish, powershell, and zsh shells; see the documentation of your shell to enable it. For help, run kubectl-infoscale completion bash/fish/zsh/powershell --help.

  1. Download kubectl_plugin_8.0.220.tar.gz from the Veritas Download Center and extract the tar file.
  2. Copy the kubectl-plugin binary to a location in $PATH.
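    Steps 1 and 2 can be sketched as a short bash session. A scratch directory and a stub tarball stand in for the real kubectl_plugin_8.0.220.tar.gz from the Veritas Download Center so the flow is reproducible; on a real system, skip the stub and use the downloaded file.

```shell
#!/usr/bin/env bash
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Stub only: fabricate the tarball that the Download Center would provide.
printf '#!/bin/sh\necho "kubectl-infoscale stub"\n' > kubectl-plugin
tar -czf kubectl_plugin_8.0.220.tar.gz kubectl-plugin && rm kubectl-plugin

# Step 1: extract the tar file.
tar -xzf kubectl_plugin_8.0.220.tar.gz

# Step 2: copy the kubectl-plugin binary to a $PATH location.
mkdir -p "$workdir/bin"
install -m 0755 kubectl-plugin "$workdir/bin/kubectl-infoscale"
export PATH="$workdir/bin:$PATH"
command -v kubectl-infoscale   # confirms the plugin is now found on $PATH
```

    On a production node you would typically install the binary to a shared $PATH location such as /usr/local/bin instead of a scratch directory.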
  3. On a Kubernetes cluster, run the following command to download the InfoScale tar file.

    kubectl-infoscale fetch-yaml

    Note:

    This command downloads the latest installer.

    The following output indicates a successful download.

    successfully downloaded InfoScale 8.0.220

    You can run the following commands:

    configure
      Configures the Kubernetes cluster.

    delete
      Deletes the InfoScale cluster along with the third-party dependencies.

    deploy
      Deploys the InfoScale cluster with operators along with third-party dependencies.

    show
      Shows commands for displaying various operations.

    get-available-infoscale-version
      Gets available InfoScale versions from SORT.

    help
      Displays help on the available commands.

    scale
      Scales up the InfoScale cluster (adds storage).

  4. Run the following command to deploy operators which are mandatory for InfoScale deployment.

    kubectl-infoscale deploy dependent-operators

    Review output similar to the following.

    Step 1/2 started 
     Deploying: cert-manager
    Step 2/2 started 
     Deploying: node-feature-discovery
    
    
  5. Run the following command to verify whether the dependent operators are installed successfully.

    kubectl-infoscale show get-resource-info --namespace cert-manager,node-feature-discovery

  6. Run the following command to deploy licensing and InfoScale operators.

    kubectl-infoscale deploy operator --image-registry <registry_name>

    An example of a registry name is infoscale_registry.vxindia.veritas.com/8_0_200/lxrt-8.0-vike2-2022-10-13b/veritas.

    Review output similar to the following.

    Step 1/2 started 
     Deploying: infoscale-licensing-operator
    Step 2/2 started 
     Deploying: infoscale-sds-operator
    
  7. Run the following command to verify whether licensing and InfoScale operators are installed successfully.

    kubectl-infoscale show get-resource-info --namespace infoscale-vtas

    Review output similar to the following.

    Deployment State: Ready
    
    Resource: all           Namespace: infoscale-vtas
    NAME                                               READY   STATUS    RESTARTS   AGE
    pod/infoscale-licensing-operator-c895d9dd8-lpzdl   1/1     Running   0          3d3h
    pod/infoscale-sds-operator-55c78f4db9-pmh85        1/1     Running   0          3d3h
    
    NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service/iso-webhook-service    ClusterIP   10.99.89.137    <none>        443/TCP   3d4h
    service/lico-webhook-service   ClusterIP   10.107.252.17   <none>        443/TCP   3d4h
    
    NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/infoscale-licensing-operator   1/1     1            1           3d4h
    deployment.apps/infoscale-sds-operator         1/1     1            1           3d4h
    
    NAME                                                     DESIRED   CURRENT   READY   AGE
    replicaset.apps/infoscale-licensing-operator-c895d9dd8   1         1         1       3d3h
    replicaset.apps/infoscale-licensing-operator-d9fccb946   0         0         0       3d4h
    replicaset.apps/infoscale-sds-operator-55c78f4db9        1         1         1       3d3h
    replicaset.apps/infoscale-sds-operator-5dd857d6c6        0         0         0       3d4h
    
    
  8. Copy the following node information into a file and save the file at an appropriate location.

     - nodeName: <Name of the first node>
       excludeDevice:
       - <Device path of a disk to exclude from the InfoScale disk group>
       - <Device path of a disk to exclude from the InfoScale disk group>
       fencingDevice:
       - <Device path of a disk to add as a fencing device>
       - <Device path of a disk to add as a fencing device>
     - nodeName: <Name of the second node>
       excludeDevice:
       - <Device path of a disk to exclude from the InfoScale disk group>
       - <Device path of a disk to exclude from the InfoScale disk group>
       fencingDevice:
       - <Device path of a disk to add as a fencing device>
       - <Device path of a disk to add as a fencing device>
     .
     .
     .
     You can add up to 16 nodes.


    Note:

    excludeDevice and fencingDevice are optional parameters.
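
    For illustration, a filled-in two-node file might look like the following; the node names and device paths are hypothetical placeholders and must match your environment.

```yaml
# Hypothetical node-info file (node names and /dev paths are placeholders).
- nodeName: worker-node-1
  excludeDevice:
  - /dev/sdc
  fencingDevice:
  - /dev/sdd
  - /dev/sde
- nodeName: worker-node-2
  excludeDevice:
  - /dev/sdc
  fencingDevice:
  - /dev/sdd
  - /dev/sde
```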

  9. Run the following command to deploy an InfoScale cluster.

    kubectl-infoscale deploy cluster --image-registry <image-registry> --license-edition <Developer/Storage/Enterprise> --cluster-id <Cluster ID> --cluster-name <Cluster Name> --node-info-file <Name of the file with the path>

    The parameters listed below are optional. You can add any of them to the command as --<parameter name> <parameter value>.

    cluster-id
      ID of the cluster.

    enable-scsi3pr
      Enables SCSI-3 persistent reservations (scsi3pr). The default value is false. You can set it to true as --enable-scsi3pr=true.

    encrypted
      Enables encryption at the disk group level. The default value is false. You can set it to true.

    isSharedStorage
      Set it to true if you want to create a disk group by using storage available across nodes. The default value is false.

    sameEncKey
      Set it to false if you want a different encryption key for every Volume. The default value is true.

    kubeconfig
      Path to the kubeconfig file that is used for CLI requests.

    verbose
      Sets the log level, from 0 to 5.
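
    As an illustration, a complete invocation with one optional flag might look like the following. The registry, cluster details, and file path are placeholders; the stub function below only echoes the assembled command line so it can be exercised outside a cluster, and must be removed on a real system so the actual plugin binary handles the call.

```shell
#!/usr/bin/env bash
# Stub for illustration only: echoes the call instead of deploying anything.
kubectl-infoscale() { echo "kubectl-infoscale $*"; }

# Hypothetical values; replace with your registry, cluster details, and node-info file.
kubectl-infoscale deploy cluster \
  --image-registry registry.example.com/veritas \
  --license-edition Enterprise \
  --cluster-id 1001 \
  --cluster-name demo-cluster \
  --node-info-file /root/node-info.yaml \
  --enable-scsi3pr=true
```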

    Review output similar to the following to verify whether the cluster is deployed successfully.

    Step 1/2 started 
     Deploying: licence_cr
    Step 2/2 started 
     Deploying: infoscale_cr
    
  10. Run the following command to verify the status of the cluster and the SDS container pods.

    kubectl-infoscale show get-resource-info --namespace infoscale-vtas

Viewing disk group information

  • To view information about the disk group that is created, run the following command.

    kubectl-infoscale show storage-info

    Review output similar to the following.

    Disk Group Summary:
    DiskList:  
    node000_vmdk0_0  node000_vmdk0_1  node000_vmdk0_2  
    node000_vmdk0_3  node000_vmdk0_4  node000_vmdk0_5  
    .
    .
    Name: <Disk group name>
    State: imported
    TotalSize: 239.24g
    FreeSize: 219.11g
    
    Disk Information:
    {"diskName":"node001_vmdk0_3","lunSize":"10.00g","mediaType":"hdd"} 
    {"diskName":"node001_vmdk0_7","lunSize":"10.00g","mediaType":"hdd"} 
    .
    .
    
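    The per-disk lines in this output are single-line JSON objects, so they are easy to post-process with standard tools. A minimal sketch, run here against two captured sample lines rather than the live command:

```shell
#!/usr/bin/env bash
# Extract diskName and lunSize from the JSON lines that `show storage-info` prints.
# Sample lines are inlined here; on a live cluster, pipe the command output in instead.
printf '%s\n' \
  '{"diskName":"node001_vmdk0_3","lunSize":"10.00g","mediaType":"hdd"}' \
  '{"diskName":"node001_vmdk0_7","lunSize":"10.00g","mediaType":"hdd"}' |
sed -E 's/.*"diskName":"([^"]+)".*"lunSize":"([^"]+)".*/\1 \2/'
# Prints one "diskName lunSize" pair per line, for example:
#   node001_vmdk0_3 10.00g
#   node001_vmdk0_7 10.00g
```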

Adding storage to an InfoScale cluster

  1. Create and save a new node information file as follows. Ensure that it is different from the file you used while creating the cluster.
     - nodeName: <Name of the node you want to add>
       excludeDevice:
       - <Device path of a disk to exclude from the InfoScale disk group>
       - <Device path of a disk to exclude from the InfoScale disk group>
       fencingDevice:
       - <Device path of a disk to add as a fencing device>
       - <Device path of a disk to add as a fencing device>
     .
     .
     .
     You can add multiple nodes.
    
  2. Run the following command to deploy the new nodes.

    kubectl-infoscale scale up node --node-info-file <Name of the new file with the path>

    Review output similar to the following.

    Step 1/2 started 
     Deploying: infoscale_cr
    

    You have successfully scaled up the nodes of the InfoScale cluster. The next step adds storage and scales up the storage.

  3. Run the following command to add storage.

    kubectl-infoscale scale up storage

    Review output similar to the following.

    Step 1/2 started 
     Deploying: infoscale_cr
    

Undeploying and uninstalling InfoScale cluster

  • Run the following commands to undeploy and uninstall the InfoScale cluster with all the installed operators.

    kubectl-infoscale delete cluster

    kubectl-infoscale delete operator

    kubectl-infoscale delete dependent-operators

    Note:

    Ensure that you run the commands in the same order as listed here. After uninstallation, ensure that stale InfoScale kernel modules (vxio/vxdmp/veki/vxspec/vxfs/odm/glm/gms) do not remain loaded on any of the worker nodes. Rebooting a worker node unloads all such modules. When fencing is configured, certain keys are placed on the disks; these must also be deleted after uninstallation. Run ./clearkeys.sh <Path to the first disk>, <Path to the second disk>,... to remove stale keys that might have remained.
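
    To look for the stale modules listed above on a worker node, you can filter the output of lsmod. The sketch below demonstrates the filter on a captured sample (with two stale modules present) so it can be verified without InfoScale installed; on a real node, pipe live lsmod output into the same awk/grep pipeline.

```shell
#!/usr/bin/env bash
# Report any InfoScale kernel modules still loaded. The sample stands in for
# real `lsmod` output; its first line is the lsmod header, which awk skips.
sample='Module                  Size  Used by
vxio                 667648  2
ext4                1048576  1
vxdmp                311296  0'
printf '%s\n' "$sample" |
awk 'NR>1 {print $1}' |
grep -E '^(vxio|vxdmp|veki|vxspec|vxfs|odm|glm|gms)$' || echo "no stale InfoScale modules"
# For this sample, prints:
#   vxio
#   vxdmp
```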