Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    7. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    4. Applying licenses
    5. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    8. Installing InfoScale by using the plugin
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Configuring InfoScale DR Manager by using web console

Complete the following steps.

  1. Click Operators > Installed Operators. Select InfoScale DR Manager.
  2. In the main menu, click Global Cluster Membership. You can create a cluster membership.
  3. Click Create GlobalClusterMembership in the upper-right corner of the screen.
  4. Assign a Name and click Global Member Clusters. You can enter cluster details here.
  5. Enter Cluster ID of the primary cluster first and its IP address in DR Controller Address. Optionally, you can enter its port number in DR Controller Port.

    Note:

    The IP address and port number are generated by using a Load Balancer service. An example of the command to generate the IP address is - oc -n infoscale-vtas expose deployment infoscale-dr-manager --name dr-lb-service --type LoadBalancer --protocol TCP --port 14155 --target-port 14155

  6. Enter Cluster ID of the secondary cluster and its IP address in DR Controller Address. Optionally, you can enter its port number in DR Controller Port.

    Note:

    This is the IP address and port number of the peer cluster, generated by using a Load Balancer service. Use the command mentioned in the previous step.

  7. Enter the Cluster ID of the primary cluster in Local Cluster Name.
  8. Click Create and wait until the resource is created. The newly created cluster is listed under Global Cluster Membership.
  9. Repeat steps 1 to 8 on the secondary cluster. In step 7, ensure that you enter the cluster ID of the secondary cluster. After a successful configuration on the secondary cluster, the secondary cluster is the DR cluster.
  10. Run the following command on the primary cluster to verify whether GCM is successfully configured - oc get gcm

    An output similar to the following indicates a successful configuration.

    NAME                        CLUSTER NAME   CLUSTER STATE   PROTOCOL   PEER LINK STATE
    global-cluster-membership   Clus1          RUNNING         10         {"Clus1":"CONNECTED","Clus2":"CONNECTED"}
  11. You can now configure Data Replication. Click Data Replication.
  12. Click Create DataReplication in the upper-right corner of the screen.
  13. For the primary cluster, enter the Cluster Name, its IP address (this can be any virtual IP address available in the same subnet) in Local Host Address, and its corresponding netmask in Local Net Mask.
  14. For the secondary cluster, enter its name in Cluster Name, its IP address (this can be any virtual IP address available in the same subnet) in Remote Host Address, and its corresponding netmask in Remote Netmask. Enter the network interface in Remote NIC.
  15. Enter the Namespace for which you want to configure data replication.
  16. In Local NIC, enter the network interface of the primary cluster.
  17. Click Create. Wait until Data Replication is configured and listed.
  18. Run the following command on the primary cluster to verify whether Data Replication is successfully configured - oc get datarep -o wide
  19. Review the output and verify that the replication state is consistent,up-to-date.
  20. Click Disaster Recovery Plan in the main menu to create a plan.
  21. Click Create DisasterRecoveryPlan in the upper-right corner of the screen.
  22. Assign a Name to this plan.

    Note:

    The names of the clusters and the Data Replication configuration appear here.

  23. Review the Primary Cluster. Enter the Namespace that you want to be a part of this plan.
  24. Review the Data Replication Pointer and Preferred Cluster List.
  25. Click Create and wait until the Disaster Recovery Plan is created and listed.

After these successful configurations, Disaster Recovery (DR) is ready.
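The web-console fields in the steps above correspond to fields on the DR Manager custom resources, so an equivalent configuration can also be expressed in YAML (see Installing InfoScale DR Manager by using YAML). The following is an illustrative sketch only: the apiVersion, the spec field names, and all addresses are assumptions inferred from the console labels in this procedure, not the authoritative schema. Verify the actual field names against the CRDs installed on your cluster, for example with oc explain globalclustermembership.spec.

```yaml
# Illustrative sketch only. The apiVersion, field names, and all values below
# are assumptions inferred from the console labels in this procedure; verify
# them against the CRDs on your cluster (oc explain <kind>.spec) before use.
apiVersion: infoscale.veritas.com/v1      # assumed group/version
kind: GlobalClusterMembership
metadata:
  name: global-cluster-membership
  namespace: infoscale-vtas
spec:
  localClusterName: Clus1                 # Cluster ID of the local cluster (step 7)
  globalMemberClusters:
    - clusterID: Clus1                    # primary cluster (step 5)
      drControllerAddress: 10.0.0.10      # Load Balancer IP
      drControllerPort: 14155
    - clusterID: Clus2                    # secondary cluster (step 6)
      drControllerAddress: 10.0.0.20
      drControllerPort: 14155
---
apiVersion: infoscale.veritas.com/v1
kind: DataReplication
metadata:
  name: example-datarep
  namespace: infoscale-vtas
spec:
  namespace: app-ns                       # namespace to replicate (step 15)
  localHostAddress: 10.0.1.50             # virtual IP in the local subnet (step 13)
  localNetMask: 255.255.255.0
  localNIC: eth0                          # network interface of the primary cluster (step 16)
  remoteClusterName: Clus2
  remoteHostAddress: 10.0.2.50            # virtual IP in the remote subnet (step 14)
  remoteNetMask: 255.255.255.0
  remoteNIC: eth0
```

After applying such resources with oc apply -f, the same verification commands from the procedure apply: oc get gcm on the primary cluster to check the Global Cluster Membership, and oc get datarep -o wide to check the replication state.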