Veritas InfoScale™ for Kubernetes Environments 8.0.100 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.100)
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    5. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML
      2. Prerequisites to install by using OLM
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Installing the Special Resource Operator
    4. Applying licenses
    5. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    6. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    7. Undeploying and uninstalling InfoScale
  6. Tech Preview: Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
  7. Tech Preview: Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Static provisioning
    3. Dynamic provisioning
      1. Reclaiming provisioned storage
    4. Resizing Persistent Volumes (CSI volume expansion)
    5. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
    6. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    7. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    8. Using InfoScale with non-root containers
    9. Using InfoScale in SELinux environments
    10. CSI Drivers
    11. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional steps for Azure RedHat OpenShift (ARO) environment
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Configuring DNS
      4. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
  13. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Migration

You can initiate migration on a primary cluster when the peer clusters are connected and configured for Disaster Recovery (DR), and the application stack is online. Migration must be initiated from the primary (source) cluster only; initiating migration from the target cluster can result in unstable cluster states. To change the current primary cluster, run kubectl edit/patch <Name of DR plan> (or oc edit/patch on OpenShift) and update Spec:PrimaryCluster. A mismatch between Spec:PrimaryCluster and Status:PrimaryCluster in the Disaster Recovery Plan indicates that migration is in progress.
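As an illustrative sketch, the Spec:PrimaryCluster update described above can be applied with a merge patch. The DR plan name (demo-drplan), namespace (infoscale-vtas), new primary cluster ID (cluster2), and the exact lowercase field path are assumptions here; verify the field names against your DisasterRecoveryPlan CRD.

```shell
# Hypothetical example: point Spec:PrimaryCluster at the proposed primary
# to initiate migration. Plan name, namespace, cluster ID, and the exact
# spec field casing are placeholders - check your CRD before running.
kubectl patch disasterrecoveryplan demo-drplan -n infoscale-vtas \
  --type merge -p '{"spec":{"primaryCluster":"cluster2"}}'

# Or edit the plan interactively (oc behaves the same on OpenShift):
kubectl edit disasterrecoveryplan demo-drplan -n infoscale-vtas
```

On OpenShift, substitute oc for kubectl in either command.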

Figure: Migration initiated. Namespace resides on Cluster 1.

The following entities are updated during migration:

  1. Application metadata - When migration is initiated from the source cluster, the latest snapshot of the managed application's metadata is taken and tagged for restoration. This snapshot is replicated to the peer (target) clusters. The application then goes offline on the source cluster, and the target cluster uses the replicated snapshot to restore the application stack.

  2. Application data - For stateful applications, you must have configured the Data Replication CR and updated DisasterRecoveryPlan:Spec:DataReplicationPointer accordingly. The Data Replication CR manages replication of application data from the primary cluster to the peer clusters (source to target). Currently, Veritas Volume Replicator (VVR) is used for application data replication. When migration is initiated from the source cluster, the cluster roles are swapped: the proposed primary cluster assumes the 'Primary' role, while the current primary cluster assumes the 'Secondary' role.

  3. DNS endpoints - The DNS custom resource updates and monitors the mapping for:

    • The host name to IP address (A, AAAA, or PTR record)

    • Alias to hostname or canonical name (CNAME)

    When migration is initiated, the DNS resource records are updated appropriately.
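While the Primary/Secondary role swap described in step 2 is in progress, you can observe it from either cluster by inspecting the Data Replication CR. The CR name (demo-datarep) and namespace are placeholders in this sketch.

```shell
# Hypothetical check: inspect the Data Replication CR to confirm the
# Primary/Secondary role swap (resource name and namespace are placeholders).
kubectl get datareplication demo-datarep -n infoscale-vtas -o yaml
```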

Figure: Migration complete. Namespace resides on Cluster 2.

During migration, you can check intermediate transient states, such as the BackupStatus, ScheduleStatus, RestoreStatus, and DataReplicationStatus attributes of the Disaster Recovery Plan. If migration appears stuck, check the logs by running kubectl logs -f --tail=100 deployments.apps/infoscale-dr-manager -n infoscale-vtas (or the equivalent oc command). After migration is complete, these transient states are cleared, and Status:PrimaryCluster in the Disaster Recovery Plan is updated to the new primary.
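A sketch of reading those transient status attributes directly, using a jsonpath query; the plan name is a placeholder, and the exact status field names may vary with your CRD version.

```shell
# Print the full status object of the DR plan, then follow it with a
# newline for readability (plan name and namespace are placeholders).
kubectl get disasterrecoveryplan demo-drplan -n infoscale-vtas \
  -o jsonpath='{.status}'; echo
```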

Note:

If you want to resize a volume after migration, ensure that you create a Storage Class on the secondary cluster (the new primary).
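For the note above, a minimal Storage Class sketch on the new primary. The provisioner name and the absence of parameters are assumptions for illustration; take the exact provisioner and parameters from the CSI chapter of this guide.

```shell
# Hypothetical StorageClass for the new primary cluster. The provisioner
# name is an assumption - verify it against your InfoScale CSI deployment.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infoscale-sc
provisioner: org.veritas.infoscale
allowVolumeExpansion: true   # required so PVCs can be resized after migration
EOF
```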