Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    7. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    4. Applying licenses
    5. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    8. Installing InfoScale by using the plugin
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Configuring Data Replication

Using the Data Replication custom resource (CR), you can configure replication for the persistent data (PVs and PVCs) associated with application components in a namespace. A custom resource created on a cluster is automatically synchronized on all peer clusters; hence, this CR needs to be configured on the primary cluster only. After the CR is configured, replication is set up. Veritas Volume Replicator (VVR) performs the replication. You can check the status of the underlying replication and perform operations such as stop, pause, resume, and migrate (see the examples after the procedure below).

If you are configuring data replication for an on-premises and cloud combination, ensure that you select the appropriate value for cloudVendor, and for load balancer-based network traffic management, set lbEnabled to true.
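For example, the following minimal sketch (the values are illustrative) shows only the cloud-related fields of the DataReplication spec described in step 1, for a primary VVR site on AWS that uses load balancer-based traffic management and an overlay-network virtual IP:

    spec:
      # Load balancer-based network traffic management
      lbEnabled: true
      # Cloud vendor of the primary VVR site
      cloudVendor: Aws
      # Route table resource IDs; needed only when
      # localHostAddress is an overlay-network IP
      routeTableResourceIds:
        - "rtb-fb97ac9d"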

Complete the following steps:

  1. Edit /YAML/DR/SampleDataReplication.yaml on the primary cluster as follows for replication of the persistent data (PVs and PVCs) associated with application components in the specified namespace and labels.

    apiVersion: infoscale.veritas.com/v1
    kind: DataReplication
    metadata:
      name: <Name for Data replication>
    spec:
      # In case of load balancer-based n/w traffic management,
      # set lbEnabled to true. Otherwise, always keep the
      # default value false.
      lbEnabled: false
      # Virtual IP address to configure VVR
      localHostAddress: <Any free Virtual IP address 
                         to configure VVR for the primary cluster>
    
      # Corresponding netmask to configure VVR
      localNetMask: <Corresponding netmask to configure VVR>
      # Corresponding network interface to configure VVR 
      # (If NIC name is identical for all nodes)
      localNIC: eth0
      # Corresponding network interface map 
      # (hostname and NIC name map) to configure VVR
      # (If NIC name is not identical for all nodes)
      #localNICMap:
      #  "host1" : "eth0"
      #  "host2" : "eth0"
      #  "host3" : "eth0"
      #  "host4" : "eth1"
      # (optional) Cloud vendor (e.g., Azure/Aws) on the primary VVR site.
      cloudVendor: Local
    
      # (optional) Applicable for cloud vendor-based environments.
      # If the "localHostAddress" value is an overlay n/w IP, then
      # specify all applicable route table resource IDs.
      #routeTableResourceIds:
      #  - "rtb-fb97ac9d"
      #  - "rtb-f416eb8d"
      #  - "rtb-e48be49d"
    
      # Namespace and optionally labels
      # for which you want to configure data replication
      selector:
        namespace: mysql
        #labels:
        #  app: db
      # Current primary cluster name - Name of the cluster 
      #                              you want to back up
      currentPrimary: <Current primary cluster name -
                               Name of the cluster you want to back up>
      # (optional) For a takeover operation, set force to true
      # along with the updated currentPrimary value.
      # For a migrate operation, keep force set to false
      # and update only currentPrimary.
      force: false
      # Secondary cluster details
      remoteClusterDetails:
          # ID of the Cluster to be used for a backup
        - clusterName: <ID of the Cluster to be used for a backup>
          # In case of load balancer-based n/w traffic management,
          # set remoteLbEnabled to true. Otherwise, always keep the
          # default value false.
          remoteLbEnabled: false
          # Virtual IP address for VVR configuration of this cluster
          remoteHostAddress: <Any free Virtual IP address for
                              VVR configuration of the remote cluster>
          # Corresponding Netmask of this cluster
          remoteNetMask: <Corresponding Netmask of the remote cluster>
          # Corresponding Network interface of this cluster
          remoteNIC: eth0
          # Corresponding Network interface map of this cluster
          #remoteNICMap:
          #  "host5" : "eth1"
          #  "host6" : "eth0"
          #  "host7" : "eth0"
          #  "host8" : "eth1"
          
          # (optional) Cloud vendor (e.g., Azure/Aws) on the remote VVR site.
          remoteCloudVendor: Local
    
          # (optional) Applicable for cloud vendor-based environments.
          # If the "remoteHostAddress" value is an overlay n/w IP, then
          # specify all applicable route table resource IDs.
          #remoteRouteTableResourceIds:
          #  - "rtb-fb97ac9d"
          #  - "rtb-f416eb8d"
          #  - "rtb-e48be49d"
    
          # (optional) Replication type can be sync or async.
          # The default value is async if not specified.
          replicationType: async
          # (optional) replicationState can have 
          # values start, stop, pause and resume.
          # This field can be updated to
          # start/stop/pause/resume replication.
          # Default value will be set to start
          # during initial configuration.
          replicationState: start
          # (optional) network transport protocol can be TCP or UDP.
          # Default value will be set to TCP during 
          # initial configuration and can be later changed to UDP.
          networkTransportProtocol: TCP
          # (optional) By default, it will be set to N/A
          # during initial configuration, which means the available
          # bandwidth will be used.
          # It can be later changed to set the 
          # maximum network bandwidth (in bits per second).
          bandwidthLimit: N/A
          # (optional) Supported values for latency protection are: 
          # fail, disable and override.
          # By default, it will be set to disable during initial
          # configuration and can be changed later.
          latencyProtection: disable
          # (optional) Supported values for log (SRL) protection are:
          # autodcm, dcm, fail, disable and override.
          # By default, it will be set to autodcm during initial
          # configuration and can be changed later.
          logProtection: autodcm
    

    Note:

    Ensure that the current primary cluster name you enter here is the same as the one you plan to specify in DisasterRecoveryPlan.yaml. For every Disaster Recovery Plan, you must create a separate Data Replication CR. Ensure that the namespace and labels in a Disaster Recovery Plan and its corresponding Data Replication CR are identical.

  2. Run the following command on the master node.

    kubectl apply -f /YAML/DR/SampleDataReplication.yaml

  3. After the command is executed, run the following command on the master node.

    kubectl get datarep

  4. Review the output similar to the following

    NAME                                  PROPOSED PRIMARY        CURRENT PRIMARY        NAMESPACE     LABELS
    <Name of data replication resource>   <proposed cluster ID>   <current cluster ID>   <namespace>   <labels if any>

    kubectl get datarep -o wide

    Review the output similar to the following

    NAME           PROPOSED PRIMARY   CURRENT PRIMARY   NAMESPACE   LABELS
    postgres-rep   Clus1              Clus1             postgres    <none>

    REPLICATION                                                    SUMMARY
    asynchronous | replicating (connected) | behind by 0h 0m 0s    consistent,up-to-date
  5. Wait for the initial synchronization of the application Persistent Volumes to complete on the DR site. Run the following command on the master node of the DR site. A sketch for scripting this wait appears after this procedure.

    kubectl describe datarep <Data rep name for the application>

    Review the status in the output, which is similar to the following. The Data Status must be consistent,up-to-date.

    Spec:
    ..
    ..
    Status:
    ..
    ..
       Primary Status:
       ..
       ..
       Secondary Status:
       ..
       Data Status:  consistent,up-to-date
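To script the wait in step 5, the following is a minimal shell sketch (it matches the "consistent,up-to-date" string from the sample output above) that polls the DR site until the secondary data status is reported as consistent,up-to-date:

    # Run on the master node of the DR site
    until kubectl describe datarep <Data rep name for the application> |
          grep -q "consistent,up-to-date"; do
      echo "Waiting for initial synchronization to complete ..."
      sleep 60
    done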
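After the initial configuration, you can stop, pause, or resume replication by updating replicationState for the corresponding peer cluster and re-applying the CR. The following is a minimal sketch based on the sample in step 1; only the relevant fields are shown:

    # In /YAML/DR/SampleDataReplication.yaml on the primary cluster
    spec:
      remoteClusterDetails:
        - clusterName: <ID of the Cluster to be used for a backup>
          # Change to stop, pause, resume, or start as required
          replicationState: pause

Then re-apply the CR on the master node of the primary cluster:

    kubectl apply -f /YAML/DR/SampleDataReplication.yaml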
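Similarly, to migrate the primary role to a peer cluster, as described in the comments for currentPrimary and force in step 1, the following sketch shows the only fields that change; the edited CR is then re-applied in the same way:

    # In /YAML/DR/SampleDataReplication.yaml
    spec:
      # Set currentPrimary to the peer cluster that must become the new primary
      currentPrimary: <Name of the peer cluster that becomes the new primary>
      # Keep force set to false for a migrate operation;
      # force: true is used only for a takeover
      force: false

You can then track the operation with kubectl get datarep, where the PROPOSED PRIMARY and CURRENT PRIMARY columns are expected to converge to the new cluster name.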