Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.220)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
      1. Guidelines for setting the media speed for LLT interconnects
      2. Guidelines for setting the maximum transmission unit (MTU) for LLT
    2. Synchronizing time settings on cluster nodes
    3. Securing your InfoScale deployment
    4. Configuring kdump
  4. Installing Veritas InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional Prerequisites for Azure RedHat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
        1. Adding Nodes to an InfoScale cluster by using OLM
        2. Undeploying and uninstalling InfoScale
      2. Installing from OperatorHub by using Command Line Interface (CLI)
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale by using CLI
      3. Installing by using YAML
        1. Configuring cluster
        2. Adding nodes to an existing cluster
        3. Undeploying and uninstalling InfoScale
    6. Installing InfoScale in an air gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    7. Removing and adding back nodes to an Azure RedHat OpenShift (ARO) cluster
  5. Installing Veritas InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
      1. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    3. Tagging the InfoScale images on Kubernetes
      1. Downloading side car images
    4. Applying licenses
    5. Tech Preview: Installing InfoScale on an Azure Kubernetes Service (AKS) cluster
    6. Considerations for configuring cluster or adding nodes to an existing cluster
    7. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    8. Installing InfoScale by using the plugin
    9. Undeploying and uninstalling InfoScale
  6. Configuring KMS-based Encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  7. Configuring KMS-based Encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Enabling rekey for an encrypted Volume
  8. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
  9. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  10. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  12. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Veritas Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  13. Administering InfoScale on Containers
    1. Adding Storage to an InfoScale cluster
    2. Managing licenses
  14. Upgrading InfoScale
    1. Prerequisites
    2. On a Kubernetes cluster
    3. On an OpenShift cluster
  15. Troubleshooting
    1. Collecting logs by using SORT Data Collector
    2. Known Issues
    3. Limitations

Enabling user access and other pod-related logs in Container environment

OpenShift and Kubernetes clusters have an in-built logging mechanism. You can configure /etc/kubernetes/manifests/kube-apiserver.yaml to include the following types of logs.

  • Software upgrades and configuration file changes

  • System (Virtual or Physical) boot and halt

  • Process launches

  • Non-normal process exits

  • SELinux policy violations

  • System login attempts

  • Services starts and stops

  • Container starts and exits

You can thus log all events related to InfoScale pods, secrets, and config maps.

Note:

After you configure these files, the API server must be restarted and therefore experiences downtime. Ensure that you inform the user community about the downtime in advance. If the files are not configured correctly, the API server might not restart. Only a competent Storage Administrator must perform this configuration.

Add the following code to /etc/kubernetes/manifests/kube-apiserver.yaml to log all user login attempts to the InfoScale pods. The user name, the time, and whether the attempt succeeded are logged.

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
rules:
  # Log pod/exec requests at RequestResponse level
  - level: RequestResponse
    namespaces: ["infoscale-vtas"]
    resources:
    - group: ""
      resources: ["pods/exec"]
  # Log everything else at Metadata level
  - level: Metadata
    omitStages:
    - "RequestReceived"
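The policy takes effect only after the kube-apiserver is pointed at it. A minimal sketch of the relevant flags in the static pod manifest, assuming the policy above is saved as /etc/kubernetes/audit-policy.yaml (the paths and retention values here are illustrative, not InfoScale defaults):

```yaml
# Illustrative additions to /etc/kubernetes/manifests/kube-apiserver.yaml.
# The policy file must also be mounted into the kube-apiserver container
# (for example, through a hostPath volume).
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --audit-log-maxage=30      # days to retain audit log files
    - --audit-log-maxbackup=10   # number of rotated log files to keep
```

Because kube-apiserver runs as a static pod, saving the manifest causes the kubelet to restart the API server, which is the downtime mentioned in the note above.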

Similarly, add the following code to log pod creation and deletion, config map changes, and secrets changes.

apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # Log the Metadata of pod changes in the given namespace.
  # In each namespaces tag below, specify the namespace where the
  # InfoScale pods are deployed.
  # For example: namespaces: ["infoscale-vtas"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods"]
    verbs: ["create", "patch", "update", "delete"]
    namespaces: [""] # Fill namespace
  # Log the Request body of config map changes in the given namespace
  - level: Request
    resources:
    - group: ""
      resources: ["configmaps"]
    namespaces: [""] # Fill namespace
  # Log the Request body of secrets changes in the given namespace
  - level: Request
    resources:
    - group: ""
      resources: ["secrets"]
    namespaces: [""] # Fill namespace
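Note that kube-apiserver reads a single audit policy file, so if you need both the login-attempt rules and the change-tracking rules, combine them into one Policy. A sketch of such a merged policy (the namespace value is illustrative):

```yaml
# Hedged sketch: one Policy combining the rules from both snippets above.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # pod/exec requests at RequestResponse level
  - level: RequestResponse
    namespaces: ["infoscale-vtas"]
    resources:
    - group: ""
      resources: ["pods/exec"]
  # pod lifecycle changes at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods"]
    verbs: ["create", "patch", "update", "delete"]
    namespaces: ["infoscale-vtas"]
  # config map and secrets changes at Request level
  - level: Request
    resources:
    - group: ""
      resources: ["configmaps", "secrets"]
    namespaces: ["infoscale-vtas"]
```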

Login attempts to an OpenShift cluster are recorded in the oauth-openshift- pod logs. The log level must be 'debug'. Run oc edit authentications.operator.openshift.io to change the log level to 'debug'.
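Running oc edit authentications.operator.openshift.io opens the authentication operator resource. One way to raise the log level, assuming the operator's standard logLevel field, is:

```yaml
# Sketch: raising the oauth-openshift log level through the authentication
# operator CR; Debug is one of the standard OpenShift operator log levels
# (Normal, Debug, Trace, TraceAll).
spec:
  logLevel: Debug
```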

On an OpenShift cluster, pod creation and deletion are logged in journalctl. Run journalctl --no-pager on all the worker nodes for information about pod creation and deletion.

To enable extended logging for all core InfoScale components such as VxVM, VxFS, and VCS, set the EO_COMPLIANCE environment variable to enabled.

Note:

You must enable EO_COMPLIANCE for your InfoScale deployment first. As a prerequisite, ensure that DNS is correctly configured, or that /etc/hosts defines the IP address, fully qualified domain name (FQDN), and host name for each cluster node. Edit the sds-operator deployment by running oc/kubectl edit deployment -n infoscale-vtas <deployment_name>. Set EO_COMPLIANCE to enabled as follows.

- name: EO_COMPLIANCE
  value: enabled
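In the deployment YAML, the variable sits under the container's env list; a sketch with placeholder container details:

```yaml
# Illustrative context within the sds-operator Deployment; the container
# name is a placeholder - only the env entry is prescribed by this guide.
spec:
  template:
    spec:
      containers:
      - name: sds-operator   # placeholder
        env:
        - name: EO_COMPLIANCE
          value: enabled
```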

After you edit and save the sds-operator deployment, the InfoScale sds operator restarts automatically. You must manually restart the infoscale-sds pods, one pod at a time. After a restarted pod is in the 'Ready' state, restart the next pod.

If you want to enable EO_COMPLIANCE on an OpenShift cluster by using OLM, run oc edit subscription infoscale-sds-operator -n infoscale-vtas and add the following under spec:

config:
  env:
  - name: EO_COMPLIANCE
    value: enabled
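For reference, the edited Subscription might look like the following sketch; the metadata values are taken from the command above, and fields not related to this change are omitted:

```yaml
# Hedged sketch of the Subscription after the edit. OLM passes
# spec.config.env down to the operator's deployment as environment
# variables.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: infoscale-sds-operator
  namespace: infoscale-vtas
spec:
  config:
    env:
    - name: EO_COMPLIANCE
      value: enabled
```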