Arctera InfoScale™ for Kubernetes 8.0.400 - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (8.0.400)
Platform: Linux
  1. Overview
    1. Introduction
    2. Features of InfoScale in Containerized environment
    3. CSI Introduction
    4. I/O fencing
    5. Disaster Recovery
    6. Licensing
    7. Encryption
  2. System requirements
    1. Introduction
    2. Supported platforms
    3. Disk space requirements
    4. Hardware requirements
    5. Number of nodes supported
    6. DR support
  3. Preparing to install InfoScale on Containers
    1. Setting up the private network
    2. Guidelines for setting the media speed for LLT interconnects
    3. Guidelines for setting the maximum transmission unit (MTU) for LLT
    4. Synchronizing time settings on cluster nodes
    5. Securing your InfoScale deployment
    6. Configuring kdump
  4. Installing Arctera InfoScale on OpenShift
    1. Introduction
    2. Prerequisites
    3. Additional prerequisites for Azure Red Hat OpenShift (ARO)
    4. Considerations for configuring cluster or adding nodes to an existing cluster
    5. Creating multiple InfoScale clusters
    6. Installing InfoScale on a system with Internet connectivity
      1. Installing from OperatorHub by using web console
      2. Installing from OperatorHub by using Command Line Interface (CLI)
      3. Installing by using YAML
    7. Installing InfoScale in an air-gapped system
      1. Prerequisites to install by using YAML or OLM
      2. Additional prerequisites to install by using YAML
      3. Installing from OperatorHub by using web console
      4. Installing from OperatorHub by using Command Line Interface (CLI)
      5. Installing by using YAML
    8. Removing and adding back nodes to an Azure Red Hat OpenShift (ARO) cluster
  5. Installing Arctera InfoScale on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on Kubernetes
    4. Downloading Installer
    5. Tagging the InfoScale images on Kubernetes
      1. Downloading sidecar images
    6. Applying licenses
    7. Considerations for configuring cluster or adding nodes to an existing cluster
    8. Creating multiple InfoScale clusters
    9. Installing InfoScale on Kubernetes
      1. Configuring cluster
      2. Adding nodes to an existing cluster
    10. Undeploying and uninstalling InfoScale
  6. Installing Arctera InfoScale on RKE2
    1. Introduction
    2. Prerequisites
    3. Installing Node Feature Discovery (NFD) Operator and Cert-Manager on RKE2 cluster
    4. Downloading Installer
    5. Tagging the InfoScale images on RKE2
    6. Applying licenses
    7. Considerations for configuring cluster or adding nodes to an existing cluster
    8. Creating multiple InfoScale clusters
    9. Installing InfoScale on RKE2
    10. Undeploying and uninstalling InfoScale
  7. Configuring KMS-based encryption on an OpenShift cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Renewing with an external CA certificate
  8. Configuring KMS-based encryption on a Kubernetes cluster
    1. Introduction
    2. Adding a custom CA certificate
    3. Configuring InfoScale to enable transfer of keys
    4. Renewing with an external CA certificate
  9. InfoScale CSI deployment in Container environment
    1. CSI plugin deployment
    2. Raw block volume support
    3. Static provisioning
    4. Dynamic provisioning
      1. Reclaiming provisioned storage
    5. Resizing Persistent Volumes (CSI volume expansion)
    6. Snapshot provisioning (Creating volume snapshots)
      1. Dynamic provisioning of a snapshot
      2. Static provisioning of an existing snapshot
      3. Using a snapshot
      4. Restoring a snapshot to new PVC
      5. Deleting a volume snapshot
      6. Creating snapshot of a raw block volume
    7. Managing InfoScale volume snapshots with Velero
      1. Setting up Velero with InfoScale CSI
      2. Taking the Velero backup
      3. Creating a schedule for a backup
      4. Restoring from the Velero backup
    8. Volume cloning
      1. Creating volume clones
      2. Deleting a volume clone
    9. Using InfoScale with non-root containers
    10. Using InfoScale in SELinux environments
    11. CSI Drivers
    12. Creating CSI Objects for OpenShift
    13. Creating ephemeral volumes
    14. Creating node affine volumes
  10. Installing and configuring InfoScale DR Manager on OpenShift
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager by using OLM
      1. Installing InfoScale DR Manager by using web console
      2. Configuring InfoScale DR Manager by using web console
      3. Installing from OperatorHub by using Command Line Interface (CLI)
    6. Installing InfoScale DR Manager by using YAML
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  11. Installing and configuring InfoScale DR Manager on Kubernetes
    1. Introduction
    2. Prerequisites
    3. Creating Persistent Volume for metadata backup
    4. External dependencies
    5. Installing InfoScale DR Manager
      1. Configuring Global Cluster Membership (GCM)
      2. Configuring Data Replication
      3. Additional requirements for replication on Cloud
      4. Configuring DNS
      5. Configuring Disaster Recovery Plan
  12. Disaster Recovery scenarios
    1. Migration
    2. Takeover
  13. Configuring InfoScale
    1. Logging mechanism
    2. Configuring Arctera Oracle Data Manager (VRTSodm)
    3. Enabling user access and other pod-related logs in Container environment
  14. Administering InfoScale on Containers
    1. Adding storage to an InfoScale cluster
    2. Managing licenses
    3. Monitoring InfoScale
    4. Configuring Alerts for monitoring InfoScale
    5. Draining InfoScale nodes
    6. Using InfoScale toolset
    7. Changing the peerinact value in a cluster
    8. PV rebuild
  15. Troubleshooting
    1. Adding a SORT data collector utility
    2. Collecting logs by using SORT Data Collector
    3. Approving certificate signing requests (csr) for OpenShift
    4. Cert Renewal related
    5. PVC deletions after PV rebuilds
    6. Known Issues
    7. Limitations

Creating node affine volumes

Overview

Node affine volumes are a specialized type of striped-only (no mirrors) volume that allocates storage exclusively on a specific node, ensuring that all I/O operations are served from local storage. This eliminates the need for network-based storage access. This approach is particularly effective for OLAP applications that require high-performance file storage, where traditional solutions such as NFS or iSCSI may not perform well.

VIKE, as a hyper-converged solution, leverages node affine volumes for storage allocation, thereby achieving high performance through local I/O and efficient striping across multiple disks.

Enabling node affinity

To enable node affinity, specify the nodeAffinity parameter in the storage class. This parameter can be set to either:

  • true - Enables node affinity, with node selection based on the volume binding mode.

  • Node name - Specifies the Kubernetes node to which the volume should be affined.
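A minimal storage class sketch with node affinity enabled might look as follows. Only the nodeAffinity parameter is taken from this section; the provisioner name and storage class name are assumptions and must match your InfoScale CSI driver deployment:

```yaml
# Hypothetical storage class enabling node affinity.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infoscale-node-affine      # assumed name
provisioner: org.veritas.infoscale # assumed driver name; verify in your deployment
parameters:
  nodeAffinity: "true"             # or a specific node name, e.g. "worker-1"
reclaimPolicy: Delete
```

Setting nodeAffinity to a node name pins every volume from this storage class to that node; setting it to "true" lets the driver (or the scheduler, with delayed binding) choose the node.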

Volume binding modes

The node selection for storage allocation through node affinity depends on the volume binding mode configured in the storage class:

  1. Immediate binding

    • Storage is allocated as soon as the PersistentVolumeClaim (PVC) is created, potentially before the application pod is created.

    • The InfoScale CSI driver selects the node for storage allocation based on the policy specified in the storage class.

    • Once the volume is created, the appropriate nodeAffinity is set in the PersistentVolume (PV) object, ensuring that Kubernetes schedules the application pod on the same node.

  2. Delayed binding (wait for first consumer)

    • Storage allocation is delayed until the application pod is scheduled.

    • Kubernetes first selects the node for the application pod and then passes the node information to the InfoScale CSI driver for storage allocation on that node.

    • If the selected node does not have enough storage, the driver notifies the Kubernetes scheduler to reschedule the pod on a different node.
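The two behaviors above map onto the standard Kubernetes volumeBindingMode field of the storage class. A hedged sketch of the delayed-binding variant (provisioner and names are assumptions, as before):

```yaml
# Hypothetical storage class using delayed binding with node affinity.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infoscale-node-affine-delayed # assumed name
provisioner: org.veritas.infoscale    # assumed driver name
parameters:
  nodeAffinity: "true"
# Delay storage allocation until the application pod is scheduled;
# use Immediate (the Kubernetes default) to allocate as soon as the
# PersistentVolumeClaim is created.
volumeBindingMode: WaitForFirstConsumer
```

With Immediate binding, the driver picks the node and records it in the PV's nodeAffinity; with WaitForFirstConsumer, the scheduler picks the node first and the driver allocates storage there.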

Supported allocation policies

Specify the policy using the nodeAffinityType parameter. The available options are:

  • bestspread (default) - Selects the node with the maximum available storage among the nodes that can fulfill the request. This policy evenly spreads application pods but may result in suboptimal storage utilization.

  • bestfit - Selects the node with the least available storage that can still satisfy the request. This policy optimizes storage utilization but may lead to uneven pod distribution.
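The policy is selected per storage class. A sketch of a class that prefers storage utilization over even pod spread (again, the provisioner and class names are assumptions; the parameter names come from this section):

```yaml
# Hypothetical storage class selecting the bestfit allocation policy.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infoscale-bestfit           # assumed name
provisioner: org.veritas.infoscale  # assumed driver name
parameters:
  nodeAffinity: "true"
  # bestfit packs volumes onto the fullest node that can still satisfy
  # the request; the default, bestspread, picks the node with the most
  # available storage instead.
  nodeAffinityType: bestfit
```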

Configuration parameters

The following parameters are used to configure node affinity:

Table: Configuration parameters for node affinity

Parameter         Description                                           Default value
----------------  ----------------------------------------------------  -------------
nodeAffinity      Set to 'true' or the name of the Kubernetes node to   N/A
                  which the volume should be affined.
nodeAffinityType  Set to 'bestspread' or 'bestfit'.                     'bestspread'