Arctera InfoScale™ for Kubernetes 8.0.400 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Arctera InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air-gapped system
- Installing Arctera InfoScale on Kubernetes
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Installing Arctera InfoScale on RKE2
- Configuring KMS-based encryption on an OpenShift cluster
- Configuring KMS-based encryption on a Kubernetes cluster
- InfoScale CSI deployment in a container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Troubleshooting
Hardware requirements
Note:
These requirements are for a basic cluster configuration. Depending on your workloads and applications, higher values might be required.
Table: Hardware requirements
| Requirement | Description |
|---|---|
| Memory (Operating System) | Minimum 24 GB |
| CPU (on Kubernetes) | On physical servers: a minimum of 2 processors with 6 or 8 cores each. On virtual machines (VMware or similar environments): a minimum of 4 vCPUs. |
| CPU (on OpenShift) | On physical servers: a minimum of 2 processors with 6 or 8 cores each. On virtual machines (VMware or similar environments): a minimum of 12 vCPUs for the master node and 8 vCPUs for each worker node. |
| Node | All nodes in a cluster must run the same operating system version (a quick verification sketch follows this table). |
| Storage | Storage can be one or more shared disks, or a disk array connected either directly to the cluster nodes or through a Fibre Channel switch. Nodes can also have non-shared or local devices on a local I/O channel. In a Flexible Storage Sharing (FSS) environment, shared storage may not be required. |
| Cluster platforms | Several hardware platforms can function as nodes in an Arctera InfoScale cluster. For the InfoScale cluster to work correctly, the clocks on all nodes must be synchronized. If you are not running the Network Time Protocol (NTP) daemon, ensure that the time on all systems in your cluster is synchronized (see the time-sync check after this table). |
| SAS or FCoE | Each node in the cluster must have a SAS or FCoE I/O channel to access shared storage devices. The primary components of the SAS or Fibre Channel over Ethernet (FCoE) fabric are the switches and HBAs. |
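
Because all nodes must run the same operating system version (see the Node row above), it can help to verify uniformity before installing. The following is a minimal sketch, assuming `kubectl` (or `oc` on OpenShift) is already configured against the target cluster; the script itself is illustrative and not part of the InfoScale tooling.

```python
#!/usr/bin/env python3
"""Sketch: check that every cluster node reports the same OS image."""
import subprocess
import sys

# One "<node-name><TAB><os-image>" line per node, straight from the API server.
JSONPATH = r'{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\n"}{end}'

out = subprocess.run(
    ["kubectl", "get", "nodes", "-o", "jsonpath=" + JSONPATH],
    capture_output=True, text=True, check=True,
).stdout

nodes = dict(line.split("\t", 1) for line in out.strip().splitlines())
if len(set(nodes.values())) == 1:
    print(f"OK: all {len(nodes)} nodes run {next(iter(nodes.values()))}")
else:
    print("Mismatched operating system versions:")
    for name, image in sorted(nodes.items()):
        print(f"  {name}: {image}")
    sys.exit(1)
```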
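
Clock synchronization (see the Cluster platforms row) can be spot-checked in a similar way. This sketch assumes systemd-based nodes where `timedatectl` is available, and must be run on each node individually; again, it is illustrative only.

```python
#!/usr/bin/env python3
"""Sketch: confirm that this node's clock is NTP-synchronized."""
import subprocess
import sys

# "timedatectl show -p NTPSynchronized --value" prints "yes" or "no".
synced = subprocess.run(
    ["timedatectl", "show", "-p", "NTPSynchronized", "--value"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if synced == "yes":
    print("OK: system clock is NTP-synchronized")
else:
    print("WARNING: clock is not NTP-synchronized; configure NTP or chrony "
          "before installing InfoScale")
    sys.exit(1)
```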
For additional information, see the hardware compatibility list (HCL):
https://www.veritas.com/support/en_US/doc/infoscale_hcl_8x_unix