Veritas InfoScale™ for Kubernetes Environments 8.0.220 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Prerequisites
- Tagging the InfoScale images on Kubernetes
- Installing InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Upgrading InfoScale
- Troubleshooting
Considerations for configuring a cluster or adding nodes to an existing cluster
You can specify up to 16 worker nodes in cr.yaml. Although cluster configuration is allowed even with one Network Interface Card, Veritas recommends a minimum of two physical links for performance and High Availability (HA). The number of links for each network must be the same on all nodes. Optionally, you can enter node-level IP addresses. If IP addresses are not provided, the IP addresses of the Kubernetes cluster nodes are used.
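A node entry in cr.yaml might look like the following sketch. The exact schema depends on your InfoScale release; the field names for nodes and IP addresses shown here, and all interface and address values, are illustrative placeholders, not documented defaults.

```yaml
# Hypothetical excerpt from cr.yaml. Field names for the node list and
# IP addresses are illustrative; consult the sample cr.yaml shipped
# with your InfoScale release for the exact schema.
spec:
  cluster:
    nodes:
      - nodeName: worker-node-1
        ip:                    # optional node-level IPs; if omitted,
          - 10.10.10.11        # the Kubernetes node IPs are used
          - 10.10.20.11        # second link for HA (same count on every node)
      - nodeName: worker-node-2
        ip:
          - 10.10.10.12
          - 10.10.20.12
```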
By default, a Flexible Storage Sharing (FSS) disk group is created. If shared storage is the only storage available across nodes, you can set isSharedStorage
to true, and a fully shared non-FSS disk group is created while configuring the cluster. Mirroring is performed across enclosures, thus ensuring redundancy. When using shared storage, Veritas recommends changing the default majority-based fencing to disk-based fencing. With shared storage and majority-based fencing, storage becomes inaccessible when the majority of nodes go down, even though all disks are connected to all nodes. With disk-based fencing, storage remains available to applications even when only one node is up. To configure disk-based fencing, specify the hardware path of the fencing disks for at least one node. Veritas recommends using at least three disks for fencing, and the number of disks must be odd.
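The shared-storage and fencing settings described above could be sketched as follows. Only isSharedStorage is a field name confirmed by this guide; the fencing key name and the disk paths are assumptions for illustration.

```yaml
# Hypothetical excerpt from cr.yaml. "fencingDisks" and the by-path
# device names are illustrative; check the sample cr.yaml for the
# actual key under which fencing disk hardware paths are specified.
spec:
  cluster:
    isSharedStorage: true      # fully shared non-FSS disk group
    fencingDisks:              # disk-based fencing: at least three
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:2:0
      - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:3:0   # odd count
```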
You can enable encryption at the disk group level or for specific Volumes within the disk group. Encryption is not enabled by default. Set encryption
to true to enable encryption. If you want the same encryption key for all Volumes, set sameEnckey
to true. For a different encryption key per Volume, set sameEnckey
to false.
Note:
For Disaster Recovery (DR) configuration, only Volume level encryption is supported. Disk group level encryption is not supported.
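The encryption settings above can be sketched as a cr.yaml fragment. The encryption and sameEnckey field names come from this guide; their placement in the hierarchy is assumed.

```yaml
# Hypothetical excerpt from cr.yaml. Placement under spec.cluster is
# assumed; only the field names encryption and sameEnckey come from
# this guide.
spec:
  cluster:
    encryption: true     # enable encryption (disabled by default)
    sameEnckey: false    # use a distinct encryption key per Volume;
                         # set to true to share one key across Volumes
```

Recall that for Disaster Recovery configurations, only Volume-level encryption is supported, so disk group level encryption must stay disabled there.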