Veritas InfoScale™ 7.4.2 Solutions in Cloud Environments
- Overview and preparation
- Configurations for Amazon Web Services - Linux
- Configurations for Amazon Web Services - Windows
- Replication configurations in AWS - Windows
- HA and DR configurations in AWS - Windows
- Configurations for Microsoft Azure - Linux
- Configurations for Microsoft Azure - Windows
- Configurations for Google Cloud Platform - Linux
- Configurations for Google Cloud Platform - Windows
- Replication to and across cloud environments
- Migrating files to the cloud using Cloud Connectors
- Troubleshooting issues in cloud deployments
InfoScale feature for storage sharing in cloud environments
InfoScale supports flexible storage sharing (FSS) in cloud environments for a cluster whose nodes are located within the same region. The nodes in the cluster may be located within the same zone or across zones (Availability Zones in AWS, user-defined sites in Azure). FSS leverages cloud block storage to provide shared storage capability.
Storage devices that are under VxVM control are prefixed with the private IP address of the node. You can override the default behavior with the vxdctl set hostprefix command. For details, see the Storage Foundation Cluster File System High Availability Administrator's Guide - Linux.
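For example, a minimal sketch of overriding the default prefix from the command line; the prefix value node01 is a hypothetical placeholder:
# Set a custom host prefix for the storage devices that this node exports
# (node01 is a placeholder; use a prefix that is unique within the cluster).
vxdctl set hostprefix=node01
# To revert to the default (private IP based) prefix:
vxdctl unset hostprefix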
In cloud environments, FSS in campus cluster configurations can be used as a disaster recovery mechanism across data centers within a single region. For example, in AWS, the nodes within one AZ can be configured as one campus cluster site, while the nodes in another AZ can be configured as the second site. For details, see the Veritas InfoScale Disaster Recovery Implementation Guide - Linux.
Note:
(Azure only) By default, in addition to the storage disks that you attach, every virtual machine that is provisioned contains a temporary resource disk. This disk is ephemeral storage that must not be used for persistent data; do not use it as a data disk. The disk may change after the machine is redeployed or restarted, and any data on it is lost.
See About identifying a temporary resource disk - Linux.
For details on how Azure uses a temporary disk, see the Microsoft Azure documentation.
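As a rough way to spot the temporary resource disk on a Linux VM, assuming the Azure Linux agent defaults (the /mnt/resource mount point and the warning file are typical, not guaranteed):
# The Azure Linux agent usually mounts the temporary disk at /mnt/resource.
mount | grep resource
# The mount point typically contains a DATALOSS_WARNING_README.txt file
# that identifies the disk as ephemeral.
ls /mnt/resource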
Note:
(GCP only) When VCS is stopped and started on VM instances, or after a node restarts, the import and recovery operations on FSS disk groups may take longer than expected. The master node cannot import a disk group until all the nodes have joined the cluster, and some nodes may join only after a delay. In that case, the disk group import operation takes longer than expected to succeed. Even if the master initially fails to import the disk group because of such a delay, the operation completes successfully on a later retry.
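A minimal sketch of checking cluster membership and retrying the import from the master node; fssdg is a hypothetical disk group name:
# Verify that all nodes have joined the CVM cluster.
vxclustadm nidmap
# Retry the shared (FSS) disk group import once all nodes have joined.
vxdg -s import fssdg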
FSS in cloud environments is supported with LLT over UDP only.
By default, the MTU size of a network path in Azure and in GCP is 1500 bytes, and it cannot be changed. LLT treats such networks as slow networks and uses a single UDP socket for each high-priority link.
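A quick way to confirm the effective MTU on a cluster node; eth0 is a placeholder interface name:
# Display the MTU of the interface that carries the LLT links.
ip link show eth0 | grep -o "mtu [0-9]*"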
To achieve better LLT performance in such high-latency cloud networks:
Set the following tunable values in the /etc/llttab file before you start LLT or the LLT services (a sample file fragment follows this list):
set-flow window:10
set-flow highwater:10000
set-flow lowwater:8000
set-flow rporthighwater:10000
set-flow rportlowwater:8000
set-flow ackval:5
set-flow linkburst:32
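The following is a minimal /etc/llttab sketch that shows where these directives sit; the node name, cluster ID, and link definitions are site-specific placeholders, and only the set-flow lines are the tunables listed above:
set-node node01
set-cluster 1001
# existing "link ... udp ..." entries for the high-priority UDP links go here
set-flow window:10
set-flow highwater:10000
set-flow lowwater:8000
set-flow rporthighwater:10000
set-flow rportlowwater:8000
set-flow ackval:5
set-flow linkburst:32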
Disable the LLT adaptive window in Azure and in GCP by setting the following value in the /etc/sysconfig/llt file:
LLT_ENABLE_AWINDOW=0
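For example, the setting can be appended from the shell; it takes effect the next time LLT starts:
# Append the setting to the LLT environment file mentioned above.
echo "LLT_ENABLE_AWINDOW=0" >> /etc/sysconfig/llt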
For details on the usage of these tunables, refer to the Cluster Server Administrator's Guide.