Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding storage configuration
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- How DMP works
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with InfoScale product components in the VMware guest
- Optimizing storage with InfoScale product components in the VMware guest
- Migrating data with InfoScale product components in the VMware guest
- Improving database performance with InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Configuring coordination point (CP) servers
- Section IV. Reference
About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
This sample deployment illustrates how to install and configure Storage Foundation Cluster File System High Availability (SFCFSHA) in VMware virtual machines using VMware File System (VMFS) virtual disks (VMDKs) as the storage subsystem.
The information provided here is not a replacement for the SFCFSHA or VMware documentation; it is a deployment illustration that complements the information found in those documents.
The following product versions and architecture are used in this example deployment:
- Red Hat Enterprise Linux (RHEL) Server 6.2
- Storage Foundation Cluster File System High Availability 7.4.1
- VMware ESXi 5.1
A four-node virtual machine cluster is configured across two VMware ESXi servers, which share storage over Fibre Channel. The Cluster File System spans four virtual machines: cfs01, cfs02, cfs03, and cfs04. Three Coordination Point (CP) servers are used: cps1, cps2, and cps3 (the third placed on a different ESXi server). For storage, five datastores are used, with one shared VMDK file placed in each datastore.
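For reference, the following is a minimal sketch of how one of the shared VMDKs could be provisioned from the ESXi shell. The datastore path, disk size, and SCSI controller/device IDs are illustrative assumptions, not values mandated by SFCFSHA; eagerzeroedthick provisioning and multi-writer sharing are the VMware settings commonly required for a disk opened by multiple virtual machines at once.

```
# Create an eagerzeroedthick VMDK on one of the five datastores
# (datastore name, directory, and size are hypothetical)
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/DS1/shared/shared1.vmdk

# In each cluster node's .vmx file, attach the disk on a dedicated
# SCSI controller and enable multi-writer sharing so all four nodes
# can open it simultaneously (controller/device IDs are assumptions):
#   scsi1:0.fileName = "/vmfs/volumes/DS1/shared/shared1.vmdk"
#   scsi1:0.sharing  = "multi-writer"
```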
Two private networks, PRIV1 and PRIV2, are used for the cluster heartbeat. Virtual switch vSwitch2 also has the VMkernel port for vMotion enabled. vSwitch0 carries the management traffic and the public IP network.
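To illustrate how the two private networks map to the cluster heartbeat, a minimal /etc/llttab for node cfs01 might look like the following. The cluster ID and the guest NIC names (eth1 and eth2, assumed to be attached to PRIV1 and PRIV2 respectively) are assumptions for this sketch.

```
# /etc/llttab on cfs01 (hypothetical cluster ID and NIC names)
set-node cfs01
set-cluster 1234
# One LLT link per private heartbeat network
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
```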
Some blade servers are limited to two physical networks. If this is the case, configure one network for the heartbeat and the other as a heartbeat backup (low-priority setting), as sketched below.
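In that two-network case, the backup heartbeat is expressed in LLT with a link-lowpri entry over the public network. A minimal sketch, again with assumed NIC names (eth1 private, eth0 public):

```
# /etc/llttab with a single private heartbeat link plus a
# low-priority backup link over the public network
set-node cfs01
set-cluster 1234
link eth1 eth1 - ether - -
link-lowpri eth0 eth0 - ether - -
```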