Veritas InfoScale™ 7.3.1 Virtualization Guide - Linux on ESXi
- Section I. Overview
- Overview of Veritas InfoScale solutions in a VMware environment
- Introduction to using Veritas InfoScale solutions in the VMware virtualization environment
- Introduction to using Dynamic Multi-Pathing for VMware
- About Veritas InfoScale solutions support for the VMware ESXi environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Storage to application visibility using Veritas InfoScale Operations Manager
- About storage to application visibility using Veritas InfoScale Operations Manager
- About discovering the VMware Infrastructure using Veritas InfoScale Operations Manager
- About the multi-pathing discovery in the VMware environment
- About near real-time (NRT) update of virtual machine states
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- How DMP works
- Improving I/O performance using SmartPool
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with Veritas InfoScale product components in the VMware guest
- Optimizing storage with Veritas InfoScale product components in the VMware guest
- Migrating data with Veritas InfoScale product components in the VMware guest
- Improving database performance with Veritas InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Configuring coordination point (CP) servers
- Configuring storage
- Section IV. Reference
I/O fencing considerations in an ESXi environment
Storage Foundation Cluster File System High Availability (SFCFSHA) is supported when running inside a virtual machine; however, using I/O fencing in that configuration requires special attention.
VMware does not support SCSI-3 Persistent Reservations (and hence SCSI-3 based I/O fencing) with third-party clustering software when RDM logical mode or VMDK-based virtual disks are used. In VMware environments, SFHA and SFCFSHA support the following fencing methods:
- Disk-based fencing with RDM in physical compatibility (RDM-P) mode. This method is available starting with SFHA and SFCFSHA 5.1 Service Pack 1 Rolling Patch 1; see the related Veritas tech note for details.
- Non-SCSI-3 PR-based fencing using the Coordination Point (CP) server. The CP server provides arbitration amongst the multiple cluster nodes; a sketch of the corresponding configuration follows this list.
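If the CP server method is chosen, fencing is switched to customized mode on each cluster node through the /etc/vxfenmode file. The following is a minimal sketch only: the CP server hostnames and port are hypothetical, the exact parameter set varies by InfoScale version, and the SFCFSHA Configuration and Upgrade Guide remains the authoritative reference.

```
# /etc/vxfenmode - sketch of non-SCSI-3, CP server-based fencing
# (hypothetical hostnames; a production cluster uses an odd number,
# typically three, of coordination points for arbitration)

vxfen_mode=customized        # script-based (customized) fencing
vxfen_mechanism=cps          # arbitration through CP servers

# Coordination points: one entry per CP server, [host]:port
cps1=[cps1.example.com]:443
cps2=[cps2.example.com]:443
cps3=[cps3.example.com]:443

# secure communication between cluster nodes and CP servers
security=1
```

After the file is in place and the fencing driver is restarted, running vxfenadm -d on a node reports the fencing mode in effect, which should show customized mode with the cps mechanism.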
I/O fencing uses the HBA World Wide Names (WWNs) to create registrations on the storage. This has implications in a virtual environment, where the HBA is shared between virtual machines on the same physical ESXi host: the WWN used for I/O fencing ends up being the same for each virtual machine on that host. Consequently, virtual machines that belong to the same SFCFSHA cluster cannot share a physical server, because if an event triggers the fencing functionality, all nodes running on that ESXi host are fenced out together. In short, if I/O fencing is configured, the nodes of an SFCFSHA cluster must run on separate physical ESXi hosts.
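One way to enforce this placement, so that vSphere DRS never migrates two nodes of the same cluster onto one ESXi host, is a mandatory anti-affinity rule. The following pyVmomi sketch shows the idea; the vCenter address, credentials, cluster name, and VM names are all hypothetical, and the same rule can equally be created interactively in the vSphere Client.

```python
# Sketch: create a DRS anti-affinity rule that keeps SFCFSHA nodes
# on separate ESXi hosts (hypothetical names and credentials).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        for obj in view.view:
            if obj.name == name:
                return obj
    finally:
        view.Destroy()
    return None

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    cluster = find_obj(content, vim.ClusterComputeResource, "ESXi-Cluster-1")
    nodes = [find_obj(content, vim.VirtualMachine, n)
             for n in ("sfcfsha-node1", "sfcfsha-node2")]

    # A mandatory anti-affinity rule: DRS must never place these VMs
    # on the same ESXi host.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name="sfcfsha-separate-hosts", enabled=True, mandatory=True, vm=nodes)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
    # Returns a vCenter task; completion can be awaited if needed.
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
finally:
    Disconnect(si)
```

Making the rule mandatory means DRS refuses any migration or power-on that would co-locate the nodes, which matches the fencing requirement above.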