Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- How DMP works
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with InfoScale product components in the VMware guest
- Optimizing storage with InfoScale product components in the VMware guest
- Migrating data with InfoScale product components in the VMware guest
- Improving database performance with InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Configuring coordination point (CP) servers
- Section IV. Reference
I/O fencing considerations in an ESXi environment
VMware does not support SCSI-3 Persistent Reservations (and therefore SCSI-3 based I/O fencing) with third-party clustering software when virtual disks are presented in RDM logical mode or as VMDK files. In VMware environments, SFHA and SFCFSHA support the following fencing methods:
- Disk-based fencing with RDM-P (physical compatibility) mode. Available starting with SFHA and SFCFSHA version 5.1 Service Pack 1 Rolling Patch 1. See the following tech note for details.
- Non-SCSI-3 PR-based fencing using the Coordination Point (CP) server. The CP server provides arbitration among the cluster nodes.
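Non-SCSI-3 CP server based fencing is selected in the /etc/vxfenmode file on each cluster node. The fragment below is a minimal sketch, not a complete configuration; the CP server hostname and port are placeholders:

```
# /etc/vxfenmode -- sketch of non-SCSI-3 CP server based fencing
# (cps1.example.com and port 14250 are placeholder values)
vxfen_mode=customized
vxfen_mechanism=cps
security=1
cps1=[cps1.example.com]:14250
```

With multiple CP servers, additional cps2=, cps3=, ... entries list the remaining coordination points.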
I/O fencing uses the HBA World Wide Names (WWNs) to create registrations on the storage. This has implications in a virtual environment, where the HBA is shared among the virtual machines on the same physical ESXi host: the WWN used for I/O fencing is the same for every virtual machine on that host.

Consequently, virtual machines in the same SFCFSHA cluster cannot share a physical ESXi host; if an event triggers the fencing functionality, all cluster nodes on that host would be fenced out together. In short, if I/O fencing is configured, the SFCFSHA nodes of a cluster must run on separate physical ESXi hosts.
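The placement constraint above can be expressed as a simple check: given an inventory mapping each virtual machine to the ESXi host it runs on (the node and host names below are hypothetical, and the inventory would normally come from your vSphere tooling), verify that no two nodes of the same SFCFSHA cluster share a host.

```python
def fencing_placement_ok(cluster_nodes, vm_to_host):
    """Return True if no two cluster nodes run on the same physical ESXi host."""
    hosts_seen = set()
    for node in cluster_nodes:
        host = vm_to_host[node]
        if host in hosts_seen:
            # Two nodes sharing a host share the HBA WWN, so a fencing
            # event would evict both nodes at once.
            return False
        hosts_seen.add(host)
    return True

# Hypothetical inventory: VM name -> ESXi host
inventory = {
    "sfcfsha-node1": "esxi-host-a",
    "sfcfsha-node2": "esxi-host-b",
    "sfcfsha-node3": "esxi-host-a",  # shares a host with node1
}

print(fencing_placement_ok(["sfcfsha-node1", "sfcfsha-node2"], inventory))  # True
print(fencing_placement_ok(["sfcfsha-node1", "sfcfsha-node3"], inventory))  # False
```

In practice the same constraint is usually enforced with a vSphere DRS anti-affinity rule so that the hypervisor keeps the cluster nodes on separate hosts automatically.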