InfoScale™ 9.0 Virtualization Guide - Linux on ESXi
- Section I. Overview
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with InfoScale™ product components in the VMware guest
- Optimizing storage with InfoScale™ product components in the VMware guest
- About Flexible Storage Sharing
- Migrating data with InfoScale™ product components in the VMware guest
- Improving database performance with InfoScale™ product components in the VMware guest
- Setting up virtual machines for fast failover using InfoScale Enterprise on VMware disks
- About setting up InfoScale Enterprise on VMware ESXi
- Section IV. Reference
I/O fencing considerations in an ESXi environment
VMware does not support SCSI-3 Persistent Reservations (and therefore SCSI-3-based I/O fencing) with third-party clustering software when RDM logical mode or VMDK-based virtual disks are used. In VMware environments, SFHA and Arctera InfoScale Enterprise support the following fencing methods:
- Disk-based fencing with RDM-P (physical compatibility) mode. Available starting with SFHA and Arctera InfoScale Enterprise version 5.1 Service Pack 1 Rolling Patch 1. See the following tech note for details.
- Non-SCSI-3 PR-based fencing using the Coordination Point (CP) server. The CP server provides arbitration among the cluster nodes.
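As an illustration of the CP server method, the sketch below shows what a `/etc/vxfenmode` configuration for non-SCSI-3, CP server-based fencing might look like. The hostnames and port are placeholders; consult the product documentation for the authoritative key names and values for your release.

```
# /etc/vxfenmode -- illustrative sketch only, not a tested configuration.

# Use customized (non-SCSI-3) fencing through Coordination Point servers:
vxfen_mode=customized
vxfen_mechanism=cps

# CP servers that arbitrate cluster membership
# (hostnames and port are placeholders):
cps1=[cps1.example.com]:443
cps2=[cps2.example.com]:443
cps3=[cps3.example.com]:443
```

An odd number of coordination points is used so that a surviving subcluster can always win a majority during arbitration.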
I/O fencing uses HBA World Wide Names (WWNs) to create registrations on the storage. This has implications in a virtual environment, where the HBA is shared between virtual machines on the same physical ESXi host: the WWN used for I/O fencing is the same for each of those virtual machines. As a result, Arctera InfoScale Enterprise virtual machines in the same cluster cannot share a physical server, because if an event triggers the fencing functionality, all nodes on that physical ESXi host are fenced out together. In short, if I/O fencing is configured, the Arctera InfoScale Enterprise nodes in the same cluster must run on separate physical ESXi hosts.