Veritas InfoScale™ Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- How DMP works
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with Veritas InfoScale product components in the VMware guest
- Optimizing storage with Veritas InfoScale product components in the VMware guest
- Migrating data with Veritas InfoScale product components in the VMware guest
- Improving database performance with Veritas InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
    - About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Configuring coordination point (CP) servers
- Section IV. Reference
Configuring storage
There are two options to provide storage to the Virtual Machines (VMs) that will host the Cluster File System:
The first option, Raw Device Mapping Protocol (RDMP), uses direct access to external storage and supports parallel access to the LUN, but does not allow vMotion or DRS. For an RDMP configuration, you must map the raw device to each VM and select the Physical (RDM-P) compatibility mode, so that SCSI-3 PGR commands are passed through to the disk.
The second option, a VMFS virtual disk (VMDK), provides a file that can be accessed in parallel only when the VMFS multi-writer option is enabled. This option supports vMotion and DRS, but does not currently support SCSI-3 PR I/O fencing. The main advantage of this architecture is the ability to move VMs between ESXi servers without service interruption, using vMotion.
This deployment example uses VMDK files with the multi-writer option enabled. This section shows how to configure the ESXi server and virtual machines to share a VMDK file, and how to configure SFCFSHA to consume that storage and create a file system.

Support for VMDK files is based on the multi-writer option described in this VMware article: http://kb.vmware.com/kb/1034165. By default, a VMDK file can be mounted by only one VM at a time. Following the steps in the VMware article disables the simultaneous-write protection provided by VMFS by setting the multi-writer flag. When choosing this configuration, users should be aware of the following limitations and advantages.
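As an illustrative sketch of the two ESXi-side steps implied above (the datastore path, disk size, and SCSI slot are placeholders, not values from this guide), the shared disk is created eager-zeroed thick with vmkfstools and then marked multi-writer for each VM, per the VMware article:

```shell
# On the ESXi host: create an eager-zeroed thick VMDK
# (size and datastore path are placeholders)
vmkfstools -c 50G -d eagerzeroedthick /vmfs/volumes/datastore1/shared/shared1.vmdk

# In each VM's .vmx file (or via the vSphere Client advanced
# configuration parameters), for the SCSI slot where the shared
# disk is attached (scsi1:0 is an example slot):
#   scsi1:0.sharing = "multi-writer"
```

The same `sharing = "multi-writer"` entry must be added for every shared disk on every VM in the cluster.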
Limitations:
- Virtual disks must be eager zeroed thick.
- VMDK sharing is limited to eight ESXi servers.
- Linked clones and snapshots are not supported. Be aware that other vSphere activities utilize cloning and that backup solutions leverage snapshots via the vAPIs, so backups may be adversely impacted.
- SCSI-3 PR I/O fencing is not supported with VMDK files. Take special care when assigning VMDKs to VMs: inadvertently assigning a VMDK file that is already in use to the wrong VM will likely result in data corruption.
- Storage vMotion is not supported.

Advantage:
- Server vMotion is supported.
The lack of SCSI-3 PR I/O fencing support requires at least three Coordination Point (CP) servers to provide non-SCSI-3 fencing protection. In a split-brain situation, the CP servers determine which part of the sub-cluster continues to provide service. Once the multi-writer flag is enabled on a VMDK file, any VM can mount it and write to it, so special care must be taken during the provisioning phase.
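As a configuration sketch, server-based (non-SCSI-3) fencing with CP servers is typically enabled through /etc/vxfenmode on each cluster node; the hostnames and port below are placeholders, not values from this guide:

```
# /etc/vxfenmode (illustrative values; cps hostnames and port are placeholders)
vxfen_mode=customized
vxfen_mechanism=cps
cps1=[cps1.example.com]:14250
cps2=[cps2.example.com]:14250
cps3=[cps3.example.com]:14250
```

An odd number of CP servers (three or more) is required so that a surviving sub-cluster can always win a majority of coordination points.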
Note that if the number of SFCFSHA nodes is greater than eight, several nodes will have to run on the same ESXi server, because a maximum of eight ESXi servers can share the same VMDK file. For example, at the SFCFSHA maximum of 64 nodes, all 64 VMs would share the same VMDK file, but only eight ESXi servers could host the cluster.
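The host-packing arithmetic above can be sketched as follows (the variable names are illustrative, not from this guide):

```python
import math

MAX_ESXI_HOSTS_PER_VMDK = 8   # multi-writer sharing limit per VMDK file
cluster_nodes = 64            # SFCFSHA maximum cluster size

# Minimum number of cluster VMs that must share each ESXi host
min_vms_per_host = math.ceil(cluster_nodes / MAX_ESXI_HOSTS_PER_VMDK)
print(min_vms_per_host)  # 8
```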
These are the steps required to configure VMDKs as shared backing storage; they are presented in the following sections:
Table: Steps to configure VMDK
| Storage deployment task | Deployment steps |
|---|---|
| Enabling Disk UUID on virtual machines (VMs) | See Enabling Disk UUID on virtual machines (VMs). |
| Installing Array Support Library (ASL) for VMDK on cluster nodes | See Installing Array Support Library (ASL) for VMDK on cluster nodes. |
| Excluding the boot disk from the Volume Manager configuration | See Excluding the boot disk from the Volume Manager configuration. |
| Creating the VMDK files | See Creating the VMDK files. |
| Mapping VMDKs to each virtual machine | See Mapping VMDKs to each virtual machine. |
| Enabling the multi-write flag | See Enabling the multi-write flag. |
| Getting consistent names across nodes | See Getting consistent names across nodes. |
| Creating a Cluster File System | See Creating a Cluster File System. |
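For the first task in the table, the commonly documented VMware approach is to expose stable disk UUIDs to the guest through a VM configuration parameter (a configuration sketch, not this guide's verbatim steps):

```
# In each virtual machine's .vmx file (with the VM powered off),
# or via the vSphere Client advanced configuration parameters, add:
disk.EnableUUID = "TRUE"
```

With this parameter set, the Linux guest can identify each VMDK by a persistent UUID, which is what allows consistent device naming across the cluster nodes.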