InfoScale™ 9.0 Virtualization Guide - Linux on ESXi
- Section I. Overview
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with InfoScale™ product components in the VMware guest
- Optimizing storage with InfoScale™ product components in the VMware guest
- About Flexible Storage Sharing
- Migrating data with InfoScale™ product components in the VMware guest
- Improving database performance with InfoScale™ product components in the VMware guest
- Setting up virtual machines for fast failover using InfoScale Enterprise on VMware disks
- About setting up InfoScale Enterprise on VMware ESXi
- Section IV. Reference
Configuring storage
There are two options to provide storage to the Virtual Machines (VMs) that will host the Cluster File System:
The first option, Raw Device Mapping Protocol (RDMP), uses direct access to external storage and supports parallel access to the LUN, but does not allow vMotion or DRS. For RDMP configuration, you must map the raw device to each VM and make sure you select the Physical (RDM-P) configuration, so SCSI-3 PGR commands are passed along to the disk.
The second option, VMFS virtual disk (VMDK), provides a file that can only be accessed in parallel when the VMFS multi-writer option is enabled. This option supports server vMotion and DRS, but does not currently support SCSI-3 PR IO fencing. The main advantage of this architecture is the ability to move VMs around different ESXi servers without service interruption, using vMotion.
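For the RDMP option, a quick way to confirm from within a guest that SCSI-3 persistent reservation commands are actually being passed through to the LUN is to query the mapped device with sg_persist. The following is only an illustrative sketch; it assumes the sg3_utils package is installed in the guest and that /dev/sdb is a placeholder for the raw device mapped in physical compatibility mode.

```
# Read the registered reservation keys on the mapped RDM-P device.
# A valid response (even "there are NO registered reservation keys")
# indicates that PERSISTENT RESERVE IN commands reach the LUN.
sg_persist --in --read-keys /dev/sdb

# Read the current persistent reservation, if any, on the same device.
sg_persist --in --read-reservation /dev/sdb
```

If these commands return errors, the device is typically not mapped in physical compatibility mode and SCSI-3 PR based I/O fencing cannot be used on that disk.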
Let us consider an example where VMDK is configured with the multi-writer option enabled. The following sections cover the steps required to configure the virtual machines on the ESXi server with shared VMDK files, and the steps to configure Arctera InfoScale Enterprise in the same environment to consume that storage and create a file system.
To understand the support for VMDK files with the multi-writer option, refer to the VMware article at: http://kb.vmware.com/kb/1034165
By following the steps in the VMware article, simultaneous write protection provided by VMFS is disabled using the multi-writer flag. When choosing this configuration, users should be aware of the following limitations and advantages.
Limitations:
- Virtual disks must be thick provisioned eager zeroed.
- VMDK sharing is limited to eight ESXi servers.
- Linked clones and snapshots are not supported. Be aware that other vSphere activities utilize cloning and that backup solutions leverage snapshots via the vAPIs, so backups may be adversely impacted.
- SCSI-3 PR IO fencing is not supported by VMDK files.
- Special care needs to be taken when assigning VMDKs to VMs. Inadvertently assigning a VMDK file already in use to the wrong VM will likely result in data corruption.
- Storage vMotion is not supported.
Advantages:
- Server vMotion is supported.
The lack of SCSI-3 PR IO fencing support requires the use of at least three Coordination Point (CP) servers to provide non-SCSI-3 fencing protection. In the event of a split-brain situation, the CP servers determine which part of the sub-cluster continues to provide service. Once the multi-writer flag is enabled on a VMDK file, any VM to which the file is assigned can mount it and write to it, so special care must be taken during the provisioning phase.
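As a reference, the following is a minimal sketch of what a server-based (non-SCSI-3) fencing configuration typically looks like in /etc/vxfenmode on the cluster nodes. The CP server host names and port are placeholders; in practice the fencing configuration utility generates this file, so refer to the fencing documentation for your release for the exact parameters.

```
# /etc/vxfenmode - illustrative sketch only; host names and port are placeholders.
# "customized" mode with the "cps" mechanism selects server-based fencing,
# which does not rely on SCSI-3 PR and therefore works with shared VMDK files.
vxfen_mode=customized
vxfen_mechanism=cps

# Three Coordination Point servers act as the arbitration points
# that decide which sub-cluster survives a split-brain condition.
cps1=[cps1.example.com]:443
cps2=[cps2.example.com]:443
cps3=[cps3.example.com]:443
```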
Note:
If the number of Arctera InfoScale Enterprise nodes is greater than eight, several nodes will have to run on the same ESXi server, because a maximum of eight ESXi servers can share the same VMDK file. For example, if you are running the Arctera InfoScale Enterprise maximum of 64 nodes, those 64 VMs would share the same VMDK file, but you could use only eight ESXi servers to host the cluster.
These are the steps that need to be taken when configuring VMDKs as shared backing storage; they are presented in detail in the next sections:
Table: Steps to configure VMDK
| Storage deployment task | Deployment steps |
|---|---|
| Enabling Disk UUID on virtual machines (VMs) | See Enabling Disk UUID on virtual machines (VMs). |
| Installing Array Support Library (ASL) for VMDK on cluster nodes | See Installing Array Support Library (ASL) for VMDK on cluster nodes. |
| Excluding the boot disk from the Volume Manager configuration | See Excluding the boot disk from the Volume Manager configuration. |
| Creating the VMDK files | See Creating the VMDK files. |
| Mapping VMDKs to each virtual machine | See Mapping VMDKs to each virtual machine. |
| Enabling the multi-write flag | See Enabling the multi-write flag. |
| Getting consistent names across nodes | See Getting consistent names across nodes. |
| Creating a Cluster File System | See Creating a Cluster File System. |
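As a preview of the VMware-side tasks in the table above, the following sketch shows how a shared VMDK is typically created and flagged for multi-writer access. The datastore name, path, disk size, and SCSI target (scsi1:0) are placeholders, and only the parameters relevant to sharing are shown; the detailed procedures in the next sections remain the authoritative steps.

```
# On an ESXi host: create an eager-zeroed thick VMDK on a datastore that is
# visible to all ESXi servers hosting cluster nodes (names are placeholders).
vmkfstools -c 50G -d eagerzeroedthick /vmfs/volumes/DS1/shared/shared1.vmdk

# In each virtual machine's .vmx file (or through the equivalent advanced
# settings in the vSphere Client): expose the disk UUID to the guest and
# enable multi-writer sharing on the virtual device that maps the shared VMDK.
disk.EnableUUID = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/DS1/shared/shared1.vmdk"
scsi1:0.sharing = "multi-writer"
```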