Veritas InfoScale™ Virtualization Guide - Linux on ESXi
- Section I. Overview
- About Veritas InfoScale solutions in a VMware environment
- Section II. Deploying Veritas InfoScale products in a VMware environment
- Getting started
- Understanding Storage Configuration
- Section III. Use cases for Veritas InfoScale product components in a VMware environment
- Application availability using Cluster Server
- Multi-tier business service support
- Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
- How DMP works
- Improving data protection, storage optimization, data migration, and database performance
- Protecting data with Veritas InfoScale product components in the VMware guest
- Optimizing storage with Veritas InfoScale product components in the VMware guest
- Migrating data with Veritas InfoScale product components in the VMware guest
- Improving database performance with Veritas InfoScale product components in the VMware guest
- Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
- About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
- Configuring coordination point (CP) servers
- Section IV. Reference
Mapping the VMDKs to each virtual machine (VM)
Map each of the VMDK files you created to every VM. The example procedure illustrates mapping the VMDKs to the cfs01 node; follow the same steps for each of the other nodes.
To map the VMDKs to each VM
- Shut down the VM.
- Select the VM and select Edit Settings....
- Select Add, select Hard disk, and click Next.
- Select Use an existing virtual disk and click Next.
- Select Browse and choose the DS1 data store.
- Select the cfs0 folder, select the shared1.vmdk file, and click Next.
- On Virtual Device Node, select SCSI (1:0) and click Next.
- Review the details to verify they are correct and click Finish.
- Since this is the first disk added under SCSI controller 1, a new SCSI controller is created.
Set its type to Paravirtual, if that is not the default, and verify that SCSI Bus Sharing is set to None; this setting is required to allow vMotion for the VMs.
- Repeat steps 3 to 8 for the remaining disks to be added to each of the VMs.
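The controller and disk settings produced by the steps above are recorded in the VM's .vmx configuration file. A minimal illustrative fragment, assuming SCSI controller 1 and the first shared disk from the example (key names are standard VMX options, but the exact file name path depends on your datastore layout):

```
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"        # Paravirtual controller type
scsi1.sharedBus = "none"           # SCSI Bus Sharing = None (required for vMotion)
scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "shared1.vmdk"  # path relative to the VM folder; adjust for your datastore
```

This fragment is for reference only; in normal operation the vSphere Client writes these entries for you.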
For the example configuration, the parameters for steps 5-7 are given in the table below:
| Data Store | VMDK Name | Virtual Device |
|------------|-------------------|----------------|
| DS1 | cfs0/shared1.vmdk | SCSI 1:0 |
| DS2 | cfs0/shared2.vmdk | SCSI 1:1 |
| DS3 | cfs0/shared3.vmdk | SCSI 1:2 |
| DS4 | cfs0/shared4.vmdk | SCSI 1:3 |
| DS5 | cfs0/shared5.vmdk | SCSI 1:4 |
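The pattern in the table is regular: disk N lives on datastore DSN and attaches to SCSI target 1:(N-1). A short Python sketch that generates the same plan (the names DS1-DS5, cfs0, and sharedN.vmdk come from the example configuration; the helper itself is purely illustrative):

```python
def vmdk_mapping_plan(num_disks=5, controller=1, folder="cfs0"):
    """Build the datastore/VMDK/SCSI-target plan used in steps 5-7."""
    plan = []
    for i in range(1, num_disks + 1):
        plan.append({
            "datastore": f"DS{i}",
            "vmdk": f"{folder}/shared{i}.vmdk",
            # Unit numbers on the controller start at 0, so disk i maps to i-1.
            "device": f"SCSI {controller}:{i - 1}",
        })
    return plan

for row in vmdk_mapping_plan():
    print(row["datastore"], row["vmdk"], row["device"])
```

A plan like this is convenient as input to an automation tool, since the same mapping must be repeated identically on every node of the cluster.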
The final configuration for the first node of the example cluster (cfs01):
Now follow the same steps for each node of the cluster, mapping each VMDK file to the VM as described above. Once all the steps are completed, all the VMs should have access to the same VMDK files. Note that at this point all the VMs are still powered off and the multi-writer flag has not yet been enabled (that is done in the next step). If you power on one VM in this state, any attempt to power on a second VM will fail, because it would violate the restriction that only one host may access a VMDK at a time.
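The power-on restriction can be thought of as a per-VMDK exclusive lock: without the multi-writer flag, the first VM to power on owns the shared disks, and every later power-on fails. A conceptual Python model of that behavior (this is not a VMware API, just an illustration of the locking semantics):

```python
class Vmdk:
    """Models ESXi's default single-writer locking on a VMDK."""

    def __init__(self, name, multi_writer=False):
        self.name = name
        self.multi_writer = multi_writer
        self.owners = set()

    def acquire(self, vm):
        # Without multi-writer, only one VM may hold the disk open at a time.
        if self.owners and not self.multi_writer:
            raise RuntimeError(f"{self.name} is locked by {sorted(self.owners)}")
        self.owners.add(vm)

shared1 = Vmdk("cfs0/shared1.vmdk")
shared1.acquire("cfs01")       # first VM powers on fine
try:
    shared1.acquire("cfs02")   # second VM fails: the disk is locked
except RuntimeError as err:
    print(err)
```

Enabling the multi-writer flag (the next step) corresponds to constructing the disk with `multi_writer=True`, after which all cluster nodes can open it simultaneously.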