Veritas InfoScale™ Virtualization Guide - Linux on ESXi

Product(s): InfoScale & Storage Foundation (7.4)
Platform: VMware ESX
Mapping the VMDKs to each virtual machine (VM)

Map each of the created VMDK files to each VM. The example procedure illustrates mapping the VMDKs to the cfs01 node; follow the same steps for each of the other nodes.

To map the VMDKs to each VM

  1. Shut down the VM.
  2. Select the VM and select Edit Settings....
  3. Select Add, select Hard disk, and click Next.
  4. Select Use an existing virtual disk and click Next.
  5. Select Browse and choose the DS1 data store.
  6. Select the folder cfs0, select the shared1.vmdk file, and click Next.
  7. On Virtual Device Node, select SCSI (1:0) and click Next.
  8. Review the details to verify that they are correct and click Finish.
  9. Because this is the first disk added under SCSI controller 1, a new SCSI controller is added. Modify the type to Paravirtual if that is not the default, and verify that SCSI Bus Sharing is set to None; this setting is required to allow vMotion for the VMs.
  10. Follow steps 3 to 8 for the rest of the disks to be added to each of the VMs.

    For the example configuration, the parameters for steps 5-7 are given in the table below:

    Data Store    VMDK Name            Virtual Device
    DS1           cfs0/shared1.vmdk    SCSI 1:0
    DS2           cfs0/shared2.vmdk    SCSI 1:1
    DS3           cfs0/shared3.vmdk    SCSI 1:2
    DS4           cfs0/shared4.vmdk    SCSI 1:3
    DS5           cfs0/shared5.vmdk    SCSI 1:4
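    The vSphere client steps above can also be scripted. The sketch below only prints the equivalent govc CLI commands for the mappings in the table; govc itself, its flag names, and the scsi-1 controller label are assumptions to verify against your govc version before running anything.

    ```shell
    # Hypothetical sketch: print govc commands that would attach the five
    # shared VMDKs from the table above to one node. Nothing is executed
    # against vCenter; review and run the printed commands yourself.
    attach_cmds() {
      vm=$1            # target virtual machine, e.g. cfs01
      i=1
      while [ "$i" -le 5 ]; do
        # Data store DSn holds cfs0/sharedn.vmdk, mapped under SCSI controller 1
        printf 'govc vm.disk.attach -vm %s -ds DS%d -disk cfs0/shared%d.vmdk -controller scsi-1\n' \
            "$vm" "$i" "$i"
        i=$((i + 1))
      done
    }

    attach_cmds cfs01
    ```

    Running the function for each node (cfs01, cfs02, and so on) mirrors repeating the client procedure per VM.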

    The final configuration for the first node of the example cluster (cfs01) is now complete.

    Now follow the same steps for each node of the cluster, mapping each VMDK file to the VM as described above. Once all the steps are completed, all the VMs have access to the same VMDK files. Note that at this point all the VMs are still powered off and the multi-writer flag has not yet been enabled (that is done in the next step). If one VM is powered on in this state, no second VM can start, because without the multi-writer flag a VMDK can be accessed by only one host at a time.
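    Once the nodes are powered on (after the multi-writer flag is enabled in the next step), each node should see the same shared disks. A minimal sketch for checking this by comparing disk serial numbers from two nodes; the helper name, the inlined example serials, and the suggestion to collect serials with a command such as lsblk -d -n -o SERIAL are assumptions, not part of the original procedure.

    ```shell
    # Hypothetical helper: succeed only when two newline-separated lists of
    # disk serial numbers match, ignoring order. Collect each list on a
    # node with something like: lsblk -d -n -o SERIAL /dev/sd?
    same_disks() {
      a=$(printf '%s\n' "$1" | sort)
      b=$(printf '%s\n' "$2" | sort)
      [ "$a" = "$b" ]
    }

    # Example: compare serial lists gathered from two nodes (inlined here;
    # in practice, gather each list over ssh from the respective node)
    node1=$(printf '6000c29a\n6000c29b')
    node2=$(printf '6000c29b\n6000c29a')
    if same_disks "$node1" "$node2"; then
      echo "nodes see the same shared disks"
    fi
    ```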