Veritas InfoScale™ Virtualization Guide - Linux on ESXi

Last Published:
Product(s): InfoScale & Storage Foundation (7.4)
Platform: VMware ESX
  1. Section I. Overview
    1. About Veritas InfoScale solutions in a VMware environment
      1. Overview of the Veritas InfoScale Products Virtualization Guide
      2. How Veritas InfoScale solutions work in a VMware environment
        1. How Veritas InfoScale product components enhance VMware capabilities
        2. When to use Raw Device Mapping and Storage Foundation
        3. Array migration
        4. Veritas InfoScale component limitations in an ESXi environment
        5. I/O fencing considerations in an ESXi environment
      3. About Veritas InfoScale solutions support for the VMware ESXi environment
      4. Virtualization use cases addressed by Veritas InfoScale products
  2. Section II. Deploying Veritas InfoScale products in a VMware environment
    1. Getting started
      1. Storage configurations and feature compatibility
      2. About setting up VMware with Veritas InfoScale products
      3. Veritas InfoScale products support for VMware environments
      4. Installing and configuring storage solutions in the VMware virtual environment
    2. Understanding Storage Configuration
      1. Configuring storage
      2. Enabling disk UUID on virtual machines
      3. Installing Array Support Library (ASL) for VMDK on cluster nodes
      4. Excluding the boot disk from the Volume Manager configuration
      5. Creating the VMDK files
      6. Mapping the VMDKs to each virtual machine (VM)
      7. Enabling the multi-write flag
      8. Getting consistent names across nodes
      9. Creating a file system
  3. Section III. Use cases for Veritas InfoScale product components in a VMware environment
    1. Application availability using Cluster Server
      1. About application availability with Cluster Server (VCS) in the guest
      2. About VCS support for Live Migration
    2. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
    3. Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
      1. Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
      2. How DMP works
        1. How DMP monitors I/O on paths
          1. Path failover mechanism
          2. I/O throttling
          3. Subpaths Failover Group (SFG)
          4. Low Impact Path Probing (LIPP)
        2. Load balancing
        3. About DMP I/O policies
      3. Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
      4. Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
      5. Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
      6. Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
    4. Improving data protection, storage optimization, data migration, and database performance
      1. Use cases for Veritas InfoScale product components in a VMware guest
      2. Protecting data with Veritas InfoScale product components in the VMware guest
        1. About point-in-time copies
        2. Point-in-time snapshots for Veritas InfoScale products in the VMware environment
      3. Optimizing storage with Veritas InfoScale product components in the VMware guest
        1. About SmartTier in the VMware environment
        2. About compression with Veritas InfoScale product components in the VMware guest
        3. About thin reclamation with Veritas InfoScale product components in the VMware guest
        4. About SmartMove with Veritas InfoScale product components in the VMware guest
        5. About SmartTier for Oracle with Veritas InfoScale product components in the VMware guest
      4. Migrating data with Veritas InfoScale product components in the VMware guest
        1. Types of data migration
      5. Improving database performance with Veritas InfoScale product components in the VMware guest
        1. About Veritas InfoScale product components database accelerators
    5. Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
      1. About use cases for InfoScale Enterprise in the VMware guest
      2. Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
      3. Storage Foundation functionality and compatibility matrix
      4. About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
        1. Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
        2. Enable Password-less SSH
        3. Enabling TCP traffic to coordination point (CP) Server and management ports
        4. Configuring coordination point (CP) servers
          1. Configuring a Coordination Point server for Storage Foundation Cluster File System High Availability (SFCFSHA)
          2. Configuring a Coordination Point server service group
          3. Configuring a Cluster Server (VCS) single node cluster
        5. Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
        6. Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
        7. Configuring non-SCSI3 fencing
  4. Section IV. Reference
    1. Appendix A. Known issues and limitations
      1. Prevention of Storage vMotion
    2. Appendix B. Where to find more information
      1. Veritas InfoScale documentation
      2. Service and support
      3. About Veritas Services and Operations Readiness Tools (SORT)

Configuring storage

There are two options to provide storage to the Virtual Machines (VMs) that will host the Cluster File System:

  • The first option, Raw Device Mapping Protocol (RDMP), uses direct access to external storage and supports parallel access to the LUN, but does not allow vMotion or DRS. For RDMP configuration, you must map the raw device to each VM and make sure you select the Physical (RDM-P) configuration, so SCSI-3 PGR commands are passed along to the disk.

  • The second option, VMFS virtual disk (VMDK), provides a file that can only be accessed in parallel when the VMFS multi-writer option is enabled. This option supports server vMotion and DRS, but does not currently support SCSI-3 PR IO fencing. The main advantage of this architecture is the ability to move VMs around different ESXi servers without service interruption, using vMotion.
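As an illustration of the first option, a physical-mode RDM pointer file can be created on the ESXi host with vmkfstools before it is attached to each VM; this is a sketch only, and the device NAA ID and datastore path below are hypothetical:

```shell
# Create a physical compatibility mode (passthrough) RDM pointer file
# for a shared LUN. The -z flag selects physical mode (RDM-P), so
# SCSI-3 PGR commands are passed through to the array.
# The NAA device ID and datastore path are placeholder examples.
vmkfstools -z /vmfs/volumes/datastore1/rdm/shared_lun0-rdmp.vmdk \
  -d rdmp:/vmfs/devices/disks/naa.600601601234567890abcdef12345678
```

The resulting pointer VMDK is then added to each clustered VM as an existing disk on a shared SCSI controller.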

This deployment example uses VMDK files with the multi-writer option enabled. This section shows how to configure the ESXi server and virtual machines to share a VMDK file, and how to configure SFCFSHA to consume that storage and create a file system. Support for VMDK files is based on the multi-writer option described in this VMware article: http://kb.vmware.com/kb/1034165. By default, a VMDK file can be mounted by only one VM at a time. Following the steps in the VMware article disables the simultaneous-write protection provided by VMFS, using the multi-writer flag. When choosing this configuration, be aware of the following limitations and advantages.
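For reference, the multi-writer flag described in the KB article above is set per shared virtual disk in the VM's configuration (.vmx) file; the controller and target numbers below are examples only:

```
# Example .vmx entry: enable multi-writer sharing for the disk at
# SCSI controller 1, target 0 (repeat for each shared disk on each VM)
scsi1:0.sharing = "multi-writer"
```

The same setting must be applied on every VM that shares the disk, and the shared disks are typically placed on a dedicated SCSI controller separate from the boot disk.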

Limitations:

  • Virtual disks must be eager zeroed thick

  • Sharing a VMDK file is limited to a maximum of eight ESXi servers

  • Linked clones and snapshots are not supported. Be aware that other vSphere activities utilize cloning and that backup solutions leverage snapshots via the vAPIs, so backups may be adversely impacted.

  • SCSI-3 PR IO fencing is not supported by VMDK files. Special care needs to be taken when assigning VMDKs to VMs. Inadvertently assigning a VMDK file already in use to the wrong VM will likely result in data corruption.

  • Storage vMotion is not supported

The advantage is that server vMotion is supported.
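Relating to the first limitation, an eager-zeroed thick VMDK can be created up front with vmkfstools on the ESXi host; the size and datastore path below are examples only:

```shell
# Create a 10 GB eager-zeroed thick virtual disk on a VMFS datastore.
# eagerzeroedthick allocates and zeroes every block at creation time,
# which is required for multi-writer sharing.
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/shared/shared01.vmdk
```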

The lack of SCSI-3 PR IO fencing support requires the use of at least three Coordination Point (CP) servers to provide non-SCSI-3 fencing protection. In a split-brain situation, the CP servers determine which part of the sub-cluster continues providing service. Once the multi-writer flag is enabled on a VMDK file, any VM is able to mount it and write to it, so special care must be taken during the provisioning phase.
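As a rough sketch of what CP-server-based fencing looks like on each cluster node, the fencing mode file selects customized fencing with the CP server mechanism; the host names below are hypothetical, and the actual file is generated when fencing is configured:

```
# /etc/vxfenmode sketch: non-SCSI-3, CP-server-based fencing
# (host names are examples; 14250 is the default CP server port)
vxfen_mode=customized
vxfen_mechanism=cps
cps1=[cps1.example.com]:14250
cps2=[cps2.example.com]:14250
cps3=[cps3.example.com]:14250
```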

Note that if the number of SFCFSHA nodes is greater than eight, several nodes will have to run on the same ESXi server, because a maximum of eight ESXi servers can share the same VMDK file. For example, if you are running at the SFCFSHA maximum of 64 nodes, those 64 VMs would share the same VMDK file, but only eight ESXi servers could host the cluster.
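The packing constraint above can be sanity-checked with a few lines of Python; the constants come directly from the limits stated in this section:

```python
import math

MAX_HOSTS_PER_VMDK = 8   # at most eight ESXi servers can share one VMDK file
MAX_SFCFSHA_NODES = 64   # SFCFSHA maximum cluster size

# Minimum number of cluster VMs that must be co-located on each ESXi
# host when the cluster runs at its maximum size.
min_vms_per_host = math.ceil(MAX_SFCFSHA_NODES / MAX_HOSTS_PER_VMDK)
print(min_vms_per_host)  # 8
```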

These are the steps needed to configure VMDKs as shared backing storage; each step is presented in the sections that follow:

Table: Steps to configure VMDK

  • Enabling disk UUID on virtual machines (VMs): See Enabling disk UUID on virtual machines.

  • Installing Array Support Library (ASL) for VMDK on cluster nodes: See Installing Array Support Library (ASL) for VMDK on cluster nodes.

  • Excluding the boot disk from the Volume Manager configuration: See Excluding the boot disk from the Volume Manager configuration.

  • Creating the VMDK files: See Creating the VMDK files.

  • Mapping VMDKs to each virtual machine: See Mapping the VMDKs to each virtual machine (VM).

  • Enabling the multi-write flag: See Enabling the multi-write flag.

  • Getting consistent names across nodes: See Getting consistent names across nodes.

  • Creating a Cluster File System: See Creating a file system.