Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi

Last Published:
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Linux, VMware ESX
  1. Section I. Overview
    1. About Veritas InfoScale solutions in a VMware environment
      1. Overview of the Veritas InfoScale Products Virtualization Guide
      2. How InfoScale solutions work in a VMware environment
        1. How InfoScale product components enhance VMware capabilities
        2. When to use Raw Device Mapping and Storage Foundation
        3. Array migration
        4. InfoScale component limitations in an ESXi environment
        5. I/O fencing considerations in an ESXi environment
      3. About InfoScale solutions support for the VMware ESXi environment
      4. Virtualization use cases addressed by Veritas InfoScale products
  2. Section II. Deploying Veritas InfoScale products in a VMware environment
    1. Getting started
      1. Storage configurations and feature compatibility
      2. About setting up VMware with InfoScale products
      3. InfoScale products support for VMware environments
      4. Installing and configuring storage solutions in the VMware virtual environment
    2. Understanding Storage Configuration
      1. Configuring storage
      2. Enabling disk UUID on virtual machines
      3. Installing Array Support Library (ASL) for VMDK on cluster nodes
      4. Excluding the boot disk from the Volume Manager configuration
      5. Creating the VMDK files
      6. Mapping the VMDKs to each virtual machine (VM)
      7. Enabling the multi-write flag
      8. Getting consistent names across nodes
      9. Creating a file system
  3. Section III. Use cases for Veritas InfoScale product components in a VMware environment
    1. Application availability using Cluster Server
      1. About application availability with Cluster Server (VCS) in the guest
      2. About VCS support for Live Migration
    2. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
    3. Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
      1. Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
      2. How DMP works
        1. How DMP monitors I/O on paths
          1. Path failover mechanism
          2. I/O throttling
          3. Subpaths Failover Group (SFG)
          4. Low Impact Path Probing (LIPP)
        2. Load balancing
        3. About DMP I/O policies
      3. Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
      4. Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
      5. Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
      6. Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
    4. Improving data protection, storage optimization, data migration, and database performance
      1. Use cases for InfoScale product components in a VMware guest
      2. Protecting data with InfoScale product components in the VMware guest
        1. About point-in-time copies
        2. Point-in-time snapshots for InfoScale products in the VMware environment
      3. Optimizing storage with InfoScale product components in the VMware guest
        1. About SmartTier in the VMware environment
        2. About compression with InfoScale product components in the VMware guest
        3. About thin reclamation with InfoScale product components in the VMware guest
        4. About SmartMove with InfoScale product components in the VMware guest
        5. About SmartTier for Oracle with InfoScale product components in the VMware guest
      4. Migrating data with InfoScale product components in the VMware guest
        1. Types of data migration
      5. Improving database performance with InfoScale product components in the VMware guest
        1. About InfoScale product components database accelerators
    5. Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
      1. About use cases for InfoScale Enterprise in the VMware guest
      2. Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
      3. Storage Foundation functionality and compatibility matrix
      4. About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
        1. Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
        2. Enable Password-less SSH
        3. Enabling TCP traffic to coordination point (CP) Server and management ports
        4. Configuring coordination point (CP) servers
          1. Configuring a Coordination Point server for Storage Foundation Cluster File System High Availability (SFCFSHA)
          2. Configuring a Coordination Point server service group
          3. Configuring a Cluster Server (VCS) single node cluster
        5. Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
        6. Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
        7. Configuring non-SCSI3 fencing
  4. Section IV. Reference
    1. Appendix A. Known issues and limitations
      1. Prevention of Storage vMotion
    2. Appendix B. Where to find more information
      1. Veritas InfoScale documentation
      2. Service and support
      3. About Veritas Services and Operations Readiness Tools (SORT)

How DMP works

Dynamic Multi-Pathing (DMP) provides greater availability, reliability, and performance through its path failover and load balancing features. These features are available for multiported disk arrays from various vendors.

Disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array. DMP can also differentiate between different enclosures of a supported array that are connected to the same host system.

The multi-pathing policy that DMP uses depends on the characteristics of the disk array.

DMP supports the following standard array types:

Table: Standard array types supported by DMP

Active/Active (A/A)
    Allows several paths to be used concurrently for I/O. Such arrays allow DMP to provide greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the LUNs. In the event that one path fails, DMP automatically routes I/O over the other available paths.

Asymmetric Active/Active (A/A-A)
    A/A-A or Asymmetric Active/Active arrays can be accessed through secondary storage paths with little performance degradation. The behavior is similar to ALUA, except that it does not support the SCSI commands that an ALUA array supports.

Asymmetric Logical Unit Access (ALUA)
    DMP supports all variants of ALUA.

Active/Passive (A/P)
    Allows access to its LUNs (logical units; real disks or virtual disks created using hardware) via the primary (active) path on a single controller (also known as an access port or a storage processor) during normal operation.

    In implicit failover mode (or autotrespass mode), an A/P array automatically fails over by scheduling I/O to the secondary (passive) path on a separate controller if the primary path fails. This passive port is not used for I/O until the active port fails. In A/P arrays, path failover can occur for a single LUN if I/O fails on the primary path.

    This array mode supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

Active/Passive in explicit failover mode or non-autotrespass mode (A/PF)
    The appropriate command must be issued to the array to make the LUNs fail over to the secondary path.

    This array mode supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

Active/Passive with LUN group failover (A/PG)
    For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary controller and the secondary controller are each connected to a separate group of LUNs. If a single LUN in the primary controller's LUN group fails, all LUNs in that group fail over to the secondary controller.

    This array mode supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.
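The difference between the A/A and A/P semantics described above can be sketched with a small model. This is illustrative Python only; `select_paths` and its tuple layout are hypothetical and not part of any DMP or InfoScale API, and real path selection happens inside the DMP kernel driver.

```python
# Illustrative model of path eligibility for A/A versus A/P arrays.
def select_paths(array_type, paths):
    """Return the paths eligible for I/O for a given array type.

    paths: list of (name, role, healthy) tuples, where role is
    "primary" or "secondary".
    """
    healthy = [p for p in paths if p[2]]
    if array_type == "A/A":
        # Active/Active: all healthy paths carry I/O concurrently.
        return healthy
    if array_type == "A/P":
        # Active/Passive: use healthy primary paths; fail over to the
        # secondary paths only when every primary path has failed.
        primaries = [p for p in healthy if p[1] == "primary"]
        return primaries or [p for p in healthy if p[1] == "secondary"]
    raise ValueError("unsupported array type: " + array_type)

paths = [("vmhba1:C0:T0:L0", "primary", True),
         ("vmhba2:C0:T0:L0", "secondary", True)]

print([p[0] for p in select_paths("A/A", paths)])  # both paths
print([p[0] for p in select_paths("A/P", paths)])  # primary only

# Simulate failure of the primary path: A/P fails over to the secondary.
paths[0] = ("vmhba1:C0:T0:L0", "primary", False)
print([p[0] for p in select_paths("A/P", paths)])  # secondary only
```

The model captures the key contrast: an A/A array spreads I/O over every healthy path, while an A/P array keeps the passive path idle until the primary fails.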

An array policy module (APM) may define array types to DMP in addition to the standard types for the arrays that it supports.

Veritas InfoScale uses DMP metanodes (DMP nodes) to access disk devices connected to the system. For each disk in a supported array, DMP maps one node to the set of paths that are connected to the disk. Additionally, DMP associates the appropriate multi-pathing policy for the disk array with the node.
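A rough sketch of that mapping is shown below. The `DmpNode` class and `register_disk` helper are hypothetical names invented for illustration; they model the idea that one metanode fronts all physical paths to a disk and carries the array's multi-pathing policy, not the actual DMP data structures.

```python
# Sketch: one DMP metanode fronting every physical path to one disk.
class DmpNode:
    def __init__(self, name, policy, paths):
        self.name = name          # metanode name, e.g. "enc0_0"
        self.policy = policy      # multi-pathing policy of the array
        self.paths = list(paths)  # all physical paths to the same disk

nodes = {}

def register_disk(enclosure, index, policy, paths):
    """Map a set of paths to a single metanode and record the policy."""
    node = DmpNode("%s_%d" % (enclosure, index), policy, paths)
    nodes[node.name] = node
    return node

node = register_disk("enc0", 0, "A/A",
                     ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
print(node.name, len(node.paths))  # enc0_0 2
```

Upper layers address the single metanode; which physical path actually carries each I/O is decided by the policy attached to it.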

Figure: How DMP represents multiple physical paths to a disk as one node shows how DMP sets up a node for a disk in a supported disk array.

Figure: How DMP represents multiple physical paths to a disk as one node

DMP implements a disk device naming scheme that lets you identify the array to which a disk belongs.

Figure: Example of multi-pathing for a disk enclosure in a SAN environment shows an example where two paths, vmhba1:C0:T0:L0 and vmhba2:C0:T0:L0, exist to a single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.

Figure: Example of multi-pathing for a disk enclosure in a SAN environment
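The enclosure-based name in the example encodes the enclosure and the disk's index within it. A minimal sketch of that convention follows; the helper names are illustrative only, since DMP generates and parses these names internally.

```python
# Sketch of DMP enclosure-based naming: <enclosure>_<index>.
def dmp_node_name(enclosure, index):
    return "%s_%d" % (enclosure, index)

def enclosure_of(name):
    """Recover the enclosure (array) from a DMP node name."""
    enclosure, _, _ = name.rpartition("_")
    return enclosure

print(dmp_node_name("enc0", 0))  # enc0_0
print(enclosure_of("enc0_0"))    # enc0
```

Splitting on the last underscore keeps the scheme unambiguous even when the enclosure name itself contains underscores.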