Veritas InfoScale™ 7.3.1 Virtualization Guide - Linux on ESXi

Last Published:
Product(s): InfoScale & Storage Foundation (7.3.1)
  1. Section I. Overview
    1. Overview of Veritas InfoScale solutions in a VMware environment
      1. Overview of the Veritas InfoScale Products Virtualization Guide
      2. Introduction to using Veritas InfoScale solutions in the VMware virtualization environment
        1. How Veritas InfoScale solutions work in a VMware environment
          1. How Veritas InfoScale product components enhance VMware capabilities
          2. When to use Raw Device Mapping and Storage Foundation
          3. Array migration
          4. Veritas InfoScale component limitations in an ESXi environment
          5. I/O fencing considerations in an ESXi environment
      3. Introduction to using Dynamic Multi-Pathing for VMware
        1. About the SmartPool feature
      4. About the Veritas InfoScale components
      5. About Veritas InfoScale solutions support for the VMware ESXi environment
        1. Veritas InfoScale products support for VMware functionality
      6. Virtualization use cases addressed by Veritas InfoScale products
  2. Section II. Deploying Veritas InfoScale products in a VMware environment
    1. Getting started
      1. Veritas InfoScale products supported configurations in a VMware ESXi environment
      2. Storage configurations and feature compatibility
      3. About setting up VMware with Veritas InfoScale products
      4. Veritas InfoScale products support for VMware environments
      5. Installing and configuring storage solutions in the VMware virtual environment
  3. Section III. Use cases for Veritas InfoScale product components in a VMware environment
    1. Storage to application visibility using Veritas InfoScale Operations Manager
      1. About storage to application visibility using Veritas InfoScale Operations Manager
        1. About Control Hosts in Veritas InfoScale Operations Manager
      2. About discovering the VMware Infrastructure using Veritas InfoScale Operations Manager
        1. Requirements for discovering vCenter and ESX servers using Veritas InfoScale Operations Manager
        2. How Veritas InfoScale Operations Manager discovers vCenter and ESX servers
        3. Information that Veritas InfoScale Operations Manager discovers on the VMware Infrastructure components
        4. About the datastores in Veritas InfoScale Operations Manager
        5. About the multi-pathing discovery in the VMware environment
          1. About the user privileges for multi-pathing discovery in the VMware environment
        6. About near real-time (NRT) update of virtual machine states
          1. Setting up near real-time (NRT) update of virtual machine states
          2. Configuring the VMware vCenter Server to generate SNMP traps
      3. About discovering LPAR and VIO in Veritas InfoScale Operations Manager
      4. About LPAR storage correlation supported in Veritas InfoScale Operations Manager
    2. Application availability using Cluster Server
      1. About application availability with Cluster Server (VCS) in the guest
      2. About VCS support for Live Migration
      3. About the VCS for vSphere setup
      4. Implementing application availability
      5. Assessing availability levels for Cluster Server in the VMware guest
    3. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
    4. Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
      1. Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
      2. About Dynamic Multi-Pathing for VMware
      3. How DMP works
        1. How DMP monitors I/O on paths
          1. Path failover mechanism
          2. I/O throttling
          3. Subpaths Failover Group (SFG)
          4. Low Impact Path Probing (LIPP)
        2. Load balancing
        3. About DMP I/O policies
      4. About storage visibility using Dynamic Multi-Pathing (DMP) in the hypervisor
      5. Example: achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
      6. About storage availability using Dynamic Multi-Pathing in the hypervisor
      7. Example: achieving storage availability using Dynamic Multi-Pathing in the hypervisor
      8. About I/O performance with Dynamic Multi-Pathing in the hypervisor
      9. Example: improving I/O performance with Dynamic Multi-Pathing in the hypervisor
      10. About simplified management using Dynamic Multi-Pathing in the hypervisor and guest
      11. Example: achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
    5. Improving I/O performance using SmartPool
      1. Improving I/O performance with Veritas InfoScale product components in the VMware guest and DMP for VMware in the ESXi host
      2. Implementing the SmartIO and SmartPool solution
    6. Improving data protection, storage optimization, data migration, and database performance
      1. Use cases for Veritas InfoScale product components in a VMware guest
      2. Protecting data with Veritas InfoScale product components in the VMware guest
        1. About point-in-time copies
        2. Point-in-time snapshots for Veritas InfoScale products in the VMware environment
      3. Optimizing storage with Veritas InfoScale product components in the VMware guest
        1. About SmartTier in the VMware environment
        2. About compression with Veritas InfoScale product components in the VMware guest
        3. About thin reclamation with Veritas InfoScale product components in the VMware guest
        4. About SmartMove with Veritas InfoScale product components in the VMware guest
        5. About SmartTier for Oracle with Veritas InfoScale product components in the VMware guest
      4. Migrating data with Veritas InfoScale product components in the VMware guest
        1. Types of data migration
      5. Improving database performance with Veritas InfoScale product components in the VMware guest
        1. About Veritas InfoScale product components database accelerators
      6. Simplified storage management with Veritas InfoScale product components in the VMware guest
    7. Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
      1. About use cases for Storage Foundation Cluster File System High Availability in the VMware guest
      2. Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
      3. Storage Foundation functionality and compatibility matrix
      4. About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
        1. Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
        2. Enabling password-less SSH
        3. Enabling TCP traffic to coordination point (CP) Server and management ports
        4. Configuring coordination point (CP) servers
          1. Configuring a Coordination Point server for Storage Foundation Cluster File System High Availability (SFCFSHA)
          2. Configuring a Coordination Point server service group
          3. Configuring a Cluster Server (VCS) single node cluster
        5. Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
        6. Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
        7. Configuring non-SCSI3 fencing
      5. Configuring storage
        1. Enabling disk UUID on virtual machines
        2. Installing Array Support Library (ASL) for VMDK on cluster nodes
        3. Excluding the boot disk from the Volume Manager configuration
        4. Creating the VMDK files
        5. Mapping the VMDKs to each virtual machine (VM)
        6. Enabling the multi-write flag
        7. Getting consistent names across nodes
        8. Creating a clustered file system
  4. Section IV. Reference
    1. Appendix A. Known issues and limitations
      1. Prevention of Storage vMotion
    2. Appendix B. Where to find more information
      1. Veritas InfoScale documentation
      2. Service and support
      3. About Veritas Services and Operations Readiness Tools (SORT)

Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)

To configure SFCFSHA cluster settings

  1. Run the installer with the -configure option (./installer -configure), or continue from where the previous step left off by entering y.
  2. Fencing would normally be the next step in configuring SFCFSHA. However, the I/O fencing configuration depends on factors that are not yet determined:
    • Whether VMDK or RDM storage devices are used

    • How I/O and network paths are configured

    • The configuration of the coordination point (CP) server (or, in some cases, coordinator disks)

    For now, enter n when prompted to configure I/O fencing in enabled mode; you can return to it later in the configuration process.
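    When fencing is deferred this way, the cluster starts with fencing disabled. As an illustration only (exact file contents vary by release), the /etc/vxfenmode file on each node then typically carries the disabled setting until you reconfigure fencing:

    ```
    # /etc/vxfenmode -- fencing left disabled until the CP server and
    # storage (VMDK vs. RDM) decisions are made; reconfigure later
    vxfen_mode=disabled
    ```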

  3. Configure the cluster name when prompted.

    The cluster name for the example deployment is cfs0

  4. Configure the NICs used for heartbeat when prompted.

    LLT (Low Latency Transport) can be configured over Ethernet or UDP. UDP is needed only when routing between the nodes is necessary; if routing is not needed, Ethernet is the clear recommendation.

    In the example deployment, eth4 and eth5 are the private links. The public link, eth3, is used only as a low-priority heartbeat path, so it carries heartbeats only if the other two paths fail.

    All media speed checks should succeed. If they do not, review your node interconnections.
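    The heartbeat choices above end up in LLT's /etc/llttab configuration file, which the installer generates. The following sketch is illustrative only: the node name cfs01 and cluster ID 57 are assumed values, and your link device names may differ.

    ```
    set-node cfs01
    set-cluster 57
    link eth4 eth4 - ether - -
    link eth5 eth5 - ether - -
    link-lowpri eth3 eth3 - ether - -
    ```

    The two link lines define the high-priority private heartbeats; link-lowpri marks the public interface as a low-priority path used only when the private links fail.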

  5. Configure the cluster ID when prompted. A unique cluster ID is needed: it is vital to choose a number that is not used by any other cluster, especially one sharing the same network interconnect (private or public). The CPI generates a random number and checks the network to make sure that packets with that ID do not exist. However, the CPI cannot guarantee that the ID is not in use by a cluster that is currently powered off. The best practice is to maintain a register of the cluster IDs used across the data center to avoid duplicates. In the example configuration, no other clusters with that ID were found.
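    The cluster-ID register mentioned above can be as simple as a text file listing the IDs already allocated, one per line. A minimal sketch (the register file and helper function are hypothetical, not part of the product) that prints the lowest unused LLT cluster ID:

    ```shell
    #!/bin/sh
    # Print the lowest LLT cluster ID (valid range 0-65535) that does not
    # already appear, one ID per line, in the register file given as $1.
    next_free_id() {
        register="$1"
        id=0
        while [ "$id" -le 65535 ] && grep -qx "$id" "$register"; do
            id=$((id + 1))
        done
        echo "$id"
    }
    ```

    For example, with a register containing 0, 1, and 2, the function prints 3.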

  6. At this point, a summary of the configuration to be deployed is presented. Examine the summary and enter y if everything is correct. If not, enter n and go through the steps again.

  7. The installer prompts for a virtual IP to manage the cluster. This is not mandatory; the cluster can be configured without it. Depending on your implementation, it may be a best practice.

  8. Decide whether or not to use secure mode.

    In the past, the difficulty in configuring Cluster Server secure mode deterred many users from using it. For SFCFSHA:

    • Secure mode configuration is much easier

    • The installer takes care of the entire configuration

    • A validated user name and password from the OS are used instead of the traditional admin/password login

    For demonstration purposes, secure mode is used in the example deployment, but feel free to choose the option that best suits your needs.

    FIPS is not used in the example configuration because it is not certified for deployment with CP servers. Option 1, secure mode without FIPS, is used.

  9. SMTP is not needed for the example.
  10. SNMP notifications are not needed for the example.

    At this point, the installer initiates the cluster configuration.