Veritas InfoScale™ 7.4.1 Virtualization Guide - Linux on ESXi

Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: Linux, VMware ESX
  1. Section I. Overview
    1. About Veritas InfoScale solutions in a VMware environment
      1. Overview of the Veritas InfoScale Products Virtualization Guide
      2. How InfoScale solutions work in a VMware environment
        1. How InfoScale product components enhance VMware capabilities
        2. When to use Raw Device Mapping and Storage Foundation
        3. Array migration
        4. InfoScale component limitations in an ESXi environment
        5. I/O fencing considerations in an ESXi environment
      3. About InfoScale solutions support for the VMware ESXi environment
      4. Virtualization use cases addressed by Veritas InfoScale products
  2. Section II. Deploying Veritas InfoScale products in a VMware environment
    1. Getting started
      1. Storage configurations and feature compatibility
      2. About setting up VMware with InfoScale products
      3. InfoScale products support for VMware environments
      4. Installing and configuring storage solutions in the VMware virtual environment
    2. Understanding Storage Configuration
      1. Configuring storage
      2. Enabling disk UUID on virtual machines
      3. Installing Array Support Library (ASL) for VMDK on cluster nodes
      4. Excluding the boot disk from the Volume Manager configuration
      5. Creating the VMDK files
      6. Mapping the VMDKs to each virtual machine (VM)
      7. Enabling the multi-write flag
      8. Getting consistent names across nodes
      9. Creating a file system
  3. Section III. Use cases for Veritas InfoScale product components in a VMware environment
    1. Application availability using Cluster Server
      1. About application availability with Cluster Server (VCS) in the guest
      2. About VCS support for Live Migration
    2. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
    3. Improving storage visibility, availability, and I/O performance using Dynamic Multi-Pathing
      1. Use cases for Dynamic Multi-Pathing (DMP) in the VMware environment
      2. How DMP works
        1. How DMP monitors I/O on paths
          1. Path failover mechanism
          2. I/O throttling
          3. Subpaths Failover Group (SFG)
          4. Low Impact Path Probing (LIPP)
        2. Load balancing
        3. About DMP I/O policies
      3. Achieving storage visibility using Dynamic Multi-Pathing in the hypervisor
      4. Achieving storage availability using Dynamic Multi-Pathing in the hypervisor
      5. Improving I/O performance with Dynamic Multi-Pathing in the hypervisor
      6. Achieving simplified management using Dynamic Multi-Pathing in the hypervisor and guest
    4. Improving data protection, storage optimization, data migration, and database performance
      1. Use cases for InfoScale product components in a VMware guest
      2. Protecting data with InfoScale product components in the VMware guest
        1. About point-in-time copies
        2. Point-in-time snapshots for InfoScale products in the VMware environment
      3. Optimizing storage with InfoScale product components in the VMware guest
        1. About SmartTier in the VMware environment
        2. About compression with InfoScale product components in the VMware guest
        3. About thin reclamation with InfoScale product components in the VMware guest
        4. About SmartMove with InfoScale product components in the VMware guest
        5. About SmartTier for Oracle with InfoScale product components in the VMware guest
      4. Migrating data with InfoScale product components in the VMware guest
        1. Types of data migration
      5. Improving database performance with InfoScale product components in the VMware guest
        1. About InfoScale product components database accelerators
    5. Setting up virtual machines for fast failover using Storage Foundation Cluster File System High Availability on VMware disks
      1. About use cases for InfoScale Enterprise in the VMware guest
      2. Storage Foundation Cluster File System High Availability operation in VMware virtualized environments
      3. Storage Foundation functionality and compatibility matrix
      4. About setting up Storage Foundation Cluster File System High Availability on VMware ESXi
        1. Planning a Storage Foundation Cluster File System High Availability (SFCFSHA) configuration
        2. Enable Password-less SSH
        3. Enabling TCP traffic to coordination point (CP) Server and management ports
        4. Configuring coordination point (CP) servers
          1. Configuring a Coordination Point server for Storage Foundation Cluster File System High Availability (SFCFSHA)
          2. Configuring a Coordination Point server service group
          3. Configuring a Cluster Server (VCS) single node cluster
        5. Deploying Storage Foundation Cluster File System High Availability (SFCFSHA) software
        6. Configuring Storage Foundation Cluster File System High Availability (SFCFSHA)
        7. Configuring non-SCSI3 fencing
  4. Section IV. Reference
    1. Appendix A. Known issues and limitations
      1. Prevention of Storage vMotion
    2. Appendix B. Where to find more information
      1. Veritas InfoScale documentation
      2. Service and support
      3. About Veritas Services and Operations Readiness Tools (SORT)

Configuring non-SCSI3 fencing

VMDK files do not currently support SCSI-3 Persistent Reservation (PR), so non-SCSI-3 PR fencing must be used. Coordination point (CP) servers provide the required level of server-based fencing. At this point in the configuration process, the three CP servers that are to be used with this cluster should be available, and the CP service should be up and running on each of them.
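Before starting the installer dialog, it is worth confirming that each CP server responds. A minimal check with cpsadm, assuming the illustrative CP server names cps1v, cps2v, and cps3v used in the examples below:

    # cpsadm -s cps1v -a ping_cps
    # cpsadm -s cps2v -a ping_cps
    # cpsadm -s cps3v -a ping_cps

If any server fails to respond, resolve the connectivity or service issue before continuing with the fencing configuration.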

To configure non-SCSI-3 fencing

  1. If you started at the beginning of the installer process and selected the option to enable fencing, you are prompted to configure fencing now.

    If you chose not to enable fencing at that point, the cluster configuration is already finished. Run installsfcfsha61 -fencing to enable fencing in the cluster.

  2. Regardless of how you reached the fencing configuration in the installer, select option 1 for Coordination Point client-based fencing.
  3. When prompted whether your storage environment supports SCSI-3 PR, select n, because VMDK files do not support SCSI-3 PR.
  4. When prompted whether you want to configure non-SCSI-3 fencing, select y.
  5. For production environments, three CP servers are recommended. Enter 3 when prompted for the number of coordination points.

  6. Specify how many interfaces the CP servers will listen on and the IP address of each interface. If a CP server is reachable over several networks, the best practice is to configure every interface; this gives the SFCFSHA nodes maximum communication flexibility if a race condition occurs.

    Enter the host names and VIPs for the other CP servers and review the fencing configuration.

  7. When prompted, select secure mode. All the trust relationships between the cluster nodes and the CP servers are set up automatically.
  8. Verify that the cluster information is correct. Each node is registered with each of the CP servers. Once this is done, the installer restarts VCS to apply the fencing configuration. At this point, no file system has been configured yet.
  9. When prompted, configure the Coordination Point agent on the client (a recommended best practice), so that the CP servers are proactively monitored from the cluster. This step completes the fencing configuration.
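After the installer applies the configuration, the customized, CP server-based fencing mode is recorded in /etc/vxfenmode on each cluster node. A representative fragment is shown below; the server names and port are illustrative and should match your own CP server VIPs:

    vxfen_mode=customized
    vxfen_mechanism=cps
    cps1=[cps1v]:443
    cps2=[cps2v]:443
    cps3=[cps3v]:443
    security=1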

Once the fencing configuration is complete, you can verify that it is correct.

To verify the fencing configuration

  1. Query each of the CP servers to verify that each node has been registered.
    # export CPS_USERNAME=CPSADM@VCS_SERVICES
    # export CPS_DOMAINTYPE=vx
    [root@cfs01 install]# cpsadm -s cps1v -a list_nodes
    ClusterName UUID                                   Hostname(Node ID) Registered
    =========== ====================================== ================  ==========
    cfs0        {38910d38-1dd2-11b2-a898-f1c7b967fd89} cfs01(0)             1
    cfs0        {38910d38-1dd2-11b2-a898-f1c7b967fd89} cfs02(1)             1
    cfs0        {38910d38-1dd2-11b2-a898-f1c7b967fd89} cfs03(2)             1
    cfs0        {38910d38-1dd2-11b2-a898-f1c7b967fd89} cfs04(3)             1
    [root@cfs01 install]# cpsadm -s cps1v -a list_membership -c cfs0
    List of registered nodes: 0 1 2 3
  2. Run the same commands against each of the remaining CP servers.
  3. In the VCS Cluster Explorer screen, verify that the vxfen service group has been created to monitor the CP servers and that it is online.
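The per-server queries in steps 1 and 2 can be combined in a short loop, and the fencing mode can be confirmed on each node with vxfenadm. This sketch assumes the same illustrative CP server names and the cluster name cfs0:

    # for cps in cps1v cps2v cps3v; do
    >   cpsadm -s $cps -a list_nodes
    >   cpsadm -s $cps -a list_membership -c cfs0
    > done
    # vxfenadm -d

The vxfenadm -d output should report the fencing mode as Customized with the cps mechanism, and show all cluster nodes in the RUNNING state.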