InfoScale™ 9.0 Virtualization Guide - Linux on ESXi

Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux, VMware ESX
  1. Section I. Overview
    1. About Veritas InfoScale solutions in a VMware environment
      1. Overview of the InfoScale Virtualization Guide
      2. How InfoScale™ solutions work in a VMware environment
        1. How InfoScale™ product components enhance VMware capabilities
        2. When to use Raw Device Mapping and InfoScale
        3. Array migration
        4. InfoScale™ component limitations in an ESXi environment
        5. I/O fencing considerations in an ESXi environment
      3. About InfoScale™ solutions support for the VMware ESXi environment
      4. Virtualization use cases addressed by InfoScale
  2. Section II. Deploying Veritas InfoScale products in a VMware environment
    1. Getting started
      1. Storage configurations and feature compatibility
      2. About setting up VMware with InfoScale™ products
      3. InfoScale™ products support for VMware environments
      4. Installing and configuring storage solutions in the VMware virtual environment
      5. Recommendations for improved resiliency of InfoScale clusters in virtualized environments
    2. Understanding Storage Configuration
      1. Configuring storage
      2. Enabling disk UUID on virtual machines
      3. Installing Array Support Library (ASL) for VMDK on cluster nodes
      4. Excluding the boot disk from the Volume Manager configuration
      5. Creating the VMDK files
      6. Mapping the VMDKs to each virtual machine (VM)
      7. Enabling the multi-write flag
      8. Getting consistent names across nodes
      9. Creating a file system
  3. Section III. Use cases for Veritas InfoScale product components in a VMware environment
    1. Application availability using Cluster Server
      1. About application availability with Cluster Server (VCS) in the guest
      2. About VCS support for Live Migration
    2. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
    3. Improving data protection, storage optimization, data migration, and database performance
      1. Use cases for InfoScale™ product components in a VMware guest
      2. Protecting data with InfoScale™ product components in the VMware guest
        1. About point-in-time copies
        2. Point-in-time snapshots for InfoScale™ products in the VMware environment
      3. Optimizing storage with InfoScale™ product components in the VMware guest
        1. About Flexible Storage Sharing
          1. Limitations of Flexible Storage Sharing
        2. About SmartTier in the VMware environment
        3. About compression with InfoScale™ product components in the VMware guest
        4. About thin reclamation with InfoScale™ product components in the VMware guest
        5. About SmartMove with InfoScale™ product components in the VMware guest
        6. About SmartTier for Oracle with InfoScale™ product components in the VMware guest
      4. Migrating data with InfoScale™ product components in the VMware guest
        1. Types of data migration
      5. Improving database performance with InfoScale™ product components in the VMware guest
        1. About InfoScale™ product components database accelerators
    4. Setting up virtual machines for fast failover using InfoScale Enterprise on VMware disks
      1. About use cases for InfoScale Enterprise in the VMware guest
      2. InfoScale Enterprise operation in VMware virtualized environments
      3. InfoScale functionality and compatibility matrix
      4. About setting up InfoScale Enterprise on VMware ESXi
        1. Planning an InfoScale Enterprise configuration
        2. Enable Password-less SSH
        3. Enabling TCP traffic to coordination point (CP) Server and management ports
        4. Configuring coordination point (CP) servers
          1. Configuring a Coordination Point server for InfoScale Enterprise
          2. Configuring a Cluster Server (VCS) single node cluster
          3. Configuring a Coordination Point server service group
        5. Deploying InfoScale Enterprise software
        6. Configuring InfoScale Enterprise
        7. Configuring non-SCSI3 fencing
  4. Section IV. Reference
    1. Appendix A. Known issues and limitations
      1. Prevention of Storage vMotion
    2. Appendix B. Where to find more information
      1. Arctera InfoScale documentation
      2. Service and support
      3. About Services and Operations Readiness Tools (SORT)

Configuring storage

There are two options to provide storage to the Virtual Machines (VMs) that will host the Cluster File System:

  • The first option, Raw Device Mapping Protocol (RDMP), uses direct access to external storage and supports parallel access to the LUN, but does not allow vMotion or DRS. For an RDMP configuration, you must map the raw device to each VM and make sure that you select the Physical (RDM-P) compatibility mode, so that SCSI-3 PGR commands are passed through to the disk (a command sketch follows this list).

  • The second option, VMFS virtual disk (VMDK), provides a file that can only be accessed in parallel when the VMFS multi-writer option is enabled. This option supports server vMotion and DRS, but does not currently support SCSI-3 PR IO fencing. The main advantage of this architecture is the ability to move VMs around different ESXi servers without service interruption, using vMotion.
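
As a hedged illustration of the first option (not a step from this guide), a physical-compatibility RDM pointer file can be created on the ESXi host with vmkfstools and then attached to each VM. The LUN identifier and datastore path below are placeholders:

  vmkfstools -z /vmfs/devices/disks/naa.600601601234567890 /vmfs/volumes/datastore1/rdm/shared_rdm.vmdk

The -z option creates the RDM in physical compatibility mode, which is what allows SCSI-3 PGR commands to reach the array.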

Consider an example where VMDK files are configured with the multi-writer option enabled. The following sections cover the steps required to configure the virtual machines on an ESXi server with shared VMDK files, and the steps to configure Arctera InfoScale Enterprise in the same environment to consume the storage and create a file system.

To understand the support for VMDK files with the multi-writer option, refer to the VMware article at: http://kb.vmware.com/kb/1034165
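
As a hedged illustration of what the article describes, the flag is applied per virtual disk; in the .vmx file of each VM the entries look similar to the following, where the SCSI controller and device numbers are placeholders:

  scsi1:0.sharing = "multi-writer"
  scsi1:1.sharing = "multi-writer"

In recent vSphere Client versions, the same setting is typically exposed as the Sharing drop-down on each hard disk.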

Following the steps in the VMware article disables, by means of the multi-writer flag, the simultaneous-write protection that VMFS normally provides. When choosing this configuration, be aware of the following limitations and advantages.

Limitations:

  • Virtual disks must be thick provisioned eager zeroed (see the example after this list).

  • VMDK sharing is limited to eight ESXi servers.

  • Linked clones and snapshots are not supported. Be aware that other vSphere activities use cloning, and that backup solutions take snapshots through the vStorage APIs, so backups may be adversely affected.

  • SCSI-3 PR I/O fencing is not supported by VMDK files.

  • Special care needs to be taken when assigning VMDKs to VMs. Inadvertently assigning a VMDK file that is already in use to the wrong VM is likely to result in data corruption.

  • Storage vMotion is not supported.
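
For the first limitation, a hedged example of creating an eager-zeroed thick VMDK from the ESXi shell; the size and datastore path are placeholders:

  vmkfstools -c 20g -d eagerzeroedthick /vmfs/volumes/datastore1/shared/shared01.vmdk

See Creating the VMDK files for the procedure this guide recommends.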

Advantages:

  • Server vMotion is supported.

Because VMDK files do not support SCSI-3 PR I/O fencing, at least three Coordination Point (CP) servers are required to provide non-SCSI-3 fencing protection. In a split-brain situation, the CP servers determine which part of the sub-cluster continues to provide service. Also, once the multi-writer flag is enabled on a VMDK file, any VM can mount it and write to it, so special care must be taken during the provisioning phase.
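
As a minimal sketch of what the resulting fencing configuration can look like, assuming three CP servers (the host names and port below are placeholders, not values from this guide), the /etc/vxfenmode file on each cluster node would contain entries similar to:

  vxfen_mode=customized
  vxfen_mechanism=cps
  cps1=[cps1.example.com]:443
  cps2=[cps2.example.com]:443
  cps3=[cps3.example.com]:443

See Configuring non-SCSI3 fencing for the supported procedure.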

Note:

If the number of Arctera InfoScale Enterprise nodes is greater than eight, several nodes must run on the same ESXi server, because a maximum of eight ESXi servers can share the same VMDK file. For example, if you run at the Arctera InfoScale Enterprise maximum of 64 nodes, those 64 VMs would share the same VMDK file, but you could use only eight ESXi servers to host the cluster.

The following table lists the steps required to configure VMDKs as shared backing storage; each step is described in the sections that follow:

Table: Steps to configure VMDK

  • Enabling disk UUID on virtual machines (VMs): See Enabling disk UUID on virtual machines.

  • Installing the Array Support Library (ASL) for VMDK on cluster nodes: See Installing Array Support Library (ASL) for VMDK on cluster nodes.

  • Excluding the boot disk from the Volume Manager configuration: See Excluding the boot disk from the Volume Manager configuration.

  • Creating the VMDK files: See Creating the VMDK files.

  • Mapping the VMDKs to each virtual machine: See Mapping the VMDKs to each virtual machine (VM).

  • Enabling the multi-write flag: See Enabling the multi-write flag.

  • Getting consistent names across nodes: See Getting consistent names across nodes.

  • Creating a Cluster File System: See Creating a file system.
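
As a rough end-to-end sketch of the final steps, assuming the shared VMDKs are already mapped with the multi-writer flag and the VMDK ASL is installed (the disk, disk group, volume, and mount point names below are placeholders):

  vxdisk scandisks                          (rescan so Volume Manager discovers the new disks)
  /etc/vx/bin/vxdisksetup -i disk01         (initialize each disk for Volume Manager use)
  vxdg -s init datadg disk01 disk02         (create a shared disk group)
  vxassist -g datadg make datavol 10g       (create a volume)
  mkfs -t vxfs /dev/vx/rdsk/datadg/datavol  (create the file system)
  cfsmntadm add datadg datavol /data all=rw (register the cluster mount)
  cfsmount /data                            (mount the file system on all nodes)

The sections referenced above describe each of these steps, including their prerequisites, in detail.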