Veritas InfoScale™ 8.0 Virtualization Guide - Linux

Product(s): InfoScale & Storage Foundation (8.0)
Platform: Linux
  1. Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
    1. Overview of supported products and technologies
      1. Overview of the Veritas InfoScale Products Virtualization Guide
      2. About Veritas InfoScale Solutions support for Linux virtualization environments
        1. About SmartIO in the Linux virtualized environment
        2. About the SmartPool feature
      3. About Kernel-based Virtual Machine (KVM) technology
        1. Kernel-based Virtual Machine terminology
        2. VirtIO disk drives
      4. About the RHEV environment
        1. RHEV terminology
      5. Virtualization use cases addressed by Veritas InfoScale products
      6. About virtual-to-virtual (in-guest) clustering and failover
  2. Section II. Implementing a basic KVM environment
    1. Getting started with basic KVM
      1. Creating and launching a kernel-based virtual machine (KVM) host
      2. RHEL-based KVM installation and usage
      3. Setting up a kernel-based virtual machine (KVM) guest
      4. About setting up KVM with Veritas InfoScale Solutions
      5. Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
        1. Dynamic Multi-Pathing in the KVM guest virtualized machine
        2. Dynamic Multi-Pathing in the KVM host
        3. Storage Foundation in the virtualized guest machine
        4. Enabling I/O fencing in KVM guests
        5. Storage Foundation Cluster File System High Availability in the KVM host
        6. Dynamic Multi-Pathing in the KVM host and guest virtual machine
        7. Dynamic Multi-Pathing in the KVM host and Storage Foundation HA in the KVM guest virtual machine
        8. Cluster Server in the KVM host
        9. Cluster Server in the guest
        10. Cluster Server in a cluster across virtual machine guests and physical machines
      6. Installing Veritas InfoScale Solutions in the kernel-based virtual machine environment
      7. Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
        1. How Cluster Server (VCS) manages Virtual Machine (VM) guests
    2. Configuring KVM resources
      1. About kernel-based virtual machine resources
      2. Configuring storage
        1. Consistent storage mapping in the KVM environment
        2. Mapping devices to the guest
          1. Mapping DMP meta-devices
          2. Consistent naming across KVM hosts
          3. Mapping devices using paths
          4. Mapping devices using volumes
          5. Mapping devices using the virtio-scsi interface
        3. Resizing devices
      3. Configuring networking
        1. Bridge network configuration
          1. Host network configuration
          2. Configuring the guest network
        2. Network configuration for VCS cluster across physical machines (PM-PM)
        3. Standard bridge configuration
        4. Network configuration for VM-VM cluster
  3. Section III. Implementing Linux virtualization use cases
    1. Application visibility and device discovery
      1. About storage to application visibility using Veritas InfoScale Operations Manager
      2. About Kernel-based Virtual Machine (KVM) virtualization discovery in Veritas InfoScale Operations Manager
      3. About Red Hat Enterprise Virtualization (RHEV) virtualization discovery in Veritas InfoScale Operations Manager
      4. About Microsoft Hyper-V virtualization discovery
      5. Virtual machine discovery in Microsoft Hyper-V
      6. Storage mapping discovery in Microsoft Hyper-V
    2. Server consolidation
      1. Server consolidation
      2. Implementing server consolidation for a simple workload
    3. Physical to virtual migration
      1. Physical to virtual migration
      2. How to implement physical to virtual migration (P2V)
    4. Simplified management
      1. Simplified management
      2. Provisioning storage for a guest virtual machine
        1. Provisioning Veritas Volume Manager volumes as data disks for VM guests
        2. Provisioning Veritas Volume Manager volumes as boot disks for guest virtual machines
      3. Boot image management
        1. Creating the boot disk group
        2. Creating and configuring the golden image
        3. Rapid provisioning of virtual machines using the golden image
        4. Storage savings from space-optimized snapshots
    5. Application availability using Cluster Server
      1. About application availability options
      2. Cluster Server in a KVM environment architecture summary
      3. VCS in the host to provide virtual machine high availability and ApplicationHA in the guest to provide application high availability
      4. Virtual to virtual clustering and failover
      5. I/O fencing support for virtual to virtual clustering
      6. Virtual to physical clustering and failover
      7. Recommendations for improved resiliency of InfoScale clusters in virtualized environments
    6. Virtual machine availability
      1. About virtual machine availability options
      2. VCS in the host monitoring the virtual machine as a resource
      3. Validating the virtualization environment for virtual machine availability
    7. Virtual machine availability for live migration
      1. About live migration
      2. Live migration requirements
      3. Reduce SAN investment with Flexible Shared Storage in the RHEV environment
      4. About Flexible Storage Sharing
        1. Flexible Storage Sharing use cases
        2. Limitations of Flexible Storage Sharing
      5. Configure Storage Foundation components as backend storage for virtual machines
      6. Implementing live migration for virtual machine availability
    8. Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
      1. Installing and configuring Cluster Server for Red Hat Enterprise Virtualization (RHEV) virtual-to-virtual clustering
      2. Storage configuration for VCS in a RHEV environment
    9. Virtual to virtual clustering in a Microsoft Hyper-V environment
      1. Installing and configuring Cluster Server with Microsoft Hyper-V virtual-to-virtual clustering
    10. Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
      1. Installing and configuring Cluster Server for Oracle Virtual Machine (OVM) virtual-to-virtual clustering
      2. Storage configuration for VCS support in Oracle Virtual Machine (OVM)
    11. Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
      1. About disaster recovery for Red Hat Enterprise Virtualization virtual machines
      2. DR requirements in an RHEV environment
      3. Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
        1. Why select VVR over array-based replication solutions
      4. Configure Storage Foundation components as backend storage
      5. Configure VVR and VFR in VCS GCO option for replication between DR sites
      6. Configuring Red Hat Enterprise Virtualization (RHEV) virtual machines for disaster recovery using Cluster Server (VCS)
    12. Multi-tier business service support
      1. About Virtual Business Services
      2. Sample virtual business service configuration
      3. Recovery of multi-tier applications managed with Virtual Business Services in Veritas Operations Manager
        1. Service group management in Virtual Business Services
    13. Managing Docker containers with InfoScale Enterprise
      1. About managing Docker containers with the InfoScale Enterprise product
      2. About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
        1. Supported software
        2. How the agents make Veritas highly available
        3. Documentation reference
      3. Managing storage capacity for Docker containers
        1. Provisioning storage for Docker infrastructure from the Veritas File System
        2. Provisioning data volumes for Docker containers
          1. Provisioning storage on Veritas File System as data volumes for containers
          2. Provisioning VxVM volumes as data volumes for containers
          3. Creating a data volume container
        3. Automatically provisioning storage for Docker containers
          1. Installing the Veritas InfoScale Docker volume plugin
          2. Configuring a disk group
          3. Creating Docker containers with storage attached automatically
          4. Avoiding the noisy neighbor problem by using Quality of Service support
          5. Provisioning to create snapshots
          6. Configuring the Veritas volume plugin with Docker 1.12 Swarm mode
        4. About using InfoScale Enterprise features to manage storage for containers
      4. Offline migration of Docker containers
        1. Migrating Docker containers
        2. Migrating Docker Daemons and Docker Containers
      5. Disaster recovery of volumes and file systems in Docker environments
        1. Configuring Docker containers for disaster recovery
      6. Limitations while managing Docker containers
  4. Section IV. Reference
    1. Appendix A. Troubleshooting
      1. Troubleshooting virtual machine live migration
      2. Live migration storage connectivity in a Red Hat Enterprise Virtualization (RHEV) environment
      3. Troubleshooting Red Hat Enterprise Virtualization (RHEV) virtual machine disaster recovery (DR)
      4. The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
      5. VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
      6. Virtual machine start fails due to having the wrong boot order in RHEV environments
      7. Virtual machine hangs in the wait_for_launch state and fails to start in RHEV environments
      8. VCS fails to start a virtual machine on a host in another RHEV cluster if the DROpts attribute is not set
      9. Virtual machine fails to detect attached network cards in RHEV environments
      10. The KVMGuest agent behavior is undefined if any key of the RHEVMInfo attribute is updated using the -add or -delete options of the hares -modify command
      11. RHEV environment: If a node on which the VM is running panics or is forcefully shut down, VCS is unable to start the VM on another node
    2. Appendix B. Sample configurations
      1. Sample configuration in a KVM environment
        1. Sample configuration 1: Native LVM volumes are used to store the guest image
        2. Sample configuration 2: VxVM volumes are used to store the guest image
        3. Sample configuration 3: CVM-CFS is used to store the guest image
      2. Sample configurations for a Red Hat Enterprise Virtualization (RHEV) environment
    3. Appendix C. Where to find more information
      1. Veritas InfoScale documentation
      2. Linux virtualization documentation
      3. Service and support
      4. About Veritas Services and Operations Readiness Tools (SORT)

Configuring Red Hat Enterprise Virtualization (RHEV) virtual machines for disaster recovery using Cluster Server (VCS)

You can configure new or existing RHEV-based virtual machines for disaster recovery (DR) by first setting up the virtual machines at both sites and then configuring VCS to manage DR failover.

To set up RHEV-based virtual machines for DR

  1. Configure VCS on the RHEL-H hosts at both sites, with the GCO option.

    For more information about configuring a global cluster, see the Veritas InfoScale™ Solutions Disaster Recovery Implementation Guide.

  2. Set up replication using a replication technology such as VVR, VFR, Hitachi TrueCopy, or EMC SRDF.
  3. Map the primary LUNs to all the RHEL-H hosts in the primary site.
  4. Issue OS-level SCSI rescan commands and verify that the LUNs are visible in the output of the multipath -l command.
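    For example, a typical rescan sequence on a RHEL-H host looks like the following (the exact steps may vary with your HBA driver):

        # Rescan every SCSI host adapter for newly mapped LUNs
        for shost in /sys/class/scsi_host/host*; do
            echo "- - -" > "$shost/scan"
        done

        # Confirm that the LUNs appear as multipath devices
        multipath -l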
  5. Map the secondary LUNs to all the RHEL-H hosts in the secondary site and verify that they are visible in the output of the multipath -l command on all of those hosts.
  6. Add the RHEL-H hosts to the RHEV-M console.
    • Create two RHEV clusters in the same datacenter, representing the two sites.

    • Add all the RHEL-H hosts from the primary site to one of the RHEV clusters.

    • Similarly, add all the RHEL-H hosts from the secondary site to the second RHEV cluster.

  7. Log in to the RHEV-M console and create a Fibre Channel-type Storage Domain on one of the primary site hosts using the primary LUNs.
  8. In the RHEV-M console, create a virtual machine and assign it a virtual disk carved out of the Fibre Channel Storage Domain created in step 7.
    • Configure any additional parameters for the virtual machine, such as NICs and additional virtual disks.

    • Verify that the virtual machine turns on correctly.

    • Install the appropriate RHEL operating system inside the guest.

    • Configure the network interface with the appropriate parameters, such as IP address, netmask, and gateway.

    • Make sure that the NIC is not under NetworkManager control. You can disable this by editing the /etc/sysconfig/network-scripts/ifcfg-eth0 file inside the virtual machine and setting NM_CONTROLLED to "no".
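      For example, a minimal ifcfg-eth0 with the NIC outside NetworkManager control might look like the following (values are illustrative):

          DEVICE=eth0
          ONBOOT=yes
          BOOTPROTO=static
          NM_CONTROLLED=no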

    • Make sure that the virtual machine does not have a CDROM attached to it. This is necessary because VCS sends the DR payload to the virtual machine in the form of a CDROM.

  9. Copy the VRTSvcsnr package from the VCS installation media to the guest and install it. The package installs a lightweight service that starts when the guest boots and reconfigures the IP address and gateway of the guest as specified in the KVMGuest resource.
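    For example, inside the guest (the package file name varies by release and architecture):

        rpm -ivh VRTSvcsnr-*.rpm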

To configure VCS for managing RHEV-based virtual machines for DR

  1. Install VCS in the RHEL-H hosts at both the primary and the secondary sites.
    • Configure all the VCS nodes in the primary site in a single primary VCS cluster.

    • Configure all the VCS nodes in the secondary site in a single secondary VCS cluster.

    • Make sure that the RHEV cluster at each site corresponds to the VCS cluster at that site.

    See Figure: VCS Resource dependency diagram.

  2. Create a service group in the primary VCS cluster and add a KVMGuest resource for managing the virtual machine. Repeat this step in the secondary VCS cluster.
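    For example, the following sketch creates the group and resource from the command line. The group and system names are the ones used in the samples later in this section; the resource name RHEV_VM_res is illustrative, and the KVMGuest resource also requires attributes specific to your setup, such as the virtual machine name and the RHEVMInfo keys:

        haconf -makerw
        hagrp -add RHEV_VM_SG1
        hagrp -modify RHEV_VM_SG1 SystemList vcslx317 0 vcslx373 1
        hares -add RHEV_VM_res KVMGuest RHEV_VM_SG1
        hares -modify RHEV_VM_res Enabled 1
        haconf -dump -makero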
  3. Configure site-specific parameters for the KVMGuest resource in each VCS cluster.
    • The DROpts attribute enables you to specify site-specific networking parameters for the virtual machine, such as IPAddress, Netmask, Gateway, DNSServers, DNSSearchPath, and Device. Set Device to the name of the NIC as seen by the guest, for example, eth0.

    • Verify that the ConfigureNetwork key in the DROpts attribute is set to 1.

    • The DROpts attribute must be set on the KVMGuest resource in both clusters.
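      For example, the following sketch sets the keys on the illustrative resource RHEV_VM_res in one cluster (all values are illustrative; use -update instead of -add for a key that already exists):

          hares -modify RHEV_VM_res DROpts -add ConfigureNetwork 1
          hares -modify RHEV_VM_res DROpts -add IPAddress 10.10.10.10
          hares -modify RHEV_VM_res DROpts -add Netmask 255.255.255.0
          hares -modify RHEV_VM_res DROpts -add Gateway 10.10.10.1
          hares -modify RHEV_VM_res DROpts -add Device eth0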

  4. Configure the preonline trigger on the virtual machine service group. The preonline trigger script is located at /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/preonline_rhev.
    • Create a folder in the /opt/VRTSvcs directory on each RHEL-H host to host the trigger script. Copy the trigger script into this folder with the name "preonline". Enable the preonline trigger on the virtual machine service group by setting the PreOnline service group attribute. Also, specify the path (relative to /opt/VRTSvcs) in the TriggerPath attribute.
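    One way to stage the script on each RHEL-H host, using the TriggerPath from the sample configuration below:

        mkdir -p /opt/VRTSvcs/bin/triggers/RHEVDR
        cp /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/preonline_rhev \
            /opt/VRTSvcs/bin/triggers/RHEVDR/preonline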

    For example:

    group RHEV_VM_SG1 (
        SystemList = { vcslx317 = 0, vcslx373 = 1 }
        ClusterList = { test_rhevdr_pri = 0, test_rhevdr_sec = 1 }
        AutoStartList = { vcslx317 }
        TriggerPath = "bin/triggers/RHEVDR"
        PreOnline = 1
        )

    For more information on setting triggers, see the Cluster Server Administrator's Guide.

  5. Create a separate service group for managing the replication direction. This task must be performed for each cluster.
    • Add the appropriate replication resource (such as Hitachi TrueCopy or EMC SRDF). For details on the appropriate replication agent, see the Replication Agent Installation and Configuration Guide for that agent.

    • Add an Online Global Firm dependency from the virtual machine (VM) service group to the replication service group.

    • Configure the replication service group as global.
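    For example, the following sketch uses the EMC SRDF agent with the group and system names from the samples in this section; the resource name srdf_res is illustrative, and the resource type and its required attributes depend on the replication agent that you use. The ClusterList attribute, shown in the sample configuration later in this section, is what makes the group global:

        hagrp -add SRDF_SG1
        hagrp -modify SRDF_SG1 SystemList vcslx317 0 vcslx373 1
        hares -add srdf_res SRDF SRDF_SG1
        hagrp -link RHEV_VM_SG1 SRDF_SG1 online global firm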

  6. Configure the postonline trigger on the replication service group. The postonline trigger script is located at /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/postonline_rhev.
    • Copy the postonline trigger to the same location as the preonline trigger script, with the name "postonline". Enable the postonline trigger on the replication service group by adding the POSTONLINE key to the TriggersEnabled attribute. Also, specify the path (relative to /opt/VRTSvcs) in the TriggerPath attribute.

      For example:

      group SRDF_SG1 (
          SystemList = { vcslx317 = 0, vcslx373 = 1 }
          ClusterList = { test_rhevdr_pri = 0, test_rhevdr_sec = 1 }
          AutoStartList = { vcslx317 }
          TriggerPath = "bin/triggers/RHEVDR"
          TriggersEnabled = { POSTONLINE }
          )

      For more information on setting triggers, see the Cluster Server Administrator's Guide.
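      The staging itself can mirror the preonline trigger, for example:

          cp /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/postonline_rhev \
              /opt/VRTSvcs/bin/triggers/RHEVDR/postonline
          hagrp -modify SRDF_SG1 TriggersEnabled -add POSTONLINE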

If you have multiple replicated Storage Domains, the replication direction for all the domains in a datacenter must be the same.

To align replication for multiple replicated Storage Domains in a datacenter

  1. Add all the replication resources to the same replication service group.
  2. If you require different Storage Domains to be replicated in different directions at the same time, configure them in separate datacenters.

    This is necessary because the Storage Pool Manager (SPM) host requires read-write access to all the Storage Domains in a datacenter.

After completing all of the above steps, you can easily switch the virtual machine service group from one site to the other. When you bring the replication service group online at a site, the replication resource ensures that the replication direction is from that site to the remote site. This makes all the replicated devices read-write enabled at the current site.
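For example, a failover to the secondary site can follow the workflow described below from the command line (the group and cluster names are the illustrative ones used in the samples above):

    hagrp -online SRDF_SG1 -any -clus test_rhevdr_sec
    hagrp -online RHEV_VM_SG1 -any -clus test_rhevdr_sec

For a planned switchover of a global service group, the hagrp -switch command with the -clus option serves the same purpose.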

See About disaster recovery for Red Hat Enterprise Virtualization virtual machines.

Disaster recovery workflow

  1. Online the replication service group in a site followed by the virtual machine service group.
  2. Check the failover by logging into the RHEV-M console. Select the Hosts tab of the appropriate datacenter to verify that the SPM is marked on one of the hosts in the site in which the replication service group is online.
  3. When you bring the replication service group online, the postonline trigger probes the KVMGuest resources in the parent service group. This ensures that the virtual machine service group can go online.
  4. When you bring the virtual machine service group online, the preonline trigger performs the following tasks:
    • The trigger checks whether the SPM is in the local cluster. If it is, the trigger checks whether the SPM host is in the UP state. If the SPM host is in the NON_RESPONSIVE state, the trigger fences out the host. This enables RHEV-M to select another host in the current cluster.

    • If the SPM is in the remote cluster, the trigger deactivates all the hosts in the remote cluster. Additionally, if the remote SPM host is in the NON_RESPONSIVE state, the trigger script fences out the host. This enables RHEV-M to select another host in the current cluster.

    • The trigger script then waits for 10 minutes for the SPM to fail over to the local cluster.

    • When the SPM successfully fails over to the local cluster, the script reactivates all the remote hosts that were previously deactivated.

    • The trigger script then brings the virtual machine service group online.

  5. When the KVMGuest resource goes online, the KVMGuest agent sets a virtual machine payload on the virtual machine before starting it. This payload contains the site-specific networking parameters that you set in the DROpts attribute for that resource.
  6. When the virtual machine starts, the vcs-net-reconfig service loads, reads the DR parameters from the CDROM, and applies them to the guest. In this way, the networking personality of the virtual machine is modified when it crosses site boundaries.

Troubleshooting a disaster recovery configuration

  • You can troubleshoot your disaster recovery configuration in the following scenarios:
    • When the service groups are switched to the secondary site, the hosts in the primary site may go into the NON_OPERATIONAL state. To resolve this issue, deactivate the hosts by putting them into maintenance mode, and then reactivate them. If the issue is not resolved, log on to the RHEL-H host and restart the vdsmd service using the service vdsmd restart command. If the issue still persists, contact Red Hat Technical Support.

    • After a DR failover, the DNS configuration of the virtual machine may not change. To resolve this issue, check whether the network adapter inside the virtual machine is under NetworkManager control. If it is, remove it from NetworkManager control by editing the /etc/sysconfig/network-scripts/ifcfg-eth0 file inside the virtual machine and setting NM_CONTROLLED to "no".

    • After a failover to the secondary site, the virtual machine service group does not go online. To resolve this issue, check the state of the SPM in the datacenter. Make sure that the SPM is active on a host in the secondary RHEV cluster. Additionally, check the VCS engine logs for more information.