Veritas InfoScale™ 8.0 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- Overview of the Veritas InfoScale Products Virtualization Guide
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Virtualization use cases addressed by Veritas InfoScale products
- About virtual-to-virtual (in-guest) clustering and failover
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Creating and launching a kernel-based virtual machine (KVM) host
- RHEL-based KVM installation and usage
- Setting up a kernel-based virtual machine (KVM) guest
- About setting up KVM with Veritas InfoScale Solutions
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Dynamic Multi-Pathing in the KVM guest virtualized machine
- Dynamic Multi-Pathing in the KVM host
- Storage Foundation in the virtualized guest machine
- Enabling I/O fencing in KVM guests
- Storage Foundation Cluster File System High Availability in the KVM host
- Dynamic Multi-Pathing in the KVM host and guest virtual machine
- Dynamic Multi-Pathing in the KVM host and Storage Foundation HA in the KVM guest virtual machine
- Cluster Server in the KVM host
- Cluster Server in the guest
- Cluster Server in a cluster across virtual machine guests and physical machines
- Installing Veritas InfoScale Solutions in the kernel-based virtual machine environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing Linux virtualization use cases
- Application visibility and device discovery
- About storage to application visibility using Veritas InfoScale Operations Manager
- About Kernel-based Virtual Machine (KVM) virtualization discovery in Veritas InfoScale Operations Manager
- About Red Hat Enterprise Virtualization (RHEV) virtualization discovery in Veritas InfoScale Operations Manager
- About Microsoft Hyper-V virtualization discovery
- Virtual machine discovery in Microsoft Hyper-V
- Storage mapping discovery in Microsoft Hyper-V
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- About application availability options
- Cluster Server In a KVM Environment Architecture Summary
- VCS in host to provide the Virtual Machine high availability and ApplicationHA in guest to provide application high availability
- Virtual to Virtual clustering and failover
- I/O fencing support for Virtual to Virtual clustering
- Virtual to Physical clustering and failover
- Recommendations for improved resiliency of InfoScale clusters in virtualized environments
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual to virtual clustering in a Microsoft Hyper-V environment
- Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- About disaster recovery for Red Hat Enterprise Virtualization virtual machines
- DR requirements in an RHEV environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Configure Storage Foundation components as backend storage
- Configure VVR and VFR in VCS GCO option for replication between DR sites
- Configuring Red Hat Enterprise Virtualization (RHEV) virtual machines for disaster recovery using Cluster Server (VCS)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About managing Docker containers with InfoScale Enterprise product
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Limitations while managing Docker containers
- Section IV. Reference
- Appendix A. Troubleshooting
- Troubleshooting virtual machine live migration
- Live migration storage connectivity in a Red Hat Enterprise Virtualization (RHEV) environment
- Troubleshooting Red Hat Enterprise Virtualization (RHEV) virtual machine disaster recovery (DR)
- The KVMGuest resource may remain in the online state even if storage connectivity to the host is lost
- VCS initiates a virtual machine failover if a host on which a virtual machine is running loses network connectivity
- Virtual machine start fails due to having the wrong boot order in RHEV environments
- Virtual machine hangs in the wait_for_launch state and fails to start in RHEV environments
- VCS fails to start a virtual machine on a host in another RHEV cluster if the DROpts attribute is not set
- Virtual machine fails to detect attached network cards in RHEV environments
- The KVMGuest agent behavior is undefined if any key of the RHEVMInfo attribute is updated using the -add or -delete options of the hares -modify command
- RHEV environment: If a node on which the VM is running panics or is forcefully shutdown, VCS is unable to start the VM on another node
- Appendix B. Sample configurations
- Appendix C. Where to find more information
How to implement physical to virtual migration (P2V)
When you migrate data from a physical server to a virtualized guest, the LUNs are first physically connected to the host, and then the LUNs are mapped in KVM from the host to the guest.
This use case is closely related to server consolidation: physical to virtual migration is the process used to achieve server consolidation.
This use case requires Storage Foundation HA or Storage Foundation Cluster File System HA in the KVM host and Storage Foundation in the KVM guest. For setup information:
See Installing Veritas InfoScale Solutions in the kernel-based virtual machine environment.
There are three options:
If Veritas InfoScale Solutions products are installed on both the physical server and the virtual host, the LUNs that need mapping are easy to identify. Once the LUNs are connected to the virtual host, run vxdisk -o alldgs list to identify the devices in the disk group which require mapping.
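For example, if the disk group to be migrated is named data_dg, disks that belong to it but are not yet imported on the host show the group name in parentheses (the 3PAR device names below are illustrative):
# vxdisk -o alldgs list | grep data_dg
3pardata0_1 auto:cdsdisk - (data_dg) online
3pardata0_2 auto:cdsdisk - (data_dg) online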
If Veritas InfoScale Solutions products are not installed on the virtual host and the physical server is a Linux system, the devices which need mapping can be identified by using the device IDs on the physical server.
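For example, the persistent device IDs can be listed on the physical server as shown below; the scsi-* symlinks encode the LUN serial numbers and can be matched against the devices presented to the KVM host (output omitted):
# ls -l /dev/disk/by-id/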
If Veritas InfoScale Solutions products are installed only on the physical server and the SF administration utility for RHEV, vxrhevadm, is installed on the RHEV-M machine, you can identify the exact DMP device mapping on the guest. However, for volume and file system mappings, run heuristics to identify the exact device mappings on the host.
To implement physical to virtual migration with Storage Foundation in the host and guest (KVM-only)
- Find the Linux device IDs of the devices which need mapping.
# vxdg list diskgroup
- For each disk in the disk group:
# vxdmpadm getsubpaths dmpnodename=device
# ls -al /dev/disk/by-id/* | grep subpath
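As a hedged illustration, if the DMP node is 3pardata0_1 and one of its subpaths is sdb (both names hypothetical), the by-id listing ties that subpath to a serial-number-based identifier which can be located again once the LUN is presented to the KVM host:
# vxdmpadm getsubpaths dmpnodename=3pardata0_1
# ls -al /dev/disk/by-id/* | grep sdb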
If Storage Foundation is not installed on the host, before decommissioning the physical server, identify the LUNs which require mapping by using the devices' serial numbers. The LUNs can be mapped to the guest using the persistent "by-path" device links.
To implement physical to virtual migration if Storage Foundation is not installed in the host (KVM-only)
- On the physical server, identify the LUNs which must be mapped on the KVM host using the udevadm command.
- Map the LUNs to the virtualization host.
The udev database can be used to identify the devices on the host which need to be mapped.
# udevadm info --export-db | grep '/dev/disk/by-path' | \
cut -d' ' -f4
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-1
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-2
- Map the LUNs to the guest. As there are multiple paths in this example, the by-path symlinks can be used to ensure consistent device mapping for all four paths.
# virsh attach-disk guest1 \
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-1 \
vdb
# virsh attach-disk guest1 \
/dev/disk/by-path/pci-0000:05:00.0-fc-0x5006016239a01884-lun-2 \
vdc
- Verify that the devices are correctly mapped to the guest. The configuration changes can be made persistent by redefining the guest.
# virsh dumpxml guest1 > /tmp/guest1.xml
# virsh define /tmp/guest1.xml
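At any point, the mapping can also be double-checked from the host with virsh domblklist, which lists the target device and source path of every disk attached to the guest (a quick sanity check, not a required step):
# virsh domblklist guest1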
To implement physical to virtual migration with Storage Foundation in the guest and host (KVM-only)
- Map the LUNs to the virtualization host.
- On the virtualization host, identify the devices which require mapping. For example, the devices with the disk group data_dg are mapped to guest1.
# vxdisk -o alldgs list | grep data_dg
3pardata0_1 auto:cdsdisk - (data_dg) online
3pardata0_2 auto:cdsdisk - (data_dg) online
- Map the devices to the guest.
# virsh attach-disk guest1 /dev/vx/dmp/3pardata0_1 vdb
Disk attached successfully
# virsh attach-disk guest1 /dev/vx/dmp/3pardata0_2 vdc
Disk attached successfully
- In the guest, verify that all devices are correctly mapped and that the disk group is available.
# vxdisk scandisks
# vxdisk -o alldgs list | grep data_dg
3pardata0_1 auto:cdsdisk - (data_dg) online
3pardata0_2 auto:cdsdisk - (data_dg) online
- On the virtualization host, make the mapping persistent by redefining the guest:
# virsh dumpxml guest1 > /tmp/guest1.xml
# virsh define /tmp/guest1.xml
To implement physical to virtual migration with Storage Foundation only in the guest and the SF administration utility for RHEV, vxrhevadm, on the RHEV Manager
- Map the LUNs to the virtualization host.
- On the virtualization host, identify the devices which require mapping. For example, the devices with the disk group data_dg are mapped to guest1.
# vxdisk -g <data_dg> list (DMP nodes in the disk group)
# vxprint -g <data_dg> -v (volumes in the disk group)
Files to be attached are those created on a VxFS file system.
- Attach each entity to the respective virtual machines.
# ./vxrhevadm -p <password> -n <VM name> -d <dmpnode> attach
Attached a dmp node to the specified virtual machine
# ./vxrhevadm -p <password> -n <VM name> -v <volume> attach
Attached a volume device to the specified virtual machine
# ./vxrhevadm -p <password> -n <VM name> -f <file>:raw attach
Attached a file system device to the specified virtual machine
- Power up the guest virtual machine and verify that the SCSI disks are available in the guest virtual machine.
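A minimal way to perform that check from inside the guest is to list the block devices with standard Linux tools; the attached entities should appear as additional SCSI disks (device names such as /dev/sdb vary by guest):
# lsblk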
Note:
The XML dump available in /var/log/vdsm/vdsm.log is a hint about device mappings. For DMP nodes, enable persistent naming in the host to identify the device mapping in the guest. For volume and file system mappings, run heuristics to identify device mappings in the guest.
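Persistent naming on the host can be enabled with the vxddladm command, for example as follows (a sketch; check the vxddladm documentation for the options supported by your release):
# vxddladm set namingscheme=ebn persistence=yes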
To use a Veritas Volume Manager volume as a boot device when configuring a new virtual machine
- Follow the recommended steps in your Linux virtualization documentation to install and boot a VM guest.
When requested to select managed or existing storage for the boot device, use the full path to the VxVM storage volume block device, for example /dev/vx/dsk/boot_dg/bootdisk-vol.
- If using the virt-install utility, enter the full path to the VxVM volume block device with the --disk parameter, for example, --disk path=/dev/vx/dsk/boot_dg/bootdisk-vol.
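A hedged sketch of a complete virt-install invocation follows; the guest name, memory size, vCPU count, and ISO path are illustrative placeholders rather than values prescribed by this guide:
# virt-install --name guest1 --memory 4096 --vcpus 2 \
--disk path=/dev/vx/dsk/boot_dg/bootdisk-vol \
--cdrom /path/to/install.iso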
To use a Storage Foundation component as a boot device when configuring a new virtual machine
- Follow the recommended steps in your Linux virtualization documentation to install and boot a VM guest.
When requested to select managed or existing storage for the boot device, use the full path to the VxVM storage volume block device, file system device, or DMP node.
For example, /dev/vx/dsk/boot_dg/bootdisk-vol, /dev/vx/dsk/boot_dg/bootdisk-file, or /dev/vx/dsk/boot_dg/bootdisk-dmpnode.
- In the RHEV Manager advanced settings for virtual machines, select the boot option and attach the appropriate ISO image.
- Attach the DMP node, volume block device, or file system device as the boot option.
# /opt/VRTSrhevm/bin/vxrhevadm -p \
<rhevm-password> -n <vmname> -d <dmpnode-path> attach
# /opt/VRTSrhevm/bin/vxrhevadm -p \
<rhevm-password> -n <vmname> -v <volume-path> attach
# /opt/VRTSrhevm/bin/vxrhevadm -p \
<rhevm-password> -n <vmname> -f <file-path:raw> | <file-path:qcow2> attach
- Start the guest virtual machine and boot from ISO.
- Install the operating system on the SF entity, which appears as a SCSI device. Install the bootloader on the SCSI device itself.
- Power off the guest virtual machine.
- In the guest virtual machine settings, configure the guest to boot from the hard disk.
- Power on the guest to boot from the configured SF component.