Veritas InfoScale™ 8.0 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual to virtual clustering in a Microsoft Hyper-V environment
- Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Section IV. Reference
- Appendix A. Troubleshooting
- Appendix B. Sample configurations
- Appendix C. Where to find more information
- Appendix A. Troubleshooting
Mapping DMP meta-devices
Consistent mapping can be achieved from the host to the guest by using the Persistent Naming feature of DMP.
Running DMP in the host has other practical benefits:
- A multi-path device can be exported as a single device. This makes managing the mapping easier and helps alleviate the 32-device limit imposed by the VirtIO driver.
- Path failover can be managed efficiently in the host, taking full advantage of the Event Source daemon to proactively monitor paths.
When Veritas InfoScale Solutions products are installed in the guest, the Persistent Naming feature provides consistent naming of supported devices from the guest through the host to the array. The User Defined Names (UDN) feature allows DMP virtual devices to be assigned customized names.
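As a brief illustration, persistent enclosure-based naming and user-defined names are typically configured on the host with the vxddladm utility. The name file path shown below is an assumed example, not a required location; consult the DMP administration documentation for the exact syntax in your release.

# vxddladm set namingscheme=ebn persistence=yes
# vxddladm assign names file=/etc/vx/custom_names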
To map a DMP meta-device to a guest
- Map the device to the guest. In this example, the DMP device xiv0_8614 is mapped to guest_1.
# virsh attach-disk guest_1 /dev/vx/dmp/xiv0_8614 vdb
- The mapping can be made persistent by redefining the guest.
# virsh dumpxml guest_1 > /tmp/guest_1.xml
# virsh define /tmp/guest_1.xml
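After the disk is attached, the mapping can be verified from the host by listing the guest's block devices; the target vdb from the example above should appear in the output.

# virsh domblklist guest_1

Alternatively, on libvirt versions whose virsh attach-disk command supports the --persistent option, the attachment can be written to the guest's persistent configuration in a single step, avoiding the separate dump-and-define commands:

# virsh attach-disk guest_1 /dev/vx/dmp/xiv0_8614 vdb --persistent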