Veritas InfoScale™ 8.0 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual to virtual clustering in a Microsoft Hyper-V environment
- Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Section IV. Reference
- Appendix A. Troubleshooting
- Appendix B. Sample configurations
- Appendix C. Where to find more information
About disaster recovery for Red Hat Enterprise Virtualization virtual machines
Red Hat Enterprise Virtualization (RHEV) virtual machines can be configured for disaster recovery (DR) by replicating their boot disks with a supported replication method, such as Volume Replicator (VVR), Veritas File Replicator (VFR), Hitachi TrueCopy, or EMC SRDF. If the primary and secondary sites are in different IP subnets, the network configuration of the virtual machines at the primary site may not work at the secondary site. In that case, you must make additional configuration changes to the KVMGuest resource that manages the virtual machine.
Supported technologies for replicating virtual machines include:
- Volume Replicator (VVR)
- File Replicator (VFR)
- EMC SRDF
- Hitachi TrueCopy
Note:
Live migration of virtual machines across replicated sites is not supported.
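The additional configuration changes to the KVMGuest resource are typically expressed through its DR-related attributes. The fragment below is an illustrative main.cf-style sketch only; the attribute values shown (manager URL, cluster names, addresses, device name) are hypothetical examples, and the exact DROpts keys supported should be verified against the KVMGuest agent reference for your InfoScale release.

```
KVMGuest kvm_dr_vm (
    RHEVMInfo = { Enabled = 1,
                  URL = "https://rhevm.example.com:443",
                  User = "admin@internal",
                  Cluster = "RHEV_Cluster_Secondary" }
    GuestName = "dr_vm1"
    DROpts = { ConfigureNetwork = 1,
               IPAddress = "192.168.2.10",
               Netmask = "255.255.255.0",
               Gateway = "192.168.2.1",
               DNSServers = "192.168.2.5",
               Device = "eth0" }
    )
```

With ConfigureNetwork enabled, the agent can reconfigure the guest network settings when the virtual machine is brought online at the secondary site, addressing the subnet mismatch described above.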
Disaster recovery use cases for virtual machines work in the following way:
The replication agent controls the replication direction. After a disaster event at the primary site, VCS attempts to bring the replication service group online at the secondary site (according to the ClusterFailoverPolicy). The replication resource then reverses the replication direction, which ensures that the old secondary LUNs become the new primary LUNs and are made read-write enabled on the RHEL-H hosts at the secondary site. This allows RHEV-M to activate the Fibre Channel (FC) Storage Domain on the secondary-site RHEL-H hosts.
Before the virtual machine (VM) service group can be brought online, the Storage Pool Manager (SPM) in the datacenter must fail over to the secondary site. This is achieved by the pre-online trigger script configured on the VM service group. The trigger script checks whether the SPM is still active in the primary RHEV cluster; if it is, the script deactivates all the RHEL-H hosts in the primary RHEV cluster. Additionally, if the SPM host in the primary RHEV cluster is in the NON_RESPONSIVE state, the trigger fences out the host to enable SPM failover. The trigger script then waits for the SPM to fail over to the secondary RHEV cluster. When the SPM successfully fails over, the pre-online trigger script reactivates the RHEL-H hosts in the primary RHEV cluster that were deactivated earlier, and then proceeds to bring the VM service group online at the secondary site.
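The SPM failover sequence performed by the pre-online trigger can be sketched as follows. This is a minimal illustration of the control flow only: the client object and its methods (spm_host, hosts, deactivate, activate, fence) are hypothetical stand-ins for calls to the RHEV-M management API and are not part of the shipped trigger script.

```python
import time

SPM_FAILOVER_TIMEOUT = 300   # seconds to wait for the SPM role to move
POLL_INTERVAL = 10           # seconds between SPM status checks

def ensure_spm_on_secondary(client, primary_cluster, secondary_cluster):
    """Drive SPM failover from the primary to the secondary RHEV cluster,
    mirroring the pre-online trigger steps described in the text."""
    spm_host = client.spm_host()
    if spm_host and spm_host.cluster == primary_cluster:
        # SPM is still active in the primary cluster: deactivate all
        # primary-cluster hosts so the SPM role can fail over. A host in
        # the NON_RESPONSIVE state is fenced instead, to release the role.
        for host in client.hosts(primary_cluster):
            if host.state == "NON_RESPONSIVE":
                client.fence(host)
            else:
                client.deactivate(host)

        # Wait for the SPM role to appear in the secondary cluster.
        deadline = time.time() + SPM_FAILOVER_TIMEOUT
        while time.time() < deadline:
            spm_host = client.spm_host()
            if spm_host and spm_host.cluster == secondary_cluster:
                break
            time.sleep(POLL_INTERVAL)
        else:
            raise RuntimeError("SPM did not fail over to the secondary cluster")

        # Reactivate the primary-cluster hosts that were deactivated earlier,
        # after which the VM service group can be brought online.
        for host in client.hosts(primary_cluster):
            if host.state == "MAINTENANCE":
                client.activate(host)
    return client.spm_host()
```

The real trigger runs on the VCS node as part of the VM service group's pre-online processing; the sketch above only shows the ordering of the deactivate, fence, wait, and reactivate steps.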