Veritas InfoScale™ 8.0 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual to virtual clustering in a Microsoft Hyper-V environment
- Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Section IV. Reference
- Appendix A. Troubleshooting
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Mapping devices using the virtio-scsi interface
Devices can be mapped to the guest through the virtio-scsi interface, replacing the virtio-blk device and providing the following improvements:
- The ability to connect to multiple storage devices
- A standard command set
- Standard device naming to simplify migrations
- Device pass-through
Note: Mapping using paths is also supported with the virtio-scsi interface.
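Before mapping devices, you can optionally confirm that the guest operating system has the virtio-scsi driver loaded. This check is illustrative and not part of the documented procedure; the module name virtio_scsi applies to most recent Linux guest kernels. Run the following inside the guest:
# lsmod | grep virtio_scsi
If the driver is built into the guest kernel rather than loaded as a module, this command returns no output even though virtio-scsi is supported.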
To enable SCSI passthrough and use the exported disks as bare-metal SCSI devices inside the guest, the <disk> element's device attribute must be set to "lun" instead of "disk". The following disk XML file provides an example of the device attribute's value for virtio-scsi:
<disk type='block' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/pci-0000:07:00.1-fc-0x5001438011393dee-lun-1'/>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='4' bus='0' target='0' unit='0'/>
</disk>
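After the guest is started with a definition like the one above, you can optionally verify that SCSI commands are passed through to the underlying device. The following is an illustrative sketch; it assumes the disk appears as /dev/sdd inside the guest and that the lsscsi and sg3_utils packages are installed in the guest:
# lsscsi
# sg_inq /dev/sdd
With passthrough enabled, sg_inq typically reports the vendor and product identification of the backing physical LUN rather than an emulated QEMU disk.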
To map one or more devices using virtio-scsi
- Create one XML file for each SCSI controller, and enter the following content into the XML files:
<controller type='scsi' model='virtio-scsi' index='1'/>
The XML file in this example is named ctlr.xml.
- Attach the SCSI controllers to the guest:
# virsh attach-device guest1 ctlr.xml --config
- Create XML files for the disks, and enter the following content into the XML files:
<disk type='block' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/pci-0000:07:00.1-fc-0x5001438011393dee-lun-1'/>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>
The XML file in this example is named disk.xml.
- Attach the disk to the existing guest:
# virsh attach-device guest1 disk.xml --config
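As an optional follow-up, not part of the documented steps, you can confirm on the host that the controller and the disk were added to the persistent domain definition:
# virsh dumpxml guest1 --inactive | grep "virtio-scsi"
# virsh domblklist guest1 --inactive
Because the devices were attached with the --config option, they are recorded in the persistent configuration and become visible inside the guest the next time it starts. Add the --live option to the virsh attach-device commands if the devices should also be attached to a running guest immediately.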