InfoScale™ 9.0 Virtualization Guide - Solaris
Enabling DMP path failover in the guest domain
In Oracle VM Server configurations, the Virtual Disk Client (VDC) driver timeout is set to zero by default, which signifies an infinite timeout. If the control domain or the alternate I/O domain crashes unexpectedly, failed I/Os may never be returned to the guest domain. As a result, the guest domain cannot reclaim the failed I/Os and cannot route them through the alternate domain. To avoid or recover from this issue, set the VDC driver timeout.
There are two ways to set the VDC driver timeout:
- Globally, for all of the LUNs that are exported to the current guest domain. This method requires a reboot of all the guest domains.
- Per LUN, by exporting each LUN directly to the guest domain and setting the timeout parameter to 30 seconds. No reboot is required.
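With either method, you can check the timeout that is currently in effect for a guest domain's virtual disks from the control domain. The following is a minimal sketch that reuses the guest domain name hsxd0015 from the example later in this section; the exact output columns vary with the ldm version, but the per-disk timeout typically appears in the TOUT column of the DISK section:
# ldm list -o disk hsxd0015
A blank or zero TOUT value indicates that no timeout is set for that virtual disk, which means an infinite timeout.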
To change the VDC driver timeout globally
- On each guest domain, edit the /etc/system file and add the following line to set the VDC driver timeout to 30 seconds:
set vdc:vdc_timeout=30
- Reboot the guest domains.
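For example, on each guest domain you might append the tunable, confirm the entry, and then reboot. This is only a sketch; adapt it to your change-control practices:
# echo "set vdc:vdc_timeout=30" >> /etc/system
# grep vdc_timeout /etc/system
set vdc:vdc_timeout=30
# shutdown -y -g0 -i6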
To change the VDC driver timeout for each LUN
- Create the primary domain using four internal disks, and allocate all of the SAN LUNs that the guest domains require to the primary domain.
- Remove half of the system's I/O from the primary domain:
# ldm remove-io pci_X primary_domain_name
where pci_X is the name of the PCI bus for your system and primary_domain_name is the name of the primary domain.
For example:
# ldm remove-io pci@400 primary
- Create the alternate I/O domain on the other four internal disks and add the I/O that was removed from the primary domain:
# ldm add-io pci_X alternate_domain_name
where pci_X is the name of the PCI bus for your system and alternate_domain_name is the name of the alternate I/O domain.
For example:
# ldm add-io pci@400 alternate
- On the primary domain, create the guest domains. In the sample, the enclosure-based name of one of the LUNs is xyz and the guest domain is hsxd0015:
# ldm add-vdsdev /dev/vx/dmp/xyz vol0015-001-p1@primary-vds0
# ldm add-vdsdev /dev/vx/dmp/xyz vol0015-001-p2@alternate-vds0
# ldm add-vdisk timeout=30 vdsk0015-001-p1 \
vol0015-001-p1@primary-vds0 hsxd0015
# ldm add-vdisk timeout=30 vdsk0015-001-p2 \
vol0015-001-p2@alternate-vds0 hsxd0015
Run the same set of four commands for each SAN LUN that is placed in a guest domain. Use three SAN LUNs for SAN boot in the guest domain and the rest for application data. Each LUN in the guest domain then has one path back through the primary domain and one back through the alternate I/O domain, so each LUN uses only one LDC in each domain. Because DMP handles the multipathing, you still use only one LDC in each domain even if the LUN has more than two paths from the array.
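As a convenience, the four-command set can be repeated in a loop on the primary domain. The following is only a sketch: the DMP device names (xyz, abc, def), the volume and vdisk numbering, and the guest domain name hsxd0015 are illustrative and must be adapted to your configuration:
# i=1
# for lun in xyz abc def; do
>   ldm add-vdsdev /dev/vx/dmp/${lun} vol0015-00${i}-p1@primary-vds0
>   ldm add-vdsdev /dev/vx/dmp/${lun} vol0015-00${i}-p2@alternate-vds0
>   ldm add-vdisk timeout=30 vdsk0015-00${i}-p1 vol0015-00${i}-p1@primary-vds0 hsxd0015
>   ldm add-vdisk timeout=30 vdsk0015-00${i}-p2 vol0015-00${i}-p2@alternate-vds0 hsxd0015
>   i=`expr $i + 1`
> done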