Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Last Published:
2018-08-22
Product(s):
InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About VCS support for zones
- About the Mount agent
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Known issues with supporting SF Oracle RAC in a zone environment
- Software limitations of Storage Foundation support of non-global zones
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Installing Storage Foundation in a Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in a Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in a Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Section IV. Reference
Configure the service group for a Logical Domain
VCS uses the LDOM agent to manage a guest logical domain. The Logical Domain resource has an online local hard dependency on the AlternateIO resource.
Configuration notes:
Configure the service group as a failover service group.
The SystemList attribute of the LDOM service group must contain only the host names of the control domains from each physical system in the cluster.
The LDOM service group must have an online local hard dependency on the AlternateIO service group.
The following steps apply if the guest domain must remain available even when the primary domain is rebooted or shut down for planned maintenance.
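The configuration notes above can be sketched as a main.cf fragment. This is an illustrative sketch only: the group, resource, logical domain, and system names (ldmsg, aiosg, ldmres, ldom1, primary1, primary2) are placeholders, not names from this guide.

group ldmsg (
    SystemList = { primary1 = 0, primary2 = 1 }
    )

    LDom ldmres (
        LDomName = ldom1
        )

    requires group aiosg online local hard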
To make the guest domain available
- Set the LDOM resource attribute DomainFailurePolicy to { primary=ignore, alternate1=stop } for all LDOM resources in the cluster that are critical and need to remain available during primary/control domain maintenance. This setting ensures that the guest domain is not brought down when the primary/control domain is taken down for planned maintenance.
# hares -modify ldmres DomainFailurePolicy primary ignore \
alternate1 stop
- Set the LDOM service group attribute SysDownPolicy to AutoDisableNoOffline. This setting ensures that VCS does not fail over the service group even when the primary/control domain on which the service group is online is taken down.
# hagrp -modify ldmsg SysDownPolicy AutoDisableNoOffline
- The service group is auto-disabled in the cluster when the control domain is taken down for maintenance. Once the control domain is brought online again, clear the auto-disabled system by executing the following command:
# hagrp -autoenable ldmsg -sys primary1
- Once the maintenance for the control domain is complete, set the DomainFailurePolicy attribute back to its original value (default: { primary = stop }). Also reset the service group attribute SysDownPolicy:
# hares -modify ldmres DomainFailurePolicy primary stop
# hagrp -modify ldmsg SysDownPolicy -delete AutoDisableNoOffline
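To confirm that the attributes were restored after maintenance, you can query their current values with the hares -value and hagrp -value commands (ldmres and ldmsg are the example resource and group names used above):

# hares -value ldmres DomainFailurePolicy
# hagrp -value ldmsg SysDownPolicy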