InfoScale™ 9.0 Virtualization Guide - Solaris
Last Published:
2025-04-14
Product(s):
InfoScale & Storage Foundation (9.0)
Platform: Solaris
- Section I. Overview of InfoScale solutions in Solaris virtualization environments
- Section II. Zones
- InfoScale Enterprise Solutions support for Solaris Native Zones
- About VCS support for zones
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About InfoScale SFRAC component support for Oracle RAC in a zone environment
- Known issues with supporting an InfoScale SFRAC component in a zone environment
- Software limitations of InfoScale support of non-global zones
- Section III. Oracle VM Server for SPARC
- InfoScale Enterprise Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Arctera InfoScale Enterprise solutions in Oracle VM server for SPARC
- Features
- Split InfoScale stack model
- Guest-based InfoScale stack model
- Layered InfoScale stack model
- System requirements
- Installing InfoScale in an Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an application running inside a logical domain on a failure of the application
- Oracle VM Server for SPARC guest domain migration in a VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SFRAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Using SmartIO in the virtualized environment
- Section IV. Reference
Configure the service group for a Logical Domain
VCS uses the LDOM agent to manage a guest logical domain. The Logical Domain resource has an online local hard dependency on the AlternateIO resource.
Configuration notes:
Configure the service group as a failover type service group.
The SystemList attribute in the LDOM service group must contain only host names of the control domains from each physical system in the cluster.
The LDOM service group must have an online local hard dependency on the AlternateIO service group.
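The notes above can be sketched as a main.cf fragment. This is an illustrative sketch, not a complete configuration: the group and resource names (ldmsg, aiosg, ldmres), the system names (primary1, primary2), and the domain name (ldm1) are hypothetical placeholders you would replace with your own.

    group ldmsg (
        SystemList = { primary1 = 0, primary2 = 1 }
        )

        LDom ldmres (
            LDomName = ldm1
            )

        requires group aiosg online local hard

The `requires group` statement expresses the online local hard dependency on the AlternateIO service group (here assumed to be named aiosg), and SystemList contains only the control domain host names, as described above.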
If the guest domain needs to remain available even when the primary domain is rebooted or shut down for planned maintenance, configure the cluster as follows.
To make the guest domain available
- Set the LDOM resource attribute DomainFailurePolicy to { primary=ignore, alternate1=stop } for all LDOM resources in the cluster that are critical and need to remain available during primary/control domain maintenance. This setting ensures that the guest domain is not brought down when the primary/control domain is taken down for planned maintenance.
# hares -modify ldmres DomainFailurePolicy primary ignore alternate1 stop
- Set the LDOM service group attribute SysDownPolicy to AutoDisableNoOffline. This setting ensures that VCS does not fail over the service group even when the primary/control domain where the service group is online is taken down.
# hagrp -modify ldmsg SysDownPolicy AutoDisableNoOffline
- The service group is auto-disabled in the cluster when the control domain is taken down for maintenance. After the control domain is brought online again, clear the auto-disabled state for the system by running the following command:
# hagrp -autoenable ldmsg -sys primary1
- After the maintenance on the control domain is complete, restore the DomainFailurePolicy attribute to its original value (default: { primary = stop }), and reset the service group attribute SysDownPolicy:
# hares -modify ldmres DomainFailurePolicy primary stop
# hagrp -modify ldmsg SysDownPolicy -delete AutoDisableNoOffline
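During the maintenance window, the relevant attributes would look like the following main.cf sketch. As before, the names (ldmsg, ldmres, primary1, primary2, ldm1, alternate1) are illustrative placeholders, and only the attributes discussed above are shown:

    group ldmsg (
        SystemList = { primary1 = 0, primary2 = 1 }
        SysDownPolicy = { AutoDisableNoOffline }
        )

        LDom ldmres (
            LDomName = ldm1
            DomainFailurePolicy = { primary = ignore, alternate1 = stop }
            )

When the maintenance is complete, the commands in the last step return both attributes to their steady-state values.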