Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About VCS support for zones
- About the Mount agent
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Known issues with supporting SF Oracle RAC in a zone environment
- Software limitations of Storage Foundation support of non-global zones
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM Server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Installing Storage Foundation in an Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Section IV. Reference
Recommendations while configuring VCS and Oracle VM Server for SPARC with multiple I/O domains
Online and offline operations for service groups in the StorageSG attribute
To manually bring online or take offline service groups that are configured in the StorageSG attribute, do not use the AlternateIO resource or its service group. Instead, use the service groups configured in the StorageSG attribute directly.
Freeze the service group for the AlternateIO resource
Freeze the AlternateIO service group before you bring online or take offline service groups configured in the StorageSG attribute of the AlternateIO resource. If you do not freeze the AlternateIO service group, the behavior of the logical domain is undefined, because it depends on the state of the AlternateIO service group.
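As a sketch, the freeze-operate-unfreeze sequence might look like the following. The group names aiosg and stg-sg and the system name sys1 are illustrative, not names from this guide:

```
# hagrp -freeze aiosg
# hagrp -online stg-sg -sys sys1
# hagrp -unfreeze aiosg
```

Unfreeze the AlternateIO service group only after the storage service group operation has completed.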
Configuring preonline trigger for storage service groups
You must configure a preonline trigger in the following scenario:
When the service groups configured in the StorageSG attribute of the AlternateIO resource are of failover type, you could accidentally bring a storage service group online on another physical system in the cluster. This is possible because the resources that monitor back-end storage services are present in different service groups on each physical system, so VCS cannot prevent those resources from coming online on multiple systems. Such concurrent onlining may cause data corruption; the preonline trigger prevents it.
Note:
Perform this procedure for storage service groups on each node.
To configure preonline trigger for each service group listed in the StorageSG attribute
Run the following commands:
# hagrp -modify stg-sg TriggerPath bin/AlternateIO/StorageSG
# hagrp -modify stg-sg TriggersEnabled PREONLINE
where stg-sg is the name of the storage service group.
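After these commands run, the trigger attributes appear in the storage service group's definition in main.cf. A hypothetical fragment, with illustrative group and system names and only the relevant attributes shown, might look like:

```
group stg-sg (
    SystemList = { sys1 = 0, sys2 = 1 }
    TriggerPath = "bin/AlternateIO/StorageSG"
    TriggersEnabled = { PREONLINE }
    )
```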
Set a connection timeout period for virtual disks
When a disk device is not available, I/O services from the guest domain to the virtual disks are blocked.
Veritas recommends setting a connection timeout period for each virtual disk so that applications time out after the set period instead of waiting indefinitely. Run the following command:
# ldm add-vdisk timeout=seconds disk_name \
volume_name@service_name ldom
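For example, to attach a virtual disk with a 15-second timeout (the disk, volume, service, and domain names below are illustrative):

```
# ldm add-vdisk timeout=15 vdisk1 vol1@primary-vds0 ldom1
```

With the timeout set, I/O to vdisk1 fails after 15 seconds if the backing device is unavailable, instead of blocking indefinitely.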
Fail over of the LDom service group when all the I/O domains are down
When the SysDownPolicy attribute is set to AutoDisableNoOffline for a service group, the service group state changes to OFFLINE|AutoDisabled when the system on which the service group is online goes down. Before you auto-enable the service group and bring it online on any other node, you must ensure that the guest domain is stopped on the system (control domain) that went down. This is particularly important when the failure-policy of the master domains is set to ignore.
Consider the following scenario:
- By default, the DomainFailurePolicy attribute of the LDom resource is set to {primary="stop"}.
- The guest domain must remain available even when the primary domain is rebooted or shut down for maintenance. The DomainFailurePolicy attribute is therefore changed to {primary=ignore, alternate1=stop} or {primary=ignore, alternate1=ignore}, so that the guest domain is not stopped when the primary domain is rebooted or shut down.
- The SysDownPolicy attribute is set to AutoDisableNoOffline for the planned maintenance. VCS does not fail over the service group when the node goes down; instead, the group is put into the auto-disabled state.
With this configuration, the guest domain continues to function normally, with I/O services available through the alternate I/O domain, while the control domain is taken down for maintenance.
When the control domain is under maintenance, the alternate I/O domain can fail in one of the following ways:
- The DomainFailurePolicy attribute is set to {primary=ignore, alternate1=stop} and only the I/O services from the alternate I/O domain are unavailable (the I/O domain is active, but network or storage connectivity is lost).
- The DomainFailurePolicy attribute is set to {primary=ignore, alternate1=ignore} and the alternate I/O domain is down (the domain is inactive).
In this situation, the guest domain does not function normally, and it is not possible to bring it down because there is no way to access it. In such scenarios, perform the following steps to bring the LDom service group online on another available node.
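The scenario above corresponds to attribute settings along these lines in main.cf. The group, resource, system, and domain names are illustrative, and the exact attribute syntax may differ by release:

```
group ldom-sg (
    SystemList = { sys1 = 0, sys2 = 1 }
    SysDownPolicy = { AutoDisableNoOffline }
    )

    LDom ldom-res (
        LDomName = ldom1
        DomainFailurePolicy = { primary = ignore, alternate1 = stop }
        )
```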
To online the LDom service group
- If the primary domain can be brought up, bring it up and then stop the guest domain:
# ldm stop ldom_name
If this is not possible, power off the physical system from the console so that the guest domain stops.
- Auto-enable the service group:
# hagrp -autoenable group -sys system
- Online the LDom service group:
# hagrp -online group -any