Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
- Section I. Overview of Veritas InfoScale Solutions used in Solaris virtualization
- Section II. Zones and Projects
- Storage Foundation and High Availability Solutions support for Solaris Zones
- About VCS support for zones
- About the Mount agent
- Configuring VCS in zones
- Prerequisites for configuring VCS in zones
- Deciding on the zone root location
- Configuring the service group for the application
- Exporting VxVM volumes to a non-global zone
- About SF Oracle RAC support for Oracle RAC in a zone environment
- Known issues with supporting SF Oracle RAC in a zone environment
- Software limitations of Storage Foundation support of non-global zones
- Storage Foundation and High Availability Solutions support for Solaris Projects
- Section III. Oracle VM Server for SPARC
- Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC
- Oracle VM Server for SPARC deployment models
- Benefits of deploying Storage Foundation High Availability solutions in Oracle VM Server for SPARC
- Features
- Split Storage Foundation stack model
- Guest-based Storage Foundation stack model
- Layered Storage Foundation stack model
- System requirements
- Installing Storage Foundation in an Oracle VM Server for SPARC environment
- Provisioning storage for a guest domain
- Software limitations
- Known issues
- Cluster Server support for using CVM with multiple nodes in an Oracle VM Server for SPARC environment
- VCS: Configuring Oracle VM Server for SPARC for high availability
- About VCS in an Oracle VM Server for SPARC environment
- About Cluster Server configuration models in an Oracle VM Server for SPARC environment
- Cluster Server setup to fail over a logical domain on a failure of logical domain
- Cluster Server setup to fail over an Application running inside logical domain on a failure of Application
- Oracle VM Server for SPARC guest domain migration in VCS environment
- Overview of a live migration
- About configuring VCS for Oracle VM Server for SPARC with multiple I/O domains
- Configuring VCS to manage a Logical Domain using services from multiple I/O domains
- Configuring storage services
- Configure a service group to monitor services from multiple I/O domains
- Configure the AlternateIO resource
- Configure the service group for a Logical Domain
- SF Oracle RAC support for Oracle VM Server for SPARC environments
- Support for live migration in FSS environments
- Section IV. Reference
Configuring zone resource in a parallel service group with the hazonesetup utility
The hazonesetup utility helps you configure a zone under VCS. This section covers typical scenarios based on the location of the zone root.
In the case of a zone resource in a parallel service group, the zone root can be on local storage or on shared storage that the node owns.
Consider an example in a two-node cluster (sysA and sysB). Zone local-zone1 is configured on sysA and local-zone2 is configured on sysB.
To configure a zone under VCS control using the hazonesetup utility when the zone root is on local storage
- Boot the local zone on all the nodes outside VCS.
sysA# zoneadm -z local-zone1 boot
sysB# zoneadm -z local-zone2 boot
- To use the hazonesetup utility, ensure that an IP address is configured for the non-global zone and that the hostname of the global zone is resolvable from the non-global zone.
- Run the hazonesetup utility with the correct arguments on each node in turn.
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone1 \
-p password -a -l -s sysA,sysB
sysB# hazonesetup -g zone_grp -r zone_res -z local-zone2 \
-p password -a -l -s sysA,sysB
Note:
If you want to use a particular user for password-less communication, use the -u option of the hazonesetup command. If the -u option is not specified, a default user is used for password-less communication.
- Running the hazonesetup utility on the first node adds a parallel zone service group and a zone resource to the VCS configuration. Running the hazonesetup utility on the other nodes detects that the zone service group and zone resource are already present in the VCS configuration, and updates the configuration accordingly for password-less communication.
Note:
Run the hazonesetup utility on every node in the cluster that has a zone running on it. This is required because the hazonesetup utility runs the halogin command inside the local zone, which enables password-less communication between the local zone and the global zone.
You can use the same user for multiple zones across systems. Specify the same user name with the -u option when you run the hazonesetup utility for different zones on different systems. When you do not specify a user name, the utility creates a user with the default user name z_resname_hostname for a non-secure cluster and z_resname_clustername for a secure cluster.
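As a quick sanity check, the default user name pattern described in the note can be reproduced in shell. A minimal sketch, assuming the resource name zone_res on host sysA in a non-secure cluster (all values are illustrative, not taken from a live cluster):

```shell
# Derive the default password-less user name that hazonesetup creates when
# no -u option is given (non-secure cluster pattern: z_<resname>_<hostname>).
res="zone_res"      # zone resource name (assumed)
host="sysA"         # global-zone host name (assumed)
default_user="z_${res}_${host}"
echo "$default_user"   # z_zone_res_sysA
```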
To configure a zone under VCS control using the hazonesetup utility when the zone root is on shared storage that the node owns
- Configure a parallel service group with the required storage resources (DiskGroup, Volume, Mount, and so on) to mount the zone root on the nodes. Set the required dependencies between the storage resources (DiskGroup->Volume->Mount). Make sure that you configure all the required attributes of all the storage resources so that they can be brought online on the cluster nodes. You may have to localize certain attributes of the storage resources to bring them online in parallel on all the nodes of the cluster. For example, if the parallel service group uses a DiskGroup resource, the attributes of that resource must be localized; otherwise the same disk group may be imported on two nodes at the same time in a non-CVM environment.
sysA# hagrp -add zone_grp
sysA# hagrp -modify zone_grp Parallel 1
sysA# hagrp -modify zone_grp SystemList sysA 0 sysB 1
sysA# hares -add zone_dg DiskGroup zone_grp
sysA# hares -add zone_vol Volume zone_grp
sysA# hares -add zone_mnt Mount zone_grp
sysA# hares -link zone_mnt zone_vol
sysA# hares -link zone_vol zone_dg
See the Cluster Server Bundled Agents Reference Guide for more details on configuring storage resources.
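The attribute localization mentioned above can be scripted for larger clusters. The following sketch only prints the hares commands to run; the disk group names dg_sysA and dg_sysB are hypothetical, and the hares command itself is available only on a node where VCS is installed:

```shell
# Emit the VCS commands that make the DiskGroup attribute local and assign
# a per-node disk group, so each node imports its own disk group when the
# parallel group comes online. Disk group names are assumptions.
echo "hares -local zone_dg DiskGroup"
for node in sysA sysB; do
  echo "hares -modify zone_dg DiskGroup dg_${node} -sys ${node}"
done
```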
- Bring the service group online on all the nodes. This command mounts the zone root on all the nodes.
sysA# hagrp -online zone_grp -any
- Boot the local zone on all the nodes outside VCS.
sysA# zoneadm -z local-zone1 boot
sysB# zoneadm -z local-zone2 boot
- Run the hazonesetup utility with the correct arguments on each node in turn.
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone1 \
-p password -a -l -s sysA,sysB
sysB# hazonesetup -g zone_grp -r zone_res -z local-zone2 \
-p password -a -l -s sysA,sysB
Running the hazonesetup utility on the first node adds a parallel zone service group and a zone resource to the VCS configuration. Running the hazonesetup utility on the other nodes detects that the zone service group and zone resource are already present in the VCS configuration, and updates the configuration accordingly for password-less communication.
Note:
If you want to use a particular user for password-less communication, use the -u option of the hazonesetup command. If the -u option is not specified, a default user is used for password-less communication.
- Set the proper dependency between the Zone resource and the storage resources. The Zone resource should depend on the Mount resource (Mount->Zone).
sysA# hares -link zone_res zone_mnt
Note:
Run the hazonesetup utility on every node in the cluster that has a zone running on it. This is required because the hazonesetup utility runs the halogin command inside the local zone, which enables password-less communication between the local zone and the global zone.
You can use the same user for multiple zones across systems. Specify the same user name with the -u option when you run the hazonesetup utility for different zones on different systems. When you do not specify a user name, the utility creates a user with the default user name z_resname_hostname for a non-secure cluster and z_resname_clustername for a secure cluster.
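Covering both cases from the note, the default user name selection can be sketched in shell. The cluster name clus1 and the secure flag below are illustrative assumptions, not values read from a cluster:

```shell
# Pick the default user name per the documented pattern:
#   z_<resname>_<hostname>     on a non-secure cluster
#   z_<resname>_<clustername>  on a secure cluster
res="zone_res"; host="sysA"; cluster="clus1"   # illustrative values
secure=0                                       # set to 1 for a secure cluster
if [ "$secure" -eq 1 ]; then
  user="z_${res}_${cluster}"
else
  user="z_${res}_${host}"
fi
echo "$user"   # z_zone_res_sysA
```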