Veritas InfoScale™ 7.3.1 Virtualization Guide - Solaris
Configuring a zone resource in a failover service group with the hazonesetup utility
The hazonesetup utility helps you configure a zone under VCS. This section covers typical scenarios based on where the zone root is located.
Two typical setups for zone configuration in a failover scenario follow:
Zone root on local storage
Zone root on shared storage
Consider an example of a two-node cluster (sysA and sysB) in which the zone local-zone is configured on both nodes.
To configure a zone under VCS control using the hazonesetup utility when the zone root is on local storage
- Boot the non-global zone on the first node, outside of VCS control.
sysA# zoneadm -z local-zone boot
- Before you use the hazonesetup utility, ensure that an IP address is configured for the non-global zone and that the hostname of the global zone is resolvable from the non-global zone.
sysA# zlogin local-zone
# ping sysA
- Run the hazonesetup utility with the correct arguments on the first node. This adds a failover zone service group and a zone resource to the VCS configuration.
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
Note:
If you want to use a particular user for password-less communication, use the -u option of the hazonesetup utility. If the -u option is not specified, a default user is used for password-less communication.
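For example, if a dedicated user were to be used for password-less communication, the invocation might look like the following sketch (the user name vcsuser is illustrative, not a product default):

```
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone -u vcsuser \
-p password -a -s sysA,sysB
```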
- Switch the zone service group to the next node in the cluster.
sysA# hagrp -switch zone_grp -to sysB
- Run the hazonesetup utility with the correct arguments on the node. The hazonesetup utility detects that the zone service group and the zone resource are already present in the VCS configuration and updates the configuration accordingly for password-less communication.
sysB# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
- Repeat step 4 and step 5 for all the remaining nodes in the cluster.
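After the utility has run on all nodes, the resulting entries in the VCS configuration file (main.cf) may look roughly like the following sketch. The exact attributes that hazonesetup writes can vary by release; the ContainerInfo group attribute shown here is an assumption based on how VCS typically associates a service group with a zone:

```
group zone_grp (
    SystemList = { sysA = 0, sysB = 1 }
    ContainerInfo @sysA = { Name = local-zone, Type = Zone, Enabled = 1 }
    ContainerInfo @sysB = { Name = local-zone, Type = Zone, Enabled = 1 }
    )

    Zone zone_res (
        )
```

You can inspect the actual configuration on your cluster with hacf or by viewing main.cf rather than relying on this sketch.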
To configure a zone under VCS control using the hazonesetup utility when the zone root is on shared storage
- Configure a failover service group with the required storage resources (DiskGroup, Volume, Mount, and so on) to mount the zone root on the node. Set the required dependencies between the storage resources (DiskGroup -> Volume -> Mount). Make sure that you configure all the required attributes of the storage resources so that they can be brought online on a cluster node.
sysA# hagrp -add zone_grp
sysA# hagrp -modify zone_grp SystemList sysA 0 sysB 1
sysA# hares -add zone_dg DiskGroup zone_grp
sysA# hares -add zone_vol Volume zone_grp
sysA# hares -add zone_mnt Mount zone_grp
sysA# hares -link zone_mnt zone_vol
sysA# hares -link zone_vol zone_dg
sysA# hares -modify zone_dg DiskGroup zone_dg
sysA# hares -modify zone_dg Enabled 1
sysA# hares -modify zone_vol Volume volume_name
sysA# hares -modify zone_vol DiskGroup zone_dg
sysA# hares -modify zone_vol Enabled 1
sysA# hares -modify zone_mnt MountPoint /zone_mnt
sysA# hares -modify zone_mnt BlockDevice /dev/vx/dsk/zone_dg/volume_name
sysA# hares -modify zone_mnt FSType vxfs
sysA# hares -modify zone_mnt MountOpt rw
sysA# hares -modify zone_mnt FsckOpt %-y
sysA# hares -modify zone_mnt Enabled 1
When the zone root is on a ZFS file system, use the following commands:
sysA# hagrp -add zone_grp
sysA# hagrp -modify zone_grp SystemList sysA 0 sysB 1
sysA# hares -add zone_zpool Zpool zone_grp
sysA# hares -modify zone_zpool AltRootPath /zone_root_mnt
sysA# hares -modify zone_zpool PoolName zone1_pool
sysA# hares -modify zone_zpool Enabled 1
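For the VxVM-based case, the commands in this step correspond roughly to the following main.cf fragment. This is a sketch for orientation; the formatting and attribute ordering that your VCS release writes may differ:

```
group zone_grp (
    SystemList = { sysA = 0, sysB = 1 }
    )

    DiskGroup zone_dg (
        DiskGroup = zone_dg
        )

    Volume zone_vol (
        Volume = volume_name
        DiskGroup = zone_dg
        )

    Mount zone_mnt (
        MountPoint = "/zone_mnt"
        BlockDevice = "/dev/vx/dsk/zone_dg/volume_name"
        FSType = vxfs
        MountOpt = rw
        FsckOpt = "%-y"
        )

    zone_mnt requires zone_vol
    zone_vol requires zone_dg
```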
- Bring the service group online on the first node. This mounts the zone root on the first node.
sysA# hagrp -online zone_grp -sys sysA
- Boot the local zone on the first node, outside of VCS control.
sysA# zoneadm -z local-zone boot
- Before you use the hazonesetup utility, ensure that an IP address is configured for the non-global zone and that the hostname of the global zone is resolvable from the non-global zone.
sysA# zlogin local-zone
# ping sysA
- Run the hazonesetup utility with the correct arguments on the first node. Use the service group configured in step 1. This adds the zone resource to the VCS configuration.
sysA# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
Note:
If you want to use a particular user for password-less communication, use the -u option of the hazonesetup utility. If the -u option is not specified, a default user is used for password-less communication.
- Set the proper dependency between the Zone resource and the storage resources. The Zone resource must depend on the storage resource (Mount -> Zone, or Zpool -> Zone).
sysA# hares -link zone_res zone_mnt
When the zone root is on a ZFS file system, use the following command:
sysA# hares -link zone_res zone_zpool
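In main.cf terms, the link created by either command is rendered as a requires line. The following sketch shows both cases side by side (only one applies to a given configuration):

```
// Zone root on a VxFS file system over VxVM
zone_res requires zone_mnt

// Zone root on a ZFS file system
zone_res requires zone_zpool
```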
- Switch the service group to the next node in the cluster.
sysA# hagrp -switch zone_grp -to sysB
- Run the hazonesetup utility with the correct arguments on the node. The hazonesetup utility detects that the service group and the zone resource are already present in the VCS configuration and updates the configuration accordingly for password-less communication.
sysB# hazonesetup -g zone_grp -r zone_res -z local-zone \
-p password -a -s sysA,sysB
- Repeat step 7 and step 8 for all the remaining nodes in the cluster.