InfoScale™ 9.0 Release Notes - Solaris
- Introduction and product requirements
- Changes introduced in this release
- Fixed issues
- Limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- InfoScale Volume Manager software limitations
- File System (VxFS) software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to LLT
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Known issues
- Issues related to installation, licensing, upgrade, and uninstallation
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- InfoScale Volume Manager known issues
- File System (VxFS) known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
Issues related to installation, licensing, upgrade, and uninstallation
Uninstallation of packages during rolling upgrade fails if kernel packages are present in non-global zones under VCS control (4054919)
As part of the InfoScale rolling upgrade process on Solaris, a few kernel packages are uninstalled before their newer versions are installed. If a non-global zone is under VCS control during the rolling upgrade, that zone fails over to the other system. Consequently, the zone transitions to the configured state, and the uninstallation of the related kernel packages fails.
Before you perform a rolling upgrade on Solaris, check whether any non-global zones are under VCS control and whether any related kernel packages are installed. The InfoScale product installer does not handle the upgrade of such non-global zones. Consequently, the non-global zone resources that are under VCS control go into the FAULTED state.
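A minimal sketch of such a pre-check, assuming that the non-global zones are managed through resources of the VCS Zone type and that a zone root is at <zone_path>; adjust the resource type and path to your configuration.
# /opt/VRTS/bin/hares -list Type=Zone
# pkg -R <zone_path>/root list VRTSvxfs VRTSodm
The first command lists the zone resources that VCS manages; the second reports whether the VxFS and ODM kernel packages are installed in that zone.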
Workaround: Perform the following steps to bring the service group of such a non-global zone online and test its failover.
1. Clear the fault on the service group.
# /opt/VRTS/bin/hagrp -clear <zone_service_group_name> -any
2. Take the zone resource offline on the current cluster node.
# /opt/VRTS/bin/hares -offline <zone_resource_name> -sys <system_name>
3. Check whether the zone is in the configured state (an example check is shown after this procedure).
4. Set the publisher value to Veritas for the packages and the patches that were installed with the rolling upgrade (an example is shown after this procedure).
5. Attach the zone.
# zoneadm -z <zone_name> attach -u
When the zone is successfully attached, it transitions to the installed state.
6. Check whether all the packages in the zone are upgraded to the latest version (an example is shown after this procedure). Note that kernel packages such as VRTSodm and VRTSvxfs may not be upgraded, because they need to be handled differently.
7. To upgrade the VRTSodm and VRTSvxfs packages within the zone, uninstall and then install them again.
# pkg -R <zone_path>/root uninstall VRTSodm VRTSvxfs
# pkg -R <zone_path>/root install --accept --no-backup-be VRTSodm VRTSvxfs
8. Perform steps 2 through 7 on each cluster node separately.
9. After all the nodes are updated, bring the zone service group online on any of the cluster nodes.
# /opt/VRTS/bin/hagrp -online <zone_service_group> -sys <system_name>
10. After the service group comes online, test the failover by switching the group to another node.
# /opt/VRTS/bin/hagrp -switch <zone_service_group> -to <system_name>
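For step 3, a minimal way to confirm the zone state is to list the configured zones with zoneadm and read the STATUS column; this assumes the zone is named <zone_name> as in step 5.
# zoneadm list -cv
The entry for <zone_name> should show the configured state before you attach the zone.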
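For step 4, one possible way to set the Veritas publisher for the zone's package image is with pkg set-publisher. The repository path below is an assumption for illustration; use the package origin that matches your installation media or repository.
# pkg -R <zone_path>/root set-publisher -g file:///<media_path>/pkgs/VRTSpkgs.p5p Veritas
# pkg -R <zone_path>/root publisher
The second command lists the publishers that are configured for the zone image so that you can verify the change.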
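For step 6, you can compare the InfoScale package versions inside the zone with those in the global zone; this assumes the zone root path <zone_path> that is used in step 7.
# pkg -R <zone_path>/root list | grep VRTS
# pkg list | grep VRTS
Apart from VRTSodm and VRTSvxfs, which step 7 handles, the versions inside the zone should match the versions in the global zone.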