InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Solaris
Updating the configuration and confirming startup
Perform the following steps on each upgraded node.
To update the configuration and confirm startup
- Remove the /etc/VRTSvcs/conf/config/.stale file, if it exists.
# rm -f /etc/VRTSvcs/conf/config/.stale
- Verify that LLT is running:
# lltconfig
LLT is running
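Optionally, you can also check the state of the individual LLT links with lltstat:
# lltstat -n
This lists each node along with its state and the number of links that are up.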
- Verify GAB is configured:
# gabconfig -l | grep 'Driver.state' | \
grep Configured
Driver state : Configured
- Verify VxVM daemon is started and enabled:
# /opt/VRTS/bin/vxdctl mode
mode: enabled
- Confirm all upgraded nodes are in a running state.
# gabconfig -a
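With all upgraded nodes running, the output shows a membership entry for each GAB port in use, similar to the following. The generation numbers and the set of ports vary by configuration; the membership values here illustrate a two-node cluster:
Port a gen   ada401 membership 01
Port h gen   ada40f membership 01
Port a is the GAB membership port and port h is the VCS engine port; CVM and CFS register additional ports.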
- If any process fails to start after the upgrade, enter the following to start it:
# /opt/VRTS/install/installer -start sys1 sys2
- After the configuration is complete, the CVM and SFCFSHA groups may come up frozen. To identify the frozen CVM and SFCFSHA groups, enter the following command:
# /opt/VRTS/bin/hastatus -sum
If the groups are frozen, unfreeze the CVM and SFCFSHA groups by running the following commands for each group:
Make the configuration read/write.
# /opt/VRTS/bin/haconf -makerw
Unfreeze the group.
# /opt/VRTS/bin/hagrp -unfreeze group_name -persistent
Save the configuration.
# /opt/VRTS/bin/haconf -dump -makero
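To confirm that a group is no longer frozen, you can also query its Frozen attribute directly, where group_name is a placeholder for the CVM or SFCFSHA group name:
# /opt/VRTS/bin/hagrp -value group_name Frozen
A value of 0 indicates that the group is unfrozen.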
- If VVR is configured and the CVM and SFCFSHA groups are offline, bring the groups online in the following order:
Bring online the CVM groups on all systems.
# /opt/VRTS/bin/hagrp -online group_name -sys sys1
# /opt/VRTS/bin/hagrp -online group_name -sys sys2
where group_name is the VCS service group that has the CVMVolDg resource.
Bring online the RVGShared groups and the virtual IP on the master node using the following commands:
# hagrp -online RVGShared -sys masterhost
# hares -online ip_name -sys masterhost
Bring online the SFCFSHA groups on all systems.
# /opt/VRTS/bin/hagrp -online group_name -sys sys1
# /opt/VRTS/bin/hagrp -online group_name -sys sys2
where group_name is the VCS service group that has the CFSMount resource.
If the SFCFSHA service groups do not come online, your file system could be dirty.
Note:
If you upgrade to Veritas InfoScale Enterprise 9.0 and the file systems are dirty, you must deport the shared disk group and import it as non-shared. After the import, run fsck; the fsck should succeed. Then deport the disk group and import it back as shared.
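The following is a sketch of that sequence, using placeholder names dg_name and vol_name for the disk group and volume. Depending on your configuration, you may also need to start the volumes before running fsck, and you should repeat the fsck for each file system in the disk group:
# vxdg deport dg_name
# vxdg import dg_name
# vxvol -g dg_name startall
# fsck -F vxfs /dev/vx/rdsk/dg_name/vol_name
# vxdg deport dg_name
# vxdg -s import dg_name
Run the shared import (vxdg -s import) from the CVM master node.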
- Find out which node is the CVM master. Enter the following:
# vxdctl -c mode
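The output identifies the local node's role and the name of the master, for example (sys1 is a placeholder host name):
mode: enabled: cluster active - MASTER
master: sys1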
- On the CVM master node, upgrade the CVM protocol. Enter the following:
# vxdctl upgrade
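To verify that the upgrade took effect, you can check the cluster protocol version afterwards:
# vxdctl protocolversion
This reports the protocol version at which the cluster is running; after the upgrade it should reflect the new release.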