InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
Performing rolling upgrades in CVR environments
In a CVR environment, you perform the rolling upgrade on the secondary site first and then on the primary site.
To perform a rolling upgrade in a CVR environment
- Perform a rolling upgrade on the secondary sites first, without stopping the replication.
After the upgrade, check whether the cluster nodes have rejoined the cluster.
Continue to monitor the replication status.
# vradmin -g <disk_group_name> -l repstatus <RVG_name>
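For example, you can confirm GAB membership and the service group states before you check replication; this is a general VCS health check, and the paths assume the standard /opt/VRTS/bin links:
# /sbin/gabconfig -a
# /opt/VRTS/bin/hastatus -sum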
- Perform a rolling upgrade for all the nodes on the primary site by following these steps:
At the primary site, the VxFS file systems are always mounted. Therefore, if you directly begin the upgrade, the installer returns the following error:
CPI ERROR V-9-40-1480 Some VxFS file systems are mounted on mount points /<volume_mount_point> on node <node_name> and need to be unmounted before upgrade.
In this event, take the applications offline and unmount the file system manually on the indicated nodes, before you upgrade them.
Additionally, you may have to take the following actions on these nodes:
If any parallel service groups exist in the hierarchy, take them offline.
# /opt/VRTS/bin/hagrp -offline <global_application_name> -sys <node_name>
# /opt/VRTS/bin/cfsumount /<volume_mount_point> <node_name>
Make sure that CVM is the final child in the hierarchy, that it is left online, and that the rest of the hierarchy is taken offline on that node.
If any parent service groups exist in the hierarchy, take them offline. To identify them, list the dependencies of the cvm group:
# /opt/VRTS/bin/hagrp -dep cvm
Parent                      Child   Relationship
<global_application_name>   cvm     online local firm
<global_application_name> is the parent service group of the cvm group; take it offline.
# /opt/VRTS/bin/hagrp -offline <global_application_name> -sys <node_name>
If any local service groups exist in the hierarchy, switch them to another cluster node that is not being upgraded at the same time.
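For example, assuming a local failover service group named <local_group_name> (a placeholder), you could switch it to another node that is not being upgraded:
# /opt/VRTS/bin/hagrp -switch <local_group_name> -to <target_node_name>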
After you upgrade these nodes, mount the file system again and bring the VCS application online on the targeted node.
# /opt/VRTS/bin/cfsmount /<volume_mount_point> <node_name>
# /opt/VRTS/bin/hagrp -online <global_application_name> -sys <node_name>
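To confirm the result on the upgraded node, you can, for example, check the group state and the mount:
# /opt/VRTS/bin/hagrp -state <global_application_name> -sys <node_name>
# df -h /<volume_mount_point>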
Consequently, you may have to take the following actions on these upgraded nodes:
Bring the parallel service groups online.
Switch the local VCS service groups back.
Make sure that all the service groups that were taken offline before the upgrade are brought online again.
Check the replication status and the state of the systems and service groups, and only then proceed to upgrade the next node.
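For example, one way to perform this check (the disk group and RVG names are placeholders, as above):
# /opt/VRTS/bin/hastatus -sum
# vradmin -g <disk_group_name> -l repstatus <RVG_name>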
- Check the replication status on the primary site.
- Migrate the Primary role to the upgraded Secondary, and then proceed with the upgrade on the new Secondary (the old Primary).
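For example, assuming <new_primary_hostname> is a placeholder for the upgraded Secondary host, the migration can be performed with the vradmin migrate command:
# vradmin -g <disk_group_name> migrate <RVG_name> <new_primary_hostname>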
To upgrade disk group and disk layout versions on replication hosts
- Upgrade the disk group version on all the Secondaries for all the disk groups.
# /usr/sbin/vxdg upgrade <disk_group_name>
- Upgrade the disk group version on the Primary for all the disk groups.
# /usr/sbin/vxdg upgrade <disk_group_name>
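To verify the change, you can, for example, display the disk group version before and after the upgrade:
# /usr/sbin/vxdg list <disk_group_name> | grep version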
- Upgrade the disk layout version (DLV) on the Primary for all the VxFS file systems.
# /opt/VRTS/bin/vxupgrade -n 17 <vxfs_mount_point_name>
Verify the upgraded disk layout version, for example, by using the fstyp command:
# /opt/VRTS/bin/fstyp -v <disk_path_for_mount_point_volume>
The DLV upgrade is automatically replicated to the Secondaries.
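If you want to confirm the current disk layout version of a mounted file system at any point, running vxupgrade without the -n option reports it, for example:
# /opt/VRTS/bin/vxupgrade <vxfs_mount_point_name>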