InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Completing the SFCFSHA configuration
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading SFCFSHA using YUM
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
Upgrading VVR sites for InfoScale 7.3.1
Use the product installer to upgrade VVR first on the Secondary hosts and then on the Primary.
To upgrade a Secondary
- Stop replication to the Secondary by running the stoprep command on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> stoprep <RVG_name> <secondary_hostname>
- Verify that the replication has stopped.
# /usr/sbin/vradmin -g <disk_group_name> -l repstatus <RVG_name>
- Upgrade VVR on the Secondary from 7.3.1, or any later version, to the latest version.
- Restart replication to the Secondary by running the startrep command on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> startrep <RVG_name> <secondary_hostname>
- Verify that the replication has started.
# /usr/sbin/vradmin -g <disk_group_name> -l repstatus <RVG_name>
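The Secondary-upgrade steps above can be sketched as a small script. The disk group (datadg), RVG (datarvg), and Secondary host (seattle) are illustrative placeholders, not values from this guide; DRY_RUN=1 (the default here) prints each vradmin command instead of executing it, since these commands act on a live cluster.

```shell
#!/bin/sh
# Sketch of the Secondary upgrade flow; placeholder names, dry-run by default.
DG=${DG:-datadg}
RVG=${RVG:-datarvg}
SEC=${SEC:-seattle}
DRY_RUN=${DRY_RUN:-1}

# Print the command in dry-run mode; execute it otherwise.
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Stop replication to the Secondary (run on the Primary).
run /usr/sbin/vradmin -g "$DG" stoprep "$RVG" "$SEC"
# 2. Verify that replication has stopped.
run /usr/sbin/vradmin -g "$DG" -l repstatus "$RVG"
# 3. Upgrade VVR on the Secondary with the product installer (manual step).
# 4. Restart replication to the Secondary (run on the Primary).
run /usr/sbin/vradmin -g "$DG" startrep "$RVG" "$SEC"
# 5. Verify that replication has resumed.
run /usr/sbin/vradmin -g "$DG" -l repstatus "$RVG"
```

With DRY_RUN=0 and real disk group, RVG, and host names, the same script performs the procedure; review each repstatus output before moving to the next Secondary.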
To upgrade the Primary
- Verify that the replication status is consistent and up-to-date.
# /usr/sbin/vradmin -g <disk_group_name> -l repstatus <RVG_name>
- Stop the applications and unmount the file systems on the Primary.
- Stop replication to the Secondary by running the stoprep command on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> stoprep <RVG_name> <secondary_hostname>
- Verify that the replication has stopped.
# /usr/sbin/vradmin -g <disk_group_name> -l repstatus <RVG_name>
- Upgrade VVR on the Primary from 7.3.1, or any later version, to the latest version.
- Restart replication to the Secondary by running the startrep command on the Primary.
# /usr/sbin/vradmin -g <disk_group_name> startrep <RVG_name> <secondary_hostname>
- Verify that the replication has started.
# /usr/sbin/vradmin -g <disk_group_name> -l repstatus <RVG_name>
- Mount all the file systems and start all the applications on the Primary.
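The first Primary-upgrade step gates everything that follows: the Primary should not be stopped until replication is consistent and up-to-date. A minimal sketch of that gate is below. The "consistent, up-to-date" data-status string is an assumption about the repstatus output of your vradmin version, and DRY_RUN=1 (the default here) substitutes simulated output so the logic can be read end to end; verify the exact wording on your systems.

```shell
#!/bin/sh
# Sketch: check replication status before upgrading the Primary.
# Placeholder names; the repstatus output format is an assumption.
DG=${DG:-datadg}
RVG=${RVG:-datarvg}
DRY_RUN=${DRY_RUN:-1}

# Return repstatus output, simulated in dry-run mode.
repstatus() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "Data status:      consistent, up-to-date"   # simulated sample
    else
        /usr/sbin/vradmin -g "$DG" -l repstatus "$RVG"
    fi
}

if repstatus | grep -q "consistent, up-to-date"; then
    READY=yes
    echo "Replication is current: stop the applications, unmount the"
    echo "file systems, and proceed with stoprep and the Primary upgrade."
else
    READY=no
    echo "Replication is not up-to-date: resolve this before upgrading." >&2
fi
```

If the check fails, leave the applications running and investigate the replication lag before retrying; only a clean status should lead into the stoprep and upgrade steps above.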