InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
Last Published: 2025-04-21
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
Unfreezing the service groups
This section describes how to unfreeze service groups and bring them online.
To unfreeze the service groups
- On any node in the cluster, make the VCS configuration writable:
# haconf -makerw
- Edit the /etc/VRTSvcs/conf/config/main.cf file to remove the deprecated SRL and RLinks attributes from the RVG and RVGShared resources.
- Verify the syntax of the main.cf file using the following command:
# hacf -verify /etc/VRTSvcs/conf/config
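The following check is not part of the documented procedure, but a quick search of the file can help you confirm that no SRL or RLinks attribute lines remain after editing. No output means none were found:
# grep -nE 'SRL|RLinks' /etc/VRTSvcs/conf/config/main.cf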
- Unfreeze all service groups that you froze previously. Enter the following command on any node in the cluster:
# hagrp -unfreeze service_group -persistent
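If you are unsure which service groups are still frozen, you can list them first. The conditional form of hagrp -list shown here is standard VCS syntax, but confirm it against your release; service_group is the same placeholder used above:
# hagrp -list Frozen=1
# hagrp -value service_group Frozen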
- Save the configuration on any node in the cluster.
# haconf -dump -makero
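As an optional check, you can confirm that the configuration is read-only again by querying the cluster's ReadOnly attribute; a value of 1 indicates a read-only configuration:
# haclus -value ReadOnly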
- If you are upgrading in a shared disk group environment, bring online the RVGShared groups with the following command:
# hagrp -online RVGShared -sys masterhost
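If you need to determine which node is the CVM master (masterhost in the command above), the following command reports whether the node you run it on is the master or a slave; the exact output format depends on your release:
# vxdctl -c mode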
- Bring the respective IP resources online on each node.
See Preparing for the upgrade when VCS agents are configured.
Type the following command on any node in the cluster:
# hares -online ip_name -sys system
This IP is the virtual IP that is used for replication within the cluster.
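To confirm that the IP resource came online where you expect, you can check its state on the cluster nodes; ip_name is the same placeholder as in the previous command:
# hares -state ip_name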
- In a shared disk group environment, bring the virtual IP resource online on the master node.
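Continuing with the placeholders used above (ip_name for the replication IP resource and masterhost for the CVM master node), the command takes the same form as in the previous step:
# hares -online ip_name -sys masterhost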