InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
Last Published: 2025-04-21
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Linux
Configuring cluster processes on the new node
Perform the steps in the following procedure to configure cluster processes on the new node.
- This step does not apply to SUSE Linux.
- Edit the /etc/llthosts file on the existing nodes. Using vi or another text editor, add the line for the new node to the file. The file resembles:
0 sys1
1 sys2
2 sys5
- Copy the /etc/llthosts file from one of the existing systems over to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.
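For example, you can copy the file from an existing node with scp, assuming that password-less ssh communication is already configured between the nodes (sys5 is the new node from the example above):
# scp /etc/llthosts sys5:/etc/llthosts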
- Create an /etc/llttab file on the new system. For example:
set-node sys5
set-cluster 101
link eth1 eth-[MACID for eth1] - ether - -
link eth2 eth-[MACID for eth2] - ether - -
Except for the first line that refers to the node, the file resembles the /etc/llttab files on the existing nodes. The second line, the cluster ID, must be the same as in the existing nodes.
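If you need the MAC address for a link line, one way to read it on most Linux systems (a generic Linux command, not specific to InfoScale) is:
# cat /sys/class/net/eth1/address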
- Use vi or another text editor to create the file /etc/gabtab on the new node. This file must contain a line that resembles the following example:
/sbin/gabconfig -c -nN
Where N represents the number of systems in the cluster including the new node. For a three-system cluster, N would equal 3.
- Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file on the new system.
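For example, for the three-node cluster in this procedure, /etc/gabtab on every node would hold the same single line; one way to write it from the shell instead of an editor (this overwrites the file, which contains only this line) is:
# echo "/sbin/gabconfig -c -n3" > /etc/gabtab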
- Use vi or another text editor to create the file /etc/VRTSvcs/conf/sysname on the new node. This file must contain the name of the new node added to the cluster. For example:
sys5
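Equivalently, you can create the file from the shell, using the example node name:
# echo sys5 > /etc/VRTSvcs/conf/sysname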
- Create the Unique Universal Identifier file /etc/vx/.uuids/clusuuid on the new node:
# /opt/VRTSvcs/bin/uuidconfig.pl -rsh -clus -copy \
-from_sys sys1 -to_sys sys5
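To confirm that the UUID now matches on both nodes, you can display it; the -display option of uuidconfig.pl is documented for VCS, but verify the exact syntax against your release:
# /opt/VRTSvcs/bin/uuidconfig.pl -rsh -clus -display sys1 sys5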
- Start the LLT, GAB, and ODM drivers on the new node:
For systemd environments with supported Linux distributions:
# systemctl start llt
# systemctl start gab
# systemctl restart vxodm
For other supported Linux distributions:
# /etc/init.d/llt start
# /etc/init.d/gab start
# /etc/init.d/vxodm restart
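To check that LLT started and can see the cluster nodes, you can run lltstat, a standard LLT status utility (output formatting varies by release):
# lltstat -n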
- On the new node, verify the GAB port memberships:
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen df204 membership 012
The membership value 012 indicates that the nodes with LLT IDs 0, 1, and 2 (sys1, sys2, and sys5 in this example) are members of the cluster.