InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - AIX
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Upgrading the operating system
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading Volume Replicator
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Support for AIX Live Update
- Appendix B. Installation scripts
- Appendix C. Configuration files
- Appendix D. Configuring the secure shell or the remote shell for communications
- Appendix E. High availability agent information
- Appendix F. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix G. Changing NFS server major numbers for VxVM volumes
- Appendix H. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
Modifying the VCS configuration files on existing nodes
Modify the configuration files on the remaining nodes of the cluster to remove references to the deleted nodes.
Tasks for modifying the cluster configuration files:
- Edit the /etc/llthosts file
- Edit the /etc/gabtab file
- Modify the VCS configuration to remove the node
For an example main.cf, see the sample configuration file for removing a node from the cluster.
To edit the /etc/llthosts file
- On each of the existing nodes, edit the /etc/llthosts file to remove lines that contain references to the removed nodes. For example, if sys5 is the node removed from the cluster, remove the line "2 sys5" from the file:
0 sys1
1 sys2
2 sys5
Change to:
0 sys1
1 sys2
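The edit above can also be scripted. The following is a minimal sketch, not part of the guide: the file contents are inlined for illustration, and on a live cluster you would apply the same `sed` expression to /etc/llthosts on every remaining node, reviewing the output before replacing the file.

```shell
# Sketch (illustrative only): drop the removed node's entry from an
# llthosts-format file. The demo file stands in for /etc/llthosts.
NODE=sys5
printf '0 sys1\n1 sys2\n2 sys5\n' > /tmp/llthosts.demo
# Delete any line that ends with the removed node's name.
sed "/ ${NODE}\$/d" /tmp/llthosts.demo > /tmp/llthosts.new
cat /tmp/llthosts.new
```

The result is written to a review copy rather than over the original, so a mistake in the pattern cannot damage the live LLT configuration.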
- Modify the following command in the /etc/gabtab file to reflect the number of systems after the node is removed:
/sbin/gabconfig -c -nN
where N is the number of remaining nodes in the cluster.
For example, with two nodes remaining, the file resembles:
/sbin/gabconfig -c -n2
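This one-line change can likewise be scripted. The sketch below is illustrative rather than from the guide: the file contents are inlined, and on a live cluster the same substitution would be applied to /etc/gabtab on every remaining node.

```shell
# Sketch (illustrative only): rewrite the -nN node count in a
# gabtab-format file. The demo file stands in for /etc/gabtab.
N=2
printf '/sbin/gabconfig -c -n3\n' > /tmp/gabtab.demo
# Replace the digits that follow -n with the new node count.
sed "s/-n[0-9][0-9]*/-n${N}/" /tmp/gabtab.demo > /tmp/gabtab.new
cat /tmp/gabtab.new
```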
Modify the VCS configuration file main.cf to remove all references to the deleted node.
Use one of the following methods to modify the configuration:
- Edit the /etc/VRTSvcs/conf/config/main.cf file. This method requires application downtime.
- Use the command line interface. This method allows the applications to remain online on all remaining nodes.
The following procedure uses the command line interface and modifies the sample VCS configuration to remove references to the deleted node. Run the steps in the procedure from one of the existing nodes in the cluster. The procedure allows you to change the VCS configuration while applications remain online on the remaining nodes.
To modify the cluster configuration using the command line interface (CLI)
- Back up the /etc/VRTSvcs/conf/config/main.cf file:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.3node.bak
- Change the cluster configuration to read-write mode:
# haconf -makerw
- Remove the node from the AutoStartList attribute of the service group by specifying the remaining nodes in the desired order:
# hagrp -modify cvm AutoStartList sys1 sys2
- Remove the node from the SystemList attribute of the service group:
# hagrp -modify cvm SystemList -delete sys5
If the system is part of the SystemList of a parent group, it must be deleted from the parent group first.
- Remove the node from the CVMNodeId attribute of the service group:
# hares -modify cvm_clus CVMNodeId -delete sys5
- If other service groups (such as the database service group or the ClusterService group) include the removed node in their configuration, perform step 4 and step 5 for each of them.
- Remove the deleted node from the NodeList attribute of all CFS mount resources:
# hares -modify CFSMount NodeList -delete sys5
- Remove the deleted node from the system list of any other service groups that exist on the cluster. For example, to delete the node sys5:
# hagrp -modify appgrp SystemList -delete sys5
- Remove the deleted node from the cluster system list:
# hasys -delete sys5
- Save the new configuration to disk:
# haconf -dump -makero
- Verify that the node is removed from the VCS configuration:
# grep -i sys5 /etc/VRTSvcs/conf/config/main.cf
If the node is not removed, use the VCS commands as described in this procedure to remove the node.
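The CLI steps above can be collected into a single sequence. The sketch below is illustrative, not part of the guide: the group names (cvm, appgrp), resource names (cvm_clus, CFSMount), and node name are assumptions that must match your own main.cf. The run() wrapper only prints each command (a dry run) so the sequence can be reviewed safely; remove the echo to execute the commands for real.

```shell
# Dry-run sketch of the node-removal sequence (illustrative only).
# Group and resource names are assumptions; adjust them to your main.cf.
NODE=sys5
run() { echo "$@"; }   # dry run: print the command instead of executing it

run haconf -makerw
run hagrp -modify cvm AutoStartList sys1 sys2
run hagrp -modify cvm SystemList -delete "$NODE"
run hares -modify cvm_clus CVMNodeId -delete "$NODE"
run hares -modify CFSMount NodeList -delete "$NODE"
run hagrp -modify appgrp SystemList -delete "$NODE"
run hasys -delete "$NODE"
run haconf -dump -makero
```

Because delete order matters (a node must leave parent-group SystemLists before child groups, and the cluster system list last), keeping the commands in one reviewed script helps avoid running them out of sequence.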
More Information
Sample configuration file for removing a node from the cluster