InfoScale™ 9.0 Cluster Server Configuration and Upgrade Guide - AIX
- Section I. Configuring Cluster Server using the script-based installer
- I/O fencing requirements
- Preparing to configure VCS clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring VCS
- Configuring a secure cluster node by node
- Verifying and updating licenses on the system
- Configuring VCS clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Section II. Automated configuration using response files
- Performing an automated VCS configuration
- Performing an automated I/O fencing configuration using response files
- Section III. Manual configuration
- Manually configuring VCS
- Configuring LLT manually
- Configuring VCS manually
- Configuring VCS in single node mode
- Modifying the VCS configuration
- Manually configuring the clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the VCS cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section IV. Upgrading VCS
- Planning to upgrade VCS
- Performing a VCS upgrade using the installer
- Tasks to perform after upgrading to 2048 bit key and SHA256 signature certificates
- Performing an online upgrade
- Performing a phased upgrade of VCS
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated VCS upgrade using response files
- Section V. Adding and removing cluster nodes
- Adding a node to a single-node cluster
- Adding a node to a multi-node VCS cluster
- Manually adding a node to a cluster
- Setting up the node to run in secure mode
- Configuring I/O fencing on the new node
- Adding a node using response files
- Removing a node from a VCS cluster
- Section VI. Installation reference
- Appendix A. Services and ports
- Appendix B. Configuration files
- Appendix C. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Manually configuring LLT over UDP using IPv6
- Appendix D. Migrating LLT links from IPv4 to IPv6 or dual-stack
- Appendix E. Configuring the secure shell or the remote shell for communications
- Appendix F. Installation script options
- Appendix G. Troubleshooting VCS configuration
- Appendix H. Sample VCS cluster setup diagrams for CP server-based I/O fencing
- Appendix I. Changing NFS server major numbers for VxVM volumes
- Appendix J. Upgrading the Steward process
Finishing the phased upgrade
Perform the following procedure to complete the upgrade.
To finish the upgrade
- Upgrade the cluster protocol version by performing the following tasks sequentially:
Identify the current cluster protocol version.
haclus -version -info
Check whether the current version is compatible with the newer cluster version and whether it can be upgraded successfully.
haclus -version -verify <newer-cluster-version>
For example:
# /opt/VRTSvcs/bin/haclus -version -verify 8.0.0.0000
Upgrade the cluster to the newer protocol version.
haclus -version -update <newer-cluster-version>
For example:
# /opt/VRTSvcs/bin/haclus -version -update 8.0.0.0000
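The verify-then-update sequence above can be wrapped in a small script so the update runs only if the verification succeeds. This is a sketch, not part of VCS: upgrade_protocol is a hypothetical helper name, and it assumes haclus (normally /opt/VRTSvcs/bin/haclus) is on the PATH.

```shell
# Sketch only: report the current protocol version, verify that the
# cluster can move to the requested version, and update it only if
# the verification succeeds. upgrade_protocol is a hypothetical
# helper, not a VCS command.
upgrade_protocol() {
    ver="$1"
    haclus -version -info
    if haclus -version -verify "$ver"; then
        haclus -version -update "$ver"
    else
        echo "cluster cannot be upgraded to protocol version $ver" >&2
        return 1
    fi
}
```

Running upgrade_protocol 8.0.0.0000 as root reproduces the three commands in order and stops before the update if the verification fails.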
- Verify that the cluster UUID is the same on the nodes in the second subcluster and the first subcluster. Run the following command to display the cluster UUID:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -display node1 [node2 ...]
If the cluster UUID differs, manually copy the cluster UUID from a node in the first subcluster to the nodes in the second subcluster. For example:
# /opt/VRTSvcs/bin/uuidconfig.pl [-rsh] -clus -copy -from_sys node01 -to_sys node03 node04
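The UUID comparison can be automated with a small helper. uuids_match below is a hypothetical function, not a VCS tool: it succeeds only when every UUID string passed to it is identical, which is the condition the -display output from each node must satisfy before you skip the copy step.

```shell
# Sketch only: succeed when every argument equals the first one.
# uuids_match is a hypothetical helper, not a VCS command; pass it
# the UUID strings captured from each node's -display output.
uuids_match() {
    first="$1"; shift
    for u in "$@"; do
        [ "$u" = "$first" ] || return 1
    done
    return 0
}
```

If uuids_match returns nonzero, run the -copy form of uuidconfig.pl shown above before proceeding.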
- Reboot node03 and node04 in the second subcluster.
# /usr/sbin/shutdown -r
The nodes in the second subcluster join the nodes in the first subcluster.
- Change the value of the start attribute in each of the following files:
In the /etc/default/llt file, set LLT_START = 1.
In the /etc/default/gab file, set GAB_START = 1.
In the /etc/default/vxfen file, set VXFEN_START = 1.
In the /etc/default/vcs file, set VCS_START = 1.
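The four file edits above can be scripted. set_start_attr below is a hypothetical helper, not part of VCS; it assumes each file already contains a line of the form ATTRIBUTE=value, and it writes through a temporary file because AIX sed has no in-place option.

```shell
# Sketch only: set a start attribute to 1 in one of the /etc/default
# files. set_start_attr is a hypothetical helper; it assumes the file
# already contains a line beginning with the attribute name.
set_start_attr() {
    file="$1"; attr="$2"
    tmp="${file}.tmp.$$"
    # AIX sed lacks an in-place flag, so write to a temp file and move it.
    sed "s/^${attr}[ ]*=.*/${attr}=1/" "$file" > "$tmp" && mv "$tmp" "$file"
}

# On each node in the second subcluster, as root:
#   set_start_attr /etc/default/llt   LLT_START
#   set_start_attr /etc/default/gab   GAB_START
#   set_start_attr /etc/default/vxfen VXFEN_START
#   set_start_attr /etc/default/vcs   VCS_START
```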
- Start LLT and GAB.
# /etc/init.d/llt.rc start
# /etc/init.d/gab.rc start
- Seed node03 and node04 in the second subcluster.
# gabconfig -x
- On the second subcluster, start VCS:
# cd /opt/VRTS/install
# ./installer -start node03 node04
- Check to see if VCS and its components are up.
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen nxxxnn membership 0123
Port b gen nxxxnn membership 0123
Port h gen nxxxnn membership 0123
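The port check can be reduced to a small filter over the gabconfig -a output. check_gab_ports is a hypothetical helper, not a VCS command; it assumes the membership lines begin with the word Port followed by the port letter, as in the output above.

```shell
# Sketch only: read gabconfig -a output on stdin and succeed only if
# ports a (GAB), b (fencing), and h (VCS) all show a membership line.
# check_gab_ports is a hypothetical helper, not a VCS command.
check_gab_ports() {
    awk '$1 == "Port" { seen[$2] = 1 }
         END { exit !(seen["a"] && seen["b"] && seen["h"]) }'
}

# Usage on a live cluster:
#   gabconfig -a | check_gab_ports && echo "ports a, b, and h are up"
```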
- Run an hastatus -sum command to determine the status of the nodes, service groups, and cluster.
# hastatus -sum
-- SYSTEM STATE
-- System        State          Frozen
A  node01        RUNNING        0
A  node02        RUNNING        0
A  node03        RUNNING        0
A  node04        RUNNING        0

-- GROUP STATE
-- Group    System    Probed    AutoDisabled    State
B  sg1      node01    Y         N               ONLINE
B  sg1      node02    Y         N               ONLINE
B  sg1      node03    Y         N               ONLINE
B  sg1      node04    Y         N               ONLINE
B  sg2      node01    Y         N               ONLINE
B  sg2      node02    Y         N               ONLINE
B  sg2      node03    Y         N               ONLINE
B  sg2      node04    Y         N               ONLINE
B  sg3      node01    Y         N               ONLINE
B  sg3      node02    Y         N               OFFLINE
B  sg3      node03    Y         N               OFFLINE
B  sg3      node04    Y         N               OFFLINE
B  sg4      node01    Y         N               OFFLINE
B  sg4      node02    Y         N               ONLINE
B  sg4      node03    Y         N               OFFLINE
B  sg4      node04    Y         N               OFFLINE
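The SYSTEM STATE section can likewise be checked with a short filter. check_all_running is a hypothetical helper, not a VCS command; it assumes the system lines of hastatus -sum output begin with A and carry the state in the third column, as shown above.

```shell
# Sketch only: read hastatus -sum output on stdin and succeed only if
# every system line (prefix "A") reports the RUNNING state.
# check_all_running is a hypothetical helper, not a VCS command.
check_all_running() {
    awk '$1 == "A" && $3 != "RUNNING" { bad = 1 } END { exit bad }'
}

# Usage on a live cluster:
#   hastatus -sum | check_all_running && echo "all systems RUNNING"
```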
- After the upgrade is complete, start the VxVM volumes (for each disk group) and mount the VxFS file systems.
In this example, you have performed a phased upgrade of VCS. The service groups were down from the time you took them offline on node03 and node04 until VCS brought them online on node01 or node02.