InfoScale™ 9.0 Cluster Server Configuration and Upgrade Guide - AIX
Last Published: 2025-04-18
Product(s): InfoScale & Storage Foundation (9.0)
Platform: AIX
Moving the service groups to the second subcluster
Perform the following steps to establish the service groups' status and to switch the service groups.
To move service groups to the second subcluster
- On the first subcluster, determine where the service groups are online.
# hagrp -state
The output resembles:
#Group      Attribute   System   Value
sg1         State       node01   |ONLINE|
sg1         State       node02   |ONLINE|
sg1         State       node03   |ONLINE|
sg1         State       node04   |ONLINE|
sg2         State       node01   |ONLINE|
sg2         State       node02   |ONLINE|
sg2         State       node03   |ONLINE|
sg2         State       node04   |ONLINE|
sg3         State       node01   |ONLINE|
sg3         State       node02   |OFFLINE|
sg3         State       node03   |OFFLINE|
sg3         State       node04   |OFFLINE|
sg4         State       node01   |OFFLINE|
sg4         State       node02   |ONLINE|
sg4         State       node03   |OFFLINE|
sg4         State       node04   |OFFLINE|
- Take the parallel service groups (sg1 and sg2) offline on the first subcluster. Switch the failover service groups (sg3 and sg4) from the first subcluster (node01 and node02) to the nodes in the second subcluster (node03 and node04). For SFHA, the vxfen service group is the parallel service group.
# hagrp -offline sg1 -sys node01
# hagrp -offline sg2 -sys node01
# hagrp -offline sg1 -sys node02
# hagrp -offline sg2 -sys node02
# hagrp -switch sg3 -to node03
# hagrp -switch sg4 -to node04
- On the nodes in the first subcluster, unmount all the VxFS file systems that VCS does not manage, for example:
# df -k
Filesystem               1024-blocks   Free      %Used   Iused   %Iused   Mounted on
/dev/hd4                 20971520      8570080   60%     35736   2%       /
/dev/hd2                 5242880       2284528   57%     55673   9%       /usr
/dev/hd9var              4194304       3562332   16%     5877    1%       /var
/dev/hd3                 6291456       6283832   1%      146     1%       /tmp
/dev/hd1                 262144        261408    1%      62      1%       /home
/dev/hd11admin           262144        184408    30%     6       1%       /admin
/proc                    -             -         -       -       -        /proc
/dev/hd10opt             20971520      5799208   73%     65760   5%       /opt
/dev/vx/dsk/dg2/dg2vol1  10240         7600      26%     4       1%       /mnt/dg2/dg2vol1
/dev/vx/dsk/dg2/dg2vol2  10240         7600      26%     4       1%       /mnt/dg2/dg2vol2
/dev/vx/dsk/dg2/dg2vol3  10240         7600      26%     4       1%       /mnt/dg2/dg2vol3
# umount /mnt/dg2/dg2vol1
# umount /mnt/dg2/dg2vol2
# umount /mnt/dg2/dg2vol3
- On the nodes in the first subcluster, stop all VxVM volumes (for each disk group) that VCS does not manage.
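For example, assuming the dg2 disk group shown in the earlier df output is one that VCS does not manage (substitute your own disk group names), stop all of its volumes with:
# vxvol -g dg2 stopall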
- Make the configuration writable on the first subcluster.
# haconf -makerw
- Freeze the nodes in the first subcluster.
# hasys -freeze -persistent node01
# hasys -freeze -persistent node02
- Dump the configuration and make it read-only.
# haconf -dump -makero
- Verify that the service groups are offline on the first subcluster that you want to upgrade.
# hagrp -state
Output resembles:
#Group      Attribute   System   Value
sg1         State       node01   |OFFLINE|
sg1         State       node02   |OFFLINE|
sg1         State       node03   |ONLINE|
sg1         State       node04   |ONLINE|
sg2         State       node01   |OFFLINE|
sg2         State       node02   |OFFLINE|
sg2         State       node03   |ONLINE|
sg2         State       node04   |ONLINE|
sg3         State       node01   |OFFLINE|
sg3         State       node02   |OFFLINE|
sg3         State       node03   |ONLINE|
sg3         State       node04   |OFFLINE|
sg4         State       node01   |OFFLINE|
sg4         State       node02   |OFFLINE|
sg4         State       node03   |OFFLINE|
sg4         State       node04   |ONLINE|
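When the cluster has many service groups, scanning the full state table by eye is error-prone. As a sketch (not part of the product), the hagrp -state output can be filtered with awk to list only the systems where a given group is online; the inlined sample lines below stand in for live output:

```shell
# Sample `hagrp -state` lines (illustrative, not live cluster output).
sample='sg3 State node01 |OFFLINE|
sg3 State node03 |ONLINE|
sg4 State node04 |ONLINE|'

# Print the System column for rows where the named group is ONLINE.
echo "$sample" | awk -v grp=sg3 '$1 == grp && $4 == "|ONLINE|" { print $3 }'
# prints: node03
```

Against a live cluster, replace the sample with `hagrp -state | awk ...`; an empty result for a group on node01 and node02 confirms it is offline on the first subcluster.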