InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
Restoring the original configuration when VCS agents are configured
This section describes how to restore the original configuration when VCS agents are configured.
Note:
Restore the original configuration only after you have upgraded VVR on all nodes in the Primary and Secondary clusters.
To restore the original configuration
- Import all the disk groups in your VVR configuration.
# vxdg -t import diskgroup
Each disk group should be imported onto the same node on which it was online when the upgrade was performed. The reboot after the upgrade could result in another node being online; for example, because of the order of the nodes in the AutoStartList. In this case, switch the VCS group containing the disk groups to the node on which the disk group was online while preparing for the upgrade.
# hagrp -switch grpname -to system
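For example, assuming a hypothetical disk group named hrdg and a VCS service group named hr_grp that was online on node sys1 when you prepared for the upgrade:
# vxdg -t import hrdg
# hagrp -switch hr_grp -to sys1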
- Recover all the disk groups by typing the following command on the node on which the disk group was imported in step 1.
# vxrecover -bs
- Upgrade all the disk groups on all the nodes on which VVR has been upgraded:
# vxdg upgrade diskgroup
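For example, to upgrade a hypothetical disk group named hrdg and then confirm the new disk group version reported by vxdg list:
# vxdg upgrade hrdg
# vxdg list hrdg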
- On all nodes that are Secondary hosts of VVR, make sure the data volumes on the Secondary are the same length as the corresponding ones on the Primary. To shrink volumes that are longer on the Secondary than on the Primary, use the following command on each volume on the Secondary:
# vxassist -g diskgroup shrinkto volume_name volume_length
where volume_length is the length of the volume on the Primary.
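For example, assuming a hypothetical volume hr_dv01 in disk group hrdg: read the volume length from the LENGTH column of vxprint on the Primary, and then shrink the Secondary volume to that value (shown here as a hypothetical 4194304 sectors):
# vxprint -g hrdg hr_dv01
# vxassist -g hrdg shrinkto hr_dv01 4194304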
Note:
Do not continue until you complete this step on all the nodes in the Primary and Secondary clusters on which VVR is upgraded.
- Restore the configuration according to the method you used for upgrade:
If you upgraded with the VVR upgrade scripts
Complete the upgrade by running the vvr_upgrade_finish script on all the nodes on which VVR was upgraded. We recommend that you first run the vvr_upgrade_finish script on each node that is a Secondary host of VVR.
Perform the following tasks in the order indicated:
To run the vvr_upgrade_finish script, type the following command:
# /disc_path/scripts/vvr_upgrade_finish
where disc_path is the location where the Veritas software disc is mounted.
Attach the RLINKs on the nodes on which the messages were displayed:
# vxrlink -g diskgroup -f att rlink_name
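For example, assuming a hypothetical RLINK named rlk_sys2_hr_rvg in disk group hrdg:
# vxrlink -g hrdg -f att rlk_sys2_hr_rvg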
If you upgraded with the product installer
Use the Veritas InfoScale product installer and select Start a Product, or use the installation script with the -start option.
- Bring online the RVGLogowner group on the master:
# hagrp -online RVGLogownerGrp -sys masterhost
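For example, if the hypothetical node sys1 is the master, bring the group online there and then verify its state:
# hagrp -online RVGLogownerGrp -sys sys1
# hagrp -state RVGLogownerGrp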
- If you plan to use IPv6, bring up the IPv6 addresses for the virtual replication IPs on the Primary and Secondary nodes, and switch from using IPv4 to IPv6 host names or addresses. Enter:
# vradmin changeip newpri=v6 newsec=v6
where v6 is the IPv6 address.
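For example, using hypothetical IPv6 addresses for the Primary and Secondary replication IPs:
# vradmin changeip newpri=fd4b:454e:205a:111::20 newsec=fd4b:454e:205a:111::21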
- Restart the applications that were stopped.