InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFHA
- Section II. Configuration of SFHA
- Preparing to configure
- Preparing to configure SFHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring CP server using response files
- Configuring SFHA
- Configuring Storage Foundation High Availability using the installer
- Configuring a secure cluster node by node
- Completing the SFHA configuration
- Verifying and updating licenses on the system
- Configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Manually configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Performing an automated SFHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Section III. Upgrade of SFHA
- Planning to upgrade SFHA
- Preparing to upgrade SFHA
- Upgrading Storage Foundation and High Availability
- Performing a rolling upgrade of SFHA
- Performing a phased upgrade of SFHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFHA upgrade using response files
- Upgrading SFHA using YUM
- Performing post-upgrade tasks
- Post-upgrade tasks when VCS agents for VVR are configured
- About enabling LDAP authentication for clusters that run in secure mode
- Section IV. Post-installation tasks
- Section V. Adding and removing nodes
- Adding a node to SFHA clusters
- Adding the node to a cluster manually
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFHA clusters
- Removing a node from a SFHA cluster
- Section VI. Configuration and upgrade reference
- Appendix A. Installation scripts
- Appendix B. SFHA services and ports
- Appendix C. Configuration files
- Appendix D. Configuring the secure shell or the remote shell for communications
- Appendix E. Sample SFHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
Upgrading Storage Foundation and High Availability using the product installer
Note:
Root Disk Encapsulation (RDE) is not supported on Linux from 9.0 onwards.
For details on installing and upgrading Veritas InfoScale using the installer with the -yum option, refer to the Veritas InfoScale Installation Guide.
Use this procedure to upgrade Storage Foundation and High Availability (SFHA).
To upgrade Storage Foundation and High Availability
- Log in as superuser.
- Take all service groups offline.
List all service groups:
# /opt/VRTSvcs/bin/hagrp -list
For each service group listed, take it offline:
# /opt/VRTSvcs/bin/hagrp -offline service_group -sys system_name
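If the cluster has many service groups, you can combine the list and offline commands in a single loop. The following one-liner is a sketch only: it assumes that the first field of the hagrp -list output is the group name, and system_name is a placeholder for your node name:
# for grp in $(/opt/VRTSvcs/bin/hagrp -list | awk '{print $1}' | sort -u); do /opt/VRTSvcs/bin/hagrp -offline "$grp" -sys system_name; done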
- Enter the following commands on each node to freeze HA service group operations:
# haconf -makerw
# hasys -freeze -persistent nodename
# haconf -dump -makero
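If you prefer to freeze every system from a single node rather than repeating the command for each node name, the following sketch iterates over the cluster members; it assumes that hasys -list prints one system name per line:
# haconf -makerw
# for node in $(/opt/VRTSvcs/bin/hasys -list); do /opt/VRTSvcs/bin/hasys -freeze -persistent "$node"; done
# haconf -dump -makero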
- Use the following command to check if any VxFS file systems or Storage Checkpoints are mounted:
# df -h | grep vxfs
- Unmount all Storage Checkpoints and file systems:
# umount /checkpoint_name
# umount /filesystem
- Verify that all file systems have been cleanly unmounted:
# echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean flags 0 mod 0 clean clean_value
A clean_value of 0x5a indicates that the file system is clean, 0x3c indicates that it is dirty, and 0x69 indicates that it is dusty. A dusty file system has pending extended operations.
Perform the following steps in the order listed:
If a file system is not clean, enter the following commands for that file system:
# fsck -t vxfs filesystem
# mount -t vxfs filesystem mountpoint
# umount mountpoint
This should complete any extended operations that were outstanding on the file system and unmount the file system cleanly.
There may be a pending large RPM clone removal extended operation if the umount command fails with the following error:
file system device busy
You know for certain that an extended operation is pending if the following message is generated on the console:
Storage Checkpoint asynchronous operation on file_system file system still in progress.
If an extended operation is pending, you must leave the file system mounted for a longer time to allow the operation to complete. Removing a very large RPM clone can take several hours.
Repeat this step to verify that the unclean file system is now clean.
- If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPM. Use the following command to take the cache area offline:
# sfcache offline cachename
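If you are not sure whether any cache areas exist or are online, the sfcache list command reports them; the cache area name shown in its output is the name you pass to sfcache offline:
# sfcache list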
- Stop all activity on the VxVM volumes. For example, stop any applications, such as databases, that access the volumes, and unmount any file systems that have been created on the volumes.
- Stop all the volumes by entering the following command for each disk group:
# vxvol -g diskgroup stopall
To verify that no volumes remain open, use the following command:
# vxprint -Aht -e v_open
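If there are many disk groups, the volumes in all imported disk groups can be stopped with a loop such as the following sketch, which assumes that vxdg list prints a header line followed by one disk group per line:
# for dg in $(vxdg list | awk 'NR>1 {print $1}'); do vxvol -g "$dg" stopall; done
# vxprint -Aht -e v_open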
- Make a record of the mount points for VxFS file systems and VxVM volumes that are defined in the /etc/fstab file. You will need to recreate these entries in the /etc/fstab file on the freshly installed system.
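One way to capture that record is to copy the relevant /etc/fstab entries to a scratch file; the file name below is only an example, and the pattern matches both VxFS entries and entries that mount VxVM volume devices:
# grep -E 'vxfs|/dev/vx/' /etc/fstab > /root/fstab-vxvm-vxfs.save
# cat /root/fstab-vxvm-vxfs.save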
- Perform any necessary preinstallation checks.
- Check whether you need to update the operating system. If an operating system update is required, perform the following tasks:
Rename the /etc/llttab file to prevent LLT from starting automatically when the node starts:
# mv /etc/llttab /etc/llttab.save
Upgrade the operating system.
Refer to the operating system's documentation for more information.
After you have upgraded the operating system, restart the nodes:
# shutdown -r now
Rename the /etc/llttab file to its original name:
# mv /etc/llttab.save /etc/llttab
- To invoke the common installer, run the installer command on the disc as shown in this example:
# cd /cdrom/cdrom0
# ./installer
- Enter G to upgrade and press Return.
- You are prompted to enter the system names on which the software is to be installed (in the following example, "host1" and "host2"). Enter the system name or names and then press Return.
Enter the 64 bit <platform> system names separated by spaces : [q, ?] host1 host2
where <platform> is the platform on which the system runs.
Depending on your existing configuration, various messages and prompts may appear. Answer the prompts appropriately.
During the system verification phase, the installer checks whether the boot disk is encapsulated and validates the upgrade path. If the upgrade is not supported, you need to un-encapsulate the boot disk.
- The installer asks if you agree with the terms of the End User License Agreement. Press y to agree and continue.
- The installer discovers if any of the systems that you are upgrading have mirrored encapsulated boot disks. You now have the option to create a backup of the systems' root disks before the upgrade proceeds. If you want to split the mirrors on the encapsulated boot disks to create the backup, answer y.
- The installer then prompts you to name the backup root disk. Enter the name for the backup and mirrored boot disk or press Enter to accept the default.
Note:
The split operation can take some time to complete.
- You are prompted to start the split operation. Press y to continue.
- If the boot disk was encapsulated before the upgrade, reboot the system.
- If you need to re-encapsulate and mirror the root disk on each of the nodes, follow the procedures in the "Administering Disks" chapter of the Storage Foundation Administrator's Guide.
If you have split the mirrored root disk to back it up, then after a successful reboot, verify the upgrade and re-join the backup disk group. If the upgrade fails, revert to the backup disk group.
- If necessary, reinstate any missing mount point entries in the /etc/fstab file on each node, using the record of mount points that you made earlier in this procedure.
- If any VCS configuration files need to be restored, stop the cluster, restore the files to the /etc/VRTSvcs/conf/config directory, and restart the cluster.
- Make the VCS configuration writable again from any node in the upgraded group:
# haconf -makerw
- Enter the following command on each node in the upgraded group to unfreeze HA service group operations:
# hasys -unfreeze -persistent nodename
- Make the configuration read-only:
# haconf -dump -makero
- Bring all of the VCS service groups, such as failover groups, online on the required node using the following command:
# hagrp -online groupname -sys nodename
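After bringing the groups online, you can confirm the result; hagrp -state reports the state of each service group on every system:
# hagrp -state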
- Restart all the volumes by entering the following command for each disk group:
# vxvol -g diskgroup startall
- Remount all VxFS file systems and Storage Checkpoints on all nodes:
# mount /filesystem
# mount /checkpoint_name
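To confirm that all the VxFS file systems and Storage Checkpoints are mounted again, you can repeat the check used earlier in this procedure:
# df -h | grep vxfs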
You can perform the following optional configuration steps:
- If you want to use features of InfoScale 9.0 for which you do not currently have an appropriate license installed, obtain the license and run the vxlicinst command to add it to your system.
- To upgrade VxFS disk layout versions and VxVM disk group versions, follow the upgrade instructions (see the sketch that follows).
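As a sketch, the commands typically involved are vxupgrade for the VxFS disk layout version and vxdg upgrade for the VxVM disk group version; /mountpoint, N, and diskgroup are placeholders, and you should confirm the supported target versions in the upgrade instructions before running them. The first vxupgrade command reports the current layout version, the second upgrades it to version N, and vxdg list shows the current disk group version before you upgrade it:
# vxupgrade /mountpoint
# vxupgrade -n N /mountpoint
# vxdg list diskgroup | grep version
# vxdg upgrade diskgroup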