Veritas Access 7.3 Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About NIC bonding and NIC exclusion
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
About rolling upgrades
This release of Veritas Access supports rolling upgrades from version 7.2.1.1 and later. Rolling upgrades are supported on RHEL 6.6, 6.7, and 6.8. A rolling upgrade minimizes service and application downtime for highly available clusters by limiting the upgrade time to the amount of time that it takes to perform a service group failover. During a rolling upgrade, nodes running different product versions can coexist in the same cluster.
A rolling upgrade has two main phases: the installer upgrades the kernel RPMs in Phase 1 and the VCS agent-related non-kernel RPMs in Phase 2.
The upgrade process divides the cluster into two subclusters: the first subcluster and the second subcluster.
In Phase 1, the upgrade is performed on the second subcluster. The upgrade process stops all services and resources on the nodes of the second subcluster. All services (including the VIP groups) fail over to the first subcluster, and the parallel service groups on the second subcluster are taken offline.
During the failover process, the clients that are connected to the VIP groups of the second subcluster nodes are intermittently interrupted. For clients that do not time out, service resumes after the VIP groups come online on one of the nodes of the first subcluster.
The installer upgrades the kernel RPMs on the second subcluster. The nodes of the first subcluster continue to serve the clients.
Once Phase 1 of the rolling upgrade is complete on the second subcluster, Phase 1 is performed on the first subcluster. The applications fail over to the second subcluster. The parallel service groups are brought online on the second subcluster and are taken offline on the first subcluster.
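The Phase 1 sequence can be summarized with the following sketch. It is illustrative Python pseudocode only: the function and node names are hypothetical placeholders for the actions the installer performs, not actual Veritas Access commands.

```python
# Illustrative sketch of Phase 1 of a rolling upgrade.
# All function and node names are hypothetical placeholders for installer
# actions; they are not real commands.

def take_parallel_groups_offline(nodes):
    print("Taking parallel service groups offline on", nodes)

def fail_over_service_groups(nodes, target):
    print("Failing over service groups (including VIP groups) from", nodes, "to", target)

def upgrade_kernel_rpms(nodes):
    print("Upgrading kernel RPMs on", nodes)

def bring_parallel_groups_online(nodes):
    print("Bringing parallel service groups online on", nodes)

def phase1(subcluster, peer):
    """Upgrade kernel RPMs on one subcluster while the peer serves clients."""
    take_parallel_groups_offline(subcluster)
    fail_over_service_groups(subcluster, target=peer)
    upgrade_kernel_rpms(subcluster)           # the peer subcluster keeps serving clients
    bring_parallel_groups_online(subcluster)  # upgraded nodes rejoin service

# Hypothetical two-by-two split of a four-node cluster.
first_subcluster = ["node_01", "node_02"]
second_subcluster = ["node_03", "node_04"]

phase1(second_subcluster, peer=first_subcluster)  # Phase 1 on the second subcluster
phase1(first_subcluster, peer=second_subcluster)  # then Phase 1 on the first subcluster
```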
After Phase 1 is complete, the nodes run the new kernel RPMs but still use the old protocol version.
During Phase 2 of the rolling upgrade, all remaining RPMs are upgraded on all the nodes of the cluster simultaneously. VCS and VCS agent packages are upgraded. The kernel drivers are upgraded to the new protocol version. Applications stay online during Phase 2. The High Availability Daemon (HAD) stops and starts again.
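Phase 2 can be sketched in the same illustrative style; again, the function and node names are hypothetical stand-ins rather than real commands.

```python
# Illustrative sketch of Phase 2 of a rolling upgrade.
# Function and node names are hypothetical placeholders, not real commands.

def upgrade_non_kernel_rpms(nodes):
    print("Upgrading VCS and VCS agent non-kernel RPMs on", nodes)

def restart_had(nodes):
    print("Stopping and restarting the High Availability Daemon (HAD) on", nodes)

def phase2(all_nodes):
    """Upgrade the remaining RPMs on every node at the same time."""
    upgrade_non_kernel_rpms(all_nodes)  # kernel drivers move to the new protocol version
    restart_had(all_nodes)              # applications remain online during Phase 2

phase2(["node_01", "node_02", "node_03", "node_04"])  # hypothetical node names
```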