Storage Foundation for Oracle® RAC 8.0.2 Configuration and Upgrade Guide - Linux
Performing rolling upgrades in CVR environments
In a cluster volume replication (CVR) environment, you perform the rolling upgrade on the secondary site first and then on the primary site.
To perform a rolling upgrade in a CVR environment
- Perform a rolling upgrade on the secondary sites first, without stopping the replication.
Verify that the cluster nodes have rejoined the cluster after the upgrade; a sketch of one way to check this follows this step.
Continue to monitor the replication status.
# vradmin -g <disk_group_name> -l repstatus <RVG_name>
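As a sketch of one way to confirm that the upgraded nodes have rejoined the cluster, you can check the GAB port membership and the VCS cluster summary. These commands are a general illustration rather than a required step, and assume default GAB and VCS installations.
# /sbin/gabconfig -a
# /opt/VRTS/bin/hastatus -sum
All cluster nodes should appear in the port a (GAB) and port h (VCS) membership, and the hastatus summary should report the upgraded systems as RUNNING.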
- Perform a rolling upgrade for all the nodes on the primary site by following these steps:
At the primary site, the VxFS file systems are always mounted. Therefore, if you directly begin the upgrade, the installer returns the following error:
CPI ERROR V-9-40-1480 Some VxFS file systems are mounted on mount points /<volume_mount_point> on node <node_name> and need to be unmounted before upgrade.
In this event, take the applications offline and unmount the file systems manually on the indicated nodes before you upgrade those nodes.
Additionally, you may have to take the following actions on these nodes:
If any parallel service groups exist in the hierarchy, take them offline.
# /opt/VRTS/bin/hagrp -offline <global_application_name> -sys <node_name>
# /opt/VRTS/bin/cfsumount /<volume_mount_point> <node_name>
Make sure that CVM is the final child in the hierarchy, that it remains online, and that the rest of the hierarchy is taken offline on that node.
If any parent service groups exist in the hierarchy, take them offline. Unmount the file system and list the CVM dependencies to identify the parent service group.
# /opt/VRTS/bin/cfsumount /<volume_mount_point> <node_name>
# /opt/VRTS/bin/hagrp -dep cvm
Parent                       Child    Relationship
<global_application_name>    cvm      online local firm
<global_application_name> is the parent of the cvm service group; take it offline.
# /opt/VRTS/bin/hagrp -offline <global_application_name> -sys <node_name>
If any local service groups exist in the hierarchy, switch them to another cluster node that is not being upgraded at the same time.
After you upgrade these nodes, mount the file system again and bring the VCS application service group online on the target node.
# /opt/VRTS/bin/cfsmount /<volume_mount_point> <node_name>
# /opt/VRTS/bin/hagrp -online <global_application_name> -sys <node_name>
In addition, you may have to take the following actions on the upgraded nodes (a sketch with example commands follows this step):
Bring the parallel service groups online.
Switch the local VCS service groups back to these nodes.
Make sure that all the service groups that were taken offline before the upgrade are brought online again.
Check the replication status and the system and service group states, and only then proceed to upgrade the next node.
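The following sketch illustrates the actions above with placeholder group and node names; it is not an exact prescription for your configuration. The hagrp -switch command moves a failover (local) service group back to the upgraded node.
# /opt/VRTS/bin/hagrp -online <parallel_group_name> -sys <node_name>
# /opt/VRTS/bin/hagrp -switch <failover_group_name> -to <node_name>
# /opt/VRTS/bin/hagrp -state
The hagrp -state output lets you confirm that every group that was taken offline for the upgrade is back in the ONLINE state on the intended systems.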
- Check the replication status on the primary site.
- Migrate the primary role to the upgraded secondary site, and then proceed with the upgrade on the new secondary site (the old primary). A sketch of the migration command follows this procedure.
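A minimal sketch of the role migration, assuming that replication is up to date and the application is stopped on the current primary; the disk group, RVG, and host names are placeholders. The vradmin migrate command transfers the primary role to the named secondary host, and repstatus lets you confirm the new roles.
# vradmin -g <disk_group_name> migrate <RVG_name> <new_primary_host_name>
# vradmin -g <disk_group_name> -l repstatus <RVG_name>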
To upgrade disk group and disk layout versions on replication hosts
- Upgrade the disk group version on all the Secondaries for all the disk groups.
# /usr/sbin/vxdg upgrade <disk_group_name>
- Upgrade the disk group version on the Primary for all the disk groups.
# /usr/sbin/vxdg upgrade <disk_group_name>
- Upgrade the disk layout version (DLV) on the Primary for all the VxFS file systems, and verify the new layout version with the fstyp command.
# /opt/VRTS/bin/vxupgrade -n 17 <vxfs_mount_point_name>
# /opt/VRTS/bin/fstyp -v <disk_path_for_mount_point_volume>
The DLV upgrade is automatically replicated to the Secondaries. A sketch of how you might verify the upgraded versions follows.
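As a rough verification sketch, with placeholder names: the vxdg list output includes the current disk group version, and running vxupgrade on a mount point without the -n option reports the current disk layout version of that file system.
# /usr/sbin/vxdg list <disk_group_name> | grep version
# /opt/VRTS/bin/vxupgrade <vxfs_mount_point_name>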