Cluster Server 7.3.1 Agent for Oracle Installation and Configuration Guide - Solaris
- Introducing the Cluster Server agent for Oracle
- About the Cluster Server agent for Oracle
- How the agent makes Oracle highly available
- About Cluster Server agent functions for Oracle
- Oracle agent functions
- How the Oracle agent supports health check monitoring
- ASMInst agent functions
- Installing and configuring Oracle
- About VCS requirements for installing Oracle
- About Oracle installation tasks for VCS
- Installing ASM binaries for Oracle 11gR2 or 12c in a VCS environment
- Configuring Oracle ASM on the first node of the cluster
- Installing Oracle binaries on the first node of the cluster
- Installing and removing the agent for Oracle
- Configuring VCS service groups for Oracle
- Configuring Oracle instances in VCS
- Before you configure the VCS service group for Oracle
- Configuring the VCS service group for Oracle
- Setting up detail monitoring for VCS agents for Oracle
- Enabling and disabling intelligent resource monitoring for agents manually
- Administering VCS service groups for Oracle
- Pluggable database (PDB) migration
- Troubleshooting Cluster Server agent for Oracle
- Verifying the Oracle health check binaries and intentional offline for an instance of Oracle
- Appendix A. Resource type definitions
- Appendix B. Sample configurations
- Sample single Oracle instance configuration
- Sample multiple Oracle instances (single listener) configuration
- Sample multiple instance (multiple listeners) configuration
- Sample Oracle configuration with shared server support
- Sample configuration for Oracle instances in Solaris zones
- Sample Oracle ASM configurations
- Appendix C. Best practices
- Appendix D. Using the SPFILE in a VCS cluster for Oracle
- Appendix E. OHASD in a single instance database environment
About VCS requirements for installing Oracle
Make sure you meet the following requirements to install Oracle in a VCS cluster:
Kernel parameter configuration | Each node on which you want to install Oracle must meet the following Oracle configuration requirements:
See Oracle documentation for the corresponding operating system for specific requirement details. |
Location of the $ORACLE_HOME | Depending on your environment, you can place the Oracle home directory ($ORACLE_HOME) in one of the following ways:
If you want to use Oracle ASM, then you must place the Oracle home directory only on the local disks of each node. Review the advantages of each approach to make a decision. |
Configurations with multiple Oracle instances (SIDs) | You can have multiple Oracle instances that are defined in a single cluster. In such cases, the parameter file for each instance must be accessible on all the nodes in the service group's SystemList attribute. Note: If you installed multiple versions of Oracle on the same system, make sure that the SIDs are unique. |
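When multiple instances share a cluster, each instance gets its own Oracle resource with a unique Sid. A minimal main.cf sketch; the resource names, SIDs, and paths below are hypothetical examples:

```
// Two Oracle resources with unique SIDs. The Pfile for each
// instance must be accessible on every node in the service
// group's SystemList.
Oracle Ora_db1 (
    Sid = ORA1
    Owner = oracle
    Home = "/u01/app/oracle/product/19.0.0/dbhome_1"
    Pfile = "/u01/app/oracle/admin/ORA1/pfile/initORA1.ora"
    )

Oracle Ora_db2 (
    Sid = ORA2
    Owner = oracle
    Home = "/u01/app/oracle/product/19.0.0/dbhome_1"
    Pfile = "/u01/app/oracle/admin/ORA2/pfile/initORA2.ora"
    )
```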
Location of Oracle database tablespaces | If you plan to create the tablespaces using regular (UFS or VxFS) files, the file systems that contain these files must be located on shared disks. Create the same file system mount points on each node. If you use raw devices on shared disks for Oracle tablespaces, you must meet the following requirements:
For example, if you use Veritas Volume Manager, type:

# vxedit -g diskgroup_name set group=dba \
user=oracle mode=660 volume_name

Note: The user oracle and the group dba must be local users and groups, not Network Information Service (NIS or NIS+) users. |
Location of core files for Oracle processes that terminate abnormally |
The VCS agent framework sets the current directory to /opt/VRTSagents/ha/bin/Oracle before it runs the Oracle agent scripts or the programs that execute the Oracle binaries. The Oracle binaries, which run as the user oracle, do not have permission to write to /opt/VRTSagents/ha/bin/Oracle. So, any core files that the Oracle binaries generate when the processes terminate abnormally are lost. Veritas recommends using the coreadm(1M) command on Solaris to specify the name and the location of such core files. |
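A sketch of such a coreadm configuration, assuming a hypothetical /var/cores directory that already exists and is writable; run as root on Solaris:

```
# Store global core dumps in /var/cores, named after the
# executable (%f) and process ID (%p).
coreadm -g /var/cores/core.%f.%p -e global
```

With this pattern in place, core files from abnormally terminating Oracle processes land in /var/cores instead of being lost in the agent's working directory.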
Transparent listener failover | You can enable Oracle Server clients to reconnect after a node switch without any reconfiguration. For such reconnections, you must include at least one IP resource in the service group for the Oracle resource. The host name that maps to the IP address of this resource must be used for the Host field in the file $TNS_ADMIN/listener.ora. If you use the TCP/IP protocol for Oracle client/server communication, verify that the file /etc/services contains the service name of the Oracle Net Service. You must verify this file on each node that is defined in the service group's SystemList attribute. |
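A listener.ora sketch illustrating this requirement; the virtual host name oraprod-vip is a hypothetical example that would map to the IP resource in the service group:

```
# $TNS_ADMIN/listener.ora -- HOST must be the virtual host name
# that maps to the service group's IP resource, not a physical
# node name, so clients reconnect transparently after failover.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oraprod-vip)(PORT = 1521))
    )
  )
```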
Listener authentication in VCS environment |
The Netlsnr agent supports OS authentication as well as password authentication for the listener process. If you use Oracle 10g or later, Veritas recommends that you configure OS authentication. If you want to configure a listener password, make sure that you configure the password correctly. A misconfigured password can cause the listener to fault. See Encrypting Oracle database user and listener passwords. Refer to the Oracle documentation for details on configuring listener authentication. |
Long pathname limitation for $ORACLE_HOME |
The Solaris process table limits process pathnames to 79 characters. The full pathname of processes in $ORACLE_HOME can exceed this limit. In that case, you can create a soft link to the $ORACLE_HOME directory and use the soft link in place of the long pathname in the Home attribute in the main.cf file. See Replacing the long pathnames for $ORACLE_HOME in the agent attributes. |
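The workaround can be sketched as follows; the paths are hypothetical examples (a real $ORACLE_HOME would typically live under /opt or /u01):

```shell
# Create a short soft link to a long $ORACLE_HOME path so that
# process pathnames stay under the 79-character Solaris limit.
mkdir -p /tmp/app/oracle/product/19.0.0/dbhome_1
ln -sfn /tmp/app/oracle/product/19.0.0/dbhome_1 /tmp/orahome

# Then reference the link in the Oracle resource's Home
# attribute in main.cf:
#   Home = "/tmp/orahome"
ls -ld /tmp/orahome
```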
Oracle NLS information | You can define the NLS information in one of the following ways:
Defining the parameters in the Oracle parameters file affects NLS settings for the Oracle server. Defining the environment variables affects the NLS input and output of client utilities. |
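For the environment-variable approach, one option is a file sourced through the Oracle resource's EnvFile attribute. A sketch, assuming a hypothetical file /u01/app/oracle/envfile and example NLS values:

```
# Hypothetical environment file referenced by the Oracle
# resource's EnvFile attribute; these settings affect the NLS
# input and output of client utilities, not the server itself.
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_LANG NLS_DATE_FORMAT
```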
Hot backup of Oracle database in VCS environment |
The hot backup of an Oracle database is enabled by default in a VCS environment. A node can fail during a hot backup of an Oracle database. During such failures, VCS can fail over to another node only if the following requirements are met:
If you do not meet the VCS requirements, you must manually end the hot backup and then fail over Oracle to another node. Note: If a node fails during a hot backup of a container database or pluggable database for Oracle 12c, you must set the AutoEndBkup attribute of the corresponding CDB resource to 1. When AutoEndBkup is set to 1 for the CDB, it also ends the backup of both the CDB and the PDBs during the online operation. See Failing over Oracle after a VCS node failure during hot backup. Note: If you set the AutoEndBkup attribute value to 0, then to avoid unexpected VCS behavior you must enable detail monitoring. |
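In main.cf, the AutoEndBkup setting on a CDB resource looks like the following sketch; the resource name, SID, and path are hypothetical examples:

```
// CDB resource for Oracle 12c. AutoEndBkup = 1 ends a pending
// hot backup of the CDB and its PDBs during the online operation.
Oracle Ora_cdb (
    Sid = PRODCDB
    Owner = oracle
    Home = "/u01/app/oracle/product/12.1.0/dbhome_1"
    AutoEndBkup = 1
    )
```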
Storage devices for Oracle ASM configurations in VCS |
You can choose one of the following storage devices for Oracle ASM:
If you want to configure mirroring for ASM disks that use VxVM or CVM volumes, you must configure VxVM mirroring and not ASM mirroring. From Oracle 11gR2 or 12c, the ASMInst agent does not support a pfile or spfile for ASM instances on the ASM disk groups. Veritas recommends that you copy this file from the ASM disk group to the local file system. |
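Copying the spfile out of the disk group can be done with the asmcmd cp command; the disk group name, file name, and destination path below are hypothetical examples:

```
# As the grid/ASM owner, copy the ASM spfile from the disk group
# to the local file system (names shown are examples only).
$ asmcmd cp +DATA/ASM/ASMPARAMETERFILE/registry.253.123456789 \
      /u01/app/oracle/admin/+ASM/spfile+ASM.ora
```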
ASM instances configured on VxVM or CVM volumes in a Solaris zone environment | In a Solaris zone environment, you must do the following for the ASM instances that are configured on VxVM or CVM volumes:
|