Storage Foundation for Oracle® RAC 8.0.2 Configuration and Upgrade Guide - Linux
Last Published: 2023-06-05
Product(s): InfoScale & Storage Foundation (8.0.2)
Platform: Linux
Adding the storage resources to the VCS configuration
You need to add the CVMVolDg and CFSMount resources to the VCS configuration.
Note:
Set the "Critical" attribute to "0" for all the resources in the cvm service group. A fault in a non-critical resource does not cause VCS to fail over or take the cvm service group offline, so the core CVM and CFS resources remain online even if one of these resources faults.
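You can confirm the setting on a configured resource by querying the attribute value. For example, using the sample resource name from the main.cf excerpt later in this procedure:
# hares -value ocrvote_voldg_ocrvotedg Critical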
To add the storage resources created on CFS to the VCS configuration
- Change the permission on the VCS configuration file to read-write mode:
# haconf -makerw
- Configure the CVM volumes under VCS:
# hares -add ocrvotevol_resname CVMVolDg cvm_grpname
# hares -modify ocrvotevol_resname Critical 0
# hares -modify ocrvotevol_resname CVMDiskGroup ocrvote_dgname
# hares -modify ocrvotevol_resname CVMVolume -add ocrvote_volname
# hares -modify ocrvotevol_resname CVMActivation sw
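For example, using the sample names from the main.cf excerpt later in this procedure, and assuming the CVM service group is named cvm:
# hares -add ocrvote_voldg_ocrvotedg CVMVolDg cvm
# hares -modify ocrvote_voldg_ocrvotedg Critical 0
# hares -modify ocrvote_voldg_ocrvotedg CVMDiskGroup ocrvotedg
# hares -modify ocrvote_voldg_ocrvotedg CVMVolume -add ocrvotevol
# hares -modify ocrvote_voldg_ocrvotedg CVMActivation sw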
- Set up the file system under VCS:
# hares -add ocrvotemnt_resname CFSMount cvm_grpname
# hares -modify ocrvotemnt_resname Critical 0
# hares -modify ocrvotemnt_resname MountPoint ocrvote_mnt
# hares -modify ocrvotemnt_resname BlockDevice \
/dev/vx/dsk/ocrvote_dgname/ocrvote_volname
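For example, with the same sample names:
# hares -add ocrvote_mnt_ocrvotedg CFSMount cvm
# hares -modify ocrvote_mnt_ocrvotedg Critical 0
# hares -modify ocrvote_mnt_ocrvotedg MountPoint "/ocrvote"
# hares -modify ocrvote_mnt_ocrvotedg BlockDevice \
"/dev/vx/dsk/ocrvotedg/ocrvotevol"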
- Link the parent and child resources:
# hares -link ocrvotevol_resname cvm_clus
# hares -link ocrvotemnt_resname ocrvotevol_resname
# hares -link ocrvotemnt_resname vxfsckd
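You can check that the dependency links are in place by listing the dependencies of a resource. For example, with the sample resource names:
# hares -dep ocrvote_mnt_ocrvotedg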
- Enable the resources, then save the configuration and restore it to read-only mode:
# hares -modify ocrvotevol_resname Enabled 1
# hares -modify ocrvotemnt_resname Enabled 1
# haconf -dump -makero
- Verify the configuration of the CVMVolDg and CFSMount resources in the main.cf file.
For example:
CFSMount ocrvote_mnt_ocrvotedg (
    Critical = 0
    MountPoint = "/ocrvote"
    BlockDevice = "/dev/vx/dsk/ocrvotedg/ocrvotevol"
    )

CVMVolDg ocrvote_voldg_ocrvotedg (
    Critical = 0
    CVMDiskGroup = ocrvotedg
    CVMVolume = { ocrvotevol }
    CVMActivation = sw
    )

ocrvote_mnt_ocrvotedg requires ocrvote_voldg_ocrvotedg
ocrvote_mnt_ocrvotedg requires vxfsckd
ocrvote_voldg_ocrvotedg requires cvm_clus
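In addition to reviewing the main.cf file, you can have VCS syntax-check the configuration. For example, assuming the default configuration directory:
# hacf -verify /etc/VRTSvcs/conf/config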
- Bring the CFSMount and CVMVolDg resources online on all systems in the cluster:
# hares -online ocrvotevol_resname -sys node_name
# hares -online ocrvotemnt_resname -sys node_name
Verify that the resources are online on all systems in the cluster:
# hares -state ocrvotevol_resname
# hares -state ocrvotemnt_resname
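For a cluster-wide summary of service group and resource states, you can also run:
# hastatus -sum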