Storage Foundation for Oracle® RAC 7.3.1 Administrator's Guide - Linux
Last Published: 2018-01-16
Product(s): InfoScale & Storage Foundation (7.3.1)
Adding disks from a recovered site to the coordinator disk group
In a campus cluster environment, consider a case where the primary site goes down and the secondary site comes online with a limited set of disks. When the primary site is restored, the primary site's disks are again available to act as coordinator disks. You can use the vxfenswap utility to add these disks to the coordinator disk group.
To add new disks from a recovered site to the coordinator disk group
- Make sure system-to-system communication is functioning properly. (A supplementary interconnect check is sketched after the fencing status output below.)
- Make sure that the cluster is online.
# vxfenadm -d
I/O Fencing Cluster Information:
================================
 Fencing Protocol Version: 201
 Fencing Mode: SCSI3
 Fencing SCSI3 Disk Policy: dmp
 Cluster Members:
   * 0 (sys1)
     1 (sys2)
 RFSM State Information:
   node 0 in state 8 (running)
   node 1 in state 8 (running)
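As a supplementary check for the first two steps, the private interconnect and GAB membership can also be inspected with the LLT and GAB utilities. This is a minimal sketch; the exact output depends on your configuration:

# lltstat -n
# gabconfig -a

The lltstat -n output should list all cluster nodes with their LLT links up, and the gabconfig -a output should show all nodes in the membership for GAB port a and the fencing port b.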
- Verify the name of the coordinator disk group.
# cat /etc/vxfendg
vxfencoorddg
- Run the following command:
# vxdisk -o alldgs list
DEVICE   TYPE           DISK   GROUP            STATUS
sdx      auto:cdsdisk   -      (vxfencoorddg)   online
sdy      auto           -      -                offline
sdz      auto           -      -                offline
- Verify the number of disks used in the coordinator disk group.
# vxfenconfig -l
I/O Fencing Configuration Information:
======================================
 Count                    : 1
 Disk List
 Disk Name          Major   Minor   Serial Number         Policy
 /dev/vx/rdmp/sdx    32      48     R450 00013154 0312    dmp
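Before you start the vxfenswap utility in the next step, the disks from the recovered site must be members of the coordinator disk group. The following is a minimal sketch of one way to add them with standard Veritas Volume Manager commands; the disk media names vxfencoorddg02 and vxfencoorddg03 are assumptions for illustration, and the disks may already be initialized in your environment. If the disk group was created with the coordinator attribute set, you may also need to turn that attribute off while you add the disks and turn it back on afterwards.

Initialize the recovered disks for VxVM use:

# vxdisksetup -i sdy
# vxdisksetup -i sdz

Temporarily import the coordinator disk group on one node and add the disks:

# vxdg -tfC import vxfencoorddg
# vxdg -g vxfencoorddg adddisk vxfencoorddg02=sdy
# vxdg -g vxfencoorddg adddisk vxfencoorddg03=sdz

Deport the disk group again before running vxfenswap:

# vxdg deport vxfencoorddg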
- When the primary site comes online, start the vxfenswap utility on any node in the cluster:
# vxfenswap -g vxfencoorddg [-n]
- Verify the count of the coordinator disks.
# vxfenconfig -l
I/O Fencing Configuration Information:
======================================
 Single Disk Flag         : 0
 Count                    : 3
 Disk List
 Disk Name          Major   Minor   Serial Number         Policy
 /dev/vx/rdmp/sdx    32      48     R450 00013154 0312    dmp
 /dev/vx/rdmp/sdy    32      32     R450 00013154 0313    dmp
 /dev/vx/rdmp/sdz    32      16     R450 00013154 0314    dmp
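As a final sanity check (not part of the original step list), you can re-run the same fencing status query used at the start of this procedure on each node:

# vxfenadm -d

All cluster members should still be listed, and the RFSM state for every node should remain 8 (running), confirming that fencing stayed online while the coordination points were updated.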