Storage Foundation for Oracle® RAC 7.3.1 Administrator's Guide - Linux
Last Published: 2018-01-16
Product(s): InfoScale & Storage Foundation (7.3.1)
Testing the coordinator disk group using the -c option of vxfentsthdw
Use the vxfentsthdw utility to verify that disks are configured to support I/O fencing. In this procedure, the vxfentsthdw utility tests three disks, one disk at a time from each node.
The procedure in this section uses the following disks as examples:

From the node sys1, the disks are seen as /dev/sdg, /dev/sdh, and /dev/sdi.

From the node sys2, the same disks are seen as /dev/sdx, /dev/sdy, and /dev/sdz.
Note:
To test the coordinator disk group, the vxfentsthdw utility requires that the coordinator disk group, vxfencoorddg, be accessible from two nodes.
To test the coordinator disk group using vxfentsthdw -c
- Use the vxfentsthdw command with the -c option. For example:
# vxfentsthdw -c vxfencoorddg
- Enter the nodes you are using to test the coordinator disks:
Enter the first node of the cluster: sys1
Enter the second node of the cluster: sys2
- Review the output of the testing process for both nodes for all disks in the coordinator disk group. Each disk should display output that resembles:
ALL tests on the disk /dev/sdg have PASSED.
The disk is now ready to be configured for I/O Fencing on node sys1
as a COORDINATOR DISK.

ALL tests on the disk /dev/sdx have PASSED.
The disk is now ready to be configured for I/O Fencing on node sys2
as a COORDINATOR DISK.
- After you test all disks in the disk group, the vxfencoorddg disk group is ready for use.
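Putting the steps together, a complete session might resemble the following console transcript. This is an illustrative sketch assembled from the example command, node names, and disks used above; the exact prompts, intermediate messages, and device names depend on your cluster, and portions of the output are elided:

```
# vxfentsthdw -c vxfencoorddg

Enter the first node of the cluster: sys1
Enter the second node of the cluster: sys2

...

ALL tests on the disk /dev/sdg have PASSED.
The disk is now ready to be configured for I/O Fencing on node sys1
as a COORDINATOR DISK.

ALL tests on the disk /dev/sdx have PASSED.
The disk is now ready to be configured for I/O Fencing on node sys2
as a COORDINATOR DISK.
```

Repeat the review for each remaining disk in the vxfencoorddg disk group; every disk must report PASSED on both nodes before the group is used for fencing.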