Storage Foundation for Sybase ASE CE 7.4 Administrator's Guide - Linux
- Overview of Storage Foundation for Sybase ASE CE
- About Storage Foundation for Sybase ASE CE
- About SF Sybase CE components
- About optional features in SF Sybase CE
- Administering SF Sybase CE and its components
- Administering SF Sybase CE
- Starting or stopping SF Sybase CE on each node
- Administering VCS
- Administering I/O fencing
- About the vxfentsthdw utility
- Testing the coordinator disk group using the -c option of vxfentsthdw
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- Administering CVM
- Changing the CVM master manually
- Administering CFS
- Administering the Sybase agent
- Troubleshooting SF Sybase CE
- About troubleshooting SF Sybase CE
- Troubleshooting I/O fencing
- Fencing startup reports preexisting split-brain
- Troubleshooting Cluster Volume Manager in SF Sybase CE clusters
- Troubleshooting interconnects
- Troubleshooting Sybase ASE CE
- Prevention and recovery strategies
- Managing SCSI-3 PR keys in SF Sybase CE cluster
- Tunable parameters
- Appendix A. Error messages
Testing the shared disks using the vxfentsthdw -m option
Review the procedure to test the shared disks. By default, the utility uses the -m option.
The steps in this procedure use the disk diskpath_a as an example.
If the utility does not show a message stating a disk is ready, verification has failed. Failure of verification can be the result of an improperly configured disk array. It can also be caused by a bad disk.
If the failure is due to a bad disk, remove and replace it. The vxfentsthdw utility indicates a disk can be used for I/O fencing with a message resembling:
The disk diskpath_a is ready to be configured for I/O Fencing on node system1
Note:
For A/P arrays, run the vxfentsthdw command only on active enabled paths.
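To see which paths to a device are active and enabled, you can list the subpaths for its DMP node with the vxdmpadm getsubpaths command. A minimal check, assuming the example device from this procedure has the DMP node name sdr; active paths appear as ENABLED(A) in the STATE column:
# vxdmpadm getsubpaths dmpnodename=sdr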
To test disks using the vxfentsthdw script
- Make sure system-to-system communication is functioning properly.
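For example, you can confirm that each node can run a remote command on the other without a password prompt. A quick check, assuming ssh is the communication method in use and the node names from this procedure:
# ssh system2 hostname
The command should print the remote node's host name without prompting for a password. If you run the utility with the -n option, perform the same check with rsh instead.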
- From one node, start the utility.
# vxfentsthdw [-n]
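The -n option tells the utility to use rsh instead of the default ssh for system-to-system communication. If the disk you are testing contains data that you cannot risk overwriting, the utility also supports non-destructive testing; a sketch, assuming the -r option documented for vxfentsthdw:
# vxfentsthdw -r -m
With -r, the utility verifies SCSI-3 persistent reservation support in read-only mode and does not write to the disk.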
- After you review the overview and the warning that the tests overwrite data on the disks, confirm that you want to continue, and enter the node names.
******** WARNING!!!!!!!! ********
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!
Do you still want to continue : [y/n] (default: n) y
Enter the first node of the cluster: system1
Enter the second node of the cluster: system2
Enter the names of the disks you are checking. For each node, the disk may be known by the same name:
Enter the disk name to be checked for SCSI-3 PGR on node system1 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure it's the same disk as seen by nodes system1 and system2
/dev/sdr
Enter the disk name to be checked for SCSI-3 PGR on node system2 in the format:
for dmp: /dev/vx/rdmp/sdx
for raw: /dev/sdx
Make sure it's the same disk as seen by nodes system1 and system2
/dev/sdr
If the serial numbers of the disks are not identical, then the test terminates.
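To confirm the serial numbers before you start the test, you can use the vxfenadm -i option, which displays the SCSI inquiry data for a device. A quick check, assuming the /dev/sdr path from this procedure; run it on each node and compare the Serial Number fields:
# vxfenadm -i /dev/sdr
Both nodes should report the same serial number for the same physical disk.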
- Review the output as the utility performs the checks and reports its activities.
- If a disk is ready for I/O fencing on each node, the utility reports success:
ALL tests on the disk diskpath_a have PASSED
The disk is now ready to be configured for I/O Fencing on node system1
...
Removing test keys and temporary files, if any ...
.
.
- Run the vxfentsthdw utility for each disk you intend to verify.
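When you have many disks to verify, rerunning the interactive prompts for each one becomes tedious; the utility can instead read the node and disk names from a text file. A sketch, assuming the -f option documented for vxfentsthdw and a hypothetical file /tmp/disks_to_test that lists, on each line, the two nodes followed by the disk path each node uses:
# cat /tmp/disks_to_test
system1 /dev/sdr system2 /dev/sdr
system1 /dev/sds system2 /dev/sds
# vxfentsthdw -f /tmp/disks_to_test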