Cluster Server 7.4.1 Administrator's Guide - Linux
Removing preexisting keys
If you encountered a split-brain condition, use the vxfenclearpre utility to remove the SCSI-3 registrations and reservations on the coordinator disks and on the data disks in all shared disk groups, as well as the registrations on the Coordination Point (CP) servers.
You can also use this procedure to remove the registration and reservation keys that another node or other nodes created on the shared disks or on the CP server.
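Before you clear the keys, you can optionally list the registrations that currently exist on the coordinator disks by using the vxfenadm utility. For example, assuming that the coordinator disks are listed in the standard /etc/vxfentab file:
# vxfenadm -s all -f /etc/vxfentab
The output shows the SCSI-3 registration keys on each coordinator disk, which can help you confirm which nodes still hold registrations before you run vxfenclearpre.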
To clear keys after split-brain
- Stop VCS on all nodes.
# hastop -all
- Make sure that port h is closed on all the nodes. Run the following command on each node to verify that port h is closed:
# gabconfig -a
Port h must not appear in the output.
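For example, on a node where VCS is stopped but GAB and the fencing module are still running, the output resembles the following; the generation numbers and membership values shown here are illustrative:
GAB Port Memberships
===============================================================
Port a gen a36e0003 membership 01
Port b gen a36e0006 membership 01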
- Stop I/O fencing on all nodes. Enter the following command on each node:
For RHEL 7, SLES 12, and supported RHEL distributions:
# systemctl stop vxfen
For earlier versions of RHEL, SLES, and supported RHEL distributions:
# /etc/init.d/vxfen stop
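As a supplemental check that is not part of the documented procedure, you can confirm that the vxfen kernel module is no longer loaded:
# lsmod | grep vxfen
No output indicates that the fencing module is unloaded.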
- If any applications that run outside of VCS control have access to the shared storage, shut down all other nodes in the cluster that have access to the shared storage. This prevents data corruption.
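For example, if the other cluster nodes are reachable over ssh, you might shut them down remotely; the node names sys2 and sys3 here are illustrative:
# ssh sys2 /sbin/shutdown -h now
# ssh sys3 /sbin/shutdown -h now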
- Start the vxfenclearpre script:
# /opt/VRTSvcs/vxfen/bin/vxfenclearpre
- Read the script's introduction and warning. Then, you can choose to let the script run.
Do you still want to continue: [y/n] (default : n) y
In some cases, informational messages resembling the following may appear on the console of one of the nodes in the cluster when a node is ejected from a disk/LUN. You can ignore these informational messages.
<date> <system name> scsi: WARNING: /sbus@3,0/lpfs@0,0/sd@0,1(sd91):
<date> <system name> Error for Command: <undecoded cmd 0x5f> Error Level: Informational
<date> <system name> scsi: Requested Block: 0 Error Block 0
<date> <system name> scsi: Vendor: <vendor> Serial Number: 0400759B006E
<date> <system name> scsi: Sense Key: Unit Attention
<date> <system name> scsi: ASC: 0x2a (<vendor unique code 0x2a>), ASCQ: 0x4, FRU: 0x0
The script cleans up the disks and displays the following status messages.
Cleaning up the coordinator disks...

Cleared keys from n out of n disks,
where n is the total number of disks.

Successfully removed SCSI-3 persistent registrations
from the coordinator disks.

Cleaning up the Coordination Point Servers...

...................
[10.209.80.194]:50001: Cleared all registrations
[10.209.75.118]:443: Cleared all registrations

Successfully removed registrations from the
Coordination Point Servers.

Cleaning up the data disks for all shared disk groups ...

Successfully removed SCSI-3 persistent registration and
reservations from the shared data disks.

See the log file /var/VRTSvcs/log/vxfen/vxfen.log

You can retry starting fencing module. In order to restart the whole
product, you might want to reboot the system.
- Start the fencing module on all the nodes.
For RHEL 7, SLES 12, and supported RHEL distributions:
# systemctl start vxfen
For earlier versions of RHEL, SLES, and supported RHEL distributions:
# /etc/init.d/vxfen start
- Start VCS on all nodes.
# hastart
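After VCS starts, you can verify that the stack restarted cleanly. Ports a, b, and h should again appear in the GAB port memberships, and the vxfenadm -d command reports the I/O fencing mode and cluster state; the exact output depends on your configuration:
# gabconfig -a
# vxfenadm -d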