Cluster Server 7.3.1 Configuration and Upgrade Guide - Solaris
Configuring CoordPoint agent to monitor coordination points
The following procedure describes how to manually configure the CoordPoint agent to monitor coordination points.
The CoordPoint agent can monitor CP servers and SCSI-3 disks.
See the Cluster Server Bundled Agents Reference Guide for more information on the agent.
To configure CoordPoint agent to monitor coordination points
- Ensure that your VCS cluster has been properly installed and configured with fencing enabled.
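If you want to confirm that fencing is enabled before you proceed, two quick checks (assuming a standard installation) are the fencing mode file and the GAB port memberships, where port b indicates the fencing module:
# cat /etc/vxfenmode
# gabconfig -a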
- Create a parallel service group named vxfen and add a CoordPoint resource named coordpoint to it using the following commands:
# haconf -makerw
# hagrp -add vxfen
# hagrp -modify vxfen SystemList sys1 0 sys2 1
# hagrp -modify vxfen AutoFailOver 0
# hagrp -modify vxfen Parallel 1
# hagrp -modify vxfen SourceFile "./main.cf"
# hares -add coordpoint CoordPoint vxfen
# hares -modify coordpoint FaultTolerance 0
# hares -override coordpoint LevelTwoMonitorFreq
# hares -modify coordpoint LevelTwoMonitorFreq 5
# hares -modify coordpoint Enabled 1
# haconf -dump -makero
- Configure a Phantom resource for the vxfen service group. The vxfen group contains no OnOff resources, so VCS requires the Phantom resource to report the group's status correctly.
# haconf -makerw
# hares -add RES_phantom_vxfen Phantom vxfen
# hares -modify RES_phantom_vxfen Enabled 1
# haconf -dump -makero
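Taken together, the two preceding steps produce a configuration roughly equivalent to the following main.cf fragment. This is a sketch for orientation only; attribute ordering in a real dumped main.cf may differ, and attributes left at their defaults may be omitted:
group vxfen (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoFailOver = 0
    Parallel = 1
    )

    CoordPoint coordpoint (
        FaultTolerance = 0
        LevelTwoMonitorFreq = 5
        )

    Phantom RES_phantom_vxfen (
        )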
- Verify the status of the agent on the VCS cluster using the hares command. The following is an example of the command and its output:
# hares -state coordpoint
#Resource      Attribute    System    Value
coordpoint     State        sys1      ONLINE
coordpoint     State        sys2      ONLINE
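For an additional cross-check, the fencing driver itself reports its mode and the registered coordination points. Assuming the fencing utilities are installed, run:
# vxfenadm -d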
- View the agent log, which is written to the VCS engine log.
The agent log contains detailed CoordPoint agent monitoring information, including whether the agent can access all of the coordination points and on which coordination points the agent reports missing keys.
To view the debug messages in the engine log, raise the debug level for the CoordPoint agent type on that node using the following commands:
# haconf -makerw
# hatype -modify CoordPoint LogDbg 10
# haconf -dump -makero
The agent log can now be viewed at the following location:
/var/VRTSvcs/log/engine_A.log
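Because the engine log also records messages from the engine and from other agents, one way to isolate the CoordPoint entries is a standard grep of the log file, for example:
# grep -i coordpoint /var/VRTSvcs/log/engine_A.log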
Note:
The CoordPoint agent always reports the online state when I/O fencing is configured in majority mode or in disabled mode. In both of these modes, I/O fencing has no coordination points to monitor, so the agent remains online.