Cluster Server 7.4.1 Administrator's Guide - Linux
- Section I. Clustering concepts and terminology
- Introducing Cluster Server
- About Cluster Server
- About cluster control guidelines
- About the physical components of VCS
- Logical components of VCS
- Types of service groups
- About resource monitoring
- Agent classifications
- About cluster control, communications, and membership
- About security services
- Components for administering VCS
- About cluster topologies
- VCS configuration concepts
- Section II. Administration - Putting VCS to work
- About the VCS user privilege model
- Administering the cluster from the command line
- About administering VCS from the command line
- About installing a VCS license
- Administering LLT
- Starting VCS
- Stopping the VCS engine and related processes
- Logging on to VCS
- About managing VCS configuration files
- About managing VCS users from the command line
- About querying VCS
- About administering service groups
- Modifying service group attributes
- About administering resources
- Enabling and disabling IMF for agents by using script
- Linking and unlinking resources
- About administering resource types
- About administering clusters
- Configuring applications and resources in VCS
- VCS bundled agents for UNIX
- Configuring NFS service groups
- About NFS
- Configuring NFS service groups
- Sample configurations
- About configuring the RemoteGroup agent
- About configuring Samba service groups
- About testing resource failover by using HA fire drills
- Predicting VCS behavior using VCS Simulator
- Section III. VCS communication and operations
- About communications, membership, and data protection in the cluster
- About cluster communications
- About cluster membership
- About membership arbitration
- About membership arbitration components
- About server-based I/O fencing
- About majority-based fencing
- About the CP server service group
- About secure communication between the VCS cluster and CP server
- About data protection
- Examples of VCS operation with I/O fencing
- About cluster membership and data protection without I/O fencing
- Examples of VCS operation without I/O fencing
- Administering I/O fencing
- About the vxfentsthdw utility
- Testing the coordinator disk group using the -c option of vxfentsthdw
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- About administering the coordination point server
- About configuring a CP server to support IPv6 or dual stack
- About migrating between disk-based and server-based fencing configurations
- Migrating between fencing configurations using response files
- Controlling VCS behavior
- VCS behavior on resource faults
- About controlling VCS behavior at the service group level
- About AdaptiveHA
- Customized behavior diagrams
- About preventing concurrency violation
- VCS behavior for resources that support the intentional offline functionality
- VCS behavior when a service group is restarted
- About controlling VCS behavior at the resource level
- VCS behavior on loss of storage connectivity
- Service group workload management
- Sample configurations depicting workload management
- The role of service group dependencies
- Section IV. Administration - Beyond the basics
- VCS event notification
- VCS event triggers
- Using event triggers
- List of event triggers
- Virtual Business Services
- Section V. Veritas High Availability Configuration wizard
- Introducing the Veritas High Availability Configuration wizard
- Administering application monitoring from the Veritas High Availability view
- Section VI. Cluster configurations for disaster recovery
- Connecting clusters–Creating global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Prerequisites for global clusters
- Setting up a global cluster
- About IPv6 support with global clusters
- About cluster faults
- About setting up a disaster recovery fire drill
- Test scenario for a multi-tiered environment
- Administering global clusters from the command line
- About global querying in a global cluster setup
- Administering clusters in global cluster setup
- Setting up replicated data clusters
- Setting up campus clusters
- Section VII. Troubleshooting and performance
- VCS performance considerations
- How cluster components affect performance
- How cluster operations affect performance
- VCS performance consideration when a system panics
- About scheduling class and priority configuration
- VCS agent statistics
- About VCS tunable parameters
- Troubleshooting and recovery for VCS
- VCS message logging
- Gathering VCS information for support analysis
- Troubleshooting the VCS engine
- Troubleshooting Low Latency Transport (LLT)
- Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
- Troubleshooting VCS startup
- Troubleshooting issues with systemd unit service files
- Troubleshooting service groups
- Troubleshooting resources
- Troubleshooting sites
- Troubleshooting I/O fencing
- Fencing startup reports preexisting split-brain
- Troubleshooting CP server
- Troubleshooting server-based fencing on the VCS cluster nodes
- Issues during online migration of coordination points
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting licensing
- Licensing error messages
- Troubleshooting secure configurations
- Troubleshooting wizard-based configuration issues
- Troubleshooting issues with the Veritas High Availability view
- Section VIII. Appendixes
To add or remove a failover system
Each row in the application table displays the status of an application on the systems that are part of a VCS cluster. The displayed systems either form a single-system Cluster Server (VCS) cluster, with application restart configured as a high availability measure, or a multi-system VCS cluster with application failover configured. In the displayed cluster, you can add a new system as a failover system for the configured application.
The system must fulfill the following conditions:
The system is not part of any other VCS cluster.
The system has at least two network adapters.
The host name of the system must be resolvable through the DNS server or, locally, using /etc/hosts file entries.
The required ports are not blocked by a firewall.
The application is installed identically on all the systems, including the proposed new system.
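The conditions above can be spot-checked from a shell before you run the wizard. The script below is an informal sketch, not a Veritas tool: the helper names are mine, and only the port range 49152 to 65535 and the two-adapter requirement come from this guide.

```shell
#!/bin/sh
# Informal pre-checks for a candidate failover system (Linux).
# These helpers are illustrative, not part of VCS.

# The host name must resolve through DNS or /etc/hosts.
resolves() { getent hosts "$1" >/dev/null; }

# The system needs at least two network adapters (loopback excluded).
nic_count() { ls /sys/class/net | grep -cv '^lo$'; }

# LLT-over-UDP ports must fall in the documented range 49152-65535.
in_port_range() { [ "$1" -ge 49152 ] && [ "$1" -le 65535 ]; }

in_port_range 50000 && echo "port 50000 is usable for an LLT link"
```

On a live system you would call `resolves` with the candidate host name and compare `nic_count` against 2; both rely only on standard Linux tooling (`getent`, `/sys/class/net`).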
Note:
The following procedure describes generic steps to add a failover system. The wizard automatically populates values for initially configured systems in some fields. These values are not editable.
To add a failover system
- In the appropriate row of the application table, click More > Add Failover System.
- Review the instructions on the welcome page of the Veritas High Availability Configuration Wizard, and click Next.
- To add a system that already appears in the Cluster systems list, on the Configuration Inputs panel, select the system, and use the Edit icon to specify an administrative user account on it. You can then move the system from the Cluster systems list to the Application failover targets list. Use the up and down arrow keys to set the order in which VCS fails over the application to the target systems.
If you want to specify a failover system that is not an existing cluster node, on the Configuration Inputs panel, click Add System, and in the Add System dialog box, specify the following details:
System Name or IP address
Specify the name or IP address of the system that you want to add to the VCS cluster.
User name
Specify the user name with administrative privileges on the system.
If you want to specify the same user account on all systems that you want to add, check the Use the specified user account on all systems box.
Password
Specify the password for the account you specified.
Use the specified user account on all systems
Click this check box to use the specified user credentials on all the cluster systems.
The wizard validates the details, and the system then appears in the Application failover targets list.
- If you are adding a failover system from the existing VCS cluster, the Network Details panel does not appear.
If you are adding a new failover system to the existing cluster, on the Network Details panel, review the networking parameters that the existing failover systems use, and modify the following parameters for the new failover system as needed.
Note:
The wizard automatically populates the networking protocol (UDP or Ethernet) used by the existing failover systems for Low Latency Transport communication. You cannot modify these settings.
To configure links over Ethernet, select a network adapter for each communication link. You must select a different network adapter for each link.
To configure links over UDP, specify the required details for each communication link.
Network Adapter
Select a network adapter for the communication links.
You must select a different network adapter for each communication link.
Veritas recommends that one of the network adapters be a public adapter, and that the VCS cluster communication link that uses this adapter be assigned a low priority.
Note:
Do not select a teamed network adapter or the independently listed adapters that are part of a teamed NIC.
IP Address
Select the IP address to be used for cluster communication over the specified UDP port.
Port
Specify a unique port number for each link. You can use ports in the range 49152 to 65535.
The specified port for a link is used for all the cluster systems on that link.
Subnet mask
Displays the subnet mask to which the specified IP belongs.
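For reference, the adapter, IP address, and port values collected here correspond to link directives in the LLT configuration file on each node. The fragment below is an illustrative /etc/llttab for two LLT-over-UDP links; the node name, cluster ID, ports, and addresses are placeholders, and the exact directive syntax for your release is documented in the llttab reference.

```
set-node sysC
set-cluster 101
link link1 udp - udp 50000 - 192.168.10.3 -
link link2 udp - udp 50001 - 192.168.11.3 -
```

Note that each link uses a distinct port, and that the port for a given link is the same on every cluster system, as described above.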
- If a virtual IP is not configured as part of your application monitoring configuration, the Virtual Network Details panel does not appear. Otherwise, on the Virtual Network Details panel, review the following networking parameters that the failover system must use, and specify the NIC:
Virtual IP address
Specifies a unique virtual IP address.
Subnet mask
Specifies the subnet mask to which the IP address belongs.
NIC
For each newly added system, specify the network adapter that must host the specified virtual IP.
- On the Configuration Summary panel, review the VCS cluster configuration summary, and then click Next to proceed with the configuration.
- On the Implementation panel, the wizard adds the specified system to the VCS cluster, if it is not already a member, and then adds it to the list of failover targets. The wizard displays a progress report of each task.
If the wizard displays an error, click View Logs to review the error description, troubleshoot the error, and re-run the wizard from the Veritas High Availability view.
Click Next.
- On the Finish panel, click Finish. This completes the procedure for adding a failover system. You can view the system in the appropriate row of the application table.
Similarly, you can remove a system from the list of application failover targets.
Note:
You cannot remove a failover system if an application is online or partially online on the system.
To remove a failover system
- In the appropriate row of the application table, click More > Remove Failover System.
- On the Remove Failover System panel, click the system that you want to remove from the monitoring configuration, and then click OK.
Note:
This procedure only removes the system from the list of failover target systems, not from the VCS cluster. To remove a system from the cluster, use VCS commands. For details, see the VCS Administrator's Guide.
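The VCS commands that the note refers to can be sketched as follows. This is an illustrative dry run, not the wizard's actual implementation: it only prints the command sequence, and the group name AppSG and system name sysC are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of removing a failover target with the VCS CLI.
# "AppSG" and "sysC" are placeholder names, not values from this guide.
group="AppSG"
sys="sysC"

plan() {
  echo "haconf -makerw"                               # open the configuration for writes
  echo "hagrp -modify $group SystemList -delete $sys" # drop the system as a failover target
  echo "haconf -dump -makero"                         # save and reclose the configuration
}

plan
```

Removing the node from the cluster itself is a separate operation; the VCS Administrator's Guide covers commands such as `hastop -sys` and `hasys -delete` for that purpose.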