InfoScale™ 9.0 Cluster Server Administrator's Guide - Windows
Configuring the cluster using the Cluster Configuration Wizard
After installing the software, set up the components required to run Cluster Server. The VCS Cluster Configuration Wizard (VCW) sets up the cluster infrastructure, including LLT and GAB, and configures the user account for the VCS Helper service. It also provides an option to configure the VCS Authentication Service in the cluster. In addition, the wizard configures the ClusterService group, which contains resources for notification and global clusters (GCO). You can also use VCW to modify or delete cluster configurations.
Note:
After configuring the cluster you must not change the names of the nodes that are part of the cluster. If you wish to change a node name, run VCW to remove the node from the cluster, rename the system, and then run VCW again to add that system to the cluster.
Note the following prerequisites before you proceed:
The required network adapters (NICs) and SCSI controllers are installed and connected to each system.
Arctera recommends the following actions for network adapters:
Disable the Ethernet auto-negotiation options on the private NICs to prevent:
Loss of heartbeats on the private networks
VCS from mistakenly declaring a system as offline
Contact the NIC manufacturer for details on this process.
Remove TCP/IP from the private NICs to lower system overhead.
Verify that the public network adapters on each node use static IP addresses (DHCP is not supported) and that name resolution is configured for each node. For a sample command-line check, see the example after this list.
Arctera recommends that you use three network adapters (two NICs exclusively for the VCS private network and one for the public network) per system. You can implement the second private link as a low-priority link over a public interface. Route each private NIC through a separate hub or switch to avoid single points of failure.
Note:
If you wish to use Windows NIC teaming, you must select the Static Teaming mode; it is the only teaming mode currently supported.
Use independent hubs or switches for each VCS communication network (GAB and LLT). You can use cross-over Ethernet cables for two-node clusters. GAB supports hub-based or switch-based network paths, or two-system clusters with direct network links.
Verify the DNS settings for all systems on which the application is installed and ensure that the public adapter is the first adapter in the Connections list.
When enabling DNS name resolution, make sure that you use the public network adapters, and not those configured for the VCS private network.
The logged on user must have local Administrator privileges on the system where you run the wizard. The user account must be a domain user account.
The logged on user must have administrative access to all systems selected for cluster operations. Add the domain user account to the local Administrators group of each system.
If you plan to create a new user account for the VCS Helper service, the logged on user must have Domain Administrator privileges or must belong to the Domain Account Operators group.
When configuring a user account for the Veritas VCS Helper service, make sure that the user account is a domain user. The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.
Make sure the VCS Helper service domain user account has "Add workstations to domain" privilege enabled in the Active Directory.
Verify that each system can access the storage devices and recognizes the attached shared disks.
Use Windows Disk Management on each system to verify that the attached shared LUNs (virtual disks) are visible.
If you plan to set up a disaster recovery (DR) environment, you must configure the wide-area connector process for global clusters.
If you are setting up a Replicated Data Cluster configuration, add only the systems in the primary zone (zone 0) to the cluster, at this time.
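Many of these prerequisites can be spot-checked from an elevated command prompt on each system before you start the wizard. The following is only an illustrative sketch; SYSTEM2 is a hypothetical node name.

    rem Confirm that the public adapter uses a static IP address
    rem (the adapter must show "DHCP Enabled . . . : No")
    ipconfig /all

    rem Confirm that name resolution works for each cluster node
    nslookup SYSTEM2

    rem Confirm that the attached shared LUNs (virtual disks) are visible
    rem (run "list disk" at the DISKPART> prompt, then "exit")
    diskpart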
To configure a VCS cluster using the wizard
1. Start the VCS Cluster Configuration Wizard from the Apps menu on the Start screen.
2. Read the information on the Welcome panel and click Next.
3. On the Configuration Options panel, click Cluster Operations and click Next.
4. On the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.
To discover information about all systems and users in the domain, do the following:
Clear Specify systems and users manually.
Click Next.
Proceed to step 8.
To specify systems and user names manually (recommended for large domains), do the following:
Select Specify systems and users manually.
Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting the appropriate check boxes.
Click Next.
If you chose to retrieve the list of systems, proceed to step 6. Otherwise, proceed to the next step.
5. On the System Selection panel, type the name of each system to be added, click Add, and then click Next.
Do not specify systems that are part of another cluster.
Proceed to step 8.
6. On the System Selection panel, specify the systems for the cluster and then click Next.
Do not select systems that are part of another cluster.
Enter the name of the system and click Add to add it to the Selected Systems list, or select the system in the Domain Systems list and then click the > (right-arrow) button.
7. The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier. Review the status and then click Next.
Select the system to see the validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again.
A system can be rejected for any of the following reasons:
System is not pingable.
WMI access is disabled on the system.
Wizard is unable to retrieve the system architecture or operating system.
Product is either not installed or there is a version mismatch.
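If a system is rejected, you can usually reproduce the cause from a command prompt before rerunning the wizard. This sketch is illustrative only; SYSTEM2 is a hypothetical node name.

    rem Check that the system responds to ping
    ping SYSTEM2

    rem Check WMI access by querying the remote operating system
    rem and architecture (this also fails if WMI is disabled)
    wmic /node:"SYSTEM2" os get Caption,OSArchitecture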
8. On the Cluster Configuration Options panel, click Create New Cluster and then click Next.
9. On the Cluster Details panel, specify the details for the cluster and then click Next.
Specify the cluster details as follows:
Cluster Name
Type a name for the new cluster. Arctera recommends a maximum length of 32 characters for the cluster name.
Cluster ID
Select a cluster ID from the suggested cluster IDs in the drop-down list, or type a unique ID for the cluster. The cluster ID can be any number from 0 to 65535.
Note:
If you chose to specify systems and users manually in step 4 or if you share a private network between more than one domain, make sure that the cluster ID is unique.
Operating System
From the drop-down list, select the operating system.
All the systems in the cluster must have the same operating system and architecture.
Available Systems
Select the systems that you wish to configure in the cluster.
Check the Select all systems check box to select all the systems simultaneously.
The wizard discovers the NICs on the selected systems. For single-node clusters with the required number of NICs, the wizard prompts you to configure a private link heartbeat. In the dialog box, click Yes to configure a private link heartbeat.
10. The wizard validates the selected systems for cluster membership. After the systems are validated, click Next.
If a system is not validated, review the message associated with the failure and restart the wizard after rectifying the problem.
If you chose to configure a private link heartbeat in step 9, proceed to the next step. Otherwise, proceed to step 12.
11. On the Private Network Configuration panel, configure the VCS private network and then click Next. You can configure the VCS private network either over Ethernet or over the User Datagram Protocol (UDP) layer, using an IPv4 or IPv6 network.
Do one of the following:
Select Configure LLT over Ethernet.
Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links.
Arctera recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one of the NICs and use the low-priority NIC for both public and private communication.
If there are only two NICs on a selected system, Arctera recommends that you lower the priority of at least one NIC that will be used for private as well as public network communication.
To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N" where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC, formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Arctera recommends that you do not select teamed NICs for the private network.
The wizard configures the LLT service (over Ethernet) on the selected network adapters.
To configure the VCS private network over the User Datagram Protocol (UDP) layer, complete the following steps:
Select Configure LLT over UDP on IPv4 network or Configure LLT over UDP on IPv6 network depending on the IP protocol that you wish to use.
The IPv6 option is disabled if the network does not support IPv6.
Select the check boxes next to the NICs to be assigned to the private network. You can assign a maximum of eight network links. Arctera recommends reserving two NICs exclusively for the VCS private network.
For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In case of IPv4, each IP address can be in a different subnet.
The IP address is used for the VCS private communication over the specified UDP port.
Specify a unique UDP port for each of the links. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. Click OK.
For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.
The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.
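LLT over UDP requires that the specified ports are open between the cluster nodes. If the Windows Firewall is enabled, rules similar to the following (a sketch assuming the default ports 50000 and 50001) would allow the heartbeat traffic on each node:

    rem Allow inbound LLT heartbeat traffic on the default UDP ports
    netsh advfirewall firewall add rule name="VCS LLT Link1" dir=in action=allow protocol=UDP localport=50000
    netsh advfirewall firewall add rule name="VCS LLT Link2" dir=in action=allow protocol=UDP localport=50001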
12. On the VCS Helper Service User Account panel, specify the name of a domain user for the VCS Helper service.
The Veritas High Availability Engine (HAD), which runs in the context of the local system built-in account, uses the Veritas VCS Helper service user context to access the network. This account does not require Domain Administrator privileges.
Specify the domain user details as follows:
To specify an existing user, do one of the following:
Click Existing user and select a user name from the drop-down list.
If you chose not to retrieve the list of users in step 4, type the user name in the Specify User field and then click Next.
To specify a new user, click New user and type a valid user name in the Create New User field and then click Next.
Do not append the domain name to the user name; do not type the user name as Domain\user or user@domain.
In the Password dialog box, type the password for the specified user and click OK, and then click Next.
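Before you specify an existing account, you can confirm that it is a domain account from a command prompt. For example, where VCSHelperSvc is a hypothetical account name:

    rem Query the domain controller for the account details
    net user VCSHelperSvc /domain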
13. On the Configure Security Service Option panel, specify security options for the cluster communications and then click Next.
Do one of the following:
To use VCS cluster user privileges, click Use VCS User Privileges and then type a user name and password.
The wizard configures this user as a VCS Cluster Administrator. In this mode, communication between cluster nodes and clients, including Cluster Manager (Java Console), occurs using the encrypted VCS cluster administrator credentials. The wizard uses the VCSEncrypt utility to encrypt the user password.
The default user name for the VCS administrator is admin and the password is password. Both are case-sensitive. You can accept the default user name and password for the VCS administrator account or type a new name and password.
Arctera recommends that you specify a new user name and password.
To use the single sign-on feature, click Use Single Sign-on.
In this mode, the VCS Authentication Service is used to secure communication between cluster nodes and clients by using digital certificates for authentication and SSL to encrypt communication over the public network. VCS uses SSL encryption and platform-based authentication. The Veritas High Availability Engine (HAD) and Veritas Command Server run in secure mode.
The wizard configures all the cluster nodes as root brokers (RB) and authentication brokers (AB). Authentication brokers serve as intermediate registration and certification authorities. Authentication brokers have certificates signed by the root. These brokers can authenticate clients such as users and services. The wizard creates a copy of the certificates on all the cluster nodes.
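If you accept the default VCS administrator credentials for now, you can replace them from the command line once the cluster is running. A minimal sketch, where newadmin is a hypothetical user name; see also the chapter on administering the cluster from the command line:

    rem Make the cluster configuration writable
    haconf -makerw

    rem Add a new cluster administrator (you are prompted for a password)
    hauser -add newadmin -priv Administrator

    rem Remove the default admin account after verifying the new one
    hauser -delete admin

    rem Save the configuration and make it read-only again
    haconf -dump -makero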
14. Review the summary information on the Summary panel, and click Configure.
The wizard configures the VCS private network. If the selected systems have LLT or GAB configuration files, the wizard displays an informational dialog box before overwriting the files. In the dialog box, click OK to overwrite the files. Otherwise, click Cancel, exit the wizard, move the existing files to a different location, and rerun the wizard.
The wizard starts running commands to configure VCS services. If an operation fails, click View configuration log file to see the log.
15. On the Completing Cluster Configuration panel, click Next to configure the ClusterService group; this group is required to set up components for notification and for global clusters.
To configure the ClusterService group later, click Finish.
At this stage, the wizard has collected the information required to set up the cluster configuration. After the wizard completes its operations, with or without the ClusterService group components, the cluster is ready to host application service groups. The wizard also starts the VCS engine (HAD) and the Veritas Command Server at this stage.
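You can verify from a command prompt on any node that the engine and the cluster are running, for example:

    rem Display the state of the VCS engine on each node
    hasys -state

    rem Display a summary of cluster, system, and service group states
    hastatus -summary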
16. On the Cluster Service Components panel, select the components to be configured in the ClusterService group and then click Next.
Do the following:
Check the Notifier Option check box to configure notification of important events to designated recipients.
Check the GCO Option check box to configure the wide-area connector (WAC) process for global clusters. The WAC process is required for inter-cluster communication.
Configure the GCO Option using this wizard only if you are configuring a Disaster Recovery (DR) environment and are not using the Disaster Recovery wizard.
Alternatively, you can configure the GCO Option using the DR wizard. The Disaster Recovery chapters in the application solutions guides discuss how to use the Disaster Recovery wizard to configure the GCO option.
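After the wizard completes, the ClusterService group appears in the cluster configuration file (main.cf). The following fragment is a simplified, hypothetical illustration of a ClusterService group configured with the Notifier option; the system names, SMTP server, and recipient are placeholders, and the exact resources depend on the options you select:

    group ClusterService (
        SystemList = { SYSTEM1 = 0, SYSTEM2 = 1 }
        AutoStartList = { SYSTEM1, SYSTEM2 }
        )

        NotifierMngr VCSNotifySvc (
            SmtpServer = "smtp.example.com"
            SmtpRecipients = { "vcsadmin@example.com" = SevereError }
            )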