InfoScale™ 9.0 Storage Foundation and High Availability Solutions Solutions Guide - Windows
- Section I. Introduction
- Introducing Storage Foundation and High Availability Solutions
- Using the Solutions Configuration Center
- SFW best practices for storage
- Section II. Quick Recovery
- Section III. High Availability
- High availability: Overview
- How VCS monitors storage components
- Deploying InfoScale Enterprise for high availability: New installation
- Notes and recommendations for cluster and application configuration
- Configuring disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- About modifying the cluster configuration
- About installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- About configuring the Oracle service group using the wizard
- Modifying the application service groups
- Adding DMP to a clustering configuration
- Section IV. Campus Clustering
- Introduction to campus clustering
- Deploying InfoScale Enterprise for campus cluster
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the cluster using the Cluster Configuration Wizard
- Creating disk groups and volumes
- Installing the application on cluster nodes
- Section V. Replicated Data Clusters
- Introduction to Replicated Data Clusters
- Deploying Replicated Data Clusters: New application installation
- Notes and recommendations for cluster and application configuration
- Configuring the cluster using the Cluster Configuration Wizard
- Configuring disk groups and volumes
- Installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- Configuring a RVG service group for replication
- Configuring the resources in the RVG service group for RDC replication
- Configuring the VMDg or VMNSDg resources for the disk groups
- Configuring the RVG Primary resources
- Adding the nodes from the secondary zone to the RDC
- Verifying the RDC configuration
- Section VI. Disaster Recovery
- Disaster recovery: Overview
- Deploying disaster recovery: New application installation
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- About managing disk groups and volumes
- Setting up the secondary site: Configuring SFW HA and setting up a cluster
- Setting up your replication environment
- About configuring disaster recovery with the DR wizard
- Installing and configuring the application or server role (secondary site)
- Configuring replication and global clustering
- Configuring the global cluster option for wide-area failover
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Maintaining: Normal operations and recovery procedures (Volume Replicator environment)
- Testing fault readiness by running a fire drill
- About the Fire Drill Wizard
- Prerequisites for a fire drill
- Preparing the fire drill configuration
- Deleting the fire drill configuration
- Section VII. Microsoft Clustering Solutions
- Microsoft clustering solutions overview
- Deploying SFW with Microsoft failover clustering
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating SFW disk groups and volumes
- Implementing a dynamic quorum resource
- Deploying SFW with Microsoft failover clustering in a campus cluster
- Reviewing the configuration
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating disk groups and volumes
- Implementing a dynamic quorum resource
- Installing the application on the cluster nodes
- Deploying SFW and VVR with Microsoft failover clustering
- Part 1: Setting up the cluster on the primary site
- Reviewing the prerequisites and the configuration
- Part 2: Setting up the cluster on the secondary site
- Part 3: Adding the Volume Replicator components for replication
- Part 4: Maintaining normal operations and recovery procedures
- Section VIII. Server Consolidation
- Server consolidation overview
- Server consolidation configurations
- Typical server consolidation configuration
- Server consolidation configuration 1 - many to one
- Server consolidation configuration 2 - many to two: Adding clustering and DMP
- About this configuration
- SFW features that support server consolidation
Adding nodes to a cluster
You use the VCS Cluster Configuration Wizard (VCW) to add one or more nodes to an existing cluster.
If you are setting up a Replicated Data Cluster, use VCW to add the systems in the secondary zone (zone1) to the existing cluster.
Prerequisites for adding a node to an existing cluster are as follows:
Verify that the logged-on user has VCS cluster administrator privileges.
The logged-on user must be a local administrator on the system where you run the wizard.
Verify that the Veritas Command Server service is running on all nodes in the cluster. Select Services on the Administrative Tools menu and verify that the Veritas Command Server service is started.
Verify that the high availability daemon (HAD) is running on the node on which you run the wizard. Open the Services window and verify that the Veritas High Availability Engine service is running. (A scripted check of both services follows this list.)
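If you prefer to script these prerequisite checks, the following is a minimal sketch. The service names passed to sc query are assumptions (sc expects the service key name, which can differ from the display name shown in the Services console); adjust them to match your installation.

```python
import subprocess

# Assumed service key names; verify them in the Services console or with
# `sc query state= all`, as they may differ in your installation.
SERVICES = ["CmdServer", "Had"]

def service_running(name: str) -> bool:
    """Return True if `sc query` reports the service state as RUNNING."""
    result = subprocess.run(
        ["sc", "query", name], capture_output=True, text=True
    )
    return "RUNNING" in result.stdout

for svc in SERVICES:
    state = "running" if service_running(svc) else "NOT running (or not found)"
    print(f"{svc}: {state}")
```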
To add a node to a VCS cluster
1. Start the VCS Cluster Configuration Wizard.
Click Start > All Programs > Veritas > Veritas Cluster Server > Configuration Tools > Cluster Configuration Wizard.
Run the wizard from the node to be added or from a node in the cluster. The node that is being added should be part of the domain to which the cluster belongs.
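As a quick pre-check, you can confirm that the node you are adding belongs to the same domain as the cluster. A minimal sketch, assuming a domain-joined Windows system; it reads the domain from the USERDNSDOMAIN environment variable, which Windows sets for domain logons, and EXAMPLE.COM is a hypothetical placeholder for the cluster's domain.

```python
import os

# USERDNSDOMAIN is set by Windows for domain logons; it is absent on
# workgroup systems or local-account sessions.
domain = os.environ.get("USERDNSDOMAIN")

# Hypothetical value; replace with the domain in which the cluster resides.
CLUSTER_DOMAIN = "EXAMPLE.COM"

if domain is None:
    print("No domain detected; this session is not a domain logon.")
elif domain.upper() == CLUSTER_DOMAIN.upper():
    print(f"Node is in the expected domain: {domain}")
else:
    print(f"Domain mismatch: node is in {domain}, expected {CLUSTER_DOMAIN}")
```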
2. Read the information on the Welcome panel and click Next.
3. On the Configuration Options panel, click Cluster Operations and click Next.
4. In the Domain Selection panel, select or type the name of the domain in which the cluster resides and select the discovery options.
To discover information about all the systems and users in the domain, do the following:
Clear the Specify systems and users manually check box.
Click Next.
Proceed to step 8.
To specify systems and user names manually (recommended for large domains), do the following:
Check the Specify systems and users manually check box.
Additionally, you may instruct the wizard to retrieve a list of systems and users in the domain by selecting the appropriate check boxes.
Click Next.
If you chose to retrieve the list of systems, proceed to step 6. Otherwise proceed to the next step.
5. On the System Selection panel, complete the following and click Next:
Type the name of an existing node in the cluster and click Add.
Type the name of the system to be added to the cluster and click Add.
If you specify only one node of an existing cluster, the wizard discovers all nodes for that cluster. To add a node to an existing cluster, you must specify a minimum of two nodes: one that is already part of the cluster and one that is to be added to the cluster.
Proceed to step 8.
6. On the System Selection panel, specify the systems to be added and the nodes for the cluster to which you are adding the systems.
Enter the system name and click Add to add the system to the Selected Systems list. Alternatively, you can select the systems from the Domain Systems list and click the right-arrow icon.
If you specify only one node of an existing cluster, the wizard discovers all nodes for that cluster. To add a node to an existing cluster, you must specify a minimum of two nodes: one that is already part of the cluster and one that is to be added to the cluster.
7. The System Report panel displays the validation status, whether Accepted or Rejected, of all the systems you specified earlier.
A system can be rejected for any of the following reasons:
The system does not respond to a ping request.
WMI access is disabled on the system.
The wizard is unable to retrieve information about the system's architecture or operating system.
Either VCS is not installed on the system, or the VCS version on the system is different from the version installed on the system from which you are running the wizard.
Click on a system name to see the validation details. If you wish to include a rejected system, rectify the error based on the reason for rejection and then run the wizard again.
Click Next to proceed.
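Because a failed ping response is the first rejection reason listed above, pre-checking reachability can save a wizard re-run. A minimal sketch, assuming the hypothetical host names NODE1 and NODE2; it uses the Windows ping utility, where -n 1 sends a single echo request.

```python
import subprocess

# Hypothetical host names; replace with the systems you plan to specify.
SYSTEMS = ["NODE1", "NODE2"]

for host in SYSTEMS:
    # ping -n 1 sends a single ICMP echo request (Windows syntax) and
    # returns exit code 0 on a successful reply.
    reply = subprocess.run(
        ["ping", "-n", "1", host], capture_output=True, text=True
    )
    status = "reachable" if reply.returncode == 0 else "no ping response"
    print(f"{host}: {status}")
```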
8. On the Cluster Configuration Options panel, click Edit Existing Cluster and click Next.
9. On the Cluster Selection panel, select the cluster to be edited and click Next.
If you chose to specify the systems manually in step 4, only the clusters configured with the specified systems are displayed.
10. On the Edit Cluster Options panel, click Add Nodes and click Next.
In the Cluster User Information dialog box, type the user name and password for a user with administrative privileges to the cluster and click OK.
The Cluster User Information dialog box appears only when you add a node to a cluster with VCS user privileges (a cluster that is not a secure cluster).
11. On the Cluster Details panel, check the check boxes next to the systems to be added to the cluster and click Next.
The right pane lists nodes that are part of the cluster. The left pane lists systems that can be added to the cluster.
12. The wizard validates the selected systems for cluster membership. After the nodes have been validated, click Next.
If a node does not get validated, review the message associated with the failure and restart the wizard after rectifying the problem.
13. On the Private Network Configuration panel, configure the VCS private network communication on each system being added, and then click Next. How you configure the VCS private network communication depends on how it is configured in the cluster: if LLT is configured over Ethernet in the cluster, you must use Ethernet on the nodes being added; similarly, if LLT is configured over UDP, you must use UDP on the nodes being added.
Do one of the following:
To configure the VCS private network over Ethernet, do the following:
Select the check boxes next to the two NICs to be assigned to the private network.
Arctera recommends reserving two NICs exclusively for the private network. However, you could lower the priority of one NIC and use the low-priority NIC for public and private communication.
If you have only two NICs on a selected system, it is recommended that you lower the priority of at least one NIC that will be used for private as well as public network communication.
To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
If your configuration contains teamed NICs, the wizard groups them as "NIC Group #N" where "N" is a number assigned to the teamed NIC. A teamed NIC is a logical NIC, formed by grouping several physical NICs together. All NICs in a team have an identical MAC address. Arctera recommends that you do not select teamed NICs for the private network.
The wizard configures the LLT service (over Ethernet) on the selected network adapters.
To configure the VCS private network over the User Datagram Protocol (UDP) layer, do the following:
Select the check boxes next to the two NICs to be assigned to the private network. You can assign a maximum of eight network links. Arctera recommends reserving at least two NICs exclusively for the VCS private network. You could lower the priority of one NIC and use the low-priority NIC for both public and private communication.
If you have only two NICs on a selected system, it is recommended that you lower the priority of at least one NIC that will be used for private as well as public network communication. To lower the priority of a NIC, right-click the NIC and select Low Priority from the pop-up menu.
Specify a unique UDP port for each of the links. Click Edit Ports if you wish to edit the UDP ports for the links. You can use ports in the range 49152 to 65535. The default port numbers are 50000 and 50001, respectively. Click OK. (A sketch for checking port availability follows this procedure.)
For each selected NIC, verify the displayed IP address. If a selected NIC has multiple IP addresses assigned, double-click the field and choose the desired IP address from the drop-down list. In case of IPv4, each IP address can be in a different subnet.
The IP address is used for the VCS private communication over the specified UDP port.
For each selected NIC, double-click the respective field in the Link column and choose a link from the drop-down list. Specify a different link (Link1 or Link2) for each NIC. Each link is associated with a UDP port that you specified earlier.
The wizard configures the LLT service (over UDP) on the selected network adapters. The specified UDP ports are used for the private network communication.
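Before assigning the UDP ports, you can confirm that they are not already in use on each node. A minimal sketch, assuming the default ports 50000 and 50001 and hypothetical link IP addresses (use the addresses shown for the selected NICs in the wizard); binding a UDP socket succeeds only if the port is free on that address.

```python
import socket

# Default LLT-over-UDP ports from the wizard; adjust if you edited them.
PORTS = [50000, 50001]
# Hypothetical link IP addresses; use the addresses of the selected NICs.
LINK_IPS = ["192.168.10.11", "192.168.20.11"]

for ip, port in zip(LINK_IPS, PORTS):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Binding succeeds only if the port is free on this address.
        sock.bind((ip, port))
        print(f"{ip}:{port} is available")
    except OSError as err:
        print(f"{ip}:{port} is in use or unavailable: {err}")
    finally:
        sock.close()
```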
14. On the Public Network Communication panel, select a NIC for public network communication for each system being added, and then click Next.
This step is applicable only if you have configured the ClusterService service group, and the system being added has multiple adapters. If the system has only one adapter for public network communication, the wizard configures that adapter automatically.
15. Specify the credentials for the user in whose context the VCS Helper service runs.
16. Review the summary information and click Add.
17. The wizard starts running commands to add the node. After all commands have been successfully run, click Finish.
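Once the wizard reports success, you can confirm cluster membership from the command line. A minimal sketch that shells out to the VCS hasys command (assumed to be on the PATH of the node where you run it); NEWNODE is a hypothetical name for the node you added.

```python
import subprocess

NEW_NODE = "NEWNODE"  # hypothetical; use the name of the node you added

# `hasys -list` prints the systems known to the running cluster, one per line.
result = subprocess.run(["hasys", "-list"], capture_output=True, text=True)
nodes = result.stdout.split()

if NEW_NODE in nodes:
    print(f"{NEW_NODE} is now a member of the cluster")
else:
    print(f"{NEW_NODE} was not found; cluster members: {nodes}")
```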
If you are setting up a Replicated Data Cluster, return to the Replicated Data Cluster task list.