InfoScale™ 9.0 Storage Foundation and High Availability Solutions Solutions Guide - Windows
Sample campus cluster configuration
The following sample configuration represents a campus cluster with two sites, Site A and Site B.
With SFW, a campus cluster can be set up using either a Cluster Server (VCS) configuration or a Microsoft failover clustering configuration. Both configurations involve setting up a single cluster with two nodes that are located in separate buildings and connected over a single subnet and a Fibre Channel SAN. Each node has its own storage array with an equal number of disks, and each array holds a mirror of the data on the other array. SFW provides the mirrored storage and the disk groups that make it possible to fail over the storage by deporting the disk groups on one node and importing them on the other.
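In a running campus cluster, the deport and import are performed automatically by the cluster software when the service group fails over. The following minimal sketch only illustrates the underlying SFW operation, assuming the vxdg command line and a placeholder dynamic disk group named DG1:

    rem On the node releasing the storage (for example, the node at Site A)
    vxdg -gDG1 deport

    rem On the node taking over the storage (for example, the node at Site B)
    vxdg -gDG1 import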
If a site failure occurs in a two-node campus cluster, the surviving cluster node cannot bring the cluster disk groups online on its own, because it cannot reserve a majority of the disks in each disk group. To allow failover to the surviving site, a force import procedure brings a cluster disk group online on a node even though that node holds only a minority of the disks in the group.
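As an illustration only, the following minimal sketch shows what a force import can look like, assuming the vxdg command line and, for a VCS-managed disk group, the ForceImport attribute of the VMDg resource. The names DG1, DG1_VMDg, and NODE2 are placeholders; follow the force import procedure documented for your configuration rather than these lines verbatim:

    rem Force import of the cluster disk group on the surviving node,
    rem which holds only a minority of the disks in the group
    vxdg -gDG1 -f import

    rem For a disk group under VCS control, set ForceImport on the VMDg
    rem resource, bring it online, and then reset the attribute
    hares -modify DG1_VMDg ForceImport 1
    hares -online DG1_VMDg -sys NODE2
    hares -modify DG1_VMDg ForceImport 0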
Implement these force import procedures with care. The primary site may appear to have failed when, in fact, only the storage interconnect between the sites and the heartbeats have been lost. In that case, the cluster disk groups can still be online on the primary node. If a force import is then performed so that the data can be accessed on the secondary site, the disk groups will be online on both nodes at the same time, risking data corruption.