InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Setting up VVR replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Configuring the secondary site
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
VCS campus cluster requirements
Review the following requirements for VCS campus clusters:
You must install VCS.
You must have a single VCS cluster with at least one node in each of the two sites, where the sites are separated by a physical distance of no more than 80 kilometers. If the sites are separated by more than 80 kilometers, use a Global Cluster Option (GCO) configuration instead.
You must have redundant network connections between nodes. All paths to storage must also be redundant.
Arctera recommends the following in a campus cluster setup:
A common cross-site physical infrastructure for storage and LLT private networks.
Technologies such as Dense Wavelength Division Multiplexing (DWDM) for network and I/O traffic across sites. Use redundant links to minimize the impact of network failure.
You must install Volume Manager with the FMR license and the Site Awareness license.
Arctera recommends that you configure I/O fencing to prevent data corruption in the event of link failures.
See the Cluster Server Configuration and Upgrade Guide for more details.
You must configure storage to meet site-based allocation and site-consistency requirements for VxVM.
All the nodes in the site must be tagged with the appropriate VxVM site names.
All the disks must be tagged with the appropriate VxVM site names.
The VxVM site names of both the sites in the campus cluster must be added to the disk groups.
The allsites attribute for each volume in the disk group must be set to on. (By default, the value is set to on.)
The siteconsistent attribute for the disk groups must be set to on.
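The site-based allocation and site-consistency requirements above can be sketched with VxVM commands along the following lines. The site names (site1, site2), disk group (datadg), and disk names (disk01 through disk04) are placeholders, and exact option syntax can vary by release, so verify against the vxdctl(1M), vxdisk(1M), and vxdg(1M) manual pages for your installation:

```shell
# Tag this node with its VxVM site name (run on every node,
# using that node's own site).
vxdctl set site=site1

# Tag each disk with the site name of the array it resides in.
vxdisk settag site=site1 disk01 disk02
vxdisk settag site=site2 disk03 disk04

# Register both site names in the disk group.
vxdg -g datadg addsite site1
vxdg -g datadg addsite site2

# Enable site consistency on the disk group; volumes created in a
# site-consistent disk group default to allsites=on.
vxdg -g datadg set siteconsistent=on
```

After tagging, vxdisk listtag can be used to confirm that every disk carries the expected site tag before the disk group is made site consistent.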
Each host at a site must be connected to a storage switch. The switch must have access to storage arrays at all the sites.
In environments that use the Flexible Storage Sharing (FSS) feature, nodes might not be connected to an external storage switch. Locally attached storage must be exported to enable network sharing with nodes within and across the sites. For more details on FSS, see the Storage Foundation Cluster File System High Availability Administrator's Guide.
SF Oracle RAC campus clusters require mirrored volumes with storage allocated from both sites.
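As a sketch of the mirrored-volume requirement, assuming a site-consistent disk group with the placeholder name oradatadg and the placeholder site names site1 and site2, a volume mirrored across both sites could be created with a command similar to:

```shell
# Create a 10 GB volume with one mirror (plex) allocated at each site;
# with siteconsistent=on, VxVM keeps a complete copy of the data at
# both sites.
vxassist -g oradatadg make oradatavol 10g nmirror=2 site:site1 site:site2
```

Keeping one full plex per site is what allows either site to continue serving data if the other site, or the links between them, fail.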