InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
  - About supported disaster recovery scenarios
    - About campus cluster configuration
    - About replicated data clusters
    - About global clusters
    - VCS global clusters: The building blocks
      - About global cluster management
      - About serialization - The Authority attribute
  - Planning for disaster recovery
- Section II. Implementing campus clusters
  - Setting up campus clusters for VCS and SFHA
    - About setting up a campus cluster configuration
    - About running a fire drill in a campus cluster
  - Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
  - Configuring a replicated data cluster using VVR
  - Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
  - Configuring global clusters for VCS and SFHA
    - Setting up VVR replication
      - Creating a Replicated Data Set
      - Creating a Primary RVG of an RDS
      - Adding a Secondary to an RDS
      - Changing the replication settings for a Secondary
      - Synchronizing the Secondary and starting replication
      - Starting replication when the data volumes are zero initialized
    - Configuring clusters for global cluster setup
    - Configuring service groups for global cluster setup
  - Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
    - Configuring the secondary site
  - Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
    - Setting up replication on the primary site using VVR
    - Setting up replication on the secondary site using VVR
    - Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
  - Appendix A. Sample configuration files
    - Sample Storage Foundation for Oracle RAC configuration files
    - About sample main.cf files for Storage Foundation (SF) for Oracle RAC
    - About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
Best practices for setting up replication
Set up replication according to the following best practices:
Create one RVG for each application, rather than one for each server. For example, if a server runs three separate databases that are being replicated, create three separate RVGs, one for each database. Separate RVGs avoid write-order dependencies between the applications and give each application its own SRL for maximum performance.
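As a minimal sketch of this layout, assuming each database lives in its own disk group (all disk group, volume, and SRL names below are illustrative):

    # One RDS per database, each with its own SRL:
    vradmin -g salesdg createpri sales_rvg sales_dv01,sales_dv02 sales_srl
    vradmin -g hrdg createpri hr_rvg hr_dv01 hr_srl
    vradmin -g erpdg createpri erp_rvg erp_dv01,erp_dv02 erp_srl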
Create one RVG per disk group. One RVG per disk group lets you implement application clustering for high availability efficiently, because only the one RVG needs to fail over with the service group. If the disk group contains more than one RVG, the applications using the other RVGs would have to be stopped to facilitate the failover. You can use the Disk Group Split feature to migrate application volumes to their own disk groups before associating the volumes with the RVG, as sketched below.
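For example, a sketch of splitting an application's volume out of a shared disk group before creating its RVG (object names are hypothetical):

    # Move the second application's volume, and the disks it occupies,
    # out of the shared disk group into its own disk group:
    vxdg split appsdg app2dg app2_vol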
Plan the size and layout of the data volumes based on the requirement of your application.
Plan the bandwidth of the network link between the Primary and each Secondary host.
Lay out the SRL appropriately to support the performance characteristics needed by the application. Because all writes to the data volumes in an RVG are first written to the SRL, the total write performance of an RVG is bounded by the total write performance of the SRL. For example, dedicate separate disks to the SRL and, if possible, dedicate separate controllers to it.
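For instance, a sketch of creating an SRL on disks reserved exclusively for it (the size, disk group, volume, and disk media names are hypothetical):

    # Create a 4 GB SRL on two disks dedicated to the log:
    vxassist -g salesdg make sales_srl 4g salesdg11 salesdg12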
Size the SRL appropriately to avoid overflow.
The Volume Replicator Advisor (VRAdvisor), a tool to collect and analyze samples of data, can help you determine the optimal size of the SRL.
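If VRAdvisor is not available, you can approximate the required SRL size by sampling the application write rate and multiplying the peak rate by the longest network or Secondary outage you need to ride out; a sketch with hypothetical names:

    # Sample volume write statistics every 10 seconds for one hour:
    vxstat -g salesdg -i 10 -c 360 sales_dv01 sales_dv02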
Include all the data volumes used by the application in the same RVG. This is mandatory.
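A volume added to the application later must likewise be brought into the RVG; a sketch with hypothetical names:

    # Add a new application volume to the existing RDS:
    vradmin -g salesdg addvol sales_rvg sales_dv03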
Provide dedicated bandwidth for VVR over a separate network. The RLINK replicates data critical to the survival of the business. Compromising the RLINK compromises the business recovery plan.
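Where a fully dedicated link is not possible, VVR can be capped so that replication does not starve other traffic; a sketch, assuming an RDS named sales_rvg and a Secondary host named london:

    # Limit replication bandwidth to the Secondary london:
    vradmin -g salesdg set sales_rvg london bandwidth_limit=30mbps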
Use the same names for the data volumes on the Primary and Secondary nodes. If the data volumes on the Primary and Secondary have different names, you must map the name of the Secondary data volume to the appropriate Primary data volume.
Use the same name and size for the SRLs on the Primary and Secondary nodes because the Secondary SRL becomes the Primary SRL when the Primary role is transferred.
Mirror all data volumes and SRLs. This is optional if you use hardware-based mirroring.
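For software mirroring, a minimal sketch (names hypothetical):

    # Add a second plex to a data volume:
    vxassist -g salesdg mirror sales_dv01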
The vradmin utility creates RVGs on the Secondary with the same names as the corresponding RVGs on the Primary. If you choose to use the vxmake command to create RVGs, use the same names for corresponding RVGs on the Primary and Secondary nodes.
Associate a DCM with each data volume on the Primary and the Secondary if the DCMs have been removed for any reason. By default, the vradmin createpri and vradmin addsec commands add DCMs if they do not exist.
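If a DCM must be re-added manually, a sketch with hypothetical names:

    # Associate a DCM log with a data volume:
    vxassist -g salesdg addlog sales_dv01 logtype=dcm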
In a shared disk group environment, currently only the CVM master node should be assigned the logowner role. It is recommended to enable the PreOnline trigger for the service group containing the RVGLogowner agent, so that the VVR logowner always resides on the CVM master node.
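A sketch of enabling the trigger from the command line (the service group name rlogowner is hypothetical; the preonline trigger script itself is site-specific):

    # Enable the preonline trigger for the logowner service group:
    haconf -makerw
    hagrp -modify rlogowner PreOnline 1
    haconf -dump -makero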
Do not use the on-board write cache with VVR. The application must also store data on disk rather than maintaining it in memory. The takeover system, which could be a peer Primary node in clustered configurations or the Secondary site, must be able to access all required information. This requirement precludes the use of anything inside a single system that is inaccessible to the peer. NVRAM accelerator boards and other disk caching mechanisms for performance are acceptable, but only on the external array, not on the local host.