InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
Configuring the secondary site
The setup requirements for the secondary site parallel the requirements for the primary site with a few additions or exceptions as noted below.
Table: Tasks for setting up a parallel global cluster at the secondary site

Task | Description
---|---
Set up the cluster | See “To set up the cluster on the secondary site”.
Set up the database | See “To set up the SFCFSHA database for the secondary site”. See “To set up the Oracle RAC database for the secondary site”. See “To set up the Sybase ASE CE database for the secondary site”.
Important requirements for parallel global clustering:
Cluster names on the primary and secondary sites must be unique.
You must use the same OS user and group IDs for your database for installation and configuration on both the primary and secondary clusters.
For Oracle RAC, you must use the same directory structure, names, and permissions for the CRS/GRID and database binaries.
You can use an existing parallel cluster or you can install a new cluster for your secondary site.
Consult your product installation guide for planning information as well as specific configuration guidance for the steps below.
See the Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide.
See the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.
See the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.
To set up the cluster on the secondary site
- Install and configure servers and storage.
- If you are using hardware-based replication, install the software for managing your array.
- Verify that you have the correct installation options enabled, whether you are using keyless licensing or installing keys manually. You must have the GCO option for a global cluster. If you are using VVR for replication, you must have it enabled.
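For example, if you are using keyless licensing, you can review the enabled product levels and license features with the following commands (default installation paths assumed); confirm that the Global Cluster Option (GCO) and, if applicable, VVR appear as enabled:
# /opt/VRTSvlic/bin/vxkeyless display
# /opt/VRTSvlic/bin/vxlicrep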
- Prepare, install, and configure your Storage Foundation and High Availability (SFHA) Solutions product according to the directions in your product's installation guide.
For a multi-node cluster, configure I/O fencing.
For a single-node cluster, do not enable I/O fencing; fencing will run in disabled mode.
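On a multi-node cluster, you can confirm the fencing mode and GAB port membership after configuration, for example:
# vxfenadm -d
# gabconfig -a
Port b in the gabconfig output indicates I/O fencing membership.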
- Prepare systems and storage for a global cluster. Identify the hardware and storage requirements before installing your database software. An example of preparing the non-replicated shared storage follows this step.
For SFCFSHA, you will need to set up:
Local storage for database software
Shared storage for resources which are not replicated as part of the hardware-based or host-based replication
Replicated storage for database files
You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary.
For SF Oracle RAC, you will need to set up:
Local storage for Oracle RAC and CRS binaries
Shared storage for OCR and Vote disk which is not replicated as part of the hardware-based or host-based replication
Replicated shared storage for database files
You must use the same directory structure, names, and permissions for the CRS/GRID and database binaries as on the primary.
For SF Sybase CE, you will need to set up:
Shared storage (File System or Cluster File System) for the Sybase ASE CE binaries, which is not replicated
Shared storage for the quorum device, which is not replicated
Replicated storage for database files
You must use the same directory structure, names, and permissions for the quorum and database binaries as on the primary.
Verify the configuration using procedures in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide.
Note:
You must use the same directory structure, names, and permissions for the CRS/GRID and database binaries.
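For example, the non-replicated shared storage (such as the storage for OCR and voting disks in an SF Oracle RAC configuration) could be prepared from the CVM master node with commands along these lines; the disk group, volume, and mount point names (ocrvotedg, ocrvotevol, /ocrvote) are placeholders:
# vxdg -s init ocrvotedg disk01
# vxassist -g ocrvotedg make ocrvotevol 2g
# mkfs -t vxfs /dev/vx/rdsk/ocrvotedg/ocrvotevol
# mkdir /ocrvote
# mount -t vxfs -o cluster /dev/vx/dsk/ocrvotedg/ocrvotevol /ocrvote
Repeat the mkdir and mount commands on each node, or use cfsmntadm to place the mount under VCS control.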
- For SFCFSHA, install and configure your database binaries. Consult your database documentation.
Note:
Resources which will not be replicated must be on non-replicated shared storage.
After successful database installation and configuration, verify that database resources are up on all nodes.
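One way to confirm that the database, CVM, and CFS resources are online on all nodes is to review the VCS status summary, for example:
# hastatus -sum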
- For Oracle RAC, see the instructions in the Storage Foundation for Oracle RAC Configuration and Upgrade Guide for installing and configuring:
Oracle Clusterware/Grid Infrastructure software
Oracle RAC database software
The Oracle RAC binary versions must be exactly the same on both sites.
Note:
OCR and Vote disk must be on non-replicated shared storage.
After successful Oracle RAC installation and configuration, verify that CRS daemons and resources are up on all nodes.
$GRID_HOME/bin/crsctl stat res -t
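You can also check the health of the Clusterware daemons and the cluster node list, for example:
$GRID_HOME/bin/crsctl check crs
$GRID_HOME/bin/olsnodes -n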
- For SF Sybase CE, see the instructions in the Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide for installing and configuring Sybase ASE CE binaries.
Note the following configuration requirements:
The quorum device must be on non-replicated shared storage.
The Sybase binary versions must be exactly the same on both sites, including the ESD versions.
Manually configure the Sybase binary mounts/volumes under VCS control on the secondary site (see the example after this list).
Do not create the database. The database will be replicated from the primary site.
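For example, one possible way to bring a cluster file system mount for the Sybase binaries under VCS control is with the cfsmntadm and cfsmount commands; the disk group, volume, and mount point names (sybbindg, sybbinvol, /sybase) are placeholders:
# cfsmntadm add sybbindg sybbinvol /sybase all=rw
# cfsmount /sybase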
To set up the SFCFSHA database for the secondary site
- If you are using hardware-based replication, the database, disk group, and volumes will be replicated from the primary site.
Create the directory for the CFS mount point which will host the database data and control files.
- If you are using VVR for replication, create an identical disk group and volumes for the replicated content with the same names and sizes as listed on the primary site (see the example after this procedure).
Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site.
- Create subdirectories for the database as you did on the primary site.
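For the VVR case, creating the matching objects on the secondary could look like the following, assuming a placeholder disk group oradatadg with a 10 GB volume oradatavol mounted at /oradata on the primary; do not create a file system on the volumes, because the file system contents are synchronized from the primary by VVR:
# vxdg -s init oradatadg disk02
# vxassist -g oradatadg make oradatavol 10g
# mkdir -p /oradata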
To set up the Oracle RAC database for the secondary site
- If you are using hardware-based replication, the database, disk group, and volumes will be replicated from the primary site.
Create the directory for the CFS mount point which will host the database data and control files.
- If you are using VVR for replication, create an identical disk group and volumes for the replicated content with the same names and size as listed on the primary site.
Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site.
- On each node in the cluster, copy the initialization files (pfiles, spfiles) from the primary cluster to the secondary cluster, maintaining the same directory path.
For example, copy init$ORACLE_SID.ora and orapw$ORACLE_SID.ora from $ORACLE_HOME/dbs at the primary to $ORACLE_HOME/dbs at the secondary.
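As the Oracle user, the copy could be done with scp, assuming sys1 is the primary-cluster node that runs the corresponding instance (hypothetical host name) and $ORACLE_SID is set to the local instance name:
$ scp sys1:$ORACLE_HOME/dbs/init$ORACLE_SID.ora $ORACLE_HOME/dbs/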
- As the Oracle user, create the following subdirectories on the secondary site to parallel the directories on the primary site:
$ mkdir -p $ORACLE_BASE/diag
$ mkdir -p $ORACLE_BASE/admin
$ mkdir -p $ORACLE_BASE/admin/adump
On both the primary and secondary sites, edit the file:
$ORACLE_HOME/dbs/init$ORACLE_SID.ora
as follows:
remote_listener = 'SCAN_NAME:1521'
SPFILE=<SPFILE NAME>
- Configure listeners on the secondary site with same name as on primary. You can do this by one of the following methods:
Copy the listener.ora and tnsnames.ora files from the primary site and update the names as appropriate for the secondary site (see the example after this list).
Use Oracle's netca utility to configure the listener.ora and tnsnames.ora files on the secondary site.
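For the first method, assuming the network configuration files are in the default $ORACLE_HOME/network/admin location and sys1 is a hypothetical primary-cluster node, the copy could look like this:
$ scp sys1:$ORACLE_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/
$ scp sys1:$ORACLE_HOME/network/admin/tnsnames.ora $ORACLE_HOME/network/admin/
Then update the host names and SCAN entries in the copied files for the secondary site.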
- On the secondary site, register the database using the srvctl command as the database software owner.
Registering the database only has to be done once from any node in the secondary cluster. Use the following command as the Oracle database software owner:
$ $ORACLE_HOME/bin/srvctl add database -d database_name -o oracle_home
- To prevent automatic database instance restart, change the management policy for the database (AUTOMATIC or MANUAL) to MANUAL using the srvctl command:
$ $ORACLE_HOME/bin/srvctl modify database -d database_name -y manual
You need only perform this change once from any node in the cluster.
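You can confirm the change with the srvctl config command; the output includes the management policy for the database:
$ $ORACLE_HOME/bin/srvctl config database -d database_name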
- Register the instances using srvctl command. Execute the following command on each node:
$ $ORACLE_HOME/bin/srvctl add instance -d database_name \
-i instance_name -n node-name
If the secondary cluster has more than one node, you must add instances using the srvctl command.
For example, if the database name is racdb, the instance name on sys3 is racdb1 and on sys4 is racdb2.
$ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb1 -n sys3
$ $ORACLE_HOME/bin/srvctl add instance -d racdb -i racdb2 -n sys4
- Register all other resources (for example listener, ASM, service) present in cluster/GRID at the primary site to the secondary site using the srvctl command or crs_register. For command details, see Oracle documentation at Metalink.
To set up the Sybase ASE CE database for the secondary site
- Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database files when the failover occurs and the secondary is promoted to become the primary site.
- Create an identical disk group and volumes for the replicated content with the same names and size as listed on the primary site.