InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Setting up VVR replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Configuring the secondary site
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
Modifying the VCS configuration on the secondary site
The following are highlights of the procedure to modify the existing VCS configuration on the secondary site:
- Add the log owner and Replicated Volume Group (RVG) service groups.
- Add a service group to manage the database and the supporting resources.
- Define the replication objects and agents, such that the cluster at the secondary site can function as a companion to the primary cluster.
The following steps are similar to those performed on the primary site.
To modify VCS on the secondary site
- Log into one of the nodes on the secondary site as root.
- Use the following command to save the existing configuration to disk and make the running configuration read-only while you make changes:
# haconf -dump -makero
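To confirm that the running configuration is now read-only before you edit the file, you can query the ReadOnly cluster attribute; a value of 1 indicates a read-only configuration:
# haclus -value ReadOnly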
- Use the following command to make a backup copy of the main.cf file:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.orig
- Use vi or another text editor to edit the main.cf file. Edit the CVM group on the secondary site.
Review the sample configuration file after the VCS installation to see the CVM configuration.
See “To view sample configuration files for SF Oracle RAC”.
See “To view sample configuration files for SF Sybase CE”.
In our example, the secondary site has clus2 consisting of the nodes sys3 and sys4. To modify the CVM service group on the secondary site, use the CVM group on the primary site as your guide.
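As a reference point, the portions of the CVM group that must change to reflect the secondary site are the cluster and node names. A minimal sketch of what those portions might look like for clus2 follows; the remaining resources in your installed CVM group stay as the installer created them:

group cvm (
    SystemList = { sys3 = 0, sys4 = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { sys3, sys4 }
    )

CVMCluster cvm_clus (
    CVMClustName = clus2
    CVMNodeId = { sys3 = 0, sys4 = 1 }
    CVMTransport = gab
    CVMTimeout = 200
    )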
- Add a failover service group using the appropriate values for your cluster and nodes. Include the following resources:
- RVGLogowner resource. The node on which the group is online functions as the log owner (the node connected to the second cluster for the purpose of replicating data).
- IP resource
- NIC resources
The following is an example RVGLogowner service group:
group rlogowner (
    SystemList = { sys3 = 0, sys4 = 1 }
    AutoStartList = { sys3, sys4 }
    )

IP logowner_ip (
    Device = eth0
    Address = "10.11.9.102"
    NetMask = "255.255.255.0"
    )

NIC nic (
    Device = eth0
    NetworkHosts = { "10.10.8.1" }
    NetworkType = ether
    )

RVGLogowner logowner (
    RVG = dbdata_rvg
    DiskGroup = dbdatadg
    )

requires group RVGgroup online local firm
logowner requires logowner_ip
logowner_ip requires nic
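If you prefer to make the same additions from the command line instead of editing main.cf, a sketch of the equivalent ha commands follows. It assumes a writable running configuration and uses the example names above:
# haconf -makerw
# hagrp -add rlogowner
# hagrp -modify rlogowner SystemList sys3 0 sys4 1
# hagrp -modify rlogowner AutoStartList sys3 sys4
# hares -add logowner_ip IP rlogowner
# hares -modify logowner_ip Device eth0
# hares -modify logowner_ip Address "10.11.9.102"
# hares -add nic NIC rlogowner
# hares -modify nic Device eth0
# hares -add logowner RVGLogowner rlogowner
# hares -modify logowner RVG dbdata_rvg
# hares -modify logowner DiskGroup dbdatadg
# hagrp -link rlogowner RVGgroup online local firm
# hares -link logowner logowner_ip
# hares -link logowner_ip nic
# hagrp -enableresources rlogowner
# haconf -dump -makero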
- Add the RVG service group using the appropriate values for your cluster and nodes.
The following is an example RVGgroup service group:
group RVGgroup (
    SystemList = { sys3 = 0, sys4 = 1 }
    Parallel = 1
    AutoStartList = { sys3, sys4 }
    )

RVGShared dbdata_rvg (
    RVG = dbdata_rvg
    DiskGroup = dbdatadg
    )

CVMVolDg dbdata_voldg (
    CVMDiskGroup = dbdatadg
    CVMActivation = sw
    )

requires group cvm online local firm
dbdata_rvg requires dbdata_voldg
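Before you bring these groups online, you can confirm that the RVG and disk group names in the resources match the VVR objects that exist on the secondary site; for example, with the names used in this procedure:
# vradmin -g dbdatadg printrvg dbdata_rvg
# vxprint -g dbdatadg dbdata_rvg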
- It is advisable to set the OnlineRetryLimit and OfflineWaitLimit attributes of the IP resource type to 1 on both clusters:
# hatype -modify IP OnlineRetryLimit 1
# hatype -modify IP OfflineWaitLimit 1
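You can verify the new type-level values afterward:
# hatype -value IP OnlineRetryLimit
# hatype -value IP OfflineWaitLimit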
- Add a database service group. Use the database service group on the primary site as a model for the database service group on the secondary site. See the configuration examples below.
- Define the database service group as a global group by specifying the clusters on the primary and secondary sites as values for the ClusterList group attribute.
- Assign this global group the same name as the group on the primary site; for example, database_grp.
- Include the ClusterList and ClusterFailOverPolicy cluster attributes. Veritas recommends setting ClusterFailOverPolicy to Manual.
- Add the RVGSharedPri resource to the group configuration.
- Remove the CVMVolDg resource if it was configured in your previous configuration. This resource is now part of the RVG service group.
- Specify the service group to depend (online, local, firm) on the RVG service group.
- Save and close the main.cf file.
- Use the following command to verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:
# hacf -verify /etc/VRTSvcs/conf/config
- Stop and restart VCS.
# hastop -all -force
Wait for port h to stop on all nodes, and then restart VCS with the new configuration on each node, one at a time.
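To check whether port h has stopped, inspect the GAB port memberships; the Port h line disappears from the output while VCS is down:
# gabconfig -a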
# hastart
- Verify that VCS brings all resources online. On one node, enter the following command:
# hagrp -display
On the secondary site, the CVM and RVG groups are online on both nodes, and the RVGLogowner and ClusterService groups are online on one node of the cluster. The database group must remain offline on the secondary site. If either the RVG group or the RVGLogowner group is partially online, manually bring the group online using the hagrp -online command.
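For a compact, one-screen view of the same group and resource states, you can also use the summary display:
# hastatus -sum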
- Verify which service groups and resources are brought online. On one node, enter the following command:
# hagrp -display
The database service group is offline on the secondary site, but the ClusterService, CVM, RVG log owner, and RVG groups are online.
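To confirm explicitly that the global database group is offline on the secondary site, you can query its state; with the example group name used in this procedure:
# hagrp -state database_grp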
This completes the setup for a global cluster using VVR for replication. Veritas recommends testing a global cluster before putting it into production.
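As part of that testing, it is worth confirming that replication is active and up to date before any production cutover; for example, with the disk group and RVG names used in this procedure:
# vradmin -g dbdatadg repstatus dbdata_rvg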
Example of the Oracle RAC database group on the secondary site:
group database_grp (
    SystemList = { sys3 = 0, sys4 = 1 }
    ClusterList = { clus2 = 0, clus1 = 1 }
    Parallel = 1
    OnlineRetryInterval = 300
    ClusterFailOverPolicy = Manual
    Authority = 1
    AutoStartList = { sys3, sys4 }
    )

CFSMount dbdata_mnt (
    MountPoint = "/dbdata"
    BlockDevice = "/dev/vx/dsk/dbdatadg/dbdata_vol"
    Critical = 0
    )

RVGSharedPri dbdata_vvr_shpri (
    RvgResourceName = dbdata_rvg
    OnlineRetryLimit = 0
    )

Oracle rac_db (
    Sid @sys3 = vrts1
    Sid @sys4 = vrts2
    Owner = Oracle
    Home = "/oracle/orahome"
    Pfile @sys3 = "/oracle/orahome/dbs/initvrts1.ora"
    Pfile @sys4 = "/oracle/orahome/dbs/initvrts2.ora"
    StartUpOpt = SRVCTLSTART
    ShutDownOpt = SRVCTLSTOP
    )

requires group RVGgroup online local firm
dbdata_mnt requires dbdata_vvr_shpri
rac_db requires dbdata_mnt
Example of the Sybase ASE CE database group on the secondary site:
group sybase (
    SystemList = { sys3 = 0, sys4 = 1 }
    ClusterList = { clus2 = 0, clus1 = 1 }
    Parallel = 1
    OnlineRetryInterval = 300
    ClusterFailOverPolicy = Manual
    Authority = 1
    # AutoStart = 0 here so faulting will not happen
    AutoStartList = { sys3, sys4 }
    )

CFSMount dbdata_mnt (
    MountPoint = "/dbdata"
    BlockDevice = "/dev/vx/dsk/dbdatadg/dbdata_vol"
    )

RVGSharedPri dbdata_vvr_shpri (
    RvgResourceName = dbdata_rvg
    OnlineRetryLimit = 0
    )

CFSMount quorum_101_quorumvol_mnt (
    MountPoint = "/quorum"
    BlockDevice = "/dev/vx/dsk/quorum_101/quorumvol"
    )

CVMVolDg quorum_101_voldg (
    CVMDiskGroup = quorum_101
    CVMVolume = { quorumvol }
    CVMActivation = sw
    )

Sybase ase (
    Sid @sys3 = ase1
    Sid @sys4 = ase2
    Owner = sybase
    Home = "/sybase"
    Version = 15
    SA = sa
    Quorum_dev = "/quorum/q.dat"
    )

requires group RVGgroup online local firm
dbdata_mnt requires dbdata_vvr_shpri
ase requires vxfend
ase requires dbdata_mnt
ase requires quorum_101_quorumvol_mnt
quorum_101_quorumvol_mnt requires quorum_101_voldg