InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Planning for disaster recovery
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Setting up VVR replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Configuring the secondary site
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
Configuring VxVM disk groups for a campus cluster in a parallel cluster database environment
After configuring I/O fencing for data integrity, you must configure the VxVM disk groups for remote mirroring before installing your database.
Note:
In cloud environments where one campus cluster is configured on one site and another on the second site, FSS volumes can be used to replicate data across sites for high data availability. In this FSS-campus cluster configuration, you can add site tags based on the site names so that data remains highly available during site failures.
For the example configuration, the database is Oracle RAC.
To configure VxVM disk groups for Oracle RAC on an SF for Oracle RAC campus cluster
- Initialize the disks as CDS disks:
# vxdisksetup -i disk01 format=cdsdisk
# vxdisksetup -i disk02 format=cdsdisk
# vxdisksetup -i disk03 format=cdsdisk
# vxdisksetup -i disk05 format=cdsdisk
# vxdisksetup -i disk06 format=cdsdisk
# vxdisksetup -i disk07 format=cdsdisk
# vxdisksetup -i disk08 format=cdsdisk
- Set the site name for each host:
# vxdctl set site=sitename
The site name is stored in the /etc/vx/volboot file. To display the site names:
# vxdctl list | grep siteid
For example, for a four-node cluster with two nodes at each site, mark the sites as follows:
On the nodes at first site:
# vxdctl set site=site1
On the nodes at second site:
# vxdctl set site=site2
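As a quick check, a node at the first site would show output similar to the following (illustrative; the exact format of the vxdctl list output may vary by VxVM version):
# vxdctl list | grep siteid
siteid: site1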
- Obtain the enclosure name using the following command:
# vxdmpadm listenclosure
ENCLR_NAME     ENCLR_TYPE    ENCLR_SNO  STATUS     ARRAY_TYPE  LUN_COUNT  FIRMWARE
=============================================================================
ams_wms0       AMS_WMS       75040638   CONNECTED  A/A-A       35         -
hds9500-alua0  HDS9500-ALUA  D600145E   CONNECTED  A/A-A       9          -
hds9500-alua1  HDS9500-ALUA  D6001FD3   CONNECTED  A/A-A       6          -
disk           Disk          DISKS      CONNECTED  Disk        2          -
- Set the site name for all the disks in an enclosure.
# vxdisk settag site=sitename encl:ENCLR_NAME
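For example, assuming the ams_wms0 enclosure from the earlier listing resides entirely at the first site (an illustrative assumption; check your own hardware layout), you can tag all of its disks in one command:
# vxdisk settag site=site1 encl:ams_wms0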
- To tag only specific disks, run the following command:
# vxdisk settag site=sitename disk
For example:
# vxdisk settag site=site1 disk01
# vxdisk settag site=site1 disk02
# vxdisk settag site=site1 disk03
# vxdisk settag site=site2 disk06
# vxdisk settag site=site2 disk08
- Verify that the disks are registered to a site.
# vxdisk listtag
For example:
# vxdisk listtag
DEVICE  NAME  VALUE
disk01  site  site1
disk02  site  site1
disk03  site  site1
disk04  site  site1
disk05  site  site1
disk06  site  site2
disk07  site  site2
disk08  site  site2
disk09  site  site2
- Create one disk group for the OCR and Vote Disks and another for Oracle data, with disks picked from both sites. The example below shows two disk groups; you can create as many as you need.
# vxdg -s init ocrvotedg disk05 disk07
# vxdg -s init oradatadg disk01 disk06
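To confirm that both disk groups were created as shared disk groups, list them from any node; the output below is illustrative (the disk group IDs depend on your hosts and creation time):
# vxdg list
NAME         STATE                 ID
ocrvotedg    enabled,shared,cds    1357081853.11.node01
oradatadg    enabled,shared,cds    1357081884.12.node01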
- Enable site-based allocation on the disk groups for each site.
# vxdg -g ocrvotedg addsite site1
# vxdg -g ocrvotedg addsite site2
# vxdg -g oradatadg addsite site1
# vxdg -g oradatadg addsite site2
- If you are using an enclosure, set the tag on the enclosure for both sites.
# vxdg -o retain -g ocrvotedg settag encl:3pardata0 site=site1
# vxdg -o retain -g ocrvotedg settag encl:3pardata1 site=site2
# vxdg -o retain -g oradatadg settag encl:3pardata0 site=site1
# vxdg -o retain -g oradatadg settag encl:3pardata1 site=site2
- Configure site consistency for the disk groups.
# vxdg -g ocrvotedg set siteconsistent=on
# vxdg -g oradatadg set siteconsistent=on
- Create one or more mirrored volumes in each disk group.
# vxassist -g ocrvotedg make ocrvotevol 2048m nmirror=2
# vxassist -g oradatadg make oradatavol 10200m nmirror=2
- To verify the Site Awareness license, use the vxlicrep command. The Veritas Volume Manager product section of the output should indicate:
Site Awareness = Enabled
With the Site Awareness license installed on all hosts, a volume created in these disk groups has the following characteristics by default:
- The allsites attribute is set to on, so the volume has at least one mirror at each site.
- The volume is automatically mirrored across sites.
- The read policy (rdpol) is set to siteread. The read policy can be displayed using the vxprint -ht command.
- The volume inherits the site consistency value that is set on the disk group.
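For example, to confirm the read policy on the data volume, run vxprint -ht; the trimmed, illustrative output below shows siteread in the read policy column (exact columns vary by version):
# vxprint -g oradatadg -ht oradatavol
v  oradatavol  -  ENABLED  ACTIVE  20889600  siteread  -  fsgen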
- From the CVM master, start the volumes for all the disk groups.
# vxvol -g ocrvotedg startall
# vxvol -g oradatadg startall
- Create a file system on each volume and mount it.
# mkfs -t vxfs /dev/vx/rdsk/ocrvotedg/ocrvotevol
# mkfs -t vxfs /dev/vx/rdsk/oradatadg/oradatavol
# mount -t vxfs -o cluster /dev/vx/dsk/ocrvotedg/ocrvotevol /ocrvote
# mount -t vxfs -o cluster /dev/vx/dsk/oradatadg/oradatavol /oradata
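To confirm that both file systems are mounted, you can check them on each node; the sizes and usage figures below are illustrative:
# df -h /ocrvote /oradata
Filesystem                         Size  Used  Avail  Use%  Mounted on
/dev/vx/dsk/ocrvotedg/ocrvotevol   2.0G   20M   1.9G    2%  /ocrvote
/dev/vx/dsk/oradatadg/oradatavol    10G   33M   9.4G    1%  /oradata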
- Create separate directories for the OCR and Vote files as follows:
# mkdir -p /ocrvote/ocr
# mkdir -p /ocrvote/vote
- After creating the directories, change their ownership to the Oracle or Grid user:
# chown -R user:group /ocrvote
Also change the ownership of /oradata to the Oracle user:
# chown user:group /oradata
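For example, with the typical Oracle installation accounts (the grid and oracle users and the oinstall group are assumptions; substitute the accounts used at your site):
# chown -R grid:oinstall /ocrvote
# chown -R oracle:oinstall /oradata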
Note:
One Vote Disk is sufficient since it is already mirrored by VxVM.
- Install your database software.
For Oracle RAC:
Install Oracle Clusterware/GRID.
Install Oracle RAC binaries.
Perform library linking of Oracle binaries.
Create the database on /oradata.
For detailed steps, see the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.