InfoScale™ 9.0 Disaster Recovery Implementation Guide - Linux
- Section I. Introducing Storage Foundation and High Availability Solutions for disaster recovery
- About supported disaster recovery scenarios
- About campus cluster configuration
- About replicated data clusters
- About global clusters
- VCS global clusters: The building blocks
- About global cluster management
- About serialization - The Authority attribute
- Planning for disaster recovery
- About supported disaster recovery scenarios
- Section II. Implementing campus clusters
- Setting up campus clusters for VCS and SFHA
- About setting up a campus cluster configuration
- About running a fire drill in a campus cluster
- Setting up campus clusters for SFCFSHA, SFRAC
- Section III. Implementing replicated data clusters
- Configuring a replicated data cluster using VVR
- Configuring a replicated data cluster using third-party replication
- Section IV. Implementing global clusters
- Configuring global clusters for VCS and SFHA
- Setting up VVR replication
- Creating a Replicated Data Set
- Creating a Primary RVG of an RDS
- Adding a Secondary to an RDS
- Changing the replication settings for a Secondary
- Synchronizing the Secondary and starting replication
- Starting replication when the data volumes are zero initialized
- Configuring clusters for global cluster setup
- Configuring service groups for global cluster setup
- Configuring a global cluster with Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Configuring the secondary site
- Configuring global clusters with VVR and Storage Foundation Cluster File System High Availability, Storage Foundation for Oracle RAC, or Storage Foundation for Sybase CE
- Setting up replication on the primary site using VVR
- Setting up replication on the secondary site using VVR
- Configuring Cluster Server to replicate the database volume using VVR
- Section V. Reference
- Appendix A. Sample configuration files
- Sample Storage Foundation for Oracle RAC configuration files
- About sample main.cf files for Storage Foundation (SF) for Oracle RAC
- About sample main.cf files for Storage Foundation (SF) for Sybase ASE CE
Configuring I/O fencing to prevent data corruption
Perform the following tasks to configure I/O fencing to prevent data corruption in the event of a communication failure.
To configure I/O fencing to prevent data corruption
- After installing and configuring SFCFSHA or SF Oracle RAC, configure I/O fencing for data integrity.
See the Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide.
See the Storage Foundation for Oracle RAC Configuration and Upgrade Guide.
- Set up the storage at a third site.
You can extend the DWDM links to the third site to provide FC SAN connectivity to its storage. Alternatively, you can use iSCSI targets as the coordinator disks at the third site.
For example:
Enable I/O fencing by using the coordinator disks from all three sites.
# vxdisksetup -i disk04 format=cdsdisk
# vxdisksetup -i disk09 format=cdsdisk
# vxdisksetup -i disk10 format=cdsdisk
# hastop -all
# vxdg init fencedg disk10 disk04 disk09
# vxdg -g fencedg set coordinator=on
# vxdg deport fencedg
# vxdg -t import fencedg
# vxdg deport fencedg
Edit the main.cf to add "UseFence = SCSI3":
# vi /etc/VRTSvcs/conf/config/main.cf
# more /etc/vxfendg
fencedg
# more /etc/vxfentab
/dev/vx/rdmp/disk10
/dev/vx/rdmp/disk04
/dev/vx/rdmp/disk09
# cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode
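The UseFence attribute belongs in the cluster definition in main.cf. A minimal sketch of the resulting fragment, assuming a placeholder cluster name of clus1:

```
// /etc/VRTSvcs/conf/config/main.cf (fragment)
cluster clus1 (
        UseFence = SCSI3
        )
```

The vxfenmode_scsi3_dmp template copied above typically sets vxfen_mode=scsi3 and scsi3_disk_policy=dmp in /etc/vxfenmode, selecting SCSI-3 fencing over DMP device paths.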
For systemd environments with supported Linux distributions:
# /opt/VRTSvcs/vxfen/bin/vxfen start
For other supported Linux distributions:
# /etc/init.d/vxfen start
Starting vxfen..
Checking for /etc/vxfendg
Starting vxfen.. Done
On all nodes, start VCS:
# hastart
Set the site name for each host.
# vxdctl set site=site1
# vxdctl set site=site2
# vxdctl set site=site3
- Start I/O fencing on all sites.