Cluster Server 7.4 Agent for EMC SRDF Configuration Guide - Windows
Performing failback after a site failure
After a site failure at the primary site, the hosts and the storage at the primary site are down. VCS brings the global service group online at the secondary site, and the EMC SRDF agent write-enables the R2 devices.
The device state is PARTITIONED.
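You can confirm this from a node at the secondary site with a SYMCLI query. The following is a minimal sketch; srdf_dg is a placeholder for the SRDF device group that the SRDF resource manages:
symrdf -g srdf_dg query
While the primary site is down, the query should report the RDF pair state of the devices in the group as Partitioned and, once the agent has write-enabled them, show the R2 devices as read/write (RW).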
Review the details on site failure and how VCS and the agent for EMC SRDF behave in response to the failure.
See Failure scenarios in global clusters.
See Failure scenarios in replicated data clusters.
When the hosts and the storage at the primary site are restarted and the replication link is restored, the SRDF devices at both sites attain the SPLIT state. The devices are write-enabled at both sites. You can now perform a failback of the global service group to the primary site.
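Before you start the failback, you can verify that every pair in the device group has reached the Split state. A minimal sketch, again assuming the placeholder device group name srdf_dg; if the -split option of symrdf verify is not available in your Solutions Enabler release, a symrdf query of the group shows the same information:
symrdf -g srdf_dg verify -split
The verify command returns a zero exit status only when all devices in the group are in the specified state, which makes it convenient for scripting the check.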
To perform failback after a site failure in a global cluster
- Take the global service group offline at the secondary site. On a node at the secondary site, run the following command:
hagrp -offline global_group -any
- Resync the devices using the symrdf restore command (see the example sequence after this procedure).
The symrdf restore command write-disables the devices at both the R1 and R2 sites.
After the resync is complete, the device state is CONSISTENT or SYNCHRONIZED at both sites. The devices are write-enabled at the primary site and write-disabled at the secondary site.
- Bring the global service group online at the primary site. On a node in the primary site, run the following command:
hagrp -online global_group -any
This again swaps the roles of R1 and R2.
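The following sketch puts these steps together for a global cluster. It is a minimal example rather than the agent's own procedure: srdf_dg is a placeholder for the SRDF device group, and the verify step is one way to confirm that the resync has completed before you bring the group online (use -consistent instead of -synchronized for devices that replicate in asynchronous mode):
hagrp -offline global_group -any
symrdf -g srdf_dg restore
symrdf -g srdf_dg verify -synchronized
hagrp -online global_group -any
The symrdf restore command prompts for confirmation unless you add the -noprompt option; run it from a host that can see the device group, and wait for the verify command to succeed before the final hagrp -online.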
To perform failback after a site failure in a replicated data cluster
- Take the global service group offline at the secondary site. On a node in the secondary site, run the following command:
hagrp -offline service_group -sys sys_name
- Resync the devices using the symrdf restore command (see the example sequence after this procedure).
The symrdf restore command write-disables the devices at both the R1 and R2 sites.
After the resync is complete, the device state is CONSISTENT or SYNCHRONIZED at both sites. The devices are write-enabled at the primary site and write-disabled at the secondary site.
- Bring the global service group online at the primary site. On a node in the primary site, run the following command:
hagrp -online service_group -sys sys_name
This again swaps the roles of R1 and R2.
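The equivalent sketch for a replicated data cluster differs only in the hagrp syntax, because the service group is taken offline and brought online on specific systems rather than anywhere in a global cluster. The names sys_secondary and sys_primary stand for a node at the secondary site and a node at the primary site, and srdf_dg is again a placeholder for the SRDF device group:
hagrp -offline service_group -sys sys_secondary
symrdf -g srdf_dg restore
symrdf -g srdf_dg verify -synchronized
hagrp -online service_group -sys sys_primary
As in the global cluster case, wait for the resync to complete (the verify command to succeed) before you bring the service group online at the primary site.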