InfoScale™ 9.0 Cluster Server Agent for Hitachi TrueCopy/HP-XP Continuous Access Configuration Guide - Windows

Product(s): InfoScale & Storage Foundation (9.0)
Platform: Windows
  1. Introducing the agent for Hitachi TrueCopy/Hewlett-Packard XP Continuous Access
    1. About the agent for Hitachi TrueCopy/HP-XP Continuous Access
    2. Supported software
    3. Supported hardware
    4. Typical Hitachi TrueCopy/Hewlett-Packard XP Continuous Access setup in a VCS cluster
    5. Hitachi TrueCopy/Hewlett-Packard XP Continuous Access agent functions
      1. About the Hitachi TrueCopy/Hewlett-Packard XP Continuous Access agent's online function
  2. Configuring the agent for Hitachi TrueCopy/Hewlett-Packard XP Continuous Access
    1. Configuration concepts for the Hitachi TrueCopy/Hewlett-Packard XP Continuous Access agent
      1. Resource type definition for the Hitachi TrueCopy agent
      2. Attribute definitions for the TrueCopy/HP-XP-CA agent
        1. About the SplitTakeover attribute for the Hitachi TrueCopy agent
          1. SplitTakeover attribute = 0
          2. SplitTakeover attribute = 1
        2. About the FreezeSecondaryOnSplit attribute for the Hitachi TrueCopy agent
          1. FreezeSecondaryOnSplit attribute = 0
        3. About the HTC configuration parameters
        4. Special consideration for fence level NEVER
        5. Considerations for calculating the AllowAutoFailoverInterval attribute value
      3. Sample configuration for the TrueCopy/HP-XP-CA agent
    2. Before you configure the agent for Hitachi TrueCopy/Hewlett-Packard XP Continuous Access
      1. About cluster heartbeats
      2. About configuring system zones in replicated data clusters
      3. About preventing split-brain
    3. Configuring the agent for Hitachi TrueCopy/Hewlett-Packard XP Continuous Access
      1. Configuring the agent manually in a global cluster
      2. Configuring the agent manually in a replicated data cluster
  3. Testing VCS disaster recovery support with Hitachi TrueCopy/Hewlett-Packard XP Continuous Access
    1. How VCS recovers from various disasters in an HA/DR setup with Hitachi TrueCopy/Hewlett-Packard XP Continuous Access
      1. Failure scenarios in global clusters
      2. Failure scenarios in replicated data clusters
      3. Replication link / Application failure scenarios
    2. Testing the global service group migration
    3. Testing disaster recovery after host failure
    4. Testing disaster recovery after site failure
    5. Performing failback after a node failure or an application failure
    6. Performing failback after a site failure
  4. Setting up fire drill
    1. About fire drills
    2. About the HTCSnap agent
      1. HTCSnap agent functions
      2. Resource type definition for the HTCSnap agent
      3. Attribute definitions for the HTCSnap agent
      4. About the Snapshot attributes
      5. Sample configuration for a fire drill service group
    3. Additional considerations for running a fire drill
    4. Before you configure the fire drill service group
    5. Configuring the fire drill service group
      1. About the Fire Drill wizard
    6. Verifying a successful fire drill

Performing failback after a site failure

After a site failure at the primary site, the hosts and the storage at the primary site are down. VCS brings the global service group online at the secondary site, and the Hitachi TrueCopy/Hewlett-Packard XP Continuous Access agent write-enables the S-VOL devices.

The device state is SSWS.
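For example, you can confirm the device state from a node at the secondary site by using the Hitachi CCI pairdisplay command. This is a minimal check; the device group name htc_grp01 is a placeholder, so substitute the name from your configuration:

  pairdisplay -g htc_grp01 -fc

The S-VOL entries in the output show the SSWS status.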

Review the details on site failure and how VCS and the agent for Hitachi TrueCopy/Hewlett-Packard XP Continuous Access behave in response to the failure.

See Failure scenarios in global clusters.

See Failure scenarios in replicated data clusters.

When the hosts and the storage at the primary site are restarted and the replication link is restored, you can perform a failback of the global service group to the primary site.

To perform failback after a site failure in a global cluster

  1. Take the global service group offline at the secondary site. On a node at the secondary site, run the following command:
    hagrp -offline global_group -any
  2. Because the application has written data at the secondary site after the failover, resynchronize the primary site from the secondary site and reverse the P-VOL/S-VOL roles by running the pairresync-swaps action on the secondary site (see the command sketch after this procedure).

    After the resync is complete, the devices at the secondary site are P-VOL and the devices at the primary site are S-VOL. The device state is PAIR at both sites.

  3. Bring the global service group online at the primary site. On a node at the primary site, run the following command:
    hagrp -online global_group -any

    This swaps the P-VOL and S-VOL roles again, so the devices at the primary site are once again P-VOL.
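For reference, the complete global cluster failback sequence might look like the following sketch. The resource name htc_res, the device group htc_grp01, and the node name sec_node1 are assumptions; replace them with the names from your configuration:

  rem Step 1: take the global service group offline at the secondary site
  hagrp -offline global_group -any

  rem Step 2: run the pairresync-swaps action on a secondary-site node
  rem (htc_res is the HTC resource in the global service group)
  hares -action htc_res pairresync-swaps -sys sec_node1

  rem Verify that the device state is PAIR at both sites before you proceed
  pairdisplay -g htc_grp01 -fc

  rem Step 3: bring the global service group online at the primary site
  hagrp -online global_group -any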

To perform failback after a site failure in a replicated data cluster

  1. Take the service group offline at the secondary site. On a node at the secondary site, run the following command:
    hagrp -offline service_group -sys sys_name
  2. Because the application has written data at the secondary site after the failover, resynchronize the primary site from the secondary site and reverse the P-VOL/S-VOL roles by running the pairresync-swaps action on the secondary site (see the command sketch after this procedure).

    After the resync is complete, the devices at the secondary site are P-VOL and the devices at the primary site are S-VOL. The device state is PAIR at both sites.

  3. Bring the service group online at the primary site. On a node at the primary site, run the following command:
    hagrp -online service_group -sys sys_name

    This swaps the P-VOL and S-VOL roles again, so the devices at the primary site are once again P-VOL.
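A corresponding sketch for the replicated data cluster case follows. The resource name htc_res and the node names sec_node1 (secondary zone) and pri_node1 (primary zone) are assumptions; substitute the names from your configuration:

  rem Step 1: take the service group offline on the secondary-zone node
  hagrp -offline service_group -sys sec_node1

  rem Step 2: run the pairresync-swaps action on the secondary-zone node
  hares -action htc_res pairresync-swaps -sys sec_node1

  rem Step 3: bring the service group online on a primary-zone node
  hagrp -online service_group -sys pri_node1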