Veritas InfoScale™ 8.0.2 Disaster Recovery Implementation Guide - AIX
Configuring IBM PowerVM LPAR guest for disaster recovery
The IBM PowerVM LPAR is configured for disaster recovery by replicating the boot disk using replication methods such as Hitachi TrueCopy, EMC SRDF, IBM rootvg cloning technology, and so on. The network configuration for the LPAR on the primary site may not be effective on the secondary site if the two sites are on different IP subnets. To apply different network configurations on each site, you must make additional configuration changes to the LPAR resource.
To configure the LPAR for disaster recovery, configure VCS in the management LPARs on both sites with the GCO option. See the Cluster Server Administrator's Guide for more information about global clusters.
Perform the following steps to set up the LPAR guest (managed LPAR) for disaster recovery:
- On the primary and the secondary site, create the PowerVM LPAR guest using the Hardware Management Console (HMC) with the Ethernet and client Fibre Channel (FC) virtual adapter configurations.
Note:
The installed OS in the LPAR guest is replicated using IBM rootvg cloning technology or through the N_Port ID Virtualization (NPIV) based DR strategy.
- On the LPAR guest, copy and install the VRTSvcsnr fileset from the VCS installation media. This fileset installs the vcs-reconfig service in the LPAR guest. The service ensures that the site-specific network parameters are applied when the LPAR boots. You can install the VRTSvcsnr fileset by running the following commands:
# mkdir /<temp_dir>
# cp <media>/pkgs/VRTSvcsnr.bff /<temp_dir>
# cd /<temp_dir>
# installp -a -d VRTSvcsnr.bff VRTSvcsnr
- Create a VCS service group and add a VCS LPAR resource for the LPAR guest. Configure the DROpts attribute of the LPAR resource with the site-specific values for each of the following keys: IPAddress, Netmask, Gateway, DNSServers (nameserver), DNSSearchPath, Device, Domain, and HostName.
Set the ConfigureNetwork key of the DROpts attribute to 1 to make the changes effective. The LPAR agent does not apply the DROpts attribute to the guest LPAR if the value of the ConfigureNetwork key is 0. For more information about the DROpts attribute, see the Cluster Server Bundled Agents Reference Guide. A configuration sketch follows this procedure.
- (Optional) To perform rootvg replication using NPIV, map the boot disk LUN directly to the guest LPAR through NPIV, and replicate the source production rootvg LUN to the DR site using hardware technologies such as Hitachi TrueCopy, EMC SRDF, and so on. Then add the appropriate VCS replication resource to the LPAR DR service group. Examples of hardware replication agents are SRDF for EMC SRDF, HTC for Hitachi TrueCopy, MirrorView for EMC MirrorView, and so on. The VCS LPAR resource depends on the replication resource (see the sketch following the figure below).
For more information about the VCS replication agent that is used to configure the replication resource, see: https://sort.veritas.com/agents
The replication resource ensures that when the resource is online in a site, the underlying replicated devices are in primary mode and the remote devices are in secondary mode. Thus, when the LPAR resource is online, the underlying storage is always in read-write mode.
- Repeat step 1 through step 4 on the secondary site.
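For reference, the LPAR resource configuration from step 3 might look roughly like the following main.cf fragment on the primary-site cluster. This is a minimal sketch: the service group definition, the resource name, the LPARName value, and all network values are illustrative placeholders, any other attributes that the LPAR agent requires in your environment are omitted, and the secondary-site cluster carries its own site-specific DROpts values.

group lpar_dr_sg (
    SystemList = { mgmt_lpar1 = 0 }
    )

    LPAR lpar_res (
        LPARName = managed_lpar1
        DROpts = { ConfigureNetwork = 1,
                   IPAddress = "10.209.87.186",
                   Netmask = "255.255.252.0",
                   Gateway = "10.209.84.1",
                   DNSServers = "10.209.84.21",
                   DNSSearchPath = "example.com",
                   Device = "en0",
                   Domain = "example.com",
                   HostName = "managed_lpar1" }
        )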
Figure: Sample resource dependency diagram for NPIV-based rootvg replication using hardware replication technology
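In main.cf terms, the dependency shown in the figure could be expressed as in the following sketch. The replication resource appears here as an SRDF resource purely as an example; the resource type, its attributes (GrpName and its value are placeholders), and the resource names depend on the replication agent and the environment you actually use.

SRDF rootvg_rep (
    GrpName = lpar1_rootvg_grp
    )

    lpar_res requires rootvg_rep

With this dependency in place, VCS brings the replication resource online, which places the replicated rootvg devices in primary (read-write) mode, before it attempts to bring the LPAR resource online.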
When the LPAR is online, the LPAR agent creates a private VLAN (with VLAN ID 123) between the management LPAR and the managed LPAR. The VLAN is used to pass the network parameters specified in the DROpts attribute to the managed LPAR. When the managed LPAR boots, it starts the vcs-reconfig service, which requests the network configuration from the management LPAR. The network configuration is then sent back as part of the response over the same VLAN, and the vcs-reconfig service applies it by running the appropriate commands.