Veritas InfoScale™ Operations Manager 7.3.1 Add-ons User's Guide
- Section I. VCS Utilities Add-on 7.3.1
- Section II. Distribution Manager Add-on 7.3.1
- Section III. Fabric Insight Add-on 7.3.1
- Section IV. Patch Installer Add-on 7.3.1
- Introduction to Patch Installer Add-on
- Using Patch Installer Add-on
- Section V. Storage Insight Add-on 7.3.1
- Performing the deep discovery of enclosures
- About Storage Insight Add-on
- Adding HITACHI storage enclosures for deep discovery
- Editing the deep discovery configuration for an enclosure
- Monitoring the usage of thin pools
- Monitoring storage array metering data
- Managing LUN classifications
- Appendix A. Enclosure configuration prerequisites
- HITACHI enclosure configuration prerequisites
- EMC Symmetrix storage array configuration prerequisites
- Device setup requirements for EMC Symmetrix arrays
- IBM XIV enclosure configuration prerequisites
- NetApp storage enclosure configuration prerequisites
- EMC CLARiiON storage enclosures configuration prerequisites
- Hewlett-Packard Enterprise Virtual Array (HP EVA) configuration prerequisites
- IBM System Storage DS enclosure configuration prerequisites
- IBM SVC enclosure configuration prerequisites
- EMC Celerra enclosure configuration prerequisites
- EMC VNX storage enclosure configuration prerequisites
- EMC VPLEX storage enclosure configuration prerequisites
- Appendix B. Commands used by Management Server for deep discovery of enclosures
- Performing the deep discovery of enclosures
- Section VI. Storage Insight SDK Add-on 7.3.1
- Overview of Storage Insight SDK Add-on 7.3.1
- Managing Veritas InfoScale Operations Manager Storage Insight plug-ins
- About creating Storage Insight plug-in
- About discovery script
- About the enclosure discovery command output
- Creating a Storage Insight plug-in
- Editing a Storage Insight plug-in
- Testing a Storage Insight plug-in
- About creating Storage Insight plug-in
- Section VII. Storage Provisioning and Enclosure Migration Add-on 7.3.1
- Provisioning storage
- Creating a storage template using VxFS file systems
- Migrating volumes
- Provisioning storage
- Section VIII. Veritas HA Plug-in for VMware vSphere Web Client
- Introduction to Veritas HA Plug-in for vSphere Web Client
- Installation and uninstallation of Veritas HA Plug-in for vSphere Web Client
- Configurations for Veritas HA Plug-in for vSphere Web Client
- Section IX. Application Migration Add-on
- Introduction to Application Migration Add-on
- Creating and managing an application migration plan
- Understanding application migration operations
Understanding the Rehearse operation
In this operation, you can bring the application online on the target cluster and test it before performing the actual migration.
In this operation, the sync status of all volumes is checked, after which the cluster configuration of the selected service group is discovered on the source and translated to the target. The mirror disk group is then detached from the source cluster nodes, and endian conversion is performed on all volumes of the mirror disk group on the target cluster nodes.
After the endian conversion is done, the service groups are brought online on the target cluster. After you ensure that the application is running correctly on the target, the service groups are taken offline and the target cluster configuration is removed.
Before the cluster configuration is removed, a backup of the configuration is taken on the first node of the target cluster in the /etc/VRTSvcs/conf/config directory. The backup file name has the following format:
main.cf_plan_name.date.time
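As an illustration of the format above, the following sketch constructs such a file name; the plan name, date, and time components shown are hypothetical, not values produced by the product.

```shell
# Hypothetical example: construct a backup file name in the
# main.cf_plan_name.date.time format described above.
plan_name="SAPMigration"   # assumed plan name
date_part="Mar21"          # assumed date component
time_part="14.05"          # assumed time component
backup_file="main.cf_${plan_name}.${date_part}.${time_part}"
echo "${backup_file}"
# → main.cf_SAPMigration.Mar21.14.05
```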
The volumes in the mirror disk groups are then reattached to the corresponding volumes in the source disk group.
The operation aborts if any volume between the source and mirror disk groups is not completely synchronized.
Initially, all RVGs are checked to ensure that they are 100% synchronized. If the application is writing to the volumes, data synchronization to the secondary site might still be in progress, and the Rehearse operation does not proceed. In such a scenario, you can reduce or stop application writes so that the data remains synchronized.
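A minimal sketch of such a synchronization check, assuming the VVR vradmin repstatus command and its Data status field; the disk group and RVG names are hypothetical, and a sample output string stands in for the live command output.

```shell
# Hypothetical RVG sync check before Rehearse. In a live environment you
# would capture the output of: vradmin -g appdg repstatus app_rvg
# Here a sample output string stands in for the command output.
repstatus_output="Replicated Data Set: app_rvg
Data status:        consistent, up-to-date
Replication status: replicating (connected)"

if printf '%s\n' "$repstatus_output" | grep -q "consistent, up-to-date"; then
    echo "RVG is 100% synchronized; Rehearse can proceed"
else
    echo "Data sync in progress; reduce or stop application writes" >&2
fi
```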
Prerequisite steps are performed on the target cluster nodes to enable the creation of space-optimized snapshots of volumes, such as preparing the volumes and creating the cache volume and the cache object. The IP, DiskGroup/CVMVolDg, and RVGLogowner resources are then removed from the target cluster for the disk groups that are being migrated as part of the plan. This ensures that no duplicate resources appear for an entity during cluster translation.
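The snapshot prerequisites described above can be sketched as the following Volume Manager command sequence; the disk group, volume, cache volume, and cache object names are assumptions, and the commands are shown as a dry run (printed, not executed) for illustration.

```shell
# Dry-run sketch of space-optimized snapshot prerequisites on a target node.
# appdg, appvol, app_cachevol, and app_cacheobj are hypothetical names.
run() { echo "+ $*"; }   # print each command instead of executing it

run vxsnap -g appdg prepare appvol            # prepare the volume for snapshots
run vxassist -g appdg make app_cachevol 500m  # create the cache volume
run vxmake -g appdg cache app_cacheobj cachevolname=app_cachevol autogrow=on
run vxcache -g appdg start app_cacheobj       # start the cache object
```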
The cluster configuration of the selected service group and its dependencies is then discovered on the source and translated to the target. The file systems on all mounted volumes of the disk groups that are part of the replication are briefly frozen on the source, and space-optimized snapshots of these volumes are taken on the target with the help of VVR In-Band Control Messaging (vxibc). After the snapshots are taken, endian conversion is performed on the snapshots and the service groups are brought online on the target. When the service groups are brought online, the snapshot volumes get mounted on the target cluster. Applications started on the target can write to these snapshot volumes until the cache volume fills up. After you ensure that the application is running correctly on the target, the service groups are taken offline and the target cluster configuration is removed.
Before the cluster configuration is removed, the configuration is backed up on the first node of the target cluster in the /etc/VRTSvcs/conf/config directory. The backup file name has the following format:
main.cf_plan_name.date.time
The disk groups are then imported on the target cluster node and the snapshots are destroyed. The IP, DiskGroup/CVMVolDg, and RVGLogowner resources are re-created on the target, as required, and then brought online.