Veritas InfoScale™ 7.4.1 Virtualization Guide - AIX
Last Published: 2019-02-01
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: AIX
Migrating from PowerPath to DMP on a Virtual I/O server for a dual-VIOS configuration
The following example procedure illustrates a migration from PowerPath to DMP on the Virtual I/O server, in a configuration with two VIO servers.
Example configuration values:
Managed System: dmpviosp6
VIO server1: dmpvios1
VIO server2: dmpvios2
VIO clients: dmpvioc1
SAN LUNs: EMC Clariion array
Current multi-pathing solution on VIO server: EMC PowerPath
To migrate dmpviosp6 from PowerPath to DMP
1. Before migrating, back up the Virtual I/O server so that you can revert the system if any issues arise.
See the IBM website for information about backing up the Virtual I/O server.
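For example, the backupios command in the restricted padmin shell can write a mksysb image of the VIO server to a file. This is only a sketch; the target path is an illustration, so choose a location with enough free space:
dmpvios1$ backupios -file /home/padmin/dmpvios1_backup -mksysb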
2. Shut down all of the VIO clients that are serviced by the VIO server.
dmpvioc1$ halt
3. Log in to the VIO server partition. Use the following command to access the non-restricted root shell. All subsequent commands in this procedure must be invoked from the non-restricted shell.
$ oem_setup_env
4. The following command shows the lsmap output before migrating the PowerPath VTD devices to DMP:
dmpvios1$ /usr/ios/cli/ioscli lsmap -all
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- --------------------
vhost0          U9117.MMA.0686502-V2-C11      0x00000004

VTD                   P0
Status                Available
LUN                   0x8100000000000000
Backing device        hdiskpower0
Physloc               U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4003403700000000

VTD                   P1
Status                Available
LUN                   0x8200000000000000
Backing device        hdiskpower1
Physloc               U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L400240C100000000

VTD                   P2
Status                Available
LUN                   0x8300000000000000
Backing device        hdiskpower2
Physloc               U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4002409A00000000
5. Unconfigure all VTD devices from all virtual adapters on the system:
dmpvios1$ rmdev -p vhost0
P0 Defined
P1 Defined
P2 Defined
Repeat this step for all other virtual adapters.
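If the system has many virtual SCSI server adapters, a small loop in the non-restricted root shell can unconfigure the VTDs on each of them. This is only a sketch; it assumes the adapters follow the usual vhostN naming:
dmpvios1$ for v in $(lsdev -Cc adapter | awk '$1 ~ /^vhost/ {print $1}'); do rmdev -p "$v"; done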
6. Migrate the devices from PowerPath to DMP.
Unmount the file systems and vary off the volume groups that reside on the PowerPath devices.
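If any file systems reside on those volume groups, a loop like the following unmounts them first. lsvgfs lists the file systems that belong to a volume group; brunovg is the example volume group in this configuration:
dmpvios1$ for fs in $(lsvgfs brunovg); do umount "$fs"; done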
Display the volume groups (vgs) in the configuration:
dmpvios1$ lsvg
rootvg
brunovg
dmpvios1$ lsvg -p brunovg
brunovg:
PV_NAME          PV STATE     TOTAL PPs    FREE PPs     FREE DISTRIBUTION
hdiskpower3      active       511          501          103..92..102..102..102
Use the varyoffvg command on all affected vgs:
dmpvios1$ varyoffvg brunovg
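To confirm that the volume group is no longer active, list the varied-on volume groups; brunovg should not appear in the output:
dmpvios1$ lsvg -o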
Unmanage the EMC Clariion array from PowerPath control:
# powermt unmanage class=clariion
hdiskpower0 deleted
hdiskpower1 deleted
hdiskpower2 deleted
hdiskpower3 deleted
7. Reboot VIO server1:
dmpvios1$ reboot
8. After VIO server1 reboots, verify that all of the existing volume groups and MPIO VTDs on VIO server1 have been successfully migrated to DMP.
dmpvios1$ lsvg -p brunovg
brunovg:
PV_NAME          PV STATE     TOTAL PPs    FREE PPs     FREE DISTRIBUTION
emc_clari0_138   active       511          501          103..92..102..102..102
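As an optional check, vxdmpadm can display the paths behind the DMP device that now backs the volume group. The device name emc_clari0_138 is taken from the output above:
dmpvios1$ vxdmpadm getsubpaths dmpnodename=emc_clari0_138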
Verify the mappings of the LUNs on the migrated volume groups:
dmpvios1$ lsmap -all
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- --------------------
vhost0          U9117.MMA.0686502-V2-C11      0x00000000

VTD                   P0
Status                Available
LUN                   0x8100000000000000
Backing device        emc_clari0_130
Physloc

VTD                   P1
Status                Available
LUN                   0x8200000000000000
Backing device        emc_clari0_136
Physloc

VTD                   P2
Status                Available
LUN                   0x8300000000000000
Backing device        emc_clari0_137
Physloc
9. Repeat step 1 through step 8 for VIO server2.
10. Start all of the VIO clients.
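If the client partitions are managed from the HMC command line, they can be activated with chsysstate. This is only a sketch; the profile name default_profile is an assumption for this example:
hmc$ chsysstate -r lpar -m dmpviosp6 -o on -n dmpvioc1 -f default_profile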