Dynamic Multi-Pathing 7.4.1 Administrator's Guide - AIX
Last Published: 2019-02-01
Product(s): InfoScale & Storage Foundation (7.4.1)
Platform: AIX
Migrating from PowerPath to DMP on a Virtual I/O server for a dual-VIOS configuration
The following example procedure illustrates a migration from PowerPath to DMP on the Virtual I/O server, in a configuration with two VIO servers.
Example configuration values:
Managed System: dmpviosp6
VIO server1: dmpvios1
VIO server2: dmpvios2
VIO clients: dmpvioc1
SAN LUNs: EMC Clariion array
Current multi-pathing solution on VIO server: EMC PowerPath
To migrate dmpviosp6 from PowerPath to DMP
- Before migrating, back up the Virtual I/O server so that you can revert the system if issues occur.
See the IBM website for information about backing up the Virtual I/O server.
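For example, a minimal backup sketch using the backupios command, run as padmin from the restricted shell (the target path /home/padmin/dmpvios1_backup is an assumption for illustration):
$ backupios -file /home/padmin/dmpvios1_backup -mksysb
The -mksysb flag creates a mksysb image of the VIO server; without it, backupios creates a nim_resources.tar package suitable for reinstallation through NIM.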
- Shut down all of the VIO clients that are serviced by the VIO Server.
dmpvioc1$ halt
- Log in to the VIO server partition. Use the following command to access the non-restricted root shell. All subsequent commands in this procedure must be invoked from the non-restricted shell.
$ oem_setup_env
- The following command shows the lsmap output before the PowerPath VTD devices are migrated to DMP:
dmpvios1$ /usr/ios/cli/ioscli lsmap -all
SVSA            Physloc                      Client Partition ID
--------------- ---------------------------- --------------------
vhost0          U9117.MMA.0686502-V2-C11     0x00000004

VTD             P0
Status          Available
LUN             0x8100000000000000
Backing device  hdiskpower0
Physloc         U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4003403700000000

VTD             P1
Status          Available
LUN             0x8200000000000000
Backing device  hdiskpower1
Physloc         U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L400240C100000000

VTD             P2
Status          Available
LUN             0x8300000000000000
Backing device  hdiskpower2
Physloc         U789D.001.DQD04AF-P1-C5-T1-W500507630813861A-L4002409A00000000
- Unconfigure all VTD devices from all virtual adapters on the system:
dmpvios1$ rmdev -p vhost0
P0 Defined
P1 Defined
P2 Defined
Repeat this step for all other virtual adapters.
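To cover every virtual adapter in one pass, a loop such as the following sketch can be run from the non-restricted root shell; it assumes that all virtual SCSI server adapters are named vhostN:
dmpvios1$ for v in $(lsdev -C -F name | grep '^vhost'); do rmdev -p "$v"; done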
- Migrate the devices from PowerPath to DMP.
Unmount the file systems and vary off the volume groups residing on the PowerPath devices.
Display the volume groups (vgs) in the configuration:
dmpvios1$ lsvg
rootvg
brunovg
dmpvios1$ lsvg -p brunovg
brunovg:
PV_NAME          PV STATE    TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdiskpower3      active      511         501         103..92..102..102..102
Use the varyoffvg command on all affected vgs:
dmpvios1$ varyoffvg brunovg
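If several volume groups reside on PowerPath devices, a loop over the active non-root groups can save repetition. This sketch assumes that every active volume group other than rootvg is backed by PowerPath devices:
dmpvios1$ for vg in $(lsvg -o | grep -v rootvg); do varyoffvg "$vg"; done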
Unmanage the EMC Clariion array from PowerPath control:
# powermt unmanage class=clariion
hdiskpower0 deleted
hdiskpower1 deleted
hdiskpower2 deleted
hdiskpower3 deleted
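To confirm that PowerPath no longer controls the Clariion devices before rebooting, you can list the remaining PowerPath devices; the unmanaged hdiskpower devices should no longer appear in the output:
# powermt display dev=all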
- Reboot VIO server1:
dmpvios1$ reboot
- After VIO server1 reboots, verify that all of the existing volume groups and MPIO VTDs on VIO server1 have been successfully migrated to DMP.
dmpvios1$ lsvg -p brunovg
brunovg:
PV_NAME          PV STATE    TOTAL PPs   FREE PPs    FREE DISTRIBUTION
emc_clari0_138   active      511         501         103..92..102..102..102
Verify the mappings of the LUNs on the migrated volume groups:
dmpvios1$ lsmap -all
SVSA            Physloc                      Client Partition ID
--------------- ---------------------------- --------------------
vhost0          U9117.MMA.0686502-V2-C11     0x00000000

VTD             P0
Status          Available
LUN             0x8100000000000000
Backing device  emc_clari0_130
Physloc

VTD             P1
Status          Available
LUN             0x8200000000000000
Backing device  emc_clari0_136
Physloc

VTD             P2
Status          Available
LUN             0x8300000000000000
Backing device  emc_clari0_137
Physloc
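As an additional check, DMP can report the subpaths behind each migrated device. This sketch uses the emc_clari0_138 node name from the example output above; substitute your own DMP node names:
dmpvios1$ /usr/sbin/vxdmpadm getsubpaths dmpnodename=emc_clari0_138
Each LUN should show all of its paths in the ENABLED state.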
- Repeat step 1 to step 8 for VIO server2.
- Start all of the VIO clients.
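If the clients are activated from the Hardware Management Console, a command along these lines can be used. The profile name default_profile is an assumption for illustration; replace it with the client's actual partition profile:
hscroot@hmc$ chsysstate -m dmpviosp6 -r lpar -o on -n dmpvioc1 -f default_profile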