Dynamic Multi-Pathing 7.3.1 Administrator's Guide - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (7.3.1)
  1. Understanding DMP
    1. About Dynamic Multi-Pathing (DMP)
    2. How DMP works
      1. How DMP monitors I/O on paths
        1. Path failover mechanism
        2. Subpaths Failover Group (SFG)
        3. Low Impact Path Probing (LIPP)
        4. I/O throttling
      2. Load balancing
      3. DMP in a clustered environment
        1. About enabling or disabling controllers with shared disk groups
    3. Multi-controller ALUA support
    4. Multiple paths to disk arrays
    5. Device discovery
    6. Disk devices
    7. Disk device naming in DMP
      1. About operating system-based naming
      2. About enclosure-based naming
        1. Summary of enclosure-based naming
        2. Enclosure-based naming with the Array Volume Identifier (AVID) attribute
  2. Setting up DMP to manage native devices
    1. About setting up DMP to manage native devices
    2. Displaying the native multi-pathing configuration
    3. Migrating LVM volume groups to DMP
    4. Migrating to DMP from EMC PowerPath
    5. Migrating to DMP from Hitachi Data Link Manager (HDLM)
    6. Migrating to DMP from Linux Device Mapper Multipath
    7. Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
      1. Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
      2. Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
      3. Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
    8. Adding DMP devices to an existing LVM volume group or creating a new LVM volume group
    9. Removing DMP support for native devices
  3. Administering DMP
    1. About enabling and disabling I/O for controllers and storage processors
    2. About displaying DMP database information
    3. Displaying the paths to a disk
    4. Setting customized names for DMP nodes
    5. Administering DMP using the vxdmpadm utility
      1. Retrieving information about a DMP node
      2. Displaying consolidated information about the DMP nodes
      3. Displaying the members of a LUN group
      4. Displaying paths controlled by a DMP node, controller, enclosure, or array port
      5. Displaying information about controllers
      6. Displaying information about enclosures
      7. Displaying information about array ports
      8. User-friendly CLI outputs for ALUA arrays
      9. Displaying information about devices controlled by third-party drivers
      10. Displaying extended device attributes
      11. Suppressing or including devices from VxVM control
      12. Gathering and displaying I/O statistics
        1. Displaying cumulative I/O statistics
        2. Displaying statistics for queued or erroneous I/Os
        3. Examples of using the vxdmpadm iostat command
      13. Setting the attributes of the paths to an enclosure
      14. Displaying the redundancy level of a device or enclosure
      15. Specifying the minimum number of active paths
      16. Displaying the I/O policy
      17. Specifying the I/O policy
        1. Scheduling I/O on the paths of an Asymmetric Active/Active or an ALUA array
        2. Example of applying load balancing in a SAN
      18. Disabling I/O for paths, controllers, array ports, or DMP nodes
      19. Enabling I/O for paths, controllers, array ports, or DMP nodes
      20. Renaming an enclosure
      21. Configuring the response to I/O failures
      22. Configuring the I/O throttling mechanism
      23. Configuring Subpaths Failover Groups (SFG)
      24. Configuring Low Impact Path Probing (LIPP)
      25. Displaying recovery option values
      26. Configuring DMP path restoration policies
      27. Stopping the DMP path restoration thread
      28. Displaying the status of the DMP path restoration thread
      29. Configuring Array Policy Modules
  4. Administering disks
    1. About disk management
    2. Discovering and configuring newly added disk devices
      1. Partial device discovery
      2. About discovering disks and dynamically adding disk arrays
        1. How DMP claims devices
        2. Disk categories
        3. Adding DMP support for a new disk array
        4. Enabling discovery of new disk arrays
      3. About third-party driver coexistence
      4. How to administer the Device Discovery Layer
        1. Listing all the devices including iSCSI
        2. Listing all the Host Bus Adapters including iSCSI
        3. Listing the ports configured on a Host Bus Adapter
        4. Listing the targets configured from a Host Bus Adapter or a port
        5. Listing the devices configured from a Host Bus Adapter and target
        6. Getting or setting the iSCSI operational parameters
        7. Listing all supported disk arrays
        8. Excluding support for a disk array library
        9. Re-including support for an excluded disk array library
        10. Listing excluded disk arrays
        11. Listing disks claimed in the DISKS category
        12. Displaying details about an Array Support Library
        13. Adding unsupported disk arrays to the DISKS category
        14. Removing disks from the DISKS category
        15. Foreign devices
    3. Changing the disk device naming scheme
      1. Displaying the disk-naming scheme
      2. Regenerating persistent device names
      3. Changing device naming for enclosures controlled by third-party drivers
    4. Discovering the association between enclosure-based disk names and OS-based disk names
  5. Dynamic Reconfiguration of devices
    1. About online Dynamic Reconfiguration
    2. Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
      1. Removing LUNs dynamically from an existing target ID
      2. Adding new LUNs dynamically to a target ID
      3. Replacing LUNs dynamically from an existing target ID
      4. Replacing a host bus adapter online
    3. Manually reconfiguring a LUN online that is under DMP control
      1. Overview of manually reconfiguring a LUN
      2. Manually removing LUNs dynamically from an existing target ID
      3. Manually adding new LUNs dynamically to a new target ID
      4. About detecting target ID reuse if the operating system device tree is not cleaned up
      5. Scanning an operating system device tree after adding or removing LUNs
      6. Manually cleaning up the operating system device tree after removing LUNs
    4. Changing the characteristics of a LUN from the array side
    5. Upgrading the array controller firmware online
    6. Reformatting NVMe devices manually
  6. Event monitoring
    1. About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
    2. Fabric Monitoring and proactive error detection
    3. Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
    4. DMP event logging
    5. Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
  7. Performance monitoring and tuning
    1. About tuning Dynamic Multi-Pathing (DMP) with templates
    2. DMP tuning templates
    3. Example DMP tuning template
    4. Tuning a DMP host with a configuration attribute template
    5. Managing the DMP configuration files
    6. Resetting the DMP tunable parameters and attributes to the default values
    7. DMP tunable parameters and attributes that are supported for templates
    8. DMP tunable parameters
  8. Appendix A. DMP troubleshooting
    1. Recovering from errors when you exclude or include paths to DMP
    2. Downgrading the array support
  9. Appendix B. Reference
    1. Command completion for Veritas commands

How DMP works

Dynamic Multi-Pathing (DMP) provides greater availability, reliability, and performance by using its path failover and load balancing features. These features are available for multiported disk arrays from various vendors.

Disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array. DMP can also differentiate between different enclosures of a supported array that are connected to the same host system.
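As an illustration, the devices and paths that DMP has discovered can be displayed from the command line. The `vxdisk` and `vxdmpadm` utilities below are the standard DMP administration commands; the DMP node name used here is a hypothetical example, and the exact output columns vary by release:

```shell
# List all disk devices known to DMP, with their enclosure-based names:
vxdisk list

# Show every physical path behind one DMP node
# (the node name emc_clariion0_89 is an example only):
vxdmpadm getsubpaths dmpnodename=emc_clariion0_89
```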

The multi-pathing policy that DMP uses depends on the characteristics of the disk array.

DMP supports the following standard array types:

Table: DMP standard array types

Active/Active (A/A)
    Allows several paths to be used concurrently for I/O. Such arrays allow DMP to provide greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the LUNs. If one path fails, DMP automatically routes I/O over the other available paths.

Asymmetric Active/Active (A/A-A)
    A/A-A or Asymmetric Active/Active arrays can be accessed through secondary storage paths with little performance degradation. The behavior is similar to ALUA, except that A/A-A arrays do not support the SCSI commands that an ALUA array supports.

Asymmetric Logical Unit Access (ALUA)
    DMP supports all variants of ALUA.

Active/Passive (A/P)
    Allows access to its LUNs (logical units; real disks or virtual disks created using hardware) via the primary (active) path on a single controller (also known as an access port or a storage processor) during normal operation.

    In implicit failover mode (or autotrespass mode), an A/P array automatically fails over by scheduling I/O to the secondary (passive) path on a separate controller if the primary path fails. The passive port is not used for I/O until the active port fails. In A/P arrays, path failover can occur for a single LUN if I/O fails on the primary path.

    This array mode supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

Active/Passive in explicit failover mode or non-autotrespass mode (A/PF)
    The appropriate command must be issued to the array to make the LUNs fail over to the secondary path.

    This array mode supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

Active/Passive with LUN group failover (A/PG)
    For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary controller and the secondary controller are each connected to a separate group of LUNs. If a single LUN in the primary controller's LUN group fails, all LUNs in that group fail over to the secondary controller.

    This array mode supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

An array policy module (APM) may define additional array types to DMP, beyond the standard types, for the arrays that it supports.
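To see which of these array types DMP has associated with each attached array, you can list the enclosures. The command below is a real `vxdmpadm` keyword; the idea that the array type appears as a column in its output is accurate in general, though the precise layout depends on the product version:

```shell
# List all claimed enclosures; the ARRAY_TYPE column reports
# A/A, A/A-A, ALUA, A/P, A/PF, or A/PG for each one.
vxdmpadm listenclosure all
```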

Dynamic Multi-Pathing uses DMP metanodes (DMP nodes) to access disk devices connected to the system. For each disk in a supported array, DMP maps one node to the set of paths that are connected to the disk. Additionally, DMP associates the appropriate multi-pathing policy for the disk array with the node.

For disks in an unsupported array, DMP maps a separate node to each path that is connected to a disk. The raw and block devices for the nodes are created in the directories /dev/vx/rdmp and /dev/vx/dmp respectively.

Figure: How DMP represents multiple physical paths to a disk as one node shows how DMP sets up a node for a disk in a supported disk array.

Figure: How DMP represents multiple physical paths to a disk as one node

DMP implements a disk device naming scheme that allows you to recognize to which array a disk belongs.

Figure: Example of multi-pathing for a disk enclosure in a SAN environment shows an example where two paths, sdf and sdm, exist to a single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.
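Continuing the figure's example, the mapping from the DMP node back to its physical paths could be inspected as follows. This is a sketch: `enc0_0`, `sdf`, and `sdm` are the example names from the figure, and the output format varies by release:

```shell
# List the subpaths (sdf and sdm in this example) behind the DMP node enc0_0:
vxdmpadm getsubpaths dmpnodename=enc0_0

# The block and raw device nodes for enc0_0 live under the DMP directories:
ls -l /dev/vx/dmp/enc0_0 /dev/vx/rdmp/enc0_0
```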

Figure: Example of multi-pathing for a disk enclosure in a SAN environment