Dynamic Multi-Pathing 7.3.1 Administrator's Guide - Linux

Specifying the I/O policy

You can use the vxdmpadm setattr command to change the Dynamic Multi-Pathing (DMP) I/O policy for distributing I/O load across multiple paths to a disk array or enclosure. You can set policies for an enclosure (for example, HDS01), for all enclosures of a particular type (such as HDS), or for all enclosures of a particular array type (such as A/A for Active/Active, or A/P for Active/Passive).
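
Before you change the policy, you can display the current setting with the vxdmpadm getattr command. For example, to display the I/O policy that is in effect for the enclosure enc1 (a placeholder enclosure name used here for illustration):

# vxdmpadm getattr enclosure enc1 iopolicy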

Note:

I/O policies are persistent across reboots of the system.

Table: DMP I/O policies describes the I/O policies that may be set.

Table: DMP I/O policies


adaptive

This policy attempts to maximize overall I/O throughput to and from the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time. For example, I/O to and from a database may exhibit both long transfers (table scans) and short transfers (random lookups). The policy is also useful for a SAN environment where different paths may have different numbers of hops. No further configuration is possible as this policy is automatically managed by DMP.

In this example, the adaptive I/O policy is set for the enclosure enc1:

# vxdmpadm setattr enclosure enc1 \
  iopolicy=adaptive

adaptiveminq

Similar to the adaptive policy, except that I/O is scheduled according to the length of the I/O queue on each path. The path with the shortest queue is assigned the highest priority.
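
For example, the following command sets the adaptiveminq I/O policy for the enclosure enc1 (an illustrative enclosure name):

# vxdmpadm setattr enclosure enc1 \
  iopolicy=adaptiveminq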

balanced [partitionsize=size]

This policy is designed to optimize the use of caching in disk drives and RAID controllers. The size of the cache typically ranges from 120KB to 500KB or more, depending on the characteristics of the particular hardware. During normal operation, the disks (or LUNs) are logically divided into a number of regions (or partitions), and I/O to and from a given region is sent on only one of the active paths. Should that path fail, the workload is automatically redistributed across the remaining paths.

You can use the partitionsize attribute to specify the size of the partition. The partition size in blocks is adjustable in powers of 2, from 2 up to 2^31. A value that is not a power of 2 is silently rounded down to the nearest acceptable value.

The default partition size is 512 blocks (256k). Specifying a partition size of 0 is equivalent to specifying this default.

The default value can be changed by adjusting the value of the dmp_pathswitch_blks_shift tunable parameter.
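
As a sketch, you can view or change this tunable with the vxdmpadm gettune and settune commands. The value 9 used here assumes that the tunable is expressed as the integer exponent of a power of 2, so 9 corresponds to the default of 512 blocks; see the DMP tunable parameters section for the authoritative values:

# vxdmpadm gettune dmp_pathswitch_blks_shift
# vxdmpadm settune dmp_pathswitch_blks_shift=9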

Note:

The benefit of this policy is lost if the value is set larger than the cache size.

For example, the suggested partition size for a Hitachi HDS 9960 A/A array is from 32,768 to 131,072 blocks (16MB to 64MB) for an I/O activity pattern that consists mostly of sequential reads or writes.

The next example sets the balanced I/O policy with a partition size of 4096 blocks (2MB) on the enclosure enc0:

# vxdmpadm setattr enclosure enc0 \
  iopolicy=balanced partitionsize=4096

minimumq

This policy sends I/O on paths that have the minimum number of outstanding I/O requests in the queue for a LUN. No further configuration is possible as DMP automatically determines the path with the shortest queue.

The following example sets the I/O policy to minimumq for a JBOD:

# vxdmpadm setattr enclosure Disk \
  iopolicy=minimumq

This is the default I/O policy for all arrays.

priority

This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually. You can assign priorities to each path based on your knowledge of the configuration and performance characteristics of the available paths, and of other aspects of your system.

In this example, the I/O policy is set to priority for all SENA arrays:

# vxdmpadm setattr arrayname SENA \
  iopolicy=priority
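
After you set the priority policy, you assign priority values to individual paths by setting the preferred path type, as described in Setting the attributes of the paths to an enclosure. In this sketch, the path name sdk and the priority value 2 are illustrative:

# vxdmpadm setattr path sdk pathtype=preferred \
  priority=2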

round-robin

This policy shares I/O equally among the paths in a round-robin sequence. For example, if there are three paths, the first I/O request uses one path, the second uses a different path, the third is sent down the remaining path, the fourth goes down the first path, and so on. No further configuration is possible as this policy is automatically managed by DMP.

The next example sets the I/O policy to round-robin for all Active/Active arrays:

# vxdmpadm setattr arraytype A/A \
  iopolicy=round-robin

singleactive

This policy routes I/O down the single active path. This policy can be configured for A/P arrays with one active path per controller, where the other paths are used in case of failover. If configured for A/A arrays, there is no load balancing across the paths, and the alternate paths are only used to provide high availability (HA). If the current active path fails, I/O is switched to an alternate active path. No further configuration is possible as the single active path is selected by DMP.

The following example sets the I/O policy to singleactive for JBOD disks:

# vxdmpadm setattr arrayname Disk \
  iopolicy=singleactive