Dynamic Multi-Pathing 7.3.1 Administrator's Guide - Linux

Product(s): InfoScale & Storage Foundation (7.3.1)
  1. Understanding DMP
    1. About Dynamic Multi-Pathing (DMP)
    2. How DMP works
      1. How DMP monitors I/O on paths
        1. Path failover mechanism
        2. Subpaths Failover Group (SFG)
        3. Low Impact Path Probing (LIPP)
        4. I/O throttling
      2. Load balancing
      3. DMP in a clustered environment
        1. About enabling or disabling controllers with shared disk groups
    3. Multi-controller ALUA support
    4. Multiple paths to disk arrays
    5. Device discovery
    6. Disk devices
    7. Disk device naming in DMP
      1. About operating system-based naming
      2. About enclosure-based naming
        1. Summary of enclosure-based naming
        2. Enclosure based naming with the Array Volume Identifier (AVID) attribute
  2. Setting up DMP to manage native devices
    1. About setting up DMP to manage native devices
    2. Displaying the native multi-pathing configuration
    3. Migrating LVM volume groups to DMP
    4. Migrating to DMP from EMC PowerPath
    5. Migrating to DMP from Hitachi Data Link Manager (HDLM)
    6. Migrating to DMP from Linux Device Mapper Multipath
    7. Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
      1. Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
      2. Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
      3. Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
    8. Adding DMP devices to an existing LVM volume group or creating a new LVM volume group
    9. Removing DMP support for native devices
  3. Administering DMP
    1. About enabling and disabling I/O for controllers and storage processors
    2. About displaying DMP database information
    3. Displaying the paths to a disk
    4. Setting customized names for DMP nodes
    5. Administering DMP using the vxdmpadm utility
      1. Retrieving information about a DMP node
      2. Displaying consolidated information about the DMP nodes
      3. Displaying the members of a LUN group
      4. Displaying paths controlled by a DMP node, controller, enclosure, or array port
      5. Displaying information about controllers
      6. Displaying information about enclosures
      7. Displaying information about array ports
      8. User-friendly CLI outputs for ALUA arrays
      9. Displaying information about devices controlled by third-party drivers
      10. Displaying extended device attributes
      11. Suppressing or including devices from VxVM control
      12. Gathering and displaying I/O statistics
        1. Displaying cumulative I/O statistics
        2. Displaying statistics for queued or erroneous I/Os
        3. Examples of using the vxdmpadm iostat command
      13. Setting the attributes of the paths to an enclosure
      14. Displaying the redundancy level of a device or enclosure
      15. Specifying the minimum number of active paths
      16. Displaying the I/O policy
      17. Specifying the I/O policy
        1. Scheduling I/O on the paths of an Asymmetric Active/Active or an ALUA array
        2. Example of applying load balancing in a SAN
      18. Disabling I/O for paths, controllers, array ports, or DMP nodes
      19. Enabling I/O for paths, controllers, array ports, or DMP nodes
      20. Renaming an enclosure
      21. Configuring the response to I/O failures
      22. Configuring the I/O throttling mechanism
      23. Configuring Subpaths Failover Groups (SFG)
      24. Configuring Low Impact Path Probing (LIPP)
      25. Displaying recovery option values
      26. Configuring DMP path restoration policies
      27. Stopping the DMP path restoration thread
      28. Displaying the status of the DMP path restoration thread
      29. Configuring Array Policy Modules
  4. Administering disks
    1. About disk management
    2. Discovering and configuring newly added disk devices
      1. Partial device discovery
      2. About discovering disks and dynamically adding disk arrays
        1. How DMP claims devices
        2. Disk categories
        3. Adding DMP support for a new disk array
        4. Enabling discovery of new disk arrays
      3. About third-party driver coexistence
      4. How to administer the Device Discovery Layer
        1. Listing all the devices including iSCSI
        2. Listing all the Host Bus Adapters including iSCSI
        3. Listing the ports configured on a Host Bus Adapter
        4. Listing the targets configured from a Host Bus Adapter or a port
        5. Listing the devices configured from a Host Bus Adapter and target
        6. Getting or setting the iSCSI operational parameters
        7. Listing all supported disk arrays
        8. Excluding support for a disk array library
        9. Re-including support for an excluded disk array library
        10. Listing excluded disk arrays
        11. Listing disks claimed in the DISKS category
        12. Displaying details about an Array Support Library
        13. Adding unsupported disk arrays to the DISKS category
        14. Removing disks from the DISKS category
        15. Foreign devices
    3. Changing the disk device naming scheme
      1. Displaying the disk-naming scheme
      2. Regenerating persistent device names
      3. Changing device naming for enclosures controlled by third-party drivers
    4. Discovering the association between enclosure-based disk names and OS-based disk names
  5. Dynamic Reconfiguration of devices
    1. About online Dynamic Reconfiguration
    2. Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
      1. Removing LUNs dynamically from an existing target ID
      2. Adding new LUNs dynamically to a target ID
      3. Replacing LUNs dynamically from an existing target ID
      4. Replacing a host bus adapter online
    3. Manually reconfiguring a LUN online that is under DMP control
      1. Overview of manually reconfiguring a LUN
      2. Manually removing LUNs dynamically from an existing target ID
      3. Manually adding new LUNs dynamically to a new target ID
      4. About detecting target ID reuse if the operating system device tree is not cleaned up
      5. Scanning an operating system device tree after adding or removing LUNs
      6. Manually cleaning up the operating system device tree after removing LUNs
    4. Changing the characteristics of a LUN from the array side
    5. Upgrading the array controller firmware online
    6. Reformatting NVMe devices manually
  6. Event monitoring
    1. About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
    2. Fabric Monitoring and proactive error detection
    3. Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
    4. DMP event logging
    5. Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
  7. Performance monitoring and tuning
    1. About tuning Dynamic Multi-Pathing (DMP) with templates
    2. DMP tuning templates
    3. Example DMP tuning template
    4. Tuning a DMP host with a configuration attribute template
    5. Managing the DMP configuration files
    6. Resetting the DMP tunable parameters and attributes to the default values
    7. DMP tunable parameters and attributes that are supported for templates
    8. DMP tunable parameters
  8. Appendix A. DMP troubleshooting
    1. Recovering from errors when you exclude or include paths to DMP
    2. Downgrading the array support
  9. Appendix B. Reference
    1. Command completion for Veritas commands

DMP tunable parameters

DMP provides various parameters that you can use to tune your environment.

Table: DMP parameters that are tunable shows the DMP parameters that can be tuned. You can set a tunable parameter online, without a reboot.
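As a minimal sketch of the general pattern, the following vxdmpadm commands list all of the tunable parameters with their current and default values, query a single tunable, and then change it online. The tunable dmp_log_level and the value 2 are used here only as an illustration.

  # vxdmpadm gettune
  # vxdmpadm gettune dmp_log_level
  # vxdmpadm settune dmp_log_level=2

Running vxdmpadm gettune with no argument displays every tunable in the following table, which is a convenient way to confirm a change.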

Table: DMP parameters that are tunable

dmp_cache_open

If this parameter is set to on, the first open of a device is cached. This caching enhances the performance of device discovery by minimizing the overhead that is caused by subsequent opens on the device. If this parameter is set to off, caching is not performed.

The default value is on.

dmp_daemon_count

The number of kernel threads that are available for servicing path error handling, path restoration, and other DMP administrative tasks.

The default number of threads is 10.

dmp_delayq_interval

How long DMP should wait before retrying I/O after an array fails over to a standby path. Some disk arrays are not capable of accepting I/O requests immediately after failover.

The default value is 15 seconds.

dmp_display_alua_states

For ALUA arrays, this tunable displays the asymmetric access state instead of PRIMARY or SECONDARY state in the PATH-TYPE[M] column.

The asymmetric access state can be:

  • Active/Optimized

  • Active/Non-optimized

  • Standby

  • Unavailable

  • TransitionInProgress

  • Offline

The default tunable value is on.
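For example, you can confirm the current setting and then view the access states that DMP reports for the paths of a DMP node. The node name below is a placeholder; substitute a name from your configuration.

  # vxdmpadm gettune dmp_display_alua_states
  # vxdmpadm getsubpaths dmpnodename=<dmpnode_name>

When the tunable is on, the PATH-TYPE[M] column of the getsubpaths output shows states such as Active/Optimized rather than PRIMARY or SECONDARY.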

dmp_fast_recovery

Whether DMP should try to obtain SCSI error information directly from the HBA interface. Setting the value to on can potentially provide faster error recovery, if the HBA interface supports the error enquiry feature. If this parameter is set to off, the HBA interface is not used.

The default setting is on.

dmp_health_time

DMP detects intermittently failing paths, and prevents I/O requests from being sent on them. The value of dmp_health_time represents the time in seconds for which a path must stay healthy. If a path's state changes back from enabled to disabled within this time period, DMP marks the path as intermittently failing, and does not re-enable the path for I/O until dmp_path_age seconds elapse.

The default value is 60 seconds.

A value of 0 prevents DMP from detecting intermittently failing paths.

dmp_log_level

The level of detail that is displayed for DMP console messages. The following level values are defined:

1 - Displays all DMP log messages that are critical.

2 - Displays level 1 messages plus messages that relate to path or disk addition or removal, SCSI errors, I/O errors, and DMP node migration.

3 - Displays level 1 and 2 messages plus messages that relate to path throttling, suspect path, idle path, and insane path logic.

4 - Displays level 1, 2, and 3 messages plus messages that relate to setting or changing attributes on a path and tunable-related changes.

5 or higher - Displays level 1, 2, 3, and 4 messages plus more verbose messages.

The default value is 1.
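For example, you might temporarily raise the logging detail while investigating a path problem and then return it to the default. The level 3 shown here is only an illustration.

  # vxdmpadm settune dmp_log_level=3
  # vxdmpadm settune dmp_log_level=1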

dmp_low_impact_probe

Determines whether path probing by the restore daemon is optimized. Set this tunable to on to enable optimization, or to off to disable it. Path probing is optimized only when the restore policy is check_disabled, or during the check_disabled phase of the check_periodic policy.

The default value is on.

dmp_lun_retry_timeout

Specifies a retry period for handling transient errors that are not handled by the HBA and the SCSI driver.

Specify the time in seconds.

In general, no such special handling is required. Therefore, the default value of the dmp_lun_retry_timeout tunable parameter is 30. When all paths to a disk fail, DMP fails the I/Os to the application. The paths are checked for connectivity only once.

In special cases when DMP needs to handle the transient errors, configure DMP to delay failing the I/Os to the application for a short interval. Set the dmp_lun_retry_timeout tunable parameter to a non-zero value to specify the interval. If all of the paths to the LUN fail and I/Os need to be serviced, then DMP probes the paths every five seconds for the specified interval. If the paths are restored within the interval, DMP detects this and retries the I/Os. DMP does not fail I/Os to a disk with all failed paths until the specified dmp_lun_retry_timeout interval or until the I/O succeeds on one of the paths, whichever happens first.
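For example, to have DMP probe the paths for 60 seconds before failing I/Os to the application, set the tunable to a non-zero value such as the following. The value 60 is only an illustration; choose an interval that suits your array's failover behavior.

  # vxdmpadm settune dmp_lun_retry_timeout=60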

dmp_monitor_fabric

Determines if DMP should register for HBA events from SNIA HBA APIs. These events improve the failover performance by proactively avoiding the I/O paths that have impending failure.

The default setting is off for releases before 5.0 that have been patched to support this DDL feature. The default setting is on for 5.0 and later releases.

dmp_monitor_ownership

Determines whether the ownership monitoring is enabled for ALUA arrays. When this tunable is set to on, DMP polls the devices for LUN ownership changes. The polling interval is specified by the dmp_restore_interval tunable. The default value is on.

When the dmp_monitor_ownership tunable is off, DMP does not poll the devices for LUN ownership changes.

dmp_native_support

Determines whether DMP will do multi-pathing for native devices.

Set the tunable to on to have DMP do multi-pathing for native devices.

When Dynamic Multi-Pathing is installed as a component of another Veritas InfoScale product, the default value is off.

When Dynamic Multi-Pathing is installed as a stand-alone product, the default value is on.
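For example, the following commands check whether native device support is currently enabled and then turn it on. Enabling this tunable changes how native devices are handled system-wide, so review Setting up DMP to manage native devices before changing it.

  # vxdmpadm gettune dmp_native_support
  # vxdmpadm settune dmp_native_support=on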

dmp_path_age

The time for which an intermittently failing path needs to be monitored as healthy before DMP again tries to schedule I/O requests on it.

The default value is 300 seconds.

A value of 0 prevents DMP from detecting intermittently failing paths.
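For example, to make DMP more tolerant of paths that flap briefly, you might lengthen both the health window (dmp_health_time, described earlier in this table) and the monitoring age together. The values shown are only illustrative.

  # vxdmpadm settune dmp_health_time=120
  # vxdmpadm settune dmp_path_age=600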

dmp_pathswitch_blks_shift

The default number of contiguous I/O blocks that are sent along a DMP path to an array before switching to the next available path. The value is expressed as the integer exponent of a power of 2; for example 9 represents 512 blocks.

The default value is 9. In this case, 512 blocks (256 KB) of contiguous I/O are sent over a DMP path before switching. For intelligent disk arrays with internal data caches, better throughput may be obtained by increasing the value of this tunable. For example, for the HDS 9960 A/A array, the optimal value is between 15 and 17 for an I/O activity pattern that consists mostly of sequential reads or writes.

This parameter only affects the behavior of the balanced I/O policy. A value of 0 disables multi-pathing for the policy unless the vxdmpadm command is used to specify a different partition size for an array.
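To illustrate the arithmetic: the default value of 9 corresponds to 2^9 = 512 blocks, or 256 KB of contiguous I/O per path; a value of 15 corresponds to 2^15 = 32768 blocks (16 MB), and 17 to 2^17 = 131072 blocks (64 MB). The following is a sketch of raising the value for a cached array, and of setting a per-enclosure partition size for the balanced policy instead. The enclosure name is a placeholder and the values are only examples.

  # vxdmpadm settune dmp_pathswitch_blks_shift=15
  # vxdmpadm setattr enclosure <enclosure_name> iopolicy=balanced partitionsize=4096

See Specifying the I/O policy.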

dmp_probe_idle_lun

If DMP statistics gathering is enabled, set this tunable to on (default) to have the DMP path restoration thread probe idle LUNs. Set this tunable to off to turn off this feature. (Idle LUNs are VM disks on which no I/O requests are scheduled.) The value of this tunable is only interpreted when DMP statistics gathering is enabled. Turning off statistics gathering also disables idle LUN probing.

The default value is on.

dmp_probe_threshold

If dmp_low_impact_probe is turned on, dmp_probe_threshold determines the number of paths to probe before DMP decides whether to change the state of the other paths in the same subpath failover group.

The default value is 5.

dmp_restore_cycles

If the DMP restore policy is check_periodic, the number of cycles after which the check_all policy is called.

The default value is 10.

See Configuring DMP path restoration policies.

dmp_restore_interval

The interval attribute specifies how often the path restoration thread examines the paths. Specify the time in seconds.

The default value is 300.

The value of this tunable can also be set using the vxdmpadm start restore command.

See Configuring DMP path restoration policies.

dmp_restore_policy

The DMP restore policy, which can be set to one of the following values:

  • check_all

  • check_alternate

  • check_disabled

  • check_periodic

The default value is check_disabled.

The value of this tunable can also be set using the vxdmpadm start restore command.

See Configuring DMP path restoration policies.
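For example, the restore policy and interval can be changed either with vxdmpadm settune or, as noted above, with the vxdmpadm start restore command. The values shown are only illustrative.

  # vxdmpadm settune dmp_restore_policy=check_all
  # vxdmpadm settune dmp_restore_interval=400
  # vxdmpadm start restore policy=check_all interval=400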

dmp_restore_state

If this parameter is set to enabled, the path restoration thread is enabled and can be started.

If this parameter is set to disabled, the path restoration thread is stopped and disabled.

If this parameter is set to stopped, the path restoration thread is stopped until the next device discovery cycle.

The default is enabled.

See Configuring DMP path restoration policies.

See Stopping the DMP path restoration thread.
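For example, the path restoration thread can be stopped and later restarted with the following commands (a sketch; see the sections referenced above for details).

  # vxdmpadm stop restore
  # vxdmpadm start restore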

dmp_scsi_timeout

Determines the timeout value to be set for any SCSI command that is sent via DMP. If the HBA does not receive a response for a SCSI command that it has sent to the device within the timeout period, the SCSI command is returned with a failure error code.

The default value is 20 seconds.

dmp_sfg_threshold

Determines the minimum number of paths that should be failed in a failover group before DMP starts suspecting other paths in the same failover group. A value of 0 disables the failover logic that is based on subpath failover groups.

The default value is 1.

dmp_stat_interval

The time interval between gathering DMP statistics.

The default and minimum values are both 1 second.