Dynamic Multi-Pathing 7.3.1 Administrator's Guide - Linux

Product(s): InfoScale & Storage Foundation (7.3.1)
  1. Understanding DMP
    1. About Dynamic Multi-Pathing (DMP)
    2. How DMP works
      1. How DMP monitors I/O on paths
        1. Path failover mechanism
        2. Subpaths Failover Group (SFG)
        3. Low Impact Path Probing (LIPP)
        4. I/O throttling
      2. Load balancing
      3. DMP in a clustered environment
        1. About enabling or disabling controllers with shared disk groups
    3. Multi-controller ALUA support
    4. Multiple paths to disk arrays
    5. Device discovery
    6. Disk devices
    7. Disk device naming in DMP
      1. About operating system-based naming
      2. About enclosure-based naming
        1. Summary of enclosure-based naming
        2. Enclosure based naming with the Array Volume Identifier (AVID) attribute
  2. Setting up DMP to manage native devices
    1. About setting up DMP to manage native devices
    2. Displaying the native multi-pathing configuration
    3. Migrating LVM volume groups to DMP
    4. Migrating to DMP from EMC PowerPath
    5. Migrating to DMP from Hitachi Data Link Manager (HDLM)
    6. Migrating to DMP from Linux Device Mapper Multipath
    7. Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
      1. Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
      2. Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
      3. Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
    8. Adding DMP devices to an existing LVM volume group or creating a new LVM volume group
    9. Removing DMP support for native devices
  3. Administering DMP
    1. About enabling and disabling I/O for controllers and storage processors
    2. About displaying DMP database information
    3. Displaying the paths to a disk
    4. Setting customized names for DMP nodes
    5. Administering DMP using the vxdmpadm utility
      1. Retrieving information about a DMP node
      2. Displaying consolidated information about the DMP nodes
      3. Displaying the members of a LUN group
      4. Displaying paths controlled by a DMP node, controller, enclosure, or array port
      5. Displaying information about controllers
      6. Displaying information about enclosures
      7. Displaying information about array ports
      8. User-friendly CLI outputs for ALUA arrays
      9. Displaying information about devices controlled by third-party drivers
      10. Displaying extended device attributes
      11. Suppressing or including devices from VxVM control
      12. Gathering and displaying I/O statistics
        1. Displaying cumulative I/O statistics
        2. Displaying statistics for queued or erroneous I/Os
        3. Examples of using the vxdmpadm iostat command
      13. Setting the attributes of the paths to an enclosure
      14. Displaying the redundancy level of a device or enclosure
      15. Specifying the minimum number of active paths
      16. Displaying the I/O policy
      17. Specifying the I/O policy
        1. Scheduling I/O on the paths of an Asymmetric Active/Active or an ALUA array
        2. Example of applying load balancing in a SAN
      18. Disabling I/O for paths, controllers, array ports, or DMP nodes
      19. Enabling I/O for paths, controllers, array ports, or DMP nodes
      20. Renaming an enclosure
      21. Configuring the response to I/O failures
      22. Configuring the I/O throttling mechanism
      23. Configuring Subpaths Failover Groups (SFG)
      24. Configuring Low Impact Path Probing (LIPP)
      25. Displaying recovery option values
      26. Configuring DMP path restoration policies
      27. Stopping the DMP path restoration thread
      28. Displaying the status of the DMP path restoration thread
      29. Configuring Array Policy Modules
  4. Administering disks
    1. About disk management
    2. Discovering and configuring newly added disk devices
      1. Partial device discovery
      2. About discovering disks and dynamically adding disk arrays
        1. How DMP claims devices
        2. Disk categories
        3. Adding DMP support for a new disk array
        4. Enabling discovery of new disk arrays
      3. About third-party driver coexistence
      4. How to administer the Device Discovery Layer
        1. Listing all the devices including iSCSI
        2. Listing all the Host Bus Adapters including iSCSI
        3. Listing the ports configured on a Host Bus Adapter
        4. Listing the targets configured from a Host Bus Adapter or a port
        5. Listing the devices configured from a Host Bus Adapter and target
        6. Getting or setting the iSCSI operational parameters
        7. Listing all supported disk arrays
        8. Excluding support for a disk array library
        9. Re-including support for an excluded disk array library
        10. Listing excluded disk arrays
        11. Listing disks claimed in the DISKS category
        12. Displaying details about an Array Support Library
        13. Adding unsupported disk arrays to the DISKS category
        14. Removing disks from the DISKS category
        15. Foreign devices
    3. Changing the disk device naming scheme
      1. Displaying the disk-naming scheme
      2. Regenerating persistent device names
      3. Changing device naming for enclosures controlled by third-party drivers
    4. Discovering the association between enclosure-based disk names and OS-based disk names
  5. Dynamic Reconfiguration of devices
    1. About online Dynamic Reconfiguration
    2. Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
      1. Removing LUNs dynamically from an existing target ID
      2. Adding new LUNs dynamically to a target ID
      3. Replacing LUNs dynamically from an existing target ID
      4. Replacing a host bus adapter online
    3. Manually reconfiguring a LUN online that is under DMP control
      1. Overview of manually reconfiguring a LUN
      2. Manually removing LUNs dynamically from an existing target ID
      3. Manually adding new LUNs dynamically to a new target ID
      4. About detecting target ID reuse if the operating system device tree is not cleaned up
      5. Scanning an operating system device tree after adding or removing LUNs
      6. Manually cleaning up the operating system device tree after removing LUNs
    4. Changing the characteristics of a LUN from the array side
    5. Upgrading the array controller firmware online
    6. Reformatting NVMe devices manually
  6. Event monitoring
    1. About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
    2. Fabric Monitoring and proactive error detection
    3. Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
    4. DMP event logging
    5. Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
  7. Performance monitoring and tuning
    1. About tuning Dynamic Multi-Pathing (DMP) with templates
    2. DMP tuning templates
    3. Example DMP tuning template
    4. Tuning a DMP host with a configuration attribute template
    5. Managing the DMP configuration files
    6. Resetting the DMP tunable parameters and attributes to the default values
    7. DMP tunable parameters and attributes that are supported for templates
    8. DMP tunable parameters
  8. Appendix A. DMP troubleshooting
    1. Recovering from errors when you exclude or include paths to DMP
    2. Downgrading the array support
  9. Appendix B. Reference
    1. Command completion for Veritas commands

Adding unsupported disk arrays to the DISKS category

Disk arrays should be added as JBOD devices if no Array Support Library (ASL) is available for the array.

JBODs are assumed to be Active/Active (A/A) unless otherwise specified. If a suitable ASL is not available, an A/A-A, A/P, or A/PF array must be claimed as an Active/Passive (A/P) JBOD to prevent path delays and I/O failures. If a JBOD is ALUA-compliant, it is added as an ALUA array.
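
For example, a minimal sketch of claiming such an array as an A/P JBOD (the VID and PID here are placeholders; the procedure below shows how to obtain the real values for your array):

    # vxddladm addjbod vid=MYVENDOR pid=MYMODEL policy=ap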

Warning:

This procedure ensures that Dynamic Multi-Pathing (DMP) is set up correctly on an array that is not supported by Veritas Volume Manager (VxVM). Otherwise, VxVM treats the independent paths to the disks as separate devices, which can result in data corruption.

To add an unsupported disk array to the DISKS category

  1. Use the following command to identify the vendor ID and product ID of the disks in the array:
    # /etc/vx/diag.d/vxscsiinq device_name

    where device_name is the device name of one of the disks in the array. Note the values of the vendor ID (VID) and product ID (PID) in the output from this command. For Fujitsu disks, also note the number of characters in the serial number that is displayed. (An optional way to cross-check the VID and PID with standard Linux tools is sketched after this procedure.)

    The following example output shows that the vendor ID is SEAGATE and the product ID is ST318404LSUN18G.

    Vendor id (VID)     : SEAGATE
    Product id (PID)    : ST318404LSUN18G
    Revision            : 8507
    Serial Number       : 0025T0LA3H
  2. Stop all applications, such as databases, from accessing VxVM volumes that are configured on the array, and unmount all file systems and Storage Checkpoints that are configured on the array.
  3. If the array is of type A/A-A, A/P, or A/PF, configure it in autotrespass mode.
  4. Enter the following command to add a new JBOD category:
    # vxddladm addjbod vid=vendorid [pid=productid] \
    [serialnum=opcode/pagecode/offset/length] \
    [cabinetnum=opcode/pagecode/offset/length] [policy={aa|ap}]

    where vendorid and productid are the VID and PID values that you found in the previous step. For example, vendorid might be FUJITSU, IBM, or SEAGATE. For Fujitsu devices, you must also specify the number of characters in the serial number as the length argument (for example, 10); a sketch of such a command appears after this procedure. If the array is of type A/A-A, A/P, or A/PF, you must also specify the policy=ap attribute.

    Continuing the previous example, the command to define an array of disks of this type as a JBOD would be:

    # vxddladm addjbod vid=SEAGATE pid=ST318404LSUN18G
  5. Use the vxdctl enable command to bring the array under VxVM control.
    # vxdctl enable

  6. To verify that the array is now supported, enter the following command:
    # vxddladm listjbod

    The following is sample output from this command for the example array:

    VID      PID       SerialNum               CabinetNum              Policy
                       (Cmd/PageCode/off/len)  (Cmd/PageCode/off/len)
    ========================================================================
    SEAGATE  ALL PIDs  18/-1/36/12             18/-1/10/11             Disk
    SUN      SESS01    18/-1/36/12             18/-1/12/11             Disk
  7. To verify that the array is recognized, use the vxdmpadm listenclosure command as shown in the following sample output for the example array:
    # vxdmpadm listenclosure
    ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS    ARRAY_TYPE LUN_COUNT FIRMWARE
    =======================================================================
    Disk       Disk       DISKS     CONNECTED Disk       2         -

    The enclosure name and type for the array are both shown as Disk. You can use the vxdisk list command to display the disks in the array:

    # vxdisk list
    DEVICE       TYPE            DISK         GROUP        STATUS
    punr710vm04_disk_1 auto:none       -            -            online invalid
    punr710vm04_disk_2 auto:none       -            -            online invalid
    punr710vm04_disk_3 auto:none       -            -            online invalid
    punr710vm04_disk_4 auto:none       -            -            online invalid
    sda                auto:none       -            -            online invalid
    xiv0_9148          auto:none       -            -            online invalid thinrclm
    ...
  8. To verify that the DMP paths are recognized, use the vxdmpadm getdmpnode command as shown in the following sample output for the example array:
    # vxdmpadm getdmpnode enclosure=Disk
    NAME                 STATE        ENCLR-TYPE   PATHS  ENBL  DSBL  ENCLR-NAME
    ==============================================================================
    punr710vm04_disk_1   ENABLED      Disk         1      1     0     disk
    punr710vm04_disk_2   ENABLED      Disk         1      1     0     disk
    punr710vm04_disk_3   ENABLED      Disk         1      1     0     disk
    punr710vm04_disk_4   ENABLED      Disk         1      1     0     disk
    sda                  ENABLED      Disk         1      1     0     disk
    ...

    The output in this example shows that there is a single path to each disk in the array.

    For more information, enter the command vxddladm help addjbod.

    See the vxddladm(1M) manual page.

    See the vxdmpadm(1M) manual page.
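
As a cross-check for the vendor and product IDs in step 1, standard Linux tools report the same SCSI inquiry data. This optional sketch assumes that the lsscsi and sg3_utils packages are installed; the device name and output shown are illustrative:

    # lsscsi
    [2:0:0:0]  disk   SEAGATE  ST318404LSUN18G  8507  /dev/sdc
    # sg_inq /dev/sdc | grep -i identification
     Vendor identification: SEAGATE
     Product identification: ST318404LSUN18G

The vendor and product identification values should match the VID and PID that vxscsiinq displays.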
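
For the Fujitsu case in step 4, the serial-number length is passed as the length field of the serialnum argument. The following is a minimal sketch for a hypothetical Fujitsu model whose serial number is 10 characters long; the PID is a placeholder, and the opcode/pagecode/offset values (18/-1/36) are patterned on the defaults shown in the vxddladm listjbod output above:

    # vxddladm addjbod vid=FUJITSU pid=EXAMPLEPID serialnum=18/-1/36/10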