InfoScale™ 9.0 Dynamic Multi-Pathing Administrator's Guide - Solaris

Product(s): InfoScale & Storage Foundation (9.0)
Platform: Solaris
  1. Understanding DMP
    1. About Dynamic Multi-Pathing (DMP)
    2. How DMP works
      1. How DMP monitors I/O on paths
        1. Path failover mechanism
        2. Subpaths Failover Group (SFG)
        3. Low Impact Path Probing (LIPP)
        4. I/O throttling
      2. Load balancing
      3. Dynamic Reconfiguration
      4. DMP support for the ZFS root pool
      5. About booting from DMP devices
      6. DMP in a clustered environment
        1. About enabling or disabling controllers with shared disk groups
    3. Multi-controller ALUA support
    4. Multiple paths to disk arrays
    5. Device discovery
    6. Disk devices
    7. Disk device naming in DMP
      1. About operating system-based naming
      2. About enclosure-based naming
        1. Summary of enclosure-based naming
        2. Enclosure based naming with the Array Volume Identifier (AVID) attribute
  2. Setting up DMP to manage native devices
    1. About setting up DMP to manage native devices
    2. Displaying the native multi-pathing configuration
    3. Migrating ZFS pools to DMP
    4. Migrating to DMP from EMC PowerPath
    5. Migrating to DMP from Hitachi Data Link Manager (HDLM)
    6. Migrating to DMP from Solaris Multiplexed I/O (MPxIO)
    7. Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
      1. Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
      2. Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
      3. Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
    8. Enabling and disabling DMP support for the ZFS root pool
    9. Adding DMP devices to an existing ZFS pool or creating a new ZFS pool
    10. Removing DMP support for native devices
  3. Administering DMP
    1. About enabling and disabling I/O for controllers and storage processors
    2. About displaying DMP database information
    3. Displaying the paths to a disk
    4. Setting customized names for DMP nodes
    5. Managing DMP devices for the ZFS root pool
      1. Configuring a mirror for the ZFS root pool using a DMP device
      2. Updating the boot device settings
      3. Using DMP devices as swap devices or dump devices
      4. Cloning the boot environment with DMP
      5. Creating a snapshot of an existing boot environment
    6. Administering DMP using the vxdmpadm utility
      1. Retrieving information about a DMP node
      2. Displaying consolidated information about the DMP nodes
      3. Displaying the members of a LUN group
      4. Displaying paths controlled by a DMP node, controller, enclosure, or array port
      5. Displaying information about controllers
      6. Displaying information about enclosures
      7. Displaying information about array ports
      8. User-friendly CLI outputs for ALUA arrays
      9. Displaying information about devices controlled by third-party drivers
      10. Displaying extended device attributes
      11. Suppressing or including devices from VxVM control
      12. Gathering and displaying I/O statistics
        1. Displaying cumulative I/O statistics
        2. Displaying statistics for queued or erroneous I/Os
        3. Examples of using the vxdmpadm iostat command
      13. Setting the attributes of the paths to an enclosure
      14. Displaying the redundancy level of a device or enclosure
      15. Specifying the minimum number of active paths
      16. Displaying the I/O policy
      17. Specifying the I/O policy
        1. Scheduling I/O on the paths of an Asymmetric Active/Active or an ALUA array
        2. Example of applying load balancing in a SAN
      18. Disabling I/O for paths, controllers, array ports, or DMP nodes
      19. Enabling I/O for paths, controllers, array ports, or DMP nodes
      20. Renaming an enclosure
      21. Configuring the response to I/O failures
      22. Configuring the I/O throttling mechanism
      23. Configuring Subpaths Failover Groups (SFG)
      24. Configuring Low Impact Path Probing (LIPP)
      25. Displaying recovery option values
      26. Configuring DMP path restoration policies
      27. Stopping the DMP path restoration thread
      28. Displaying the status of the DMP path restoration thread
      29. Configuring Array Policy Modules
      30. Configuring latency threshold tunable for metro/geo array
  4. Administering disks
    1. About disk management
    2. Discovering and configuring newly added disk devices
      1. Partial device discovery
      2. About discovering disks and dynamically adding disk arrays
        1. How DMP claims devices
        2. Disk categories
        3. Adding DMP support for a new disk array
        4. Enabling discovery of new disk arrays
      3. About third-party driver coexistence
      4. How to administer the Device Discovery Layer
        1. Listing all the devices including iSCSI
        2. Listing all the Host Bus Adapters including iSCSI
        3. Listing the ports configured on a Host Bus Adapter
        4. Listing the targets configured from a Host Bus Adapter or a port
        5. Listing the devices configured from a Host Bus Adapter and target
        6. Getting or setting the iSCSI operational parameters
        7. Listing all supported disk arrays
        8. Excluding support for a disk array library
        9. Re-including support for an excluded disk array library
        10. Listing excluded disk arrays
        11. Listing disks claimed in the DISKS category
        12. Displaying details about an Array Support Library
        13. Adding unsupported disk arrays to the DISKS category
        14. Removing disks from the DISKS category
        15. Foreign devices
    3. VxVM coexistence with ZFS
    4. Changing the disk device naming scheme
      1. Displaying the disk-naming scheme
      2. Regenerating persistent device names
      3. Changing device naming for enclosures controlled by third-party drivers
      4. Simple or nopriv disks with enclosure-based naming
        1. Removing the error state for simple or nopriv disks in the boot disk group
        2. Removing the error state for simple or nopriv disks in non-boot disk groups
    5. Discovering the association between enclosure-based disk names and OS-based disk names
  5. Dynamic Reconfiguration of devices
    1. About online Dynamic Reconfiguration
    2. About the DMPDR utility
      1. Using the DMPDR utility to reconfigure the LUNs associated with a server
    3. Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
      1. Removing LUNs dynamically from an existing target ID
      2. Adding new LUNs dynamically to a target ID
      3. Replacing LUNs dynamically from an existing target ID
      4. Replacing a host bus adapter online
    4. Manually reconfiguring a LUN online that is under DMP control
      1. Overview of manually reconfiguring a LUN
      2. Manually removing LUNs dynamically from an existing target ID
      3. Manually adding new LUNs dynamically to a new target ID
      4. About detecting target ID reuse if the operating system device tree is not cleaned up
      5. Scanning an operating system device tree after adding or removing LUNs
      6. Manually cleaning up the operating system device tree after removing LUNs
      7. Manually replacing a host bus adapter on an M5000 server
    5. Changing the characteristics of a LUN from the array side
    6. Upgrading the array controller firmware online
  6. Event monitoring
    1. About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
    2. Fabric Monitoring and proactive error detection
    3. Dynamic Multi-Pathing (DMP) automated device discovery
    4. Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
    5. DMP event logging
    6. Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
  7. Performance monitoring and tuning
    1. About tuning Dynamic Multi-Pathing (DMP) with templates
    2. DMP tuning templates
    3. Example DMP tuning template
    4. Tuning a DMP host with a configuration attribute template
    5. Managing the DMP configuration files
    6. Resetting the DMP tunable parameters and attributes to the default values
    7. DMP tunable parameters and attributes that are supported for templates
    8. DMP tunable parameters
  8. Appendix A. DMP troubleshooting
    1. Displaying extended attributes after upgrading to DMP 9.0
    2. Recovering from errors when you exclude or include paths to DMP
    3. Downgrading the array support
  9. Appendix B. Reference
    1. Command completion for InfoScale commands

Manually replacing a host bus adapter on an M5000 server

This section describes how to replace an online host bus adapter (HBA) when DMP is managing multi-pathing in a Cluster File System (CFS) cluster. Note that the HBA World Wide Port Name (WWPN) changes when the HBA is replaced. The prerequisites for replacing an online host bus adapter are:

  • A single-node or multi-node CFS or RAC cluster (a quick cluster-state check is sketched after this list).

  • I/O running on the CFS file system.

  • An M5000 server with at least two HBAs in separate PCIe slots, running the recommended Solaris patch level for HBA replacement.
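Before you begin the replacement, you can confirm that the node is an active member of the cluster. The following is a minimal sketch; the vxdctl -c mode command reports the cluster state, and the output shown here is illustrative:

  # vxdctl -c mode
  mode: enabled: cluster active - MASTER
  master: m5000sb0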

To replace an online host bus adapter on an M5000 server

  1. Identify the HBAs on the M5000 server. For example, to identify Emulex HBAs, enter the following command (a sketch that maps the device paths in this output to controller numbers follows the output):
    # /usr/platform/sun4u/sbin/prtdiag -v | grep emlx
    00 PCIe 0 2, fc20, 10df 119, 0, 0 okay 4, 4 SUNW,emlxs-pci10df,fc20 LPe 11002-S /pci@0,600000/pci@0/pci@9/SUNW,emlxs@0
    00 PCIe 0 2, fc20, 10df 119, 0, 1 okay 4, 4 SUNW,emlxs-pci10df,fc20 LPe 11002-S /pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1
    00 PCIe 3 2, fc20, 10df 2, 0, 0 okay 4, 4 SUNW,emlxs-pci10df,fc20 LPe 11002-S /pci@3,700000/SUNW,emlxs@0
    00 PCIe 3 2, fc20, 10df 2, 0, 1 okay 4, 4 SUNW,emlxs-pci10df,fc20 LPe 11002-S /pci@3,700000/SUNW,emlxs@0,1
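
     The device paths that prtdiag reports can be mapped to the controller numbers (c#) that later steps pass to cfgadm and vxdmpadm. This is a minimal sketch; the controller numbers c2 and c3 are illustrative, as is the elided link metadata:

     # ls -l /dev/cfg
     lrwxrwxrwx ... c2 -> ../../devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0/fp@0,0:fc
     lrwxrwxrwx ... c3 -> ../../devices/pci@3,700000/SUNW,emlxs@0/fp@0,0:fc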
  2. Using the cfgadm command, identify the HBA that you want to replace and its WWPN(s).

    To identify the HBA, enter the following:

    # cfgadm -al | grep -i fibre 
    iou#0-pci#1 fibre/hp connected configured ok
    iou#0-pci#4 fibre/hp connected configured ok

    To list all HBAs, enter the following:

    # luxadm -e port
     /devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0/fp@0,0:devctl     NOT CONNECTED
     /devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0:devctl   CONNECTED
     /devices/pci@3,700000/SUNW,emlxs@0/fp@0,0:devctl                 NOT CONNECTED
     /devices/pci@3,700000/SUNW,emlxs@0,1/fp@0,0:devctl               CONNECTED

     To dump the port map for the selected HBA and obtain the WWPNs, enter the following:

     # luxadm -e dump_map /devices/pci@0,600000/pci@0/pci@9/SUNW,emlxs@0,1/fp@0,0:devctl
     0     304700   0          203600a0b847900c 200600a0b847900c 0x0  (Disk device)
     1     30a800   0          20220002ac00065f 2ff70002ac00065f 0x0  (Disk device)
     2     30a900   0          21220002ac00065f 2ff70002ac00065f 0x0  (Disk device)
     3     560500   0          10000000c97c3c2f 20000000c97c3c2f 0x1f (Unknown Type)
     4     560700   0          10000000c97c9557 20000000c97c9557 0x1f (Unknown Type)
     5     560b00   0          10000000c97c34b5 20000000c97c34b5 0x1f (Unknown Type)
     6     560900   0          10000000c973149f 20000000c973149f 0x1f (Unknown Type,Host Bus Adapter)

     Alternatively, you can run the fcinfo hba-port Solaris command to obtain the WWPN(s) for the HBA ports.
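
     The following is a minimal sketch of fcinfo usage; the WWPN, device name, and abbreviated field list shown are illustrative:

     # fcinfo hba-port
     HBA Port WWN: 10000000c97c3c2f
             OS Device Name: /dev/cfg/c2
             State: online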

  3. Ensure you have a compatible spare HBA for hot-swap.
  4. Stop the I/O operations on the HBA port(s) and disable the DMP subpath(s) for the HBA that you want to replace.
    # vxdmpadm disable ctlr=ctlr#
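
     To find the controller name to pass to vxdmpadm, you can list the controllers that DMP sees. This is a minimal sketch; the controller names, enclosure type, and enclosure name shown are illustrative:

     # vxdmpadm listctlr all
     CTLR-NAME       ENCLR-TYPE      STATE      ENCLR-NAME
     =====================================================
     c2              EMC             ENABLED    emc0
     c3              EMC             ENABLED    emc0

     # vxdmpadm disable ctlr=c3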
  5. Dynamically unconfigure the HBA in the PCIe slot using the cfgadm command.
    # cfgadm -c unconfigure iou#0-pci#1

     Check the console messages to determine whether the cfgadm command succeeded. If the command is unsuccessful, troubleshoot using the server hardware documentation, check the Solaris 11 patch level recommended for dynamic reconfiguration operations, and contact Sun support for further assistance.

    console messages
    Oct 24 16:21:44 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
    card is removed from the slot iou 0-pci 1
  6. Verify that the HBA card that you unconfigured in step 5 is no longer in the configuration. Enter the following command:
    # cfgadm -al | grep -i fibre
    iou#0-pci#4 fibre/hp connected configured ok
  7. Mark the fiber cable(s).
  8. Remove the fiber cable(s) and the HBA that you must replace.

    For more information, see the HBA replacement procedures in SPARC Enterprise M4000/M5000/M8000/M9000 Servers Dynamic Reconfiguration (DR) User's Guide.

  9. Replace the HBA with a new compatible HBA of similar type in the same slot. The reinserted card shows up as follows:
    console messages
    iou#0-pci#1 unknown disconnected unconfigured unknown
  10. Bring the replaced HBA back into the configuration. Enter the following:
    # cfgadm -c configure iou#0-pci#1
    console messages
    Oct 24 16:21:57 m5000sb0 pcihp: NOTICE: pcihp (pxb_plx2):
    card is inserted in the slot iou#0-pci#1 (pci dev 0)
  11. Verify that the reinserted HBA is in the configuration. Enter the following:
    # cfgadm -al | grep -i fibre
    iou#0-pci#1 fibre/hp connected configured ok <====
    iou#0-pci#4 fibre/hp connected configured ok
  12. Modify fabric zoning to include the replaced HBA WWPN(s).
  13. Enable LUN security on storage for the new WWPN(s).
  14. Perform an operating system device scan to re-discover the LUNs. Enter the following:
    # cfgadm -c configure c3
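
     To confirm that the LUNs are visible on the rescanned controller, you can list the FCP devices behind it. This is a sketch, assuming that the replaced HBA corresponds to controller c3:

     # cfgadm -al -o show_FCP_dev c3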
  15. Clean up the device tree for old LUNs. Enter the following:
    # devfsadm -Cv

     Note:

     Replacing an HBA sometimes creates new devices. Perform the LUN cleanup only when new devices have been created.

  16. If DMP does not show a ghost path for the removed HBA path, enable the path by using the vxdmpadm command. This triggers a device scan for the subpath(s) of that HBA. Enter the following:
    # vxdmpadm enable ctlr=ctlr#
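
     After you enable the controller, you can confirm that its subpaths are back in the ENABLED state. A minimal sketch, assuming controller c3:

     # vxdmpadm getsubpaths ctlr=c3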
  17. Verify that I/O operations are scheduled on that path. If I/O operations are running correctly on all paths, the dynamic HBA replacement operation is complete. One way to check the I/O is sketched below.
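
     One way to confirm that I/O is being scheduled across the restored paths is the DMP iostat facility. This is a sketch, assuming controller c3 and that statistics gathering has been started:

     # vxdmpadm iostat start
     # vxdmpadm iostat show ctlr=c3 interval=5 count=2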