InfoScale™ 9.0 Dynamic Multi-Pathing Administrator's Guide - Solaris

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Solaris
  1. Understanding DMP
    1. About Dynamic Multi-Pathing (DMP)
    2. How DMP works
      1. How DMP monitors I/O on paths
        1. Path failover mechanism
        2. Subpaths Failover Group (SFG)
        3. Low Impact Path Probing (LIPP)
        4. I/O throttling
      2. Load balancing
      3. Dynamic Reconfiguration
      4. DMP support for the ZFS root pool
      5. About booting from DMP devices
      6. DMP in a clustered environment
        1. About enabling or disabling controllers with shared disk groups
    3. Multi-controller ALUA support
    4. Multiple paths to disk arrays
    5. Device discovery
    6. Disk devices
    7. Disk device naming in DMP
      1. About operating system-based naming
      2. About enclosure-based naming
        1. Summary of enclosure-based naming
        2. Enclosure based naming with the Array Volume Identifier (AVID) attribute
  2. Setting up DMP to manage native devices
    1. About setting up DMP to manage native devices
    2. Displaying the native multi-pathing configuration
    3. Migrating ZFS pools to DMP
    4. Migrating to DMP from EMC PowerPath
    5. Migrating to DMP from Hitachi Data Link Manager (HDLM)
    6. Migrating to DMP from Solaris Multiplexed I/O (MPxIO)
    7. Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)
      1. Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)
      2. Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks
      3. Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices
    8. Enabling and disabling DMP support for the ZFS root pool
    9. Adding DMP devices to an existing ZFS pool or creating a new ZFS pool
    10. Removing DMP support for native devices
  3. Administering DMP
    1. About enabling and disabling I/O for controllers and storage processors
    2. About displaying DMP database information
    3. Displaying the paths to a disk
    4. Setting customized names for DMP nodes
    5. Managing DMP devices for the ZFS root pool
      1. Configuring a mirror for the ZFS root pool using a DMP device
      2. Updating the boot device settings
      3. Using DMP devices as swap devices or dump devices
      4. Cloning the boot environment with DMP
      5. Creating a snapshot of an existing boot environment
    6. Administering DMP using the vxdmpadm utility
      1. Retrieving information about a DMP node
      2. Displaying consolidated information about the DMP nodes
      3. Displaying the members of a LUN group
      4. Displaying paths controlled by a DMP node, controller, enclosure, or array port
      5. Displaying information about controllers
      6. Displaying information about enclosures
      7. Displaying information about array ports
      8. User-friendly CLI outputs for ALUA arrays
      9. Displaying information about devices controlled by third-party drivers
      10. Displaying extended device attributes
      11. Suppressing or including devices from VxVM control
      12. Gathering and displaying I/O statistics
        1. Displaying cumulative I/O statistics
        2. Displaying statistics for queued or erroneous I/Os
        3. Examples of using the vxdmpadm iostat command
      13. Setting the attributes of the paths to an enclosure
      14. Displaying the redundancy level of a device or enclosure
      15. Specifying the minimum number of active paths
      16. Displaying the I/O policy
      17. Specifying the I/O policy
        1. Scheduling I/O on the paths of an Asymmetric Active/Active or an ALUA array
        2. Example of applying load balancing in a SAN
      18. Disabling I/O for paths, controllers, array ports, or DMP nodes
      19. Enabling I/O for paths, controllers, array ports, or DMP nodes
      20. Renaming an enclosure
      21. Configuring the response to I/O failures
      22. Configuring the I/O throttling mechanism
      23. Configuring Subpaths Failover Groups (SFG)
      24. Configuring Low Impact Path Probing (LIPP)
      25. Displaying recovery option values
      26. Configuring DMP path restoration policies
      27. Stopping the DMP path restoration thread
      28. Displaying the status of the DMP path restoration thread
      29. Configuring Array Policy Modules
      30. Configuring latency threshold tunable for metro/geo array
  4. Administering disks
    1. About disk management
    2. Discovering and configuring newly added disk devices
      1. Partial device discovery
      2. About discovering disks and dynamically adding disk arrays
        1. How DMP claims devices
        2. Disk categories
        3. Adding DMP support for a new disk array
        4. Enabling discovery of new disk arrays
      3. About third-party driver coexistence
      4. How to administer the Device Discovery Layer
        1. Listing all the devices including iSCSI
        2. Listing all the Host Bus Adapters including iSCSI
        3. Listing the ports configured on a Host Bus Adapter
        4. Listing the targets configured from a Host Bus Adapter or a port
        5. Listing the devices configured from a Host Bus Adapter and target
        6. Getting or setting the iSCSI operational parameters
        7. Listing all supported disk arrays
        8. Excluding support for a disk array library
        9. Re-including support for an excluded disk array library
        10. Listing excluded disk arrays
        11. Listing disks claimed in the DISKS category
        12. Displaying details about an Array Support Library
        13. Adding unsupported disk arrays to the DISKS category
        14. Removing disks from the DISKS category
        15. Foreign devices
    3. VxVM coexistence with ZFS
    4. Changing the disk device naming scheme
      1. Displaying the disk-naming scheme
      2. Regenerating persistent device names
      3. Changing device naming for enclosures controlled by third-party drivers
      4. Simple or nopriv disks with enclosure-based naming
        1. Removing the error state for simple or nopriv disks in the boot disk group
        2. Removing the error state for simple or nopriv disks in non-boot disk groups
    5. Discovering the association between enclosure-based disk names and OS-based disk names
  5. Dynamic Reconfiguration of devices
    1. About online Dynamic Reconfiguration
    2. About the DMPDR utility
      1. Using the DMPDR utility to reconfigure the LUNs associated with a server
    3. Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
      1. Removing LUNs dynamically from an existing target ID
      2. Adding new LUNs dynamically to a target ID
      3. Replacing LUNs dynamically from an existing target ID
      4. Replacing a host bus adapter online
    4. Manually reconfiguring a LUN online that is under DMP control
      1. Overview of manually reconfiguring a LUN
      2. Manually removing LUNs dynamically from an existing target ID
      3. Manually adding new LUNs dynamically to a new target ID
      4. About detecting target ID reuse if the operating system device tree is not cleaned up
      5. Scanning an operating system device tree after adding or removing LUNs
      6. Manually cleaning up the operating system device tree after removing LUNs
      7. Manually replacing a host bus adapter on an M5000 server
    5. Changing the characteristics of a LUN from the array side
    6. Upgrading the array controller firmware online
  6. Event monitoring
    1. About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd)
    2. Fabric Monitoring and proactive error detection
    3. Dynamic Multi-Pathing (DMP) automated device discovery
    4. Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology
    5. DMP event logging
    6. Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon
  7. Performance monitoring and tuning
    1. About tuning Dynamic Multi-Pathing (DMP) with templates
    2. DMP tuning templates
    3. Example DMP tuning template
    4. Tuning a DMP host with a configuration attribute template
    5. Managing the DMP configuration files
    6. Resetting the DMP tunable parameters and attributes to the default values
    7. DMP tunable parameters and attributes that are supported for templates
    8. DMP tunable parameters
  8. Appendix A. DMP troubleshooting
    1. Displaying extended attributes after upgrading to DMP 9.0
    2. Recovering from errors when you exclude or include paths to DMP
    3. Downgrading the array support
  9. Appendix B. Reference
    1. Command completion for InfoScale commands

Configuring a mirror for the ZFS root pool using a DMP device

After the root pool is under DMP control, you can add any DMP device as a mirror to the ZFS root pool. You can attach or detach the DMP device using zpool commands.

To replace a disk of the root pool, use the following procedure to add the new device as a mirror to the ZFS root pool. After the new device is resilvered, you can detach the original device.
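The attach, wait for resilver, detach flow can be sketched as a short shell sequence. The resilver_in_progress helper below is illustrative only (it is not a DMP or ZFS command); it simply checks captured zpool status output for an active resilver, and the device names are the ones used in this section's examples.

```shell
# Illustrative helper: succeeds while the supplied `zpool status`
# output reports a resilver in progress.
resilver_in_progress() {
  printf '%s\n' "$1" | grep -q 'resilver in progress'
}

# Sketch of the replacement flow (run only on a live system):
#   zpool attach rpool hitachi_vsp0_00f4s0 hitachi_vsp0_00f3s0
#   while resilver_in_progress "$(zpool status rpool)"; do sleep 30; done
#   zpool detach rpool hitachi_vsp0_00f4s0
```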

The following examples show attaching the DMP device hitachi_vsp0_00f3s0 to the ZFS root pool.

To configure a mirror for the ZFS root pool using a DMP device:

  1. Make sure the dmp_native_support tunable is set to on.
    # vxdmpadm gettune dmp_native_support
    Tunable                    Current Value Default Value
    -------------------------- ------------- ---------------
    dmp_native_support         on            off

    If the dmp_native_support tunable is not on, you must enable DMP support for native devices.

    See About setting up DMP to manage native devices.

  2. View the status of the root pool using the following command:
    # zpool status rpool
      pool: rpool
     state: ONLINE
      scan: none requested
    
    config:
    
            NAME                 STATE     READ WRITE CKSUM
            rpool                ONLINE       0     0     0
            hitachi_vsp0_00f4s0  ONLINE       0     0     0
  3. Use the format command or the fmthard command to format the partition table on the DMP device that you want to add to the root pool. Create partitions identical to those of the original device. In this example, the new device hitachi_vsp0_00f3 is formatted to have the same partitions as the original device hitachi_vsp0_00f4.
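One common way to copy the partition layout, shown here as a sketch rather than a mandated step, is to read the source label with prtvtoc and apply it with fmthard. This assumes the conventional Solaris whole-disk slice 2 and the device names used in this example.

```shell
# Copy the VTOC label from the original device to the new device.
# s2 is assumed to be the conventional whole-disk slice.
prtvtoc /dev/vx/rdmp/hitachi_vsp0_00f4s2 | fmthard -s - /dev/vx/rdmp/hitachi_vsp0_00f3s2
```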
  4. Attach the DMP device hitachi_vsp0_00f3 to the root pool.
    # zpool attach rpool hitachi_vsp0_00f4s0 hitachi_vsp0_00f3s0
  5. Wait until the resilvering operation completes before you reboot the system.
    # zpool status rpool
      pool: rpool
     state: DEGRADED
    status: One or more devices is currently being resilvered. 
            The pool will continue to function in a degraded 
            state.
    action: Wait for the resilver to complete.
            Run 'zpool status -v' to see device specific details.
      scan: resilver in progress since Fri Feb  8 05:06:26 2013
        10.6G scanned out of 20.0G at 143M/s, 0h1m to go
        10.6G resilvered, 53.04% done
    config:
    
     NAME                      STATE    READ WRITE CKSUM
       rpool                   DEGRADED    0     0     0
         mirror-0              DEGRADED    0     0     0
           hitachi_vsp0_00f4s0 ONLINE      0     0     0
           hitachi_vsp0_00f3s0 DEGRADED    0     0     0 (resilvering)

    For the system to be bootable with the mirror disk, update the eeprom variable boot-device with the paths of the mirrored DMP device.

    See Updating the boot device settings.

    You can perform these steps while the resilvering is in progress.
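As a sketch of the boot-device update on SPARC, you can display and set the OpenBoot boot-device variable with eeprom; the device path shown here is hypothetical, and the procedure referenced above gives the details for obtaining the real path of the mirrored DMP device.

```shell
# Display the current boot-device list (SPARC OpenBoot):
eeprom boot-device

# Prepend the mirror's device path (hypothetical path shown):
#   eeprom boot-device="/pci@0,0/scsi@1/disk@1,0:a /pci@0,0/scsi@1/disk@0,0:a"
```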

  6. After the resilvering operation completes, verify that the system can boot from the mirror disk.
    # zpool status rpool
      pool: rpool
     state: ONLINE
      scan: resilvered 20.0G in 0h10m with 0 errors on Wed Mar  
            6 05:02:36 2013
    config:
     
     NAME                       STATE   READ WRITE CKSUM
       rpool                    ONLINE     0     0     0
         mirror-0               ONLINE     0     0     0
           hitachi_vsp0_00f4s0  ONLINE     0     0     0
           hitachi_vsp0_00f3s0  ONLINE     0     0     0

  7. Update the ZFS bootloader for the new mirror disk.
    # bootadm install-bootloader hitachi_vsp0_00f3s0

    or

    # /sbin/installboot -F zfs -f \
    /usr/platform/`uname -m`/lib/fs/zfs/bootblk /dev/vx/rdmp/hitachi_vsp0_00f3s0