DMP inconsistency after creating a zpool storage pool on a DMP device in 5.1SP1, based on either an ALUA or Active-Passive array
Problem
Veritas Volume Manager (VxVM) 5.1SP1 introduces the facility to manage a ZFS zpool storage pool on a DMP device.
If different array modes are detected, i.e. ALUA and Active-Passive (A/P), this can break the DMP configuration and result in duplicate disk names being reported by VxVM.
This is due to an interoperability issue with a Solaris limitation, which is seen after running "vxdisk scandisks".
Error Message
The "vxdisk list" output should reveal just one dmpnode with a ZFS tag. However, running "vxdisk scandisks" after creating a storage pool on a DMP device results in a DMP inconsistency:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto:cdsdisk - - online
ibm_ds8x000_01ad auto:ZFS - - ZFS <<< should be just one device
ibm_ds8x000_01ae auto - - nolabel
ibm_ds8x000_01af auto - - nolabel
ibm_ds8x001_0 auto:ZFS - - ZFS <<< duplicate device
In the sample output below, the LUN should present 4 paths under a single dmpnode; instead, the device paths have split across 2 different dmpnodes:
# vxdisk list ibm_ds8x000_01ad
<snippet>
Multipathing information:
numpaths: 1
c3t500507630A084217d0 state=enabled
<snippet>
and:
# vxdisk list ibm_ds8x001_0
<snippet>
Multipathing information:
numpaths: 3
c3t500507630A034217d0 state=enabled
c2t500507630A034217d0 state=enabled
c2t500507630A084217d0 state=enabled
<snippet>
NOTE: DMP should not report 2 different dmpnodes for the same LUN.
Cause
The underlying cause relates to how Solaris manages OS device handles and disk labels.
ZFS-managed pools require an EFI disk label.
It should be noted that MPxIO does not exhibit the same issue, but only because of the way it suppresses paths.
Incident e2233195 is also relevant.
Solution
After creating a zpool storage pool on a DMP device, run format to stamp an EFI label onto all other paths of the dmpnode. Only then should a "vxdisk scandisks" command be run to rebuild the Volume Manager devices.
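The sequence can be sketched as below. This is a dry-run illustration: the run() helper only prints each command, and the device, path, and pool names are taken from the example that follows; adapt them to the system at hand and drop the helper to execute for real.

```shell
# Dry-run sketch of the recovery sequence; run() only prints each command.
# Device, path, and pool names are illustrative (from the example below).
run() { echo "$@"; }

# 1. Create the zpool on the DMP device (this EFI-labels one path only).
run zpool create -m none newpool2 /dev/vx/dmp/ibm_ds8x000_01ae

# 2. Stamp an EFI label on every other path of the same dmpnode.
#    format -e prompts interactively for the label type; choose EFI.
for p in c2t500507630A084217d1s2 c3t500507630A084217d1s2 c3t500507630A034217d1s2
do
    run format -e "/dev/rdsk/$p"
done

# 3. Only then rebuild the VxVM device view.
run vxdisk scandisks
```

Running "vxdisk scandisks" before all paths carry the EFI label is what splits the dmpnode in the first place, hence the ordering above.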
Example
# modinfo | grep vx
26 135f810 50af8 300 1 vxdmp (VxVM 5.1SP1 DMP Driver)
28 7be00000 21a630 301 1 vxio (VxVM 5.1SP1 I/O driver)
30 13a9980 1190 302 1 vxspec (VxVM 5.1SP1 control/status driv)
213 7aa9d2f0 d40 303 1 vxportal (VxFS 5.1_SP1 portal driver)
214 7a600000 1fb718 21 1 vxfs (VxFS 5.1_SP1 SunOS 5.10)
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_2 auto:none - - online invalid
disk_3 auto:cdsdisk - - online
ibm_ds8x000_01ad auto:ZFS - - ZFS
ibm_ds8x000_01ae auto - - nolabel
ibm_ds8x000_01af auto - - nolabel
Select an alternate disk and review its paths:
# vxdisk list ibm_ds8x000_01ae
<snippet>
Multipathing information:
numpaths: 4
c2t500507630A034217d1s2 state=enabled
c2t500507630A084217d1s2 state=enabled
c3t500507630A084217d1s2 state=enabled
c3t500507630A034217d1s2 state=enabled
<snippet>
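Both prtvtoc and format operate on the raw device nodes under /dev/rdsk, so each path name above maps mechanically to a raw device path. A small sketch of that mapping, using the path names from the output above (the "s2" suffix is the whole-disk slice of an SMI-labelled device):

```shell
# Build raw-device paths for the four paths listed above.
raw_paths=$(
    for p in c2t500507630A034217d1s2 c2t500507630A084217d1s2 \
             c3t500507630A084217d1s2 c3t500507630A034217d1s2
    do
        echo "/dev/rdsk/$p"
    done
)
echo "$raw_paths"
```

These are the device arguments used by the prtvtoc and format commands in the steps that follow.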
Review the Solaris disk VTOC:
# prtvtoc /dev/rdsk/c2t500507630A034217d1s2
prtvtoc: /dev/rdsk/c2t500507630A034217d1s2: Unable to read Disk geometry errno = 0x16
Add a new SMI label using the first listed path of the above dmpnode:
# format -e /dev/rdsk/c2t500507630A034217d1s2
Check the disk VTOC using prtvtoc:
# prtvtoc /dev/rdsk/c2t500507630A034217d1s2
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 209682432 209682431
6 4 00 0 209682432 209682431
Rescan and review the revised "vxdisk list" output:
# vxdisk scandisks
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_2 auto:none - - online invalid
disk_3 auto:cdsdisk - - online
ibm_ds8x000_01ad auto:ZFS - - ZFS
ibm_ds8x000_01ae auto:none - - online invalid <<<<<new device
ibm_ds8x000_01af auto - - nolabel
The following command creates a new ZFS zpool named "newpool2":
# zpool create -m none newpool2 /dev/vx/dmp/ibm_ds8x000_01ae
Now, before another VxVM rescan, all other paths of the dmpnode must be labelled with an EFI label.
Before doing so, confirm that the first path has changed to an EFI label:
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 4 00 256 209698527 209698782
8 11 00 209698783 16384 209715166
The 3 remaining paths are:
c2t500507630A084217d1s2 state=enabled
c3t500507630A084217d1s2 state=enabled
c3t500507630A034217d1s2 state=enabled
Run "format -e /dev/rdsk/..." on each of the 3 paths above and select an EFI label.
Rescan and check:
# vxdisk scandisks
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_2 auto:none - - online invalid
disk_3 auto:cdsdisk - - online
ibm_ds8x000_01ad auto:ZFS - - ZFS
ibm_ds8x000_01ae auto:ZFS - - ZFS <<<new device
ibm_ds8x000_01af auto - - nolabel
Perform another rescan to confirm the configuration is stable:
# vxdisk scandisks
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_2 auto:none - - online invalid
disk_3 auto:cdsdisk - - online
ibm_ds8x000_01ad auto:ZFS - - ZFS
ibm_ds8x000_01ae auto:ZFS - - ZFS
ibm_ds8x000_01af auto - - nolabel
Each path should no longer report an "s2" slice reference for the EFI-labelled disk:
# vxdisk list ibm_ds8x000_01ae
<snippet>
Multipathing information:
numpaths: 4
c2t500507630A034217d1 state=enabled
c2t500507630A084217d1 state=enabled
c3t500507630A084217d1 state=enabled
c3t500507630A034217d1 state=enabled
<snippet>
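A quick sanity check on the path names above: once the disk is EFI-labelled, every path is addressed as a whole device, so no name should still end in "s2". A minimal sketch over the names from the output above:

```shell
# Path names from the final "vxdisk list ibm_ds8x000_01ae" output.
paths="c2t500507630A034217d1 c2t500507630A084217d1 c3t500507630A084217d1 c3t500507630A034217d1"

smi_count=0
for p in $paths; do
    case $p in
        # Any name still ending in s2 indicates a leftover SMI label.
        *s2) echo "still SMI-labelled: $p"; smi_count=$((smi_count + 1)) ;;
    esac
done
echo "paths still showing an s2 slice: $smi_count"
```

If any path still shows an "s2" suffix, re-run "format -e" on that path and rescan again.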
Applies To
Solaris 10
Any ALUA or Active-Passive array