How to create a Solaris ZFS zpool and manage using Veritas Dynamic Multi-pathing (DMP) dmp_native_support tunable
Description
ZFS is a file system with a pooled storage model, developed by Sun Microsystems (now Oracle). File systems draw space directly from a common storage pool (zpool).
Veritas Volume Manager (VxVM) can be used on the same system as ZFS disks.
VxVM protects devices in use by ZFS from any VxVM operations that may overwrite the disk. These operations include initializing the disk for use by VxVM or encapsulating the disk.
If you attempt to perform one of these VxVM operations on a device that is in use by ZFS, VxVM displays an error message.
Before you can manage a previously configured ZFS disk with VxVM, you must remove it from ZFS control.
Similarly, before ZFS can manage a disk that is under VxVM control, you must remove the disk from VxVM control.
To make the device available for ZFS, remove the VxVM label using the VxVM CLI command "vxdiskunsetup <da-name>".
When uninitializing a VxVM device, ensure the disk is not associated with a disk group.
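For example, an illustrative check before uninitializing; a "-" in the DISK and GROUP columns indicates the disk is not part of any disk group:
# vxdisk -eo alldgs list | grep <da-name>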
To reuse a VxVM disk as a ZFS disk
If the disk is in a VxVM-managed disk group, remove the disk from the disk group, or destroy the disk group if it is the only disk in it.
1.] To remove the disk from the disk group:
# vxdg [-g diskgroup] rmdisk <disk-name>
or
2.] To destroy the disk group, type:
# vxdg destroy <disk-group>
3.] To remove the disk from VxVM control, type:
# /etc/vx/bin/vxdiskunsetup <da-name>
Note: You must perform step 1 or step 2, followed by step 3, for VxVM to release the disk. Once done, you can initialize the disk as a ZFS device using ZFS tools.
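For illustration only, a complete sequence might look like the following; the disk group name datadg, disk media name datadg01 and pool name TESTPOOL are hypothetical, and the device names match the examples later in this article:
# vxdg -g datadg rmdisk datadg01
# /etc/vx/bin/vxdiskunsetup emc0_01dc
# zpool create TESTPOOL c1t5006048C5368E5A0d116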
DMP_NATIVE_SUPPORT
Veritas DMP Native Support is not enabled for any device that has a VxVM label.
The "vxdmpadm native ls" command can be used to display ZFS devices managed by dmp_native_support:
# vxdmpadm native ls
NOTE: VxVM devices can co-exist with ZFS & other native devices under DMP control.
When the dmp_native_support tunable is enabled, DMP skips the root ZFS pool (rpool) and only brings non-root ZFS zpools under its control.
The root ZFS pool (rpool) devices can be configured as follows:
- A single path (no multi-pathing)
- A boot disk multi-pathed with MPxIO, as long as the root ZFS pool (rpool) devices reside on a different HBA from the devices managed by DMP
MPxIO is not supported in Solaris LDOM (Oracle VM for SPARC) environments, I/O domains and LDOM guests.
MPxIO is not supported when the Veritas dmp_native_support tunable is enabled.
To confirm if MPxIO is disabled, type:
# stmsboot -L
stmsboot: MPXIO disabled
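If MPxIO is currently enabled and needs to be disabled before DMP native support is used, the standard stmsboot utility can disable it; this is a sketch only, and stmsboot prompts to update the configuration and reboot the host:
# stmsboot -d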
DMP controls non-root ZFS pools (zpools).
How to confirm if dmp_native_support is enabled
# vxdmpadm gettune dmp_native_support
Tunable                        Current Value Default Value
------------------------------ ------------- -------------
dmp_native_support             on            off
To turn on the dmp_native_support tunable, use the following command:
# vxdmpadm settune dmp_native_support=on
If the ZFS pools are not in use, turning on native support will migrate their devices to DMP-managed devices. To list the visible ZFS pool names, type:
# zpool list
If ZFS pools are in use, export the ZFS zpools prior to enabling dmp native support:
# zpool export <zpool-name>
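For example, all non-root zpools could be exported in a single pass before enabling the tunable (an illustrative loop that skips rpool):
# for pool in `zpool list -H -o name` ; do [ "$pool" = "rpool" ] || zpool export "$pool" ; done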
NOTE: If a disk is already multi-pathed with a third-party driver (TPD), DMP does not manage the devices unless you unmanage or remove TPD support for the corresponding devices.
After removing TPD support, turn on the dmp_native_support tunable to migrate the devices; the ZFS pools on the previously TPD-managed devices will then migrate onto DMP-managed devices.
ZFS-managed whole-disk devices default to an EFI disk label regardless of the physical LUN size. Traditionally, only LUNs larger than 1 TB are configured with an EFI label (instead of an SMI label).
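The whole-disk EFI label can be confirmed from the pool configuration; the whole_disk flag (visible in the zdb output later in this article) reports 1 when ZFS has labelled the entire device:
# zdb -C <zpool-name> | grep whole_disk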
How to create the ZFS zpool with dmp_native_support enabled
Example
1.] Check that the VxVM disk is no longer used by a VxVM disk group.
# vxdisk -eo alldgs list | grep emc0_01dc
emc0_01dc    auto:cdsdisk   -    -    online          c1t5006048C5368E5A0d116s2    RAID
2.] Remove VxVM label from the disk.
# vxdiskunsetup emc0_01dc
3.] List the ZFS pools
# zpool list
no pools available
4.] Enable dmp_native_support.
# vxdmpadm gettune dmp_native_support
Tunable                        Current Value Default Value
------------------------------ ------------- -------------
dmp_native_support             off           off
Note: Ensure all ZFS zpools are exported prior to enabling dmp_native_support.
# vxdmpadm settune dmp_native_support=on
Once dmp_native_support has been enabled, ZFS managed devices will migrate under DMP control.
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool       68G  22.6G  45.4G  33%  ONLINE  -
The following "for" loop can be used to extract the "path" and "phys_path" details for a ZFS pool:
Sample Output:
# for pool in `zpool list -H -o name` ; do echo "$pool" ; zdb -C "$pool" | grep path ; done
rpool
    path: '/dev/dsk/emc0_01das0'
    phys_path: '/pseudo/vxdmp@0:n60000970000297802241533030344435-a'
5.] The following command lists the DMP nodes (DMPNODEs) available for ZFS pools and the pools currently using them.
# vxdmpadm native ls
DMPNODENAME        POOL NAME
=============================================
emc0_01da          rpool
emc0_01dc          -
Note: The DMPNODE "emc0_01dc" has two paths to the device, both managed by DMP:
# vxdisk path | grep emc0_01dc
c1t5006048C5368E5A0d116s2  emc0_01dc  -  -  ENABLED
c1t5006048C5368E580d262s2  emc0_01dc  -  -  ENABLED
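The same path information can also be displayed for a single DMP node (an alternative, illustrative check):
# vxdmpadm getsubpaths dmpnodename=emc0_01dc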
6.] Create ZFS zpool using the DMPNODE name.
# zpool create SYMCPOOL emc0_01dc
Refresh VxVM details following ZFS zpool creation.
# vxdisk scandisks
# vxdisk -eo alldgs list | grep emc0_01dc
emc0_01dc    auto:ZFS       -    -    ZFS             c1t5006048C5368E5A0d116      RAID
7.] Confirm ZFS zpool has been created
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
SYMCPOOL  2.03G    91K  2.03G   0%  ONLINE  -
rpool       68G  22.6G  45.4G  33%  ONLINE  -
# zpool status SYMCPOOL
  pool: SYMCPOOL
 state: ONLINE
  scan: none requested
config:
        NAME           STATE     READ WRITE CKSUM
        SYMCPOOL       ONLINE       0     0     0
          emc0_01dcs0  ONLINE       0     0     0
# zdb -C SYMCPOOL
MOS Configuration:
version: 29
name: 'SYMCPOOL'
state: 0
txg: 4
pool_guid: 18402522972995705512
hostid: 2208761827
hostname: 'cobra'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 18402522972995705512
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 10261649738276629377
path: '/dev/dsk/emc0_01dcs0'
whole_disk: 1
metaslab_array: 30
metaslab_shift: 24
ashift: 9
asize: 2181824512
is_log: 0
create_txg: 4
8.] The ZFS zpool name and corresponding DMPNODE name can be referenced using the "vxdmpadm native ls" command.
# vxdmpadm native ls
DMPNODENAME        POOL NAME
=============================================
emc0_01da          rpool
emc0_01dc          SYMCPOOL
Improving zpool import times
To improve ZFS zpool import times when Veritas DMP native support is enabled, the ZFS zpool can be exported and re-imported as follows:
# zpool export SYMCPOOL
Once the ZFS zpool has been exported, use the "-d /dev/vx/dmp" path to import subsequent zpools.
# zpool import -d /dev/vx/dmp/ SYMCPOOL
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
SYMCPOOL  2.03G   142K  2.03G   0%  ONLINE  -
rpool       68G  22.6G  45.4G  33%  ONLINE  -
# zpool status SYMCPOOL
  pool: SYMCPOOL
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        SYMCPOOL                   ONLINE       0     0     0
          /dev/vx/dmp/emc0_01dcs0  ONLINE       0     0     0
errors: No known data errors
# zdb -C SYMCPOOL
MOS Configuration:
version: 29
name: 'SYMCPOOL'
state: 0
txg: 24
pool_guid: 7495828995333712001
hostid: 2208761827
hostname: 'rdgv240sol13'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 7495828995333712001
children[0]:
type: 'disk'
id: 0
guid: 17896871602888879025
path: '/dev/vx/dmp/emc0_01dcs0'
whole_disk: 1
metaslab_array: 30
metaslab_shift: 24
ashift: 9
asize: 2181824512
is_log: 0
create_txg: 4
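If several zpools were exported before enabling DMP native support, they can all be imported from the DMP device directory in a single pass (an illustrative variant using the -a option of zpool import):
# zpool import -d /dev/vx/dmp -a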
Comparison times:
# zpool export SYMCPOOL
# time zpool import SYMCPOOL
real    0m6.75s
user    0m0.05s
sys     0m0.52s
# zpool export SYMCPOOL
# time zpool import -d /dev/vx/dmp/ SYMCPOOL
real    0m5.47s
user    0m0.03s
sys     0m1.08s
To reuse the ZFS disk with VxVM
1.] Destroy the ZFS zpool:
# zpool destroy SYMCPOOL
2.] Even though the ZFS zpool has been destroyed, VxVM will still detect the ZFS label on the disk.
# vxdisk -eo alldgs list | grep emc0_01dc
emc0_01dc    auto:ZFS       -    -    ZFS             c1t5006048C5368E5A0d116      RAID
3.] To clear the ZFS label, type:
a.] The vxdisksetup command will return the dd string required to clear the ZFS label.
# vxdisksetup -i emc0_01dc
VxVM vxdisksetup ERROR V-5-2-5716 Disk emc0_01dc is in use by ZFS. Slice(s) 0 are in use as ZFS zpool (or former) devices.
If you still want to initialize this device for VxVM use, please destroy the zpool by running 'zpool' command if it is still active, and then remove the ZFS signature from each of these slice(s) as follows:
dd if=/dev/zero of=/dev/vx/rdmp/emc0_01dcs[n] oseek=31 bs=512 count=1
[n] is the slice number.
b.] Build the /dev/vx/rdmp device path from the DMPNODE name and append "s0" for the slice.
# dd if=/dev/zero of=/dev/vx/rdmp/emc0_01dcs0 oseek=31 bs=512 count=1
1+0 records in
1+0 records out
4.] Confirm the ZFS label has been removed and the disk status now reports "online invalid":
# vxdisk -eo alldgs list | grep emc0_01dc
emc0_01dc    auto:none      -    -    online invalid  c1t5006048C5368E5A0d116      RAID
5.] Reinitialize the disk for VxVM use by running vxdisksetup against the Veritas disk access (da) name / DMPNODE.
# /etc/vx/bin/vxdisksetup -i emc0_01dc
# vxdisk -eo alldgs list | grep emc0_01dc
emc0_01dc    auto:cdsdisk   -    -    online          c1t5006048C5368E5A0d116      RAID
The disk is now ready for VxVM use once again.
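For example, the reinitialized disk could now be placed back into a disk group; the disk group name newdg and the disk media names below are hypothetical:
# vxdg init newdg newdg01=emc0_01dc
or, to add the disk to an existing disk group:
# vxdg -g newdg adddisk newdg02=emc0_01dc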