Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Provisioning new usable storage
- Administering disks
- Disk devices
- Discovering and configuring newly added disk devices
- Discovering disks and dynamically adding disk arrays
- How to administer the Device Discovery Layer
- Changing the disk-naming scheme
- Adding a disk to VxVM
- Rootability
- Displaying disk information
- Removing disks
- Removing and replacing disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Administering DMP using vxdmpadm
- Gathering and displaying I/O statistics
- Specifying the I/O policy
- Online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Importing a disk group
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Destroying a disk group
- Creating and administering subdisks and plexes
- Displaying plex information
- Reattaching plexes
- Creating volumes
- Types of volume layouts
- Creating a volume
- Using vxassist
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a striped volume
- Creating a volume using vxmake
- Initializing and starting a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- Displaying volume information
- Monitoring and controlling tasks
- Reclamation of storage on thin reclamation arrays
- Stopping a volume
- Resizing a volume
- Adding a mirror to a volume
- Preparing a volume for DRL and instant snapshots
- Adding traditional DRL logging to a mirrored volume
- Enabling FastResync on a volume
- Performing online relayout
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- How hot-relocation works
- Moving relocated subdisks
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Changing the CVM master manually
- Importing disk groups as shared
- Administering sites and remote mirrors
- About sites and remote mirrors
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
Example of applying load balancing in a SAN
This example shows how to configure load balancing in a SAN environment where there are multiple primary paths to an Active/Passive device through several SAN switches. As the following sample output from the vxdisk list command shows, the device c3t2d15 has eight primary paths:
# vxdisk list c3t2d15
Device: c3t2d15
. . .
numpaths:  8
c2t0d15 state=enabled  type=primary
c2t1d15 state=enabled  type=primary
c3t1d15 state=enabled  type=primary
c3t2d15 state=enabled  type=primary
c4t2d15 state=enabled  type=primary
c4t3d15 state=enabled  type=primary
c5t3d15 state=enabled  type=primary
c5t4d15 state=enabled  type=primary
In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.
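Before any policy is changed, the device's paths and the volume layout can be confirmed with the vxdmpadm getsubpaths and vxprint commands. The commands below are one way to do this; the exact output depends on your configuration:

# vxdmpadm getsubpaths dmpnodename=c3t2d15
# vxprint -g mydg -ht myvol1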
The first step is to enable the gathering of DMP statistics:
# vxdmpadm iostat start
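If memory consumption is a concern, the amount of per-CPU memory that DMP uses to accumulate statistics can be bounded when gathering is started. The size shown here is only an example:

# vxdmpadm iostat start memory=4096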
Next, the dd command is used to apply a read workload to the volume:
# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &
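The trailing & runs dd in the background so that the statistics can be examined while the reads continue. To apply a heavier sequential read load, additional readers can be started against the same raw device; the block size shown is only illustrative:

# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null bs=128k &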
Running the vxdmpadm iostat command to display the DMP statistics for the device shows that all I/O is being directed to one path, c5t4d15:
# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
. . .
cpu usage = 11294us    per cpu memory = 32768b
                  OPERATIONS        KBYTES      AVG TIME(ms)
PATHNAME      READS   WRITES    READS  WRITES   READS  WRITES
c2t0d15           0        0        0       0    0.00    0.00
c2t1d15           0        0        0       0    0.00    0.00
c3t1d15           0        0        0       0    0.00    0.00
c3t2d15           0        0        0       0    0.00    0.00
c4t2d15           0        0        0       0    0.00    0.00
c4t3d15           0        0        0       0    0.00    0.00
c5t3d15           0        0        0       0    0.00    0.00
c5t4d15        5493        0     5493       0    0.41    0.00
The vxdmpadm command is used to display the I/O policy for the enclosure that contains the device:
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME    DEFAULT     CURRENT
============================================
ENC0          MinimumQ    Single-Active
This output shows that the current I/O policy for the enclosure is singleactive, which explains why all the I/O is taking place on one path.
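Besides singleactive and round-robin, the iopolicy attribute accepts the adaptive, balanced, minimumq, and priority policies; see the vxdmpadm(1M) manual page for details. For example, the enclosure could be returned to its default policy:

# vxdmpadm setattr enclosure ENC0 iopolicy=minimumq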
To balance the I/O load across the multiple primary paths, the policy is set to round-robin as shown here:
# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin
# vxdmpadm getattr enclosure ENC0 iopolicy
ENCLR_NAME    DEFAULT     CURRENT
============================================
ENC0          MinimumQ    Round-Robin
The DMP statistics are now reset so that the counters reflect only the I/O issued after the policy change:
# vxdmpadm iostat reset
With the workload still running, the effect of changing the I/O policy can now be seen: the reads are spread roughly evenly across all eight primary paths.
# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
. . .
cpu usage = 14403us    per cpu memory = 32768b
                  OPERATIONS        KBYTES      AVG TIME(ms)
PATHNAME      READS   WRITES    READS  WRITES   READS  WRITES
c2t0d15        1021        0     1021       0    0.39    0.00
c2t1d15         947        0      947       0    0.39    0.00
c3t1d15        1004        0     1004       0    0.39    0.00
c3t2d15        1027        0     1027       0    0.40    0.00
c4t2d15        1086        0     1086       0    0.39    0.00
c4t3d15        1048        0     1048       0    0.39    0.00
c5t3d15        1036        0     1036       0    0.39    0.00
c5t4d15        1021        0     1021       0    0.39    0.00
The enclosure can be returned to the single active I/O policy by entering the following command:
# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive
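When the example is complete, the background workload and the gathering of DMP statistics can be stopped. The kill command here assumes that the dd reader is the only background job in the current shell:

# kill %1
# vxdmpadm iostat stop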