Storage Foundation 8.0 Administrator's Guide - Windows
Active/Active and Active/Passive settings
Dynamic Multi-Pathing has two modes of operation for an array's paths: Active/Active and Active/Passive. These modes also apply to the array's disks and are defined as follows:
Active/Active
The mode in which Dynamic Multi-Pathing allocates data transfers across all possible paths, thus enabling load balancing. In this mode, Dynamic Multi-Pathing implements a round-robin algorithm, selecting each path in sequence for each successive data transfer to or from a disk. For example, if you have two active paths, A and B, the first disk transfer occurs on path A, the next on path B, and the next on path A again.
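To make the rotation concrete, the following minimal Python sketch (an illustration of the idea only, not DMP's implementation) cycles successive transfers across two active paths:

    from itertools import cycle

    # Round-robin: each successive transfer moves to the next path in sequence.
    paths = ["A", "B"]            # two active paths, as in the example above
    rotation = cycle(paths)

    for transfer in range(4):
        print(f"transfer {transfer} -> path {next(rotation)}")
    # transfer 0 -> path A
    # transfer 1 -> path B
    # transfer 2 -> path A
    # transfer 3 -> path B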
In addition to the round-robin algorithm, DMP DSMs offer the following load balancing options:
Dynamic Least Queue Depth
Selects the path with the least number of I/O requests in its queue for the next data transfer.
For example, if you have two active paths, path A with one I/O request and path B with none, DMP DSMs select the path with the least number of I/O requests in its queue, path B, for the next data transfer.
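In sketch form (illustrative only, not DMP code), the selection reduces to a minimum over per-path queue lengths:

    # Dynamic Least Queue Depth: choose the path with the fewest queued I/O requests.
    queue_depth = {"A": 1, "B": 0}     # path A has one request queued, path B none

    def pick_path(depths):
        return min(depths, key=depths.get)

    print(pick_path(queue_depth))      # -> B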
Balanced Path
This policy is designed to optimize the use of caching in disk drives and RAID controllers. The size of the cache depends on the characteristics of the particular hardware. Generally, disks and LUNs are logically divided into a number of regions or partitions. I/O to and from a given region is sent on only one of the active paths. Adjusting the region size to be compatible with the size of the cache is beneficial so that all the contiguous blocks of I/O to that region use the same active path. The value of the partition size can be changed by adjusting the value of the tunable parameter, Block Shift.
Block Shift represents the number of contiguous I/O blocks that are sent along a path to an Active/Active array before switching to the next available path. The Block Shift value is expressed as the integer exponent of a power of 2. For example, a Block Shift value of 11 represents 2^11, or 2048, contiguous blocks of I/O.
The benefit of this policy is lost if the value is set larger than the cache size. The benefit is also lost when the active path fails. In this situation, the I/O is automatically redistributed across the remaining paths.
The default value of the Block Shift parameter is 11, so that 2048 blocks (1 MB, given 512-byte blocks) of contiguous I/O are sent over a path before switching to a different path. Depending on your hardware, adjusting this parameter may result in better I/O throughput. Refer to your hardware documentation for more information.
Note:
Block Shift only affects the behavior of the balanced path policy. A value of 0 disables multi-pathing for the policy unless the vxdmpadm command is used to specify a different partition size for an array.
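The region arithmetic can be sketched in Python as follows. This is an illustration of the idea only; in particular, the modulo assignment of regions to paths is an assumption for the sketch, not DMP's documented behavior:

    # Balanced Path: all I/O within one region travels on the same path.
    # With Block Shift = 11, a region covers 2**11 = 2048 contiguous blocks
    # (1 MB, given 512-byte blocks).
    BLOCK_SHIFT = 11
    paths = ["A", "B"]

    def path_for_block(block_offset):
        region = block_offset >> BLOCK_SHIFT    # region index for this block
        return paths[region % len(paths)]       # assumption: regions rotate over paths

    print(path_for_block(0))       # region 0 -> path A
    print(path_for_block(2047))    # still region 0 -> same path A
    print(path_for_block(2048))    # region 1 -> path B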
Weighted Paths
Uses the path with the lowest numerical weight. Each path is assigned a weight by the user to designate which path is favored for data transfer. If two or more paths have the same weight and are the lowest weight of all paths, then these paths are used each in turn, in round-robin fashion, for the data transfer.
For example, if you have three active paths, path A with a weight of 0, path B with a weight of 0, and path C with a weight of 9, DMP DSMs use path A for one data transfer and then use path B for the next. Path C is in standby mode and is used only if both path A and path B fail.
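A minimal sketch of the selection rule (illustrative only): the lowest-weight paths alternate while heavier paths stay idle:

    from itertools import cycle

    # Weighted Paths: only the lowest-weight path(s) receive I/O; ties among
    # the lowest-weight paths rotate in round-robin fashion.
    weights = {"A": 0, "B": 0, "C": 9}

    lowest = min(weights.values())
    favored = cycle(p for p, w in weights.items() if w == lowest)

    for _ in range(4):
        print(next(favored))       # A, B, A, B ... path C (weight 9) stays idle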
Round robin with Subset
Uses a subset of paths, each in turn, in round-robin fashion. The user specifies the paths for data transfer that make up the subset. The remaining paths are in standby mode.
For example, if you have three active paths, path A, path B, and path C and you specify the subset to contain path A and path B, then DMP DSMs use path A for one data transfer and then use path B for the next. Path C is in standby mode and is used if path A or path B fails.
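In sketch form (illustrative only), this is ordinary round-robin restricted to the user-chosen subset:

    from itertools import cycle

    all_paths = ["A", "B", "C"]
    subset = ["A", "B"]            # user-specified subset carries the I/O
    standby = [p for p in all_paths if p not in subset]    # ['C']

    rotation = cycle(subset)
    for _ in range(3):
        print(next(rotation))      # A, B, A; standby paths wait for a failure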
Least Blocks
Selects the path with the least number of blocks of I/O in its queue for the next data transfer.
For example, if you have two active paths, path A with one block of I/O and path B with none, DMP DSMs select the path with the least number of blocks of I/O in its queue, path B, for the next data transfer.
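Illustratively, Least Blocks is the same minimum selection as Dynamic Least Queue Depth, except that it compares queued blocks of I/O rather than queued requests:

    # Least Blocks: choose the path with the fewest queued blocks of I/O.
    queued_blocks = {"A": 1, "B": 0}    # path A has one block queued, path B none

    print(min(queued_blocks, key=queued_blocks.get))    # -> B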
Active/Passive
A mode in which the path designated as the "Preferred Path" or "primary path" is always active, and the other path or paths act as backups (standby paths) that are called into service if the current operating path fails.
Both modes of operation, Active/Active and Active/Passive, are shown as options in the Load Balancing section of the program's Array Settings and Device Settings windows. Active/Active mode enables load balancing; Active/Passive mode does not provide load balancing, except for the Fail Over Only load balancing policy.
Note:
If a storage array cannot transfer data on one of the path configurations, the Load Balancing options appear grayed out on the screen and you cannot access these settings.
You configure the load balancing settings for the paths at the array level through the Array Settings screen, or you can accept the default setting. The default setting depends on the particular array. Consult the documentation for your storage array to determine its default setting and any additional settings it supports.
After the array setting is made, all the disks in an array have the same load balancing setting as the array. If the array is set to Active/Active, you can use the Device Settings screen to give an individual disk a different load balancing setting than the array. When an array is set to Active/Passive, no load balancing is enabled, and data transfer is limited to the one preferred or primary path only.
For all Active/Active arrays under control of DMP DSMs:
- All paths to the disks are current active I/O paths. Each active path is designated by a path icon with a green circle in the VEA GUI.
- For an Active/Passive load balance setting, the primary path is designated by a path icon with a checkmark in a green circle in the GUI.
- DMP DSMs do not indicate which array controller each path is connected to.
For all Active/Passive Concurrent (A/PC) and Asymmetric Logical Unit Access (ALUA) arrays under control of DMP DSMs, the load balance settings apply only to the current active I/O paths. If all the active I/O paths change or fail, the load balance settings are automatically applied to the new current active I/O paths of the arrays.
In addition, for A/PC and ALUA arrays:
- The current active path is designated by a path icon with a green circle in the VEA GUI.
- For an Active/Passive load balance setting, the primary path is designated by a path icon with a checkmark in a green circle in the VEA GUI.
- DMP automatically selects the primary path for Active/Passive load balancing.
- Round robin with Subset and Weighted Paths load balance settings are available only at the device level; they are not available at the array level.
- Active paths are connected to the same array controller.