Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Provisioning new usable storage
- Administering disks
- Disk devices
- Discovering and configuring newly added disk devices
- Discovering disks and dynamically adding disk arrays
- How to administer the Device Discovery Layer
- Changing the disk-naming scheme
- Adding a disk to VxVM
- Rootability
- Displaying disk information
- Removing disks
- Removing and replacing disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Administering DMP using vxdmpadm
- Gathering and displaying I/O statistics
- Specifying the I/O policy
- Online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Importing a disk group
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Destroying a disk group
- Creating and administering subdisks and plexes
- Displaying plex information
- Reattaching plexes
- Creating volumes
- Types of volume layouts
- Creating a volume
- Using vxassist
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a striped volume
- Creating a volume using vxmake
- Initializing and starting a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- Displaying volume information
- Monitoring and controlling tasks
- Reclamation of storage on thin reclamation arrays
- Stopping a volume
- Resizing a volume
- Adding a mirror to a volume
- Preparing a volume for DRL and instant snapshots
- Adding traditional DRL logging to a mirrored volume
- Enabling FastResync on a volume
- Performing online relayout
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- How hot-relocation works
- Moving relocated subdisks
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Changing the CVM master manually
- Importing disk groups as shared
- Administering sites and remote mirrors
- About sites and remote mirrors
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
Connectivity policy of shared disk groups
A shared disk group provides concurrent read and write access to the volumes that it contains for all nodes in a cluster. A shared disk group can be created on any node of the cluster. This has the following advantages and implications:
- All nodes in the cluster see exactly the same configuration.
- Commands to change the configuration are sent to the master node (see the example following this list).
- Any changes on the master node are automatically coordinated and propagated to the slave nodes in the cluster.
- Any failures that require a configuration change must be sent to the master node so that they can be resolved correctly.
- As the master node resolves failures, all the slave nodes are correctly updated. This ensures that all nodes have the same view of the configuration.
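For example, before issuing configuration commands you can confirm which node is currently the master. A minimal check uses the vxdctl command; the output shown here is illustrative, and the exact fields may vary between releases:

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: node01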
The practical implication of this design is that I/O failure on any node results in the configuration of all nodes being changed. This is known as the global detach policy. However, in some cases, it is not desirable to have all nodes react in this way to I/O failure. To address this, an alternative way of responding to I/O failures, known as the local detach policy, was introduced.
The local detach policy is intended for use with shared mirrored volumes in a cluster. This policy prevents I/O failure on any one node in the cluster from causing a plex to be detached clusterwide. A detached plex would otherwise have to be resynchronized when it is subsequently reattached.
The local detach policy is supported for disk groups that have a version number of 120 or greater.
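To verify that a disk group meets this requirement, you can display its version and, if necessary, upgrade it. This is a sketch; the disk group name mydg is a placeholder:

# vxdg list mydg | grep version
version: 120
# vxdg upgrade mydg

The vxdg upgrade command raises the disk group to the highest version supported by the installed release; note that older releases can no longer import a disk group after it has been upgraded.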
For small mirrored volumes, non-mirrored volumes, volumes that use hardware mirrors, and volumes in private disk groups, there is no benefit in configuring the local detach policy. In most cases, it is recommended that you use the default global detach policy.
The choice between the local and global detach policies is a trade-off between node availability and plex availability when an individual node loses access to disks. Select the local detach policy for a disk group if you are using mirrored volumes within it, and would prefer a single node to lose write access to a volume rather than having a plex of the volume detached clusterwide. That is, you consider the availability of your data (retaining mirrors) to be more important than any one node in the cluster. This typically applies only in larger clusters, and only where a parallel application is in use that can seamlessly provide the same service from the other nodes. For example, this option is not appropriate for fast failover configurations. Select the global detach policy in all other cases.
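For example, the detach policy of a shared disk group can be set from the master node with the vxdg command. This is a sketch; mydg is a placeholder disk group name:

# vxdg -g mydg set diskdetpolicy=local

To revert to the default behavior:

# vxdg -g mydg set diskdetpolicy=global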
If the master node loses access to all the disks containing the log/config copies, the disk group failure policy is triggered. At this point, no plexes can be detached, because detaching a plex requires access to the log/config copies; no configuration changes can be made to the disk group; and any action that requires the kernel to write to the klog (first open, last close, marking a volume dirty, and so on) fails. In releases prior to 4.1, the master node always disabled the disk group in this situation. Release 4.1 introduced the disk group failure policy, which allows you to change this behavior for critical disk groups. This policy is supported only for disk groups that have a version number of 120 or greater.
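As with the detach policy, the failure policy can be set on a per-disk-group basis. A sketch, again using the placeholder mydg; dgdisable reproduces the pre-4.1 behavior of disabling the disk group, while leave keeps the disk group imported when the master loses access to the log/config copies:

# vxdg -g mydg set dgfailpolicy=leave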