Veritas™ Volume Manager Administrator's Guide
Local detach policy
The local detach policy is designed to support failover applications in large clusters where the redundancy of the volume is more important than the number of nodes that can access the volume. If there is a write failure on any node, the usual I/O recovery operations are performed to repair the failure and, in addition, all of the nodes are contacted to determine whether the disk is still accessible to them. If the write failure is local, and is seen by only a single node, I/O is stopped for the node that first saw the failure, and an error is returned to the application using the volume. If more than one node sees the failure, the write failure is global and the failing plex is detached, as shown in the table below. In neither case is the volume itself disabled.
If required, configure the cluster management software to move the application to a different node, to remove the node that saw the failure from the cluster, or both. The volume continues to return write errors as long as one mirror of the volume has an error, and it continues to satisfy read requests as long as one good plex is available.
If the cause of the I/O error is corrected and the node is still a member of the cluster, the node can resume performing I/O to and from the volume without affecting the redundancy of the data.
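For example, after a failed path has been repaired, a sequence such as the following might be used to rescan devices and recover any plexes that were detached. This is a minimal sketch; the disk group name mydg and the volume name vol01 are placeholders, and the exact steps needed depend on whether a plex was actually detached by the failure.

```sh
# Rescan the device tree so that VxVM notices the restored path.
vxdisk scandisks

# Reattach any disks that became inaccessible during the failure.
vxreattach

# Resynchronize and reattach any stale or detached plexes of the
# volume; -b runs the recovery in the background.
vxrecover -b -g mydg vol01
```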
Use the vxdg command to set the disk detach policy on a shared disk group, as in the example below.
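The following sketch sets the policy to local and then displays the disk group record to confirm the change. The disk group name mydg is a placeholder; depending on the VxVM release, the set operation may need to be issued on the CVM master node.

```sh
# Set the disk detach policy for the shared disk group to local.
vxdg -g mydg set diskdetpolicy=local

# Display the disk group record; the detach-policy field reports
# the current setting.
vxdg list mydg
```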
The following table summarizes the effect on a cluster of I/O failure to the disks in a mirrored volume under the different disk detach policies.
Table: Cluster behavior under I/O failure to a mirrored volume for different disk detach policies
| Type of I/O failure | Local (diskdetpolicy=local) | Global (diskdetpolicy=global) |
|---|---|---|
| Failure of a path to one disk in a volume, for a single node | Reads fail only if no plexes remain available to the affected node. Writes to the volume fail. | The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain. |
| Failure of paths to all disks in a volume, for a single node | I/O fails for the affected node. | The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain. |
| Failure of one or more disks in a volume, for all nodes | The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain. | The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain. |
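After an I/O failure, a command such as the following can be used to check whether a plex has been detached. Again, mydg and vol01 are placeholder names.

```sh
# Display the volume, plex, and subdisk records in long format.
# A plex that has been detached because of I/O errors typically
# shows a state such as IOFAIL or NODEVICE.
vxprint -g mydg -ht vol01
```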