Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Provisioning new usable storage
- Administering disks
- Disk devices
- Discovering and configuring newly added disk devices
- Discovering disks and dynamically adding disk arrays
- How to administer the Device Discovery Layer
- Changing the disk-naming scheme
- Adding a disk to VxVM
- Rootability
- Displaying disk information
- Removing disks
- Removing and replacing disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Administering DMP using vxdmpadm
- Gathering and displaying I/O statistics
- Specifying the I/O policy
- Online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Importing a disk group
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Destroying a disk group
- Creating and administering subdisks and plexes
- Displaying plex information
- Reattaching plexes
- Creating volumes
- Types of volume layouts
- Creating a volume
- Using vxassist
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a striped volume
- Creating a volume using vxmake
- Initializing and starting a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- Displaying volume information
- Monitoring and controlling tasks
- Reclamation of storage on thin reclamation arrays
- Stopping a volume
- Resizing a volume
- Adding a mirror to a volume
- Preparing a volume for DRL and instant snapshots
- Adding traditional DRL logging to a mirrored volume
- Enabling FastResync on a volume
- Performing online relayout
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- How hot-relocation works
- Moving relocated subdisks
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Changing the CVM master manually
- Importing disk groups as shared
- Administering sites and remote mirrors
- About sites and remote mirrors
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
Private and shared disk groups
The following types of disk groups are defined:
- Private disk group: Belongs to only one node. A private disk group can only be imported by one system. LUNs in a private disk group may be physically accessible from one or more systems, but access is restricted to only one system at a time. The boot disk group (usually aliased by the reserved disk group name bootdg) is always a private disk group.
- Shared disk group: Can be shared by all nodes. A shared (or cluster-shareable) disk group is imported by all cluster nodes. LUNs in a shared disk group must be physically accessible from all systems that may join the cluster.
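In practice, the quickest way to tell the two types apart is the vxdg list output, where a shared disk group carries the shared flag in its state. The disk group names and the exact output layout below are illustrative only and may vary between VxVM releases:

    # vxdg list
    NAME         STATE                ID
    mydg         enabled,shared       ...
    localdg      enabled              ...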
In a CVM cluster, most disk groups are shared. LUNs in a shared disk group are accessible from all nodes in a cluster, allowing applications on multiple cluster nodes to simultaneously access the same LUN. A volume in a shared disk group can be simultaneously accessed by more than one node in the cluster, subject to license key and disk group activation mode restrictions.
You can use the vxdg command to designate a disk group as cluster-shareable.
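For example, to convert an existing private disk group into a cluster-shareable one, deport it and then re-import it with the -s option on the master node; the same option can be given to vxdg init when a new disk group is created. The disk group name mydg below is a placeholder:

    # vxdg deport mydg
    # vxdg -s import mydg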
When a disk group is imported as cluster-shareable for one node, each disk header is marked with the cluster ID. As each node subsequently joins the cluster, it recognizes the disk group as being cluster-shareable and imports it. In contrast, a private disk group's disk headers are marked with the individual node's host name. As system administrator, you can import or deport a shared disk group at any time; the operation takes place in a distributed fashion on all nodes.
Each LUN is marked with a unique disk ID. When cluster functionality for VxVM starts on the master, it imports all shared disk groups (except for any that do not have the autoimport attribute set). When a slave tries to join a cluster, the master sends it a list of the disk IDs that it has imported, and the slave checks to see if it can access them all. If the slave cannot access one of the listed disks, it abandons its attempt to join the cluster. If it can access all of the listed disks, it joins the cluster and imports the same shared disk groups as the master. When a node leaves the cluster gracefully, it deports all its imported shared disk groups, but they remain imported on the surviving nodes.
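To check which role the local node currently holds in the cluster (and therefore which node drives the shared imports), you can query the cluster state with vxdctl. The output below is representative only, and node01 is a placeholder host name:

    # vxdctl -c mode
    mode: enabled: cluster active - MASTER
    master: node01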
Reconfiguration of a shared disk group is performed with the cooperation of all nodes. Configuration changes to the disk group are initiated by the master and are propagated to all nodes, so every node sees identical changes. Such changes are atomic in nature: they either take effect on all nodes or not at all.
Whether all members of the cluster have simultaneous read and write access to a cluster-shareable disk group depends on its activation mode setting.
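The activation mode is set per node with the vxdg set command; in current VxVM releases the valid modes are typically exclusivewrite (ew), readonly (ro), sharedread (sr), sharedwrite (sw), and off. For example, to allow shared write access from the local node (mydg is a placeholder):

    # vxdg -g mydg set activation=sw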
The data contained in a cluster-shareable disk group is available as long as at least one node is active in the cluster. The failure of a cluster node does not affect access by the remaining active nodes. Regardless of which node accesses a cluster-shareable disk group, the configuration of the disk group looks the same.
Warning:
Applications running on each node can access the data on the VM disks simultaneously. VxVM does not protect against simultaneous writes to shared volumes by more than one node. It is assumed that applications control consistency (by using Veritas Cluster File System or a distributed lock manager, for example).