Storage Foundation 8.0 Administrator's Guide - Windows
- Overview
- Setup and configuration
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Adding storage
- Disk tasks
- Remove a disk from the computer
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Mount a volume at an empty folder (Drive path)
- Expand a dynamic volume
- Shrink a dynamic volume
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Partitioned shared storage with private dynamic disk group protection
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Event monitoring and notification
- Event notification
- Configuring Automatic volume growth
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Dynamic Disk Group Split and Join troubleshooting tips
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Microsoft Exchange
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft Exchange
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- Typical deployment scenarios
- About cache area
- Configuring SmartIO
- Frequently asked questions about SmartIO
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Configuring a CVM cluster
- Administering CVM
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- Preparing the host machines
- Configuring the SFW storage
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Troubleshooting and recovery
- Using disk and volume status information
- Resolving common problem situations
- Commands or procedures used in troubleshooting and recovery
- Rescan command
- Repair volume command for dynamic mirrored volumes
- Additional troubleshooting issues
- Disk issues
- Volume issues
- Disk group issues
- Connection issues
- Issues related to boot or restart
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- CVM issues
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist (Windows-specific)
- vxsd
- vxedit
- vxdmpadm
- vxcbr
- vxsnap
- vxscrub
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
Overview
In a campus cluster or remote mirror configuration, the hosts and storage of a cluster are allocated between two or more sites. These sites are typically connected through a redundant high-capacity network or Fibre Channel that provides access to the storage and communication between the cluster nodes.
If a disk group is configured with storage at multiple sites and inter-site communication is disrupted, a serious split-brain condition may occur: each site continues to update its local copies of the disk group configuration without being aware of the disruption. For services to come up on a site while other sites are down, a complete copy of the data (at least one complete plex for each volume) is needed at that site. Currently, there is no mechanism to ensure that all volumes have a complete data plex at each site, because plex layout changes whenever a volume is resized, a disk is relocated, or a new volume is added.
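The "complete plex per volume" requirement above can be sketched as a small check. This is a hypothetical illustration only; the `Plex`, `Volume`, and `site_can_serve` names are invented for the example and are not SFW APIs.

```python
# Hypothetical model (not an SFW API): a site can bring services online
# only if every volume in the disk group has at least one complete plex
# stored entirely at that site.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Plex:
    site: str        # the site this plex is allocated at
    complete: bool   # True if the plex holds a full copy of the volume's data

@dataclass
class Volume:
    name: str
    plexes: List[Plex] = field(default_factory=list)

def site_can_serve(volumes: List[Volume], site: str) -> bool:
    """Return True if every volume has a complete plex entirely at `site`."""
    return all(
        any(p.site == site and p.complete for p in v.plexes)
        for v in volumes
    )

vols = [
    Volume("data", [Plex("siteA", True), Plex("siteB", True)]),
    Volume("logs", [Plex("siteA", True)]),   # no complete plex at siteB
]
print(site_can_serve(vols, "siteA"))  # True
print(site_can_serve(vols, "siteB"))  # False: "logs" has no plex at siteB
```

In this sketch, siteB cannot bring services up on its own because one volume lacks a complete plex there, which is exactly the gap that site-aware allocation closes.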
The site-aware allocation feature enables applications and services to continue functioning at a site when other sites become inaccessible, by ensuring that even during site disruption at least one complete plex of each volume is available at every site. This type of allocation is known as site-based allocation. Users can specify sites when creating volumes or mirrors, and site boundary limits are maintained for operations such as volume grow, subdisk move, and disk relocation. A site boundary limit is crossed when a plex is not contained entirely within one site, that is, when allocation of the plex crosses the available site boundary.
Site-aware allocation facilitates the following types of site-based allocation:
- Site Confined allocation
- Site Separated allocation
The following table describes the terms that are used in the context of site-aware allocation.
Table: Site-aware allocation and related terminology
| Terminology | Description |
|---|---|
| Site | Logical representation of a set of hosts and a set of arrays or enclosures. |
| Site Separated | Storage for a volume can be taken from the site(s) specified during volume creation; storage from multiple sites is supported for this type of allocation. Storage is allocated so that each plex of the volume resides completely on one site. For example, if a Site Separated volume has two plexes on two sites, each plex resides completely on a separate site. Volume resize, relocation, relayout, and similar operations keep each plex on its own site. Multiple plexes can reside on the same site. |
| Site Confined | Storage for a volume can be taken only from the single site specified when creating the volume; multiple sites cannot be allocated for this type of volume. The volume resides entirely on one site, and resize, relocation, relayout, and similar operations use storage only from that site. |
| Siteless | Refers to a volume that is not tagged with any site information or site properties. By default, all volumes are Siteless. Note: On upgrading to SFW 8.0 from any previous release (which did not have the Siteless option), all volumes are Siteless by default. After upgrading, you can manually change a volume's property to Site Confined or Site Separated, provided the corresponding condition is met: the volume resides entirely on one site (Site Confined), or each plex of the volume resides entirely on a site (Site Separated). |
| Site boundary | The site boundary limit is crossed when a plex is not contained entirely within one site, that is, when allocation of the plex crosses the available site boundary. Automatic operations such as hot relocation do not adhere to site boundary restrictions; storage configured with such automatic operations becomes Siteless once the site boundary limit is crossed. When storage becomes Siteless, the user is notified and the Event Viewer records logs to verify the change. |
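The allocation rules in the table above can be summarized as a small validation sketch. This is a hypothetical model for illustration; the `check_allocation` function and its string results are invented for the example and do not correspond to any SFW command or API.

```python
# Hypothetical model of the site-based allocation rules (not an SFW API):
#   Site Confined  -> all plexes on the single site specified at creation.
#   Site Separated -> each plex entirely on one site from the specified set.
#   A plex whose subdisks span more than one site crosses the site
#   boundary, and the volume falls back to Siteless.
def check_allocation(policy, sites, plex_sites):
    """policy: "confined" or "separated".
    sites: sites specified when the volume was created.
    plex_sites: for each plex, the set of sites its subdisks occupy."""
    if any(len(s) != 1 for s in plex_sites):
        return "Siteless"           # a plex crossed the site boundary
    used = {next(iter(s)) for s in plex_sites}
    if policy == "confined":
        # exactly one site may be specified, and every plex must be on it
        return "ok" if len(sites) == 1 and used == set(sites) else "violation"
    if policy == "separated":
        # each plex on one site, drawn only from the sites chosen at creation
        return "ok" if used <= set(sites) else "violation"
    return "Siteless"

print(check_allocation("confined", ["siteA"], [{"siteA"}, {"siteA"}]))            # ok
print(check_allocation("separated", ["siteA", "siteB"], [{"siteA"}, {"siteB"}]))  # ok
print(check_allocation("separated", ["siteA", "siteB"], [{"siteA", "siteB"}]))    # Siteless
```

The last call shows the site-boundary rule from the table: because the plex spans two sites, the volume cannot retain either site property and becomes Siteless.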