Storage Foundation 8.0 Administrator's Guide - Windows
- Overview
- Setup and configuration
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Adding storage
- Disk tasks
- Remove a disk from the computer
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Mount a volume at an empty folder (Drive path)
- Expand a dynamic volume
- Shrink a dynamic volume
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Partitioned shared storage with private dynamic disk group protection
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Event monitoring and notification
- Event notification
- Configuring Automatic volume growth
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Dynamic Disk Group Split and Join troubleshooting tips
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Microsoft Exchange
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft Exchange
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- Typical deployment scenarios
- About cache area
- Configuring SmartIO
- Frequently asked questions about SmartIO
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Configuring a CVM cluster
- Administering CVM
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- Preparing the host machines
- Configuring the SFW storage
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Troubleshooting and recovery
- Using disk and volume status information
- Resolving common problem situations
- Commands or procedures used in troubleshooting and recovery
- Rescan command
- Repair volume command for dynamic mirrored volumes
- Additional troubleshooting issues
- Disk issues
- Volume issues
- Disk group issues
- Connection issues
- Issues related to boot or restart
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- CVM issues
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist (Windows-specific)
- vxsd
- vxedit
- vxdmpadm
- vxcbr
- vxsnap
- vxscrub
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
About preventing data corruption with I/O Fencing
I/O Fencing is a feature that prevents data corruption in the event of a communication breakdown in a cluster.
To provide high availability, Microsoft Failover Clustering takes corrective action when a node fails. In this situation, SFW also updates its components to reflect the altered CVM membership.
Problems arise when the mechanism that detects the failure breaks down, because the symptoms of a lost connection are identical to those of a failed node. For example, if a system in a two-node cluster fails, it stops sending heartbeats over the private interconnects, and the remaining node takes corrective action. However, a failure of the private interconnects, rather than of a node, presents identical symptoms and causes each node to determine that its peer has departed. This condition is known as split-brain, and it typically results in data corruption because both nodes try to take control of data storage in an uncoordinated manner. A conceptual sketch of the problem follows.
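To make the ambiguity concrete, here is a minimal sketch (illustrative only, not SFW code; the node names and timeout value are hypothetical): the only input a node can observe is silence on the interconnect, so a naive takeover rule fires on both nodes at once.

```python
# Illustrative only: why heartbeat loss alone cannot distinguish a
# failed peer from failed private interconnects.

HEARTBEAT_TIMEOUT_SECS = 5  # hypothetical threshold

def peer_appears_dead(seconds_since_last_heartbeat):
    """All a node can observe is silence on the interconnect."""
    return seconds_since_last_heartbeat > HEARTBEAT_TIMEOUT_SECS

# Case 1: the peer really crashed      -> heartbeats stop.
# Case 2: only the private link failed -> heartbeats also stop.
# The observable input is identical in both cases, so a naive rule
# makes *both* nodes take over the storage at the same time:
for node in ("node1", "node2"):
    if peer_appears_dead(10):
        print(f"{node}: peer is gone, taking over storage")  # corruption risk
```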
SFW uses I/O Fencing to remove the risk associated with split-brain. I/O Fencing allows write access for members of the active cluster and blocks storage access from non-members.
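The decision of which side keeps write access can be illustrated with a minimal sketch of the majority-race idea behind fencing (illustrative only, not SFW's implementation; the in-memory CoordinationPoint class and node names are hypothetical stand-ins for coordination points such as coordinator disks). Each subcluster races to eject the other's registration keys from a majority of coordination points; only the winner keeps access to storage.

```python
# Conceptual sketch of majority-based I/O fencing (illustrative only).

class CoordinationPoint:
    """Stands in for one coordination point holding registration keys."""
    def __init__(self, registered_nodes):
        self.keys = set(registered_nodes)

    def eject(self, winner, loser):
        # Only a still-registered node may eject another node's key.
        # On a real coordination point this check-and-remove is atomic.
        if winner in self.keys and loser in self.keys:
            self.keys.discard(loser)
            return True
        return False

def race_for_control(node, peer, coordination_points):
    """Try to eject the peer's key from a majority of points.

    Returns True if this node wins the race and may keep writing.
    A losing node must stop all I/O, so the two sides can never
    write to shared storage in an uncoordinated manner.
    """
    wins = sum(1 for cp in coordination_points if cp.eject(node, peer))
    return wins > len(coordination_points) // 2

# Split-brain: both nodes believe the other has failed and race.
points = [CoordinationPoint({"node1", "node2"}) for _ in range(3)]
print(race_for_control("node1", "node2", points))  # True  (wins the race)
print(race_for_control("node2", "node1", points))  # False (already ejected)
```

Because a node must still be registered to eject its peer, and a majority of an odd number of coordination points can go to only one side, exactly one subcluster wins the race even though both start it simultaneously.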