Storage Foundation 7.4.1 Administrator's Guide - Windows
- Overview
- Setup and configuration
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Adding storage
- Disk tasks
- Remove a disk from the computer
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Mount a volume at an empty folder (Drive path)
- Expand a dynamic volume
- Shrink a dynamic volume
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Partitioned shared storage with private dynamic disk group protection
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Event monitoring and notification
- Event notification
- Configuring Automatic volume growth
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Dynamic Disk Group Split and Join troubleshooting tips
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Microsoft Exchange
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft Exchange
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- Typical deployment scenarios
- About cache area
- Configuring SmartIO
- Frequently asked questions about SmartIO
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Configuring a CVM cluster
- Administering CVM
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- Preparing the host machines
- Configuring the SFW storage
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Troubleshooting and recovery
- Using disk and volume status information
- Resolving common problem situations
- Commands or procedures used in troubleshooting and recovery
- Rescan command
- Repair volume command for dynamic mirrored volumes
- Additional troubleshooting issues
- Disk issues
- Volume issues
- Disk group issues
- Connection issues
- Issues related to boot or restart
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- CVM issues
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist (Windows-specific)
- vxsd
- vxedit
- vxdmpadm
- vxcbr
- vxsnap
- vxscrub
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
Additional considerations for SFW Microsoft Failover Clustering support
This section contains additional information that is important in working with Microsoft Failover Clustering and Storage Foundation for Windows.
Note the following considerations:
When a cluster disk group resource is offline or a cluster disk group that is not a failover cluster resource is in a Deported state, it is not protected from access by other computers. For maximum data protection, keep Volume Manager Disk Group resources online. Note that the SFW disk group resources still retain the "Volume Manager" name.
When you use the Windows Server Failover Cluster Manager snap-in to create a disk group resource, the Volume Manager Disk Group Parameters screen might not list all the available Storage Foundation for Windows cluster disk groups in the drop-down list. If this happens, exit the New Resource wizard and use the Failover Cluster Manager snap-in to select the cluster group to which the resource is to be assigned. Next, move that cluster group to the cluster node where the Storage Foundation for Windows cluster disk group is currently online. Then create the Storage Foundation for Windows disk group resource.
Under the following circumstances, the VEA Disk View may not reflect the latest state of the disk(s) until a refresh is performed:
When you change the state of a cluster disk resource on one node and try to view the disks under this resource from another node on the same cluster.
When you change the state of a cluster disk resource on one node and try to view the disks under this resource from a remote computer.
SFW support for the Microsoft Failover Clustering environment allows the selection of either SCSI-2 reservation mode or SCSI-3 reservation mode. You select the type of SCSI support for the Microsoft Failover Clustering environment in the System Settings portion of the SFW Control Panel.
When selecting the type of SCSI support in a Microsoft Failover Clustering environment, it is important to know whether your storage arrays support SCSI-3. SFW SCSI-3 clustering support does not let you mix storage arrays that support SCSI-3 with storage arrays that do not. With mixed storage arrays, you must use SFW SCSI-2 clustering support. Refer to the HCL for arrays that support SCSI-3.
Note:
Veritas maintains a hardware compatibility list (HCL) for Storage Foundation and High Availability Solutions for Windows products on the Veritas support Web site. Check the HCL for details about your storage arrays before selecting the type of SCSI support in a Microsoft Failover Clustering environment.
After selecting the type of SCSI support, you must issue the following CLI commands to complete the setting on your system:
net stop vxsvc
net start vxsvc
Note:
If a cluster disk group is imported on the system, you must deport or move the cluster disk group to another system before issuing these CLI commands.
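Assuming a cluster disk group named ClusterDG1 (a hypothetical name) and the SFW vxdg deport syntax, the complete sequence might look like this sketch:

```shell
rem Deport the cluster disk group before restarting the service
rem (ClusterDG1 is a hypothetical disk group name).
vxdg -gClusterDG1 deport

rem Restart the Veritas Enterprise Administrator service so the
rem new SCSI support setting takes effect.
net stop vxsvc
net start vxsvc
```

Alternatively, move the cluster disk group to another node before restarting the service.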
If SFW SCSI-2 clustering support is selected and Active/Active load balancing is desired, the SCSI-3 Persistent Group Reservations (SCSI-3 PGR) support mode must be enabled for the DMP DSM.
A cluster dynamic disk group that is part of the cluster resources cannot be a source disk group for a join command. However, it can be a target disk group for the command.
Change in Bringing a Two-Disk Cluster Group Online
In earlier versions of Volume Manager for Windows, it was possible to bring a two-disk cluster disk group online when only one disk was available. If a cluster lost all network communication, the disk group could then be brought online on two cluster nodes simultaneously, with each node owning a single disk, possibly resulting in data loss or a partitioned cluster. Although this situation is unlikely for most customers, the consequences if it does occur can be severe. Recent versions of Volume Manager no longer support this behavior: a two-disk cluster disk group cannot be brought online unless it complies with the normal majority algorithm, which means both disks must be available.
The normal majority algorithm requires that at least (n/2 + 1) disks, using integer division, be available in an n-disk disk group. For a two-disk group, 2/2 + 1 = 2, so both disks must be available.
You are not allowed to deport a cluster disk group that is also a Volume Manager disk group resource for Microsoft Failover Clustering.
Connecting to a Cluster Node
If you connect to a computer from the VEA GUI using the virtual name or the virtual IP address, the VEA GUI displays the computer name of the cluster node that currently owns the virtual name and IP resources. Therefore, it is not recommended to use the virtual name or virtual IP address when connecting to and administering a cluster node through SFW HA.
Instead, use the host name or the IP address of the cluster node.
Dynamic Multi-Pathing (DMP) does not support using a basic disk as a cluster resource under Microsoft Failover Clustering.
Failover may not function properly when using Dynamic Multi-Pathing with a Microsoft Failover Clustering basic disk cluster resource. Refer to Tech Note 251662 on the Veritas Support site for details.
If you want to use Dynamic Multi-Pathing with SFW and Microsoft Failover Clustering, you must convert any Microsoft Failover Clustering basic disk cluster resources to dynamic disk cluster resources before activating Dynamic Multi-Pathing. The initial setup of Microsoft Failover Clustering requires that you use a basic disk as the quorum disk. Once InfoScale Storage is installed, you should upgrade the basic disk to dynamic by including it in a dynamic cluster disk group and then convert the quorum resource from a basic disk resource to a dynamic disk resource.
Note:
DMP DSMs do not support an Active/Active setting in a Microsoft Failover Clustering environment when a quorum disk is a basic disk.
Cluster dynamic disk groups that contain iSCSI disks are not set up for persistent login on all nodes in the cluster.
SFW ensures that the iSCSI targets of cluster dynamic disk groups that contain iSCSI disks are configured for persistent login. If the persistent login is not configured for the target, SFW automatically configures it.
Cluster dynamic disk groups that contain iSCSI disks are automatically configured for persistent login only on the node where they were created. The other nodes in the cluster are not enabled for persistent login; you need to set up the persistent login manually on each of the other nodes.
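A minimal sketch of checking and adding a persistent login on one of the other nodes with the Microsoft iSCSI initiator CLI; the target IQN is a hypothetical example, and the asterisks accept the initiator defaults (verify the parameter list against your iscsicli version):

```shell
rem Show the targets and the persistent logins already configured on this node.
iscsicli ListTargets
iscsicli ListPersistentTargets

rem Add a persistent login for the target (hypothetical IQN).
rem T reports the devices to PnP; the trailing 0 means no LUN mappings.
iscsicli PersistentLoginTarget iqn.2001-05.com.example:storage1 T * * * * * * * * * * * * * * * 0
```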
Copying the policy file, VxVolPolicies.xml, to Another Node
If the second node is configured the same as the first, and you want to maintain the first node's Automatic Volume Growth policy settings on the second node, copy the VxVolPolicies.xml file of the first node to the second node. Copy the file to the same path location on the second node as its location on the first node. The default path of the VxVolPolicies.xml file is Documents and Settings\All Users\Application Data\Veritas.
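For example, assuming the second node is reachable as Node2 through its administrative share (a hypothetical name), the copy might look like:

```shell
rem Copy the Automatic Volume Growth policy file to the same default path
rem on the second node (Node2 is a hypothetical node name).
copy "C:\Documents and Settings\All Users\Application Data\Veritas\VxVolPolicies.xml" "\\Node2\C$\Documents and Settings\All Users\Application Data\Veritas\"
```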
More information about the policy file is available.
More information about using SFW and Microsoft Failover Clustering in a shared cluster environment with the FlashSnap off-host backup procedure is available.
If you install the Microsoft Failover Clustering feature on a server on which InfoScale Storage for Windows is already installed, then you must manually restart Veritas Enterprise Administrator Service (VxSvc) by running the following commands:
net stop vxsvc
net start vxsvc