InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Solaris
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Upgrading SFCFSHA using Boot Environment upgrade
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Reconciling major/minor numbers for NFS shared disks
- Appendix G. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
About coordination points
Coordination points provide a lock mechanism to determine which nodes get to fence off data disks from other nodes. A node must eject a peer from the coordination points before it can fence the peer from the data disks. SFCFSHA prevents split-brain by having vxfen race for control of the coordination points; the winning partition then fences the ejected nodes from accessing the data disks.
Note:
Typically, a fencing configuration for a cluster must have three coordination points. Arctera also supports server-based fencing with a single CP server as its only coordination point, with the caveat that this CP server becomes a single point of failure.
The coordination points can be disks, servers, or both.
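You can verify which fencing mode and coordination points a running cluster uses from any cluster node. The following is a minimal sketch using the standard VxFEN utilities; the exact output varies by release.
To display the fencing mode, disk policy, and current cluster membership:
# vxfenadm -d
To review the fencing mode configuration on a node:
# cat /etc/vxfenmode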
Coordinator disks
Disks that act as coordination points are called coordinator disks. Coordinator disks are three standard disks or LUNs set aside for I/O fencing during cluster reconfiguration. Coordinator disks do not serve any other storage purpose in the SFCFSHA configuration.
You can configure coordinator disks to use Volume Manager's Dynamic Multi-pathing (DMP) feature, which gives coordinator disks the path failover and the dynamic add and remove capabilities of DMP. So, you can configure I/O fencing to use DMP devices. I/O fencing uses the dmp-based SCSI-3 disk policy for the disk devices that you use.
With the emergence of NVMe as a high-performance alternative to SCSI-3 for storage connectivity, numerous storage vendors are now introducing NVMe storage arrays.
Furthermore, with the introduction of the NVMe 2.0 specification, multipathing and PGR (persistent group reservations) are fully supported for NVMe storage. If the underlying storage array supports the NVMe PGR feature, those NVMe LUNs can also be used as coordinator disks.
Note:
The dmp disk policy for I/O fencing supports both single and multiple hardware paths from a node to the coordinator disks. If some coordinator disks have multiple hardware paths and others have a single hardware path, only the dmp disk policy is supported. For new installations, Arctera supports only the dmp disk policy for I/O fencing, even for a single hardware path.
See the Storage Foundation Administrator's Guide.
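For disk-based fencing with DMP devices, the fencing mode and disk policy are recorded in the /etc/vxfenmode file, and the coordinator disk group name is recorded in the /etc/vxfendg file. The following is a minimal sketch; the disk group name vxfencoorddg is an example value.
Sample /etc/vxfenmode entries:
vxfen_mode=scsi3
scsi3_disk_policy=dmp
Sample /etc/vxfendg content:
vxfencoorddg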
Coordination point servers
The coordination point server (CP server) is a software solution that runs on a remote system or cluster. The CP server provides arbitration functionality by allowing the SFCFSHA cluster nodes to perform the following tasks:
- Self-register to become a member of an active SFCFSHA cluster (registered with the CP server) with access to the data drives
- Check which other nodes are registered as members of this active SFCFSHA cluster
- Self-unregister from this active SFCFSHA cluster
- Forcefully unregister other nodes (preempt) as members of this active SFCFSHA cluster
In short, the CP server functions as another arbitration mechanism that integrates within the existing I/O fencing module.
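Once a CP server is configured, these registration and preemption operations can also be examined with the cpsadm utility. The following is an illustrative sketch; the host and cluster names are placeholders, and the available actions depend on the release.
To list the nodes of all client clusters that are registered with the CP server:
# cpsadm -s cps1.example.com -a list_nodes
To list the membership of a specific client cluster:
# cpsadm -s cps1.example.com -a list_membership -c clus1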
Note:
With the CP server, the fencing arbitration logic still remains on the SFCFSHA cluster.
Multiple SFCFSHA clusters running different operating systems can simultaneously access the CP server. TCP/IP-based communication is used between the CP server and the SFCFSHA clusters.
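On the SFCFSHA cluster side, server-based fencing is enabled by setting the customized fencing mode in the /etc/vxfenmode file and listing the CP servers and, optionally, a coordinator disk group. The following is a minimal sketch that assumes one CP server and a coordinator disk group; the host name, port, security setting, and disk group name are example values.
Sample /etc/vxfenmode entries for server-based fencing:
vxfen_mode=customized
vxfen_mechanism=cps
cps1=[cps1.example.com]:443
vxfendg=vxfencoorddg
security=1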