InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFHA
- Section II. Configuration of SFHA
- Preparing to configure
- Preparing to configure SFHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring CP server using response files
- Configuring SFHA
- Configuring Storage Foundation High Availability using the installer
- Configuring a secure cluster node by node
- Completing the SFHA configuration
- Verifying and updating licenses on the system
- Configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Manually configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Performing an automated SFHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Section III. Upgrade of SFHA
- Planning to upgrade SFHA
- Preparing to upgrade SFHA
- Upgrading Storage Foundation and High Availability
- Performing a rolling upgrade of SFHA
- Performing a phased upgrade of SFHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFHA upgrade using response files
- Upgrading SFHA using YUM
- Performing post-upgrade tasks
- Post-upgrade tasks when VCS agents for VVR are configured
- About enabling LDAP authentication for clusters that run in secure mode
- Section IV. Post-installation tasks
- Section V. Adding and removing nodes
- Adding a node to SFHA clusters
- Adding the node to a cluster manually
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFHA clusters
- Removing a node from a SFHA cluster
- Section VI. Configuration and upgrade reference
- Appendix A. Installation scripts
- Appendix B. SFHA services and ports
- Appendix C. Configuration files
- Appendix D. Configuring the secure shell or the remote shell for communications
- Appendix E. Sample SFHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
About coordination points
Coordination points provide a lock mechanism to determine which nodes get to fence off data drives from other nodes. A node must eject a peer from the coordination points before it can fence the peer from the data drives. SFHA prevents split-brain by having the vxfen module race for control of the coordination points; the partition that wins the race fences the ejected nodes from the data disks.
Note:
A fencing configuration for a cluster typically requires three coordination points. Arctera also supports server-based fencing with a single CP server as the only coordination point, with the caveat that the CP server then becomes a single point of failure.
The coordination points can be disks, servers, or both.
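For illustration, the following is a minimal sketch of an /etc/vxfenmode file for a configuration that combines one CP server with coordinator disks. The host name, port, and disk group name are placeholders; use the sample vxfenmode files shipped with the product and the fencing configuration procedures in this guide as the authoritative reference.
```
# /etc/vxfenmode -- illustrative sketch only (placeholder values):
# server-based fencing that combines one CP server with coordinator disks
vxfen_mode=customized           # customized (server-based) fencing
vxfen_mechanism=cps             # arbitration through CP server(s)
scsi3_disk_policy=dmp           # coordinator disks are accessed through DMP
security=1                      # secure communication with the CP server
cps1=[cps1.example.com]:443     # CP server host and port (placeholders)
vxfendg=vxfencoorddg            # disk group that contains the coordinator disks
```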
Coordinator disks
Disks that act as coordination points are called coordinator disks. Coordinator disks are three standard disks or LUNs set aside for I/O fencing during cluster reconfiguration. Coordinator disks do not serve any other storage purpose in the SFHA configuration.
You can configure coordinator disks to use Volume Manager's Dynamic Multi-pathing (DMP) feature, which gives the coordinator disks the path failover and the dynamic addition and removal capabilities of DMP. You can therefore configure I/O fencing to use DMP devices. I/O fencing uses the dmp-based SCSI-3 disk policy for the disk devices that you use.
With the emergence of NVMe as a high-performance alternative to SCSI-3 for storage connectivity, numerous storage vendors are now introducing NVMe storage arrays.
Furthermore, with the introduction of the NVMe 2.0 specification, multipathing and PGR (Persistent Group Reservations) are fully supported for NVMe storage. If the underlying storage array supports the NVMe PGR feature, its NVMe LUNs can also be used as coordinator disks.
Note:
The dmp disk policy for I/O fencing supports both single and multiple hardware paths from a node to the coordinator disks. If some coordinator disks have multiple hardware paths and others have a single hardware path, only the dmp disk policy is supported. For new installations, Arctera supports only the dmp disk policy for I/O fencing, even with a single hardware path.
See the Storage Foundation Administrator's Guide.
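To illustrate the dmp disk policy, the commands below sketch how you might inspect the DMP paths behind a coordinator disk and confirm the fencing configuration on a node. The DMP node name is a placeholder, and the exact output depends on your array and release.
```
# List the hardware paths that DMP aggregates for one coordinator disk
# ("emc0_0017" is a placeholder DMP node name for your environment)
vxdmpadm getsubpaths dmpnodename=emc0_0017

# Display the I/O fencing mode, disk policy, and cluster membership
vxfenadm -d

# List the coordinator disks that the fencing driver is currently using
vxfenconfig -l
```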
Coordination point servers
The coordination point server (CP server) is a software solution that runs on a remote system or cluster. The CP server provides arbitration functionality by allowing the SFHA cluster nodes to perform the following tasks:
- Self-register to become a member of an active SFHA cluster (registered with CP server) with access to the data drives
- Check which other nodes are registered as members of this active SFHA cluster
- Self-unregister from this active SFHA cluster
- Forcefully unregister other nodes (preempt) as members of this active SFHA cluster
In short, the CP server functions as another arbitration mechanism that integrates within the existing I/O fencing module.
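The registration, unregistration, and preemption tasks listed above are driven by the vxfen module during arbitration; however, you can inspect the registrations on the CP server with the cpsadm utility. The following is a minimal sketch in which the CP server host name and the cluster name are placeholders.
```
# List the nodes of all client clusters known to this CP server
# (cps1.example.com is a placeholder host name)
cpsadm -s cps1.example.com -a list_nodes

# List the current fencing membership for one client cluster
# ("clus1" is a placeholder cluster name)
cpsadm -s cps1.example.com -a list_membership -c clus1
```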
Note:
With the CP server, the fencing arbitration logic still remains on the SFHA cluster.
Multiple SFHA clusters running different operating systems can simultaneously access the CP server. TCP/IP-based communication is used between the CP server and the SFHA clusters.
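As a quick check of this TCP/IP connectivity, you can ping the CP server from a cluster node, as in the sketch below; the CP server host name is a placeholder, and the ports in use are listed in Appendix B, "SFHA services and ports."
```
# Confirm that the CP server responds to this cluster node over TCP/IP
# (cps1.example.com is a placeholder host name)
cpsadm -s cps1.example.com -a ping_cps
```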