Cluster Server 7.4.2 Configuration and Upgrade Guide - Linux
- Section I. Configuring Cluster Server using the script-based installer
- I/O fencing requirements
- Preparing to configure VCS clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring VCS
- Configuring a secure cluster node by node
- Completing the VCS configuration
- Verifying and updating licenses on the system
- Configuring VCS clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Section II. Automated configuration using response files
- Performing an automated VCS configuration
- Performing an automated I/O fencing configuration using response files
- Section III. Manual configuration
- Manually configuring VCS
- Configuring LLT manually
- Configuring VCS manually
- Configuring VCS in single node mode
- Modifying the VCS configuration
- Manually configuring the clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the VCS cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section IV. Upgrading VCS
- Planning to upgrade VCS
- Performing a VCS upgrade using the installer
- Tasks to perform after upgrading to 2048 bit key and SHA256 signature certificates
- Performing an online upgrade
- Performing a phased upgrade of VCS
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated VCS upgrade using response files
- Section V. Adding and removing cluster nodes
- Adding a node to a single-node cluster
- Adding a node to a multi-node VCS cluster
- Manually adding a node to a cluster
- Setting up the node to run in secure mode
- Configuring I/O fencing on the new node
- Adding a node using response files
- Removing a node from a VCS cluster
- Section VI. Installation reference
- Appendix A. Services and ports
- Appendix B. Configuration files
- Appendix C. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix D. Configuring LLT over TCP
- Manually configuring LLT over TCP using IPv4
- Manually configuring LLT over TCP using IPv6
- Appendix E. Migrating LLT links from IPv4 to IPv6 or dual-stack
- Appendix F. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
- Appendix G. Configuring the secure shell or the remote shell for communications
- Appendix H. Installation script options
- Appendix I. Troubleshooting VCS configuration
- Appendix J. Sample VCS cluster setup diagrams for CP server-based I/O fencing
- Appendix K. Upgrading the Steward process
How LLT supports RDMA capability for faster interconnects between applications
LLT and GAB support fast interconnects between applications using RDMA technology over InfiniBand and Ethernet (RoCE) media. To leverage the RDMA capabilities of the hardware while still supporting the existing LLT functionality, LLT maintains two channels, RDMA and non-RDMA, for each configured RDMA link. Both channels can transfer data between nodes, and LLT provides separate APIs that its clients, such as CFS and CVM, use to access them.

The RDMA channel provides faster data transfer by leveraging the RDMA capabilities of the hardware, and is used mainly for data transfer when the client is capable of using it. The non-RDMA channel is created over the UDP layer, and LLT uses it mainly for sending and receiving heartbeats. GAB decides cluster membership based on the health of the non-RDMA channel. Connection management for the RDMA channel is separate from that of the non-RDMA channel, but connect and disconnect operations for the RDMA channel are triggered by the status of the non-RDMA channel.
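For example, a minimal /etc/llttab for a two-link RDMA configuration might look like the following sketch. The node name, cluster ID, ports, and IP addresses are placeholders, and the field layout assumes the same format as LLT over UDP with "rdma" as the link type; see the appendix "Using LLT over RDMA" for the authoritative syntax.

    # /etc/llttab (illustrative sketch; all values are placeholders)
    set-node sys1
    set-cluster 100
    # link <tag> <device> <node-range> <link-type> <udp-port> <MTU> <address> <bcast>
    # Each "rdma" link carries both channels: the RDMA channel for client
    # data transfer and a UDP (non-RDMA) channel for heartbeats.
    link link1 udp - rdma 50000 - 192.168.1.1 -
    link link2 udp - rdma 50001 - 192.168.2.1 -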
If the non-RDMA channel is up but the RDMA channel is down because of issues in the RDMA layer, data transfer falls back to the non-RDMA channel, with lower performance, until the RDMA channel is restored. The system log displays a message when the RDMA channel goes up or down.
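As a quick check, assuming a standard LLT installation, lltstat reports per-link status for the configured nodes, and RDMA channel state transitions are written to the system log (the exact message text varies by release):

    # Show verbose LLT link status for configured cluster nodes
    lltstat -nvv configured
    # Scan the system log for LLT RDMA channel messages (message text varies)
    grep -i "llt.*rdma" /var/log/messages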
LLT uses the OpenFabrics Enterprise Distribution (OFED) layer and the drivers installed by the operating system to communicate with the hardware. LLT over RDMA allows an application running on one node to directly access the memory of an application running on another node, when the nodes are connected over an RDMA-enabled network. In contrast, on nodes connected over a non-RDMA network, applications cannot directly read from or write to an application running on another node. LLT clients, such as CFS and CVM, have to create intermediate copies of data before completing the read or write operation, which increases latency and can affect performance.
LLT over an RDMA network enables applications to read from or write to applications on another node without creating intermediate copies. This results in lower latency, higher throughput, and reduced host CPU usage, which improves application performance. Cluster Volume Manager (CVM) and Cluster File System (CFS), which are clients of LLT and GAB, can use LLT over RDMA for specific use cases.