InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Completing the SFCFSHA configuration
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading SFCFSHA using YUM
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
Enabling LLT ports in firewall
You can use any firewall tool to enable the network ports. While enabling the ports, make sure that:
- No other application is using the LLT consumable network ports (50000 to 50006).
- In case of InfoScale installations in a cloud environment, these ports are enabled in the security groups (see the sketch after this list).
By default, LLT uses ports 50000 and 50001 for clustering, and ports 50002 to 50006 for I/O shipping sockets.
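For cloud installations, the security-group rules can be created with the cloud provider's CLI. The following is a minimal sketch using the AWS CLI; the security group ID and CIDR shown here are hypothetical placeholders, not values from this guide:
Ingress rule:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 50000-50006 --cidr 10.0.0.0/16
Egress rule:
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol udp --port 50000-50006 --cidr 10.0.0.0/16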
Note:
In a cloud environment with a DNS-based configuration, if the IPs cannot be reached, LLT waits for 2 minutes and 35 seconds and then fails. As a result, after a reboot one of the nodes may fail to join the cluster.
Note:
In cloud environments, specifically across different availability zones (AZs), NIC IPs are NATed. LLT drops the connection because the packet destination may not see the actual source IP.
Ingress rule:
iptables -A INPUT -p udp --dport 50000:50006 -j ACCEPT
Egress rule:
iptables -A OUTPUT -p udp --sport 50000:50006 -j ACCEPT
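If you use firewalld instead of raw iptables rules, an equivalent sketch (assuming the ports should be opened in the default zone) is:
firewall-cmd --permanent --add-port=50000-50006/udp
firewall-cmd --reload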
For example, the following /etc/llttab entries configure two LLT links over UDP using ports 50000 and 50001:
link eth1 udp - udp 50000 - 192.168.10.1 -
link eth2 udp - udp 50001 - 192.168.11.1 -
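The fields in each link directive follow the LLT over UDP format described in Appendix F. As a reading of the first sample entry, and assuming the standard field order:
# link <tag> <device> <node-range> <link-type> <udp-port> <MTU> <IP-address> <bcast-address>
link eth1 udp - udp 50000 - 192.168.10.1 -
Here the MTU and broadcast address fields are left as "-" (defaults), and 192.168.10.1 is the link's local IP address.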
You can also use the following tunables to choose the port range, the number of threads per socket, and the number of sockets per link that LLT uses for I/O shipping. By default, 4 sockets are created for each link.
| Tunable | Description |
| --- | --- |
| set-udpports | Changes the port range used for I/O shipping if you do not want to use ports 50002 and onwards. Usage: set-udpports <initial_port_number>. Example: set-udpports 60000. In this case, LLT uses ports 50000 and 50001 for clustering, and port 60000 and the subsequent port numbers for I/O shipping. |
| set-udpthreads | Specifies the number of threads to be created per socket. Usage: set-udpthreads <number_of_threads_per_socket>. Example: set-udpthreads 3 |
| set-udpsockets | Specifies the number of sockets to be created per link. Usage: set-udpsockets <number_of_sockets_per_link>. Example: set-udpsockets 6 |
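Assuming these tunables are set as /etc/llttab directives like other LLT set-* commands, a minimal sketch that combines the sample links above with the example values from the table (an illustration, not a recommended configuration):
set-udpports 60000
set-udpsockets 6
set-udpthreads 3
link eth1 udp - udp 50000 - 192.168.10.1 -
link eth2 udp - udp 50001 - 192.168.11.1 -
With these settings, LLT keeps ports 50000 and 50001 for clustering and uses ports from 60000 onwards for the I/O shipping sockets, creating six sockets per link with three threads per socket.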