InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Linux
Last Published:
2025-04-18
Product(s):
InfoScale & Storage Foundation (9.0)
Platform: Linux
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Completing the SFCFSHA configuration
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading SFCFSHA using YUM
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
Manually configuring LLT over UDP multiport
Perform the following steps to configure LLT over UDP multiport.
Preparing for configuration
- Set the maximum transmission unit (MTU) of all the high-priority links to the highest value (9000) to ensure optimal performance. You may also choose to use the default MTU (1500).
Ensure that the network path MTU is the same as the MTU of the NIC.
To change the MTU size of a NIC permanently:
a.
Edit the
/etc/sysconfig/network-scripts/ifcfg-eth0
file# vi /etc/sysconfig/network-scripts/ifcfg-eth0
b.
Add MTU settings at the end of the file:
MTU=9000
c.
Save and close the file and restart networking
# service network restart
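After networking restarts, it is worth confirming that the new MTU is active on every high-priority LLT NIC. The following is a minimal sketch; the sample /etc/llttab entries and NIC names (eth1, eth2, eth0) are hypothetical, and it assumes the link tags in /etc/llttab are named after the NICs, as the tuning script in the next section also assumes.

```shell
# Hypothetical sample of /etc/llttab entries; on a real node, read the file itself.
llttab_sample='link eth1 udp - udp 50000 - 192.168.10.1 -
link eth2 udp - udp 50001 - 192.168.11.1 -
link-lowpri eth0 udp - udp 50002 - 10.10.10.1 -'

# Extract the high-priority link NICs (same parsing as the tuning script below):
# skip low-priority links, keep "link" entries, print the second field.
nics=$(printf '%s\n' "$llttab_sample" | grep -v "lowpri" | grep -w "link" | awk '{print $2}')
echo "$nics"

# On a live system, check each NIC's current MTU (not run here):
#   for nic in $nics; do cat /sys/class/net/$nic/mtu; done
```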
- Enable the LLT ports in the firewall.
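The UDP port of each LLT link is listed in /etc/llttab, so the ports to open can be derived from that file. A minimal sketch follows, assuming firewalld and the LLT-over-UDP llttab format described in Appendix F; the sample entries and port numbers are hypothetical. Note that the multiport feature creates additional sockets per link, so additional ports may need to be opened as well.

```shell
# Hypothetical /etc/llttab entries; read the real file on an actual node.
llttab_sample='link eth1 udp - udp 50000 - 192.168.10.1 -
link eth2 udp - udp 50001 - 192.168.11.1 -'

# Field 6 of a udp link entry is the UDP port (assumed format; see Appendix F).
ports=$(printf '%s\n' "$llttab_sample" | awk '$1 == "link" && $5 == "udp" {print $6}')
echo "$ports"

# For each port, one might then open it in firewalld (not executed here):
#   firewall-cmd --permanent --add-port=<port>/udp
#   firewall-cmd --reload
```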
Configuring LLT over UDP multiport
- Use the following shell script to increase the size of the kernel network buffers, which in turn enlarges the send and receive buffers available to LLT. The script also sets the Rx/Tx ring sizes of each high-priority LLT NIC to their maximum and enables the receive side scaling (RSS) functionality of the NIC.
#---------------------------------------
set -x
for card in `cat /etc/llttab | grep -v "lowpri" | grep -w "link" | awk '{print $2}'`; do
    echo -e "Changing buffers of $card"
    ethtool -G $card rx 4096
    ethtool -G $card rx-jumbo 4096
    ethtool -G $card tx 4096
    ethtool -N $card rx-flow-hash udp4 sdfn
    ethtool -N $card rx-flow-hash tcp4 sdfn
    sysctl -w net.ipv4.conf.${card}.arp_ignore=1
done
sysctl -w net.core.rmem_max=1600000000
sysctl -w net.core.wmem_max=1600000000
sysctl -w net.core.netdev_max_backlog=250000
sysctl -w net.core.rmem_default=4194304
sysctl -w net.core.wmem_default=4194304
sysctl -w net.core.optmem_max=4194304
sysctl -w net.ipv4.udp_rmem_min=819200
sysctl -w net.ipv4.udp_wmem_min=819200
sysctl -w net.core.netdev_budget=600
set +x
#---------------------------------------------
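Note that values applied with sysctl -w are lost at reboot. To make them persistent, the same settings can be placed in a sysctl configuration file; the fragment below is a sketch, and the file name /etc/sysctl.d/99-llt-udp.conf is a hypothetical choice.

```
# /etc/sysctl.d/99-llt-udp.conf (hypothetical file name)
net.core.rmem_max = 1600000000
net.core.wmem_max = 1600000000
net.core.netdev_max_backlog = 250000
net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.optmem_max = 4194304
net.ipv4.udp_rmem_min = 819200
net.ipv4.udp_wmem_min = 819200
net.core.netdev_budget = 600
```

Apply the file with sysctl --system, or let it take effect at the next boot. The per-NIC ethtool settings are not covered by sysctl and would need, for example, a udev rule or a network configuration hook to persist.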
- Install Veritas InfoScale using the installer and select UDP as the LLT protocol:
# ./installer
The installer automatically enables the UDP Multiport feature and creates four additional sockets for each LLT link.
- Verify that the UDP multiport links are enabled:
# lltstat -nvvr configured