InfoScale™ 9.0 Storage Foundation and High Availability Configuration and Upgrade Guide - Linux
- Section I. Introduction to SFHA
- Section II. Configuration of SFHA
- Preparing to configure
- Preparing to configure SFHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring CP server using response files
- Configuring SFHA
- Configuring Storage Foundation High Availability using the installer
- Configuring a secure cluster node by node
- Completing the SFHA configuration
- Verifying and updating licenses on the system
- Configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Manually configuring SFHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Performing an automated SFHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Section III. Upgrade of SFHA
- Planning to upgrade SFHA
- Preparing to upgrade SFHA
- Upgrading Storage Foundation and High Availability
- Performing a rolling upgrade of SFHA
- Performing a phased upgrade of SFHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFHA upgrade using response files
- Upgrading SFHA using YUM
- Performing post-upgrade tasks
- Post-upgrade tasks when VCS agents for VVR are configured
- About enabling LDAP authentication for clusters that run in secure mode
- Section IV. Post-installation tasks
- Section V. Adding and removing nodes
- Adding a node to SFHA clusters
- Adding the node to a cluster manually
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFHA clusters
- Removing a node from an SFHA cluster
- Section VI. Configuration and upgrade reference
- Appendix A. Installation scripts
- Appendix B. SFHA services and ports
- Appendix C. Configuration files
- Appendix D. Configuring the secure shell or the remote shell for communications
- Appendix E. Sample SFHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
- About configuring LLT over UDP multiport
- Appendix G. Using LLT over RDMA
- Configuring LLT over RDMA
- Configuring RDMA over an Ethernet network
- Configuring RDMA over an InfiniBand network
- Tuning system performance
- Manually configuring LLT over RDMA
- Troubleshooting LLT over RDMA
Setting up non-SCSI-3 fencing in virtual environments manually
To manually set up I/O fencing in a non-SCSI-3 PR compliant setup
- Configure I/O fencing either in majority-based fencing mode with no coordination points or in server-based fencing mode with only CP servers as coordination points.
- Make sure that the SFHA cluster is online and check that the fencing mode is customized or majority.
# vxfenadm -d
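When fencing is configured correctly, the output resembles the following example for customized mode; the node names sys1 and sys2 are placeholders for your cluster nodes. In majority mode, the Fencing Mode line reflects majority-based fencing instead.
I/O Fencing Cluster Information:
================================
 Fencing Protocol Version: 201
 Fencing Mode: Customized
 Fencing Mechanism: cps
 Cluster Members:
   * 0 (sys1)
     1 (sys2)
 RFSM State Information:
   node 0 in state 8 (running)
   node 1 in state 8 (running)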
- Make sure that the cluster attribute UseFence is set to SCSI3.
# haclus -value UseFence
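If the attribute is set, the command prints its value:
SCSI3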
- On each node, edit the /etc/vxenviron file as follows:
data_disk_fencing=off
- On each node, edit the /etc/sysconfig/vxfen file as follows:
vxfen_vxfnd_tmt=25
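After the edit, the file might resemble the following sketch; the VXFEN_START and VXFEN_STOP entries shown are the usual defaults and may differ on your systems:
VXFEN_START=1
VXFEN_STOP=1
vxfen_vxfnd_tmt=25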
- On each node, edit the /etc/vxfenmode file as follows:
loser_exit_delay=55
vxfen_script_timeout=25
Refer to the sample /etc/vxfenmode file.
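For illustration, a minimal /etc/vxfenmode excerpt for server-based fencing with CP servers as coordination points might look like the following; the CP server names and port are placeholders:
vxfen_mode=customized
vxfen_mechanism=cps
cps1=[cps1.example.com]:443
cps2=[cps2.example.com]:443
cps3=[cps3.example.com]:443
loser_exit_delay=55
vxfen_script_timeout=25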
- On each node, set the value of the LLT sendhbcap timer parameter as follows:
Run the following command:
# lltconfig -T sendhbcap:3000
Add the following line to the /etc/llttab file so that the changes remain persistent after any reboot:
set-timer sendhbcap:3000
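A sketch of the resulting /etc/llttab file follows; the node name, cluster ID, and link entries are placeholders for your existing LLT configuration:
set-node sys1
set-cluster 1234
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
set-timer sendhbcap:3000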
- On any one node, edit the VCS configuration file as follows:
Make the VCS configuration file writable:
# haconf -makerw
For each resource of the type DiskGroup, set the value of the MonitorReservation attribute to 0 and the value of the Reservation attribute to NONE.
# hares -modify <dg_resource> MonitorReservation 0
# hares -modify <dg_resource> Reservation "NONE"
Run the following commands to verify the values:
# hares -list Type=DiskGroup MonitorReservation!=0
# hares -list Type=DiskGroup Reservation!="NONE"
Neither command should list any resources.
Modify the default value of the Reservation attribute at the type level:
# haattr -default DiskGroup Reservation "NONE"
Make the VCS configuration file read-only:
# haconf -dump -makero
- Make sure that the UseFence attribute in the VCS configuration file main.cf is set to SCSI3.
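For reference, the relevant main.cf entries after these changes resemble the following sketch; the cluster name, resource name, and disk group name (clus1, dg_resource, datadg) are placeholders:
cluster clus1 (
        UseFence = SCSI3
        )

DiskGroup dg_resource (
        DiskGroup = datadg
        MonitorReservation = 0
        Reservation = NONE
        )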
To make these VxFEN changes take effect, stop and restart VxFEN and the dependent modules:
On each node, run the following command to stop VCS:
For systemd environments with supported Linux distributions:
# systemctl stop vcs
For other supported Linux distributions:
# /etc/init.d/vcs stop
After VCS takes all services offline, run the following command to stop VxFEN:
For systemd environments with supported Linux distributions:
# systemctl stop vxfen
For other supported Linux distributions:
# /etc/init.d/vxfen stop
On each node, run the following commands to restart VxFEN and VCS:
For systemd environments with supported Linux distributions:
# systemctl start vxfen
# systemctl start vcs
For other supported Linux distributions:
# /etc/init.d/vxfen start
# /etc/init.d/vcs start
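To confirm that the fencing driver and VCS restarted cleanly, you can check the GAB port memberships on each node; port b (fencing) and port h (VCS) should appear. The generation numbers and membership values in this example are illustrative:
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   a36e0003 membership 01
Port b gen   a36e0006 membership 01
Port h gen   a36e0008 membership 01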