Cluster Server 7.4.1 Configuration and Upgrade Guide - Linux
Last Published:
2019-06-18
Product(s):
InfoScale & Storage Foundation (7.4.1)
Platform: Linux
Setting the order of existing coordination points using the installer
To set the order of existing coordination points
- Start the installer with the -fencing option.
# /opt/VRTS/install/installer -fencing
The installer starts with a copyright message and verifies the cluster information.
Note the location of log files that you can access if there is a problem with the configuration process.
- Confirm that you want to proceed with the I/O fencing configuration at the prompt.
The program checks that the local node running the script can communicate with the remote nodes, and verifies that VCS 7.4.1 is configured properly.
- Review the I/O fencing configuration options that the program presents. Enter the number of the option for setting the order of existing coordination points.
For example:
Select the fencing mechanism to be configured in this Application Cluster [1-7,q] 7
The installer prompts for the new order of the existing coordination points, and then calls the vxfenswap utility to commit the change.
Warning:
The cluster might panic if a node leaves membership before the coordination points change is complete.
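Because a node that leaves the membership mid-swap can panic the cluster, it can help to confirm beforehand that every node is up in the fencing membership, for example by inspecting vxfenadm -d output. A minimal sketch, assuming the typical output format; the sample output and node names (sys1, sys2) below are illustrative, not taken from this guide. On a cluster node you would parse the live command output instead of a saved copy.

```shell
# Write a sample of `vxfenadm -d` output so the parsing logic can be
# tried off-cluster. On a real node: vxfenadm -d > ./vxfenadm.out
cat > ./vxfenadm.out <<'EOF'
I/O Fencing Cluster Information:
================================
 Fencing Protocol Version: 201
 Fencing Mode: Customized
 Fencing Mechanism: cps
 Cluster Members:
   * 0 (sys1)
     1 (sys2)
 RSM State Information
   node 0 in state 8 (running)
   node 1 in state 8 (running)
EOF

# Count nodes reported at all vs. nodes reported as running.
total=$(grep -c 'node [0-9]* in state' ./vxfenadm.out)
running=$(grep -c 'in state 8 (running)' ./vxfenadm.out)
echo "$running of $total nodes running"
```

If the two counts differ, wait until all nodes rejoin before starting the coordination point change.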
- Review the current order of coordination points.
Current coordination points order:
(Coordination disks/Coordination Point Server)
Example,
1) /dev/vx/rdmp/emc_clariion0_65,/dev/vx/rdmp/emc_clariion0_66,
   /dev/vx/rdmp/emc_clariion0_62
2) [10.198.94.144]:443
3) [10.198.94.146]:443
b) Back to previous menu
- Enter the new order of the coordination points by number, separated by spaces [1-3,b,q] 3 1 2.
New coordination points order:
(Coordination disks/Coordination Point Server)
Example,
1) [10.198.94.146]:443
2) /dev/vx/rdmp/emc_clariion0_65,/dev/vx/rdmp/emc_clariion0_66,
   /dev/vx/rdmp/emc_clariion0_62
3) [10.198.94.144]:443
- Is this information correct? [y,n,q] (y).
Preparing vxfenmode.test file on all systems...
Running vxfenswap...
Successfully completed the vxfenswap operation
- Do you want to send the information about this installation to us to help improve installation in the future? [y,n,q,?] (y).
- Do you want to view the summary file? [y,n,q] (n).
- Verify that the value of vxfen_honor_cp_order specified in the /etc/vxfenmode file is set to 1. For example:
vxfen_mode=customized
vxfen_mechanism=cps
port=443
scsi3_disk_policy=dmp
cps1=[10.198.94.146]
vxfendg=vxfencoorddg
cps2=[10.198.94.144]
vxfen_honor_cp_order=1
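This verification can be scripted with a simple grep. A minimal sketch, assuming the vxfenmode settings shown in the step above; it writes a local sample file so the check can be tried off-cluster, whereas on a cluster node you would point the variable at /etc/vxfenmode.

```shell
# Sample vxfenmode content, mirroring the example above.
# On a real node: VXFENMODE=/etc/vxfenmode (and skip the heredoc).
VXFENMODE=./vxfenmode.sample
cat > "$VXFENMODE" <<'EOF'
vxfen_mode=customized
vxfen_mechanism=cps
port=443
scsi3_disk_policy=dmp
cps1=[10.198.94.146]
vxfendg=vxfencoorddg
cps2=[10.198.94.144]
vxfen_honor_cp_order=1
EOF

# Check that coordination point order honoring is enabled.
if grep -q '^vxfen_honor_cp_order=1$' "$VXFENMODE"; then
    echo "vxfen_honor_cp_order is enabled"
else
    echo "vxfen_honor_cp_order is NOT set to 1" >&2
fi
```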
- Verify that the coordination point order is updated in the output of the vxfenconfig -l command.
For example:
I/O Fencing Configuration Information:
======================================
single_cp=0
[10.198.94.146]:443 {e7823b24-1dd1-11b2-8814-2299557f1dc0}
/dev/vx/rdmp/emc_clariion0_65 60060160A38B1600386FD87CA8FDDD11
/dev/vx/rdmp/emc_clariion0_66 60060160A38B1600396FD87CA8FDDD11
/dev/vx/rdmp/emc_clariion0_62 60060160A38B16005AA00372A8FDDD11
[10.198.94.144]:443 {01f18460-1dd2-11b2-b818-659cbc6eb360}
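To confirm that the first coordination point is now the intended one, the vxfenconfig -l output can be parsed: coordination point servers appear as [ip]:port lines, so the first such line is the highest-priority coordination point. A minimal sketch against a saved copy of the example output above; on a cluster node you would run vxfenconfig -l directly instead of using the inlined sample.

```shell
# Save a sample of `vxfenconfig -l` output, mirroring the example above.
# On a real node: vxfenconfig -l > ./vxfenconfig.out
cat > ./vxfenconfig.out <<'EOF'
I/O Fencing Configuration Information:
======================================
 single_cp=0
 [10.198.94.146]:443 {e7823b24-1dd1-11b2-8814-2299557f1dc0}
 /dev/vx/rdmp/emc_clariion0_65 60060160A38B1600386FD87CA8FDDD11
 /dev/vx/rdmp/emc_clariion0_66 60060160A38B1600396FD87CA8FDDD11
 /dev/vx/rdmp/emc_clariion0_62 60060160A38B16005AA00372A8FDDD11
 [10.198.94.144]:443 {01f18460-1dd2-11b2-b818-659cbc6eb360}
EOF

# The first line beginning with "[" is the first CP server in the order.
first_cp=$(awk '/^ *\[/{print $1; exit}' ./vxfenconfig.out)
echo "First coordination point: $first_cp"
```

With the 3 1 2 reordering from the earlier step, the first coordination point reported should be [10.198.94.146]:443.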