Cluster Server 7.3.1 Configuration and Upgrade Guide - Solaris
Last Published: 2019-04-17
Product(s): InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
- Section I. Configuring Cluster Server using the script-based installer
- I/O fencing requirements
- Preparing to configure VCS clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring VCS
- Overview of tasks to configure VCS using the product installer
- Starting the software configuration
- Specifying systems for configuration
- Configuring the cluster name
- Configuring private heartbeat links
- Configuring the virtual IP of the cluster
- Configuring VCS in secure mode
- Setting up trust relationships for your VCS cluster
- Configuring a secure cluster node by node
- Adding VCS users
- Configuring SMTP email notification
- Configuring SNMP trap notification
- Configuring global clusters
- Completing the VCS configuration
- Verifying and updating licenses on the system
- Configuring VCS clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Setting up non-SCSI-3 I/O fencing in virtual environments using installer
- Setting up majority-based I/O fencing using installer
- Enabling or disabling the preferred fencing policy
- Section II. Automated configuration using response files
- Performing an automated VCS configuration
- Performing an automated I/O fencing configuration using response files
- Configuring I/O fencing using response files
- Response file variables to configure disk-based I/O fencing
- Sample response file for configuring disk-based I/O fencing
- Response file variables to configure server-based I/O fencing
- Sample response file for configuring server-based I/O fencing
- Response file variables to configure non-SCSI-3 I/O fencing
- Sample response file for configuring non-SCSI-3 I/O fencing
- Response file variables to configure majority-based I/O fencing
- Sample response file for configuring majority-based I/O fencing
- Section III. Manual configuration
- Manually configuring VCS
- About configuring VCS manually
- Configuring LLT manually
- Configuring GAB manually
- Configuring VCS manually
- Configuring VCS in single node mode
- Starting LLT, GAB, and VCS after manual configuration
- About configuring cluster using VCS Cluster Configuration wizard
- Before configuring a VCS cluster using the VCS Cluster Configuration wizard
- Launching the VCS Cluster Configuration wizard
- Configuring a cluster by using the VCS cluster configuration wizard
- Adding a system to a VCS cluster
- Modifying the VCS configuration
- Manually configuring the clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Preparing the CP servers manually for use by the VCS cluster
- Generating the client key and certificates manually on the client nodes
- Configuring server-based fencing on the VCS cluster manually
- Configuring CoordPoint agent to monitor coordination points
- Verifying server-based I/O fencing configuration
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section IV. Upgrading VCS
- Planning to upgrade VCS
- About upgrading to VCS 7.3.1
- Upgrading VCS in secure enterprise environments
- Supported upgrade paths
- Considerations for upgrading secure VCS 6.x clusters to VCS 7.3.1
- Considerations for upgrading VCS to 7.3.1 on systems configured with an Oracle resource
- Considerations for upgrading secure VCS clusters to VCS 7.3.1
- Considerations for upgrading CP servers
- Considerations for upgrading CP clients
- Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
- Performing a VCS upgrade using the installer
- Before upgrading VCS using the script-based installer
- Upgrading VCS using the product installer
- Upgrading to 2048 bit key and SHA256 signature certificates
- Tasks to perform after upgrading to 2048 bit key and SHA256 signature certificates
- Deleting certificates of non-root users after upgrading to 2048 bit key and SHA256 signature certificates
- Re-establishing WAC communication in global clusters after upgrading to 2048 bit key and SHA256 signature certificates
- Re-establishing CP server and CP client communication after upgrading to 2048 bit key and SHA256 signature certificates
- Re-establishing trust with Steward after upgrading to 2048 bit key and SHA256 signature certificates
- Upgrading Steward to 2048 bit key and SHA256 signature certificates
- Performing an online upgrade
- Performing a rolling upgrade of VCS
- Performing a phased upgrade of VCS
- About phased upgrade
- Performing a phased upgrade using the product installer
- Moving the service groups to the second subcluster
- Upgrading the operating system on the first subcluster
- Upgrading the first subcluster
- Preparing the second subcluster
- Activating the first subcluster
- Upgrading the operating system on the second subcluster
- Upgrading the second subcluster
- Finishing the phased upgrade
- Performing an automated VCS upgrade using response files
- Upgrading VCS using Live Upgrade and Boot Environment upgrade
- Section V. Adding and removing cluster nodes
- Adding a node to a single-node cluster
- Adding a node to a multi-node VCS cluster
- Adding nodes using the VCS installer
- Manually adding a node to a cluster
- Setting up the hardware
- Installing the VCS software manually when adding a node
- Setting up the node to run in secure mode
- Configuring LLT and GAB when adding a node to the cluster
- Configuring I/O fencing on the new node
- Adding the node to the existing cluster
- Starting VCS and verifying the cluster
- Adding a node using response files
- Removing a node from a VCS cluster
- Verifying the status of nodes and service groups
- Deleting the departing node from VCS configuration
- Modifying configuration files on each remaining node
- Removing the node configuration from the CP server
- Removing security credentials from the leaving node
- Unloading LLT and GAB and removing Veritas InfoScale Availability or Enterprise on the departing node
- Section VI. Installation reference
- Appendix A. Services and ports
- Appendix B. Configuration files
- Appendix C. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Broadcast address in the /etc/llttab file
- The link command in the /etc/llttab file
- The set-addr command in the /etc/llttab file
- Selecting UDP ports
- Configuring the netmask for LLT
- Configuring the broadcast address for LLT
- Sample configuration: direct-attached links
- Sample configuration: links crossing IP routers
- Manually configuring LLT over UDP using IPv6
- LLT over UDP sample /etc/llttab
- Appendix D. Configuring the secure shell or the remote shell for communications
- About configuring secure shell or remote shell communication modes before installing products
- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility
- Restarting the ssh session
- Enabling and disabling rsh for Solaris
- Appendix E. Installation script options
- Appendix F. Troubleshooting VCS configuration
- Restarting the installer after a failed network connection
- Cannot launch the cluster view link
- Starting and stopping processes for the Veritas InfoScale products
- Installer cannot create UUID for the cluster
- LLT startup script displays errors
- The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
- Issues during fencing startup on VCS cluster nodes set up for server-based fencing
- Appendix G. Sample VCS cluster setup diagrams for CP server-based I/O fencing
- Appendix H. Reconciling major/minor numbers for NFS shared disks
- Appendix I. Upgrading the Steward process
Checking the major and minor number for VxVM volumes
The following procedure describes how to check, and if necessary change, the major and minor numbers for the VxVM volumes that cluster systems use.
To check major and minor numbers on VxVM volumes
1. Place the VCS command directory in your path. For example:

# export PATH=$PATH:/usr/sbin:/sbin:/opt/VRTS/bin

2. To list the devices, use the ls -lL block_device command on each node:

# ls -lL /dev/vx/dsk/shareddg/vol3

On Node A, the output may resemble:

brw------- 1 root root 32,43000 Mar 22 16:41 /dev/vx/dsk/shareddg/vol3

On Node B, the output may resemble:

brw------- 1 root root 36,43000 Mar 22 16:41 /dev/vx/dsk/shareddg/vol3

3. Import the associated shared disk group on each node.

4. Use the following command on each node exporting an NFS file system. The command displays the major numbers for vxio and vxspec that Veritas Volume Manager uses. Note that other major numbers are also displayed, but only vxio and vxspec are of concern for reconciliation (a sketch that automates this comparison across nodes follows this procedure):

# grep vx /etc/name_to_major

Output on Node A:

vxdmp 30
vxio 32
vxspec 33
vxfen 87
vxglm 91

Output on Node B:

vxdmp 30
vxio 36
vxspec 37
vxfen 87
vxglm 91

5. To change Node B's major numbers for vxio and vxspec to match those of Node A, use the command:

haremajor -vx major_number_vxio major_number_vxspec

For example, enter:

# haremajor -vx 32 33

If the command succeeds, proceed to step 8. If this command fails, you receive a report similar to the following:

Error: Preexisting major number 32
These are available numbers on this system: 128...
Check /etc/name_to_major on all systems for available numbers.

6. If you receive this report, use the haremajor command on Node A to change the major number (32/33) to match that of Node B (36/37). For example, enter:

# haremajor -vx 36 37

If the command fails again, you receive a report similar to the following:

Error: Preexisting major number 36
These are available numbers on this node: 126...
Check /etc/name_to_major on all systems for available numbers.

7. If you receive the second report, choose the larger of the two available numbers (in this example, 128). Use this number in the haremajor command to reconcile the major numbers. Type the following command on both nodes:

# haremajor -vx 128 129

8. Reboot each node on which haremajor was successful.

9. If the minor numbers match, proceed to reconcile the major and minor numbers of your next NFS block device.

10. If the block device on which the minor number does not match is a volume, consult the vxdg(1M) manual page. The manual page provides instructions on reconciling the Veritas Volume Manager minor numbers, with specific reference to the reminor option (an illustrative reminor invocation follows this procedure).
Nodes where the vxio driver number has been changed require a reboot.
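
The procedure above compares /etc/name_to_major by hand on each node. The following is a minimal sketch, not part of the product, that automates the comparison from one administrative host. It assumes passwordless ssh to both nodes and uses the hypothetical node names sysA and sysB; adjust both to match your cluster.

#!/bin/ksh
# Sketch: compare the vxio and vxspec major numbers on two cluster nodes
# before reconciling them with haremajor. Assumptions: passwordless ssh
# to both nodes; sysA and sysB are placeholder host names; nawk is the
# Solaris new awk.
NODE_A=sysA
NODE_B=sysB

for driver in vxio vxspec; do
    # Pull the driver's major number (second field) from each node's
    # /etc/name_to_major, the same file the procedure inspects by hand.
    major_a=$(ssh $NODE_A grep vx /etc/name_to_major | \
        nawk -v d="$driver" '$1 == d { print $2 }')
    major_b=$(ssh $NODE_B grep vx /etc/name_to_major | \
        nawk -v d="$driver" '$1 == d { print $2 }')

    if [ "$major_a" = "$major_b" ]; then
        echo "$driver: major number $major_a matches on both nodes"
    else
        echo "$driver: mismatch ($NODE_A=$major_a, $NODE_B=$major_b)"
    fi
done

If the script reports a mismatch, reconcile it with haremajor -vx as described in steps 5 through 7.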
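
If step 10 applies, that is, the minor numbers of a shared disk group's volumes differ between nodes, vxdg(1M) documents a reminor operation that renumbers a disk group's volumes from a new base minor number. A hedged example, reusing the shareddg disk group from this procedure with an illustrative new base minor of 50000 (verify the exact syntax and any restart requirements in vxdg(1M) for your release):

# vxdg reminor shareddg 50000

Choose a base that does not collide with minor numbers already in use on any node; the ls -lL output from step 2 shows the minors currently assigned on each node.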