Cluster Server 7.3.1 Configuration and Upgrade Guide - Solaris
- Section I. Configuring Cluster Server using the script-based installer
- I/O fencing requirements
- Preparing to configure VCS clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring VCS
- Overview of tasks to configure VCS using the product installer
- Starting the software configuration
- Specifying systems for configuration
- Configuring the cluster name
- Configuring private heartbeat links
- Configuring the virtual IP of the cluster
- Configuring VCS in secure mode
- Setting up trust relationships for your VCS cluster
- Configuring a secure cluster node by node
- Adding VCS users
- Configuring SMTP email notification
- Configuring SNMP trap notification
- Configuring global clusters
- Completing the VCS configuration
- Verifying and updating licenses on the system
- Configuring VCS clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Setting up non-SCSI-3 I/O fencing in virtual environments using installer
- Setting up majority-based I/O fencing using installer
- Enabling or disabling the preferred fencing policy
- Section II. Automated configuration using response files
- Performing an automated VCS configuration
- Performing an automated I/O fencing configuration using response files
- Configuring I/O fencing using response files
- Response file variables to configure disk-based I/O fencing
- Sample response file for configuring disk-based I/O fencing
- Response file variables to configure server-based I/O fencing
- Sample response file for configuring server-based I/O fencing
- Response file variables to configure non-SCSI-3 I/O fencing
- Sample response file for configuring non-SCSI-3 I/O fencing
- Response file variables to configure majority-based I/O fencing
- Sample response file for configuring majority-based I/O fencing
- Section III. Manual configuration
- Manually configuring VCS
- About configuring VCS manually
- Configuring LLT manually
- Configuring GAB manually
- Configuring VCS manually
- Configuring VCS in single node mode
- Starting LLT, GAB, and VCS after manual configuration
- About configuring cluster using VCS Cluster Configuration wizard
- Before configuring a VCS cluster using the VCS Cluster Configuration wizard
- Launching the VCS Cluster Configuration wizard
- Configuring a cluster by using the VCS cluster configuration wizard
- Adding a system to a VCS cluster
- Modifying the VCS configuration
- Manually configuring the clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Preparing the CP servers manually for use by the VCS cluster
- Generating the client key and certificates manually on the client nodes
- Configuring server-based fencing on the VCS cluster manually
- Configuring CoordPoint agent to monitor coordination points
- Verifying server-based I/O fencing configuration
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section IV. Upgrading VCS
- Planning to upgrade VCS
- About upgrading to VCS 7.3.1
- Upgrading VCS in secure enterprise environments
- Supported upgrade paths
- Considerations for upgrading secure VCS 6.x clusters to VCS 7.3.1
- Considerations for upgrading VCS to 7.3.1 on systems configured with an Oracle resource
- Considerations for upgrading secure VCS clusters to VCS 7.3.1
- Considerations for upgrading CP servers
- Considerations for upgrading CP clients
- Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
- Performing a VCS upgrade using the installer
- Before upgrading VCS using the script-based installer
- Upgrading VCS using the product installer
- Upgrading to 2048 bit key and SHA256 signature certificates
- Tasks to perform after upgrading to 2048 bit key and SHA256 signature certificates
- Deleting certificates of non-root users after upgrading to 2048 bit key and SHA256 signature certificates
- Re-establishing WAC communication in global clusters after upgrading to 2048 bit key and SHA256 signature certificates
- Re-establishing CP server and CP client communication after upgrading to 2048 bit key and SHA256 signature certificates
- Re-establishing trust with Steward after upgrading to 2048 bit key and SHA256 signature certificates
- Upgrading Steward to 2048 bit key and SHA256 signature certificates
- Performing an online upgrade
- Performing a rolling upgrade of VCS
- Performing a phased upgrade of VCS
- About phased upgrade
- Performing a phased upgrade using the product installer
- Moving the service groups to the second subcluster
- Upgrading the operating system on the first subcluster
- Upgrading the first subcluster
- Preparing the second subcluster
- Activating the first subcluster
- Upgrading the operating system on the second subcluster
- Upgrading the second subcluster
- Finishing the phased upgrade
- Performing an automated VCS upgrade using response files
- Upgrading VCS using Live Upgrade and Boot Environment upgrade
- Section V. Adding and removing cluster nodes
- Adding a node to a single-node cluster
- Adding a node to a single-node cluster
- Adding a node to a multi-node VCS cluster
- Adding nodes using the VCS installer
- Manually adding a node to a cluster
- Setting up the hardware
- Installing the VCS software manually when adding a node
- Setting up the node to run in secure mode
- Configuring LLT and GAB when adding a node to the cluster
- Configuring I/O fencing on the new node
- Adding the node to the existing cluster
- Starting VCS and verifying the cluster
- Adding a node using response files
- Removing a node from a VCS cluster
- Removing a node from a VCS cluster
- Verifying the status of nodes and service groups
- Deleting the departing node from VCS configuration
- Modifying configuration files on each remaining node
- Removing the node configuration from the CP server
- Removing security credentials from the leaving node
- Unloading LLT and GAB and removing Veritas InfoScale Availability or Enterprise on the departing node
- Section VI. Installation reference
- Appendix A. Services and ports
- Appendix B. Configuration files
- Appendix C. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Broadcast address in the /etc/llttab file
- The link command in the /etc/llttab file
- The set-addr command in the /etc/llttab file
- Selecting UDP ports
- Configuring the netmask for LLT
- Configuring the broadcast address for LLT
- Sample configuration: direct-attached links
- Sample configuration: links crossing IP routers
- Manually configuring LLT over UDP using IPv6
- LLT over UDP sample /etc/llttab
- Appendix D. Configuring the secure shell or the remote shell for communications
- About configuring secure shell or remote shell communication modes before installing products
- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility
- Restarting the ssh session
- Enabling and disabling rsh for Solaris
- Appendix E. Installation script options
- Appendix F. Troubleshooting VCS configuration
- Restarting the installer after a failed network connection
- Cannot launch the cluster view link
- Starting and stopping processes for the Veritas InfoScale products
- Installer cannot create UUID for the cluster
- LLT startup script displays errors
- The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
- Issues during fencing startup on VCS cluster nodes set up for server-based fencing
- Appendix G. Sample VCS cluster setup diagrams for CP server-based I/O fencing
- Appendix H. Reconciling major/minor numbers for NFS shared disks
- Appendix I. Upgrading the Steward process
Setting up non-SCSI-3 fencing in virtual environments manually
To manually set up I/O fencing in a non-SCSI-3 PR compliant setup
- Configure I/O fencing either in majority-based fencing mode with no coordination points, or in server-based fencing mode with only CP servers as coordination points.
- Make sure that the VCS cluster is online and check that the fencing mode is customized mode or majority mode.
# vxfenadm -d
- Make sure that the cluster attribute UseFence is set to SCSI-3.
# haclus -value UseFence
- On each node, edit the /etc/vxenviron file as follows:
data_disk_fencing=off
- On each node, edit the /kernel/drv/vxfen.conf file as follows:
vxfen_vxfnd_tmt=25
- On each node, edit the /etc/vxfenmode file as follows:
loser_exit_delay=55
vxfen_script_timeout=25
Refer to the sample /etc/vxfenmode file.
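For orientation, a non-SCSI-3 /etc/vxfenmode file typically contains entries along the following lines; the CP server entry shown is a placeholder for your own coordination point configuration, and the exact set of entries depends on your deployment:

```
# /etc/vxfenmode (illustrative fragment only)
vxfen_mode=customized
vxfen_mechanism=cps
cps1=[cps1.example.com]:443
loser_exit_delay=55
vxfen_script_timeout=25
```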
- On each node, set the LLT sendhbcap timer parameter as follows:
Run the following command:
# lltconfig -T sendhbcap:3000
Add the following line to the /etc/llttab file so that the changes remain persistent after any reboot:
set-timer sendhbcap:3000
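The llttab edit above can be scripted so that re-running the step does not duplicate the entry. The following is a minimal sketch (not a Veritas-supplied utility); it demonstrates the idempotent append on a scratch copy, and on a real node you would pass /etc/llttab instead:

```shell
# Sketch: append the LLT sendhbcap timer line to an llttab file only when
# it is missing, so repeated runs leave a single entry.
add_sendhbcap() {
    f="$1"
    grep -q '^set-timer sendhbcap:3000$' "$f" 2>/dev/null ||
        echo 'set-timer sendhbcap:3000' >> "$f"
}

# Demonstrate on a scratch copy; on a real node you would pass /etc/llttab.
demo=$(mktemp)
printf 'set-node node1\nset-cluster 42\n' > "$demo"
add_sendhbcap "$demo"
add_sendhbcap "$demo"   # second call is a no-op
grep -c 'sendhbcap' "$demo"   # prints 1
```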
- On any one node, edit the VCS configuration file as follows:
Make the VCS configuration file writable:
# haconf -makerw
For each resource of the type DiskGroup, set the value of the MonitorReservation attribute to 0 and the value of the Reservation attribute to NONE.
# hares -modify <dg_resource> MonitorReservation 0
# hares -modify <dg_resource> Reservation "NONE"
Run the following command to verify the value:
# hares -list Type=DiskGroup MonitorReservation!=0
# hares -list Type=DiskGroup Reservation!="NONE"
The command should not list any resources.
Modify the default value of the Reservation attribute at the type level:
# haattr -default DiskGroup Reservation "NONE"
Make the VCS configuration file read-only:
# haconf -dump -makero
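Because the two hares -modify commands must be repeated for every DiskGroup resource, a small loop helps. This sketch only prints the commands for review rather than running them; the resource names dg_res1 and dg_res2 are placeholders, and on a live node you would gather the real names from hares -list Type=DiskGroup:

```shell
# Sketch: print the per-resource hares commands for review before running
# them on the cluster. The resource names passed in are placeholders.
emit_dg_cmds() {
    for dg_res in "$@"; do
        echo "hares -modify $dg_res MonitorReservation 0"
        echo "hares -modify $dg_res Reservation NONE"
    done
}

emit_dg_cmds dg_res1 dg_res2
```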
- Make sure that the UseFence attribute in the VCS configuration file main.cf is set to SCSI-3.
- To make these VxFEN changes take effect, stop and restart VxFEN and the dependent modules:
On each node, run the following command to stop VCS:
# svcadm disable -t vcs
After VCS takes all services offline, run the following command to stop VxFEN:
# svcadm disable -t vxfen
On each node, run the following commands to restart VxFEN and VCS:
# svcadm enable vxfen
# svcadm enable vcs