InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide - Solaris
Last Published: 2025-04-18
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Solaris
- Section I. Introduction to SFCFSHA
- Introducing Storage Foundation Cluster File System High Availability
- Section II. Configuration of SFCFSHA
- Preparing to configure
- Preparing to configure SFCFSHA clusters for data integrity
- About planning to configure I/O fencing
- Setting up the CP server
- Configuring the CP server manually
- Configuring SFCFSHA
- Configuring a secure cluster node by node
- Verifying and updating licenses on the system
- Configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Setting up server-based I/O fencing using installer
- Performing an automated SFCFSHA configuration using response files
- Performing an automated I/O fencing configuration using response files
- Configuring CP server using response files
- Manually configuring SFCFSHA clusters for data integrity
- Setting up disk-based I/O fencing manually
- Setting up server-based I/O fencing manually
- Configuring server-based fencing on the SFCFSHA cluster manually
- Setting up non-SCSI-3 fencing in virtual environments manually
- Setting up majority-based I/O fencing manually
- Section III. Upgrade of SFCFSHA
- Planning to upgrade SFCFSHA
- Preparing to upgrade SFCFSHA
- Performing a full upgrade of SFCFSHA using the installer
- Performing a rolling upgrade of SFCFSHA
- Performing a phased upgrade of SFCFSHA
- About phased upgrade
- Performing a phased upgrade using the product installer
- Performing an automated SFCFSHA upgrade using response files
- Upgrading Volume Replicator
- Upgrading VirtualStore
- Upgrading SFCFSHA using Boot Environment upgrade
- Performing post-upgrade tasks
- Section IV. Post-configuration tasks
- Section V. Configuration of disaster recovery environments
- Section VI. Adding and removing nodes
- Adding a node to SFCFSHA clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding a node using response files
- Configuring server-based fencing on the new node
- Removing a node from SFCFSHA clusters
- Section VII. Configuration and Upgrade reference
- Appendix A. Installation scripts
- Appendix B. Configuration files
- Appendix C. Configuring the secure shell or the remote shell for communications
- Appendix D. High availability agent information
- Appendix E. Sample SFCFSHA cluster setup diagrams for CP server-based I/O fencing
- Appendix F. Reconciling major/minor numbers for NFS shared disks
- Appendix G. Configuring LLT over UDP
- Using the UDP layer for LLT
- Manually configuring LLT over UDP using IPv4
- Using the UDP layer of IPv6 for LLT
- Manually configuring LLT over UDP using IPv6
Modifying the main.cf file
Save a copy of the main.cf file and modify the configuration information in the main.cf file.
To modify the main.cf file
- On any node, make a copy of the current main.cf file.
For example:
# cp /etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/main.save
- Choose one node from the cluster to execute step 3 through step 9.
- On the node you selected in step 2, run the following commands sequentially:
# haconf -makerw
# hares -unlink vxfsckd qlogckd
# hares -unlink qlogckd cvm_clus
# hares -link vxfsckd cvm_clus
# hares -delete qlogckd
# haconf -dump -makero
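To confirm that the dependency change took effect, you can optionally list the dependencies of the vxfsckd resource. This check is not part of the documented procedure:
# hares -dep vxfsckd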
- On all the nodes in the cluster, run the following commands sequentially:
# ps -ef | grep qlogckd
# kill -9 pid_of_qlogckd
# modinfo | grep -i qlog
# modunload -i module_id_of_qlog
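For example, if the ps output shows qlogckd running with PID 2345 and modinfo lists the qlog module with id 123 (both values are hypothetical; use the values reported on your system), the kill and modunload commands would be:
# kill -9 2345
# modunload -i 123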
- On the node you selected in step 2, stop VCS on all nodes:
# /opt/VRTS/bin/hastop -all -force
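Optionally, confirm on each node that the VCS engine (had) is no longer running before you edit the configuration file. This check is not part of the documented procedure:
# ps -ef | grep -w had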
- On the node you selected in step 2, if you have configured the VCS Cluster Manager (web console), complete the following steps to modify the /etc/VRTSvcs/conf/config/main.cf file.
Remove VRTSweb:
Process VRTSweb (
        PathName = "/opt/VRTSvcs/bin/haweb"
        Arguments = "10.129.96.64 8181"
        )
Replace it with:
VRTSWebApp VCSweb (
        Critical = 0
        AppName = vcs
        InstallDir = "/opt/VRTSweb/VERITAS"
        TimeForOnline = 5
        )
Add the NIC resource in the ClusterService group. For example, where the NIC resource is named csgnic and the public NIC device is hme0, add:
NIC csgnic (
        Device = hme0
        )
Add new dependencies for the new resources in the ClusterService group. For example, using the names of the VRTSWebApp, NotifierMngr, IP, and NIC resources, enter lines that resemble:
VCSweb requires webip
ntfr requires csgnic
webip requires csgnic
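After these edits, the ClusterService group definition might resemble the following sketch. The SystemList and AutoStartList values, the node names sys1 and sys2, and the webip and ntfr resources are hypothetical placeholders; keep the attribute values and resource definitions that already exist in your configuration:
group ClusterService (
        SystemList = { sys1 = 0, sys2 = 1 }
        AutoStartList = { sys1 }
        )

        NIC csgnic (
                Device = hme0
                )

        VRTSWebApp VCSweb (
                Critical = 0
                AppName = vcs
                InstallDir = "/opt/VRTSweb/VERITAS"
                TimeForOnline = 5
                )

        // webip (IP) and ntfr (NotifierMngr) are assumed to be resources
        // that already exist in the ClusterService group
        VCSweb requires webip
        ntfr requires csgnic
        webip requires csgnic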
- On the node you selected in step 2, remove qlogckd from the /etc/VRTSvcs/conf/config/main.cf file. For example:
CFSQlogckd qlogckd (
        Critical = 0
        )
Make sure you remove all dependencies on qlogckd from the main.cf file.
- On the node you selected in step 2, verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file:
# cd /etc/VRTSvcs/conf/config
# /opt/VRTS/bin/hacf -verify .
- On the node you selected in step 2, start VCS:
# /opt/VRTS/bin/hastart
- On the remaining nodes in the cluster, start VCS:
# /opt/VRTS/bin/hastart
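Once VCS is running on all nodes, you can optionally verify the cluster and service group states. This check is not part of the documented procedure:
# /opt/VRTS/bin/hastatus -sum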
- If VVR is configured, freeze the service groups and stop the applications.
See Freezing the service groups and stopping all the applications.