Storage Foundation for Sybase ASE CE 7.4.1 Configuration and Upgrade Guide - Linux
Modifying the VCS configuration files on existing nodes
Modify the configuration files on the remaining nodes of the cluster to remove references to the deleted nodes.
Tasks for modifying the cluster configuration files:
- Edit the /etc/llthosts file
- Edit the /etc/gabtab file
- Modify the VCS configuration to remove the node
To edit the /etc/llthosts file
- On each of the existing nodes, edit the /etc/llthosts file to remove lines that contain references to the removed nodes. For example, if system3 is the node removed from the cluster, remove the line "2 system3" from the file:
0 sys1
1 sys2
2 system3
Change to:
0 sys1
1 sys2
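If you prefer to script this edit, the following is a minimal sketch that assumes system3 is the node being removed; verify the resulting file before you proceed:
# sed -i '/system3/d' /etc/llthosts
# cat /etc/llthosts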
- Modify the following command in the /etc/gabtab file to reflect the number of systems after the node is removed:
/sbin/gabconfig -c -nN
where N is the number of remaining nodes in the cluster.
For example, with two nodes remaining, the file resembles:
/sbin/gabconfig -c -n2
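After you update the file on each node, you can optionally check the current GAB port membership; for example:
# /sbin/gabconfig -a
The edited /etc/gabtab file takes effect the next time GAB starts; the command above reports the current membership.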
Modify the VCS configuration file main.cf to remove all references to the deleted node.
Use one of the following methods to modify the configuration:
- Edit the /etc/VRTSvcs/conf/config/main.cf file. This method requires application downtime.
- Use the command line interface. This method allows the applications to remain online on all remaining nodes.
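If you choose to edit the main.cf file directly, you can check the syntax of the modified configuration before you restart VCS; for example:
# hacf -verify /etc/VRTSvcs/conf/config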
The following procedure uses the command line interface and modifies the sample VCS configuration to remove references to the deleted node. Run the steps in the procedure from one of the existing nodes in the cluster. The procedure allows you to change the VCS configuration while applications remain online on the remaining nodes.
To modify the cluster configuration using the command line interface (CLI)
- Back up the /etc/VRTSvcs/conf/config/main.cf file:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.3node.bak
- Change the cluster configuration to read-write mode:
# haconf -makerw
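To confirm that the configuration is now writable, you can check the ReadOnly attribute of the cluster; a value of 0 indicates read-write mode. For example:
# haclus -value ReadOnly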
- Remove the node from the AutoStartList attribute of the service group by specifying the remaining nodes in the desired order:
# hagrp -modify cvm AutoStartList sys1 sys2
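To confirm the change, you can display the attribute for the cvm group; for example:
# hagrp -display cvm -attribute AutoStartList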
- Remove the deleted node from the system list of any parent service groups that depend on the cvm service group before you remove it from cvm itself. For example, to delete the node system3:
# hagrp -modify syb_grp SystemList -delete system3
# hagrp -modify Sybase SystemList -delete system3
# hagrp -modify cvm SystemList -delete system3
# hares -modify cvm_clus CVMNodeId -delete system3
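If you are not sure which service groups depend on the cvm group, you can list the group dependencies before you run these commands; for example:
# hagrp -dep cvm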
- If you have a local VxFS configuration, you must also remove the disk group of the node that is being removed from the binmnt service group:
# hares -modify sybase_install_dg DiskGroup -delete \
sybase_new_diskgroup
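To confirm that the disk group was removed from the resource, you can display the attribute value; sybase_install_dg is the sample resource name used above. For example:
# hares -value sybase_install_dg DiskGroup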
- Remove the node from the SystemList attribute of the service group. If the system is part of the SystemList of a parent group, it must be deleted from the parent group first; the SystemList -delete commands in the preceding step perform this task for the sample configuration.
- Remove the node from the CVMNodeId attribute of the cvm_clus resource:
# hares -modify cvm_clus CVMNodeId -delete system3
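To confirm the change, you can display the attribute value; for example:
# hares -value cvm_clus CVMNodeId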
- Remove the deleted node from the NodeList attribute of all CFS mount resources:
# hares -modify CFSMount NodeList -delete system3
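If your configuration has more than one CFS mount resource, repeat the command for each resource. To list the resources of type CFSMount, for example:
# hares -list Type=CFSMount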
- Remove the deleted node from the cluster system list:
# hasys -delete system3
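To confirm that the node is no longer part of the cluster configuration, you can list the systems that VCS recognizes; for example:
# hasys -list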
- Save the new configuration to disk:
# haconf -dump -makero
- Verify that the node is removed from the VCS configuration.
# grep -i system3 /etc/VRTSvcs/conf/config/main.cf
If the node is not removed, use the VCS commands as described in this procedure to remove the node.
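As an additional check, you can review the cluster summary; the removed node should no longer appear in the output. For example:
# hastatus -sum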