Storage Foundation for Sybase ASE CE 7.4.1 Configuration and Upgrade Guide - Linux
Step 1: Performing pre-upgrade tasks on the first half of the cluster
Perform the following pre-upgrade steps on the first half of the cluster.
To perform the pre-upgrade tasks on the first half of the cluster
- Back up the following configuration files:
main.cf, types.cf, CVMTypes.cf, CFSTypes.cf, SybaseTypes.cf, /etc/llttab, /etc/llthosts, /etc/gabtab, /etc/vxfentab, /etc/vxfendg, /etc/vxfenmode
For example:
# cp /etc/VRTSvcs/conf/config/main.cf \
/etc/VRTSvcs/conf/config/main.cf.save
# cp /etc/VRTSvcs/conf/config/types.cf \
/etc/VRTSvcs/conf/config/types.cf.save
# cp /etc/VRTSvcs/conf/config/SybaseTypes.cf \
/etc/VRTSvcs/conf/config/SybaseTypes.cf.save
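You can back up the remaining files in the list in the same way; a minimal sketch, assuming a .save suffix is acceptable in your environment:
# cp /etc/llttab /etc/llttab.save
# cp /etc/llthosts /etc/llthosts.save
# cp /etc/gabtab /etc/gabtab.save
# cp /etc/vxfentab /etc/vxfentab.save
# cp /etc/vxfendg /etc/vxfendg.save
# cp /etc/vxfenmode /etc/vxfenmode.save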
The installer verifies that recent backups of the configuration files in the VxVM private region are saved in the /etc/vx/cbr/bk directory. If they are not, the installer displays the following warning message:
Warning: Backup /etc/vx/cbr/bk directory.
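If this warning is displayed, you can refresh the backups before continuing. A minimal sketch, assuming the default backup location of /etc/vx/cbr/bk and a disk group named mydg (substitute the names of your shared disk groups):
# vxconfigbackup mydg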
- Stop all applications that are not configured under VCS but depend on Sybase ASE CE or on resources controlled by VCS. Use native application commands to stop such applications.
- Stop the applications configured under VCS. Take the Sybase database group offline.
# hagrp -offline sybase_group -sys sys1
# hagrp -offline sybase_group -sys sys2
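Before continuing, you can confirm that the group is offline on both systems; hagrp -state is a standard VCS query (sybase_group is the example group name used above):
# hagrp -state sybase_group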
- Stop the Sybase Binaries service group (binmnt group).
# hagrp -offline binmnt -sys sys1
# hagrp -offline binmnt -sys sys2
- If the Sybase database is managed by VCS, set the AutoStart value to 0 to prevent the service group from starting automatically when VCS starts:
# haconf -makerw
# hagrp -modify sybasece AutoStart 0
# haconf -dump -makero
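To confirm that the change took effect, you can query the attribute (sybasece is the example service group name used above):
# hagrp -value sybasece AutoStart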
- Unmount the CFS file systems that are not managed by VCS.
Make sure that no processes are running that use the mounted shared file systems. To verify that no processes use the CFS mount point (a looped check over all CFS mount points is sketched at the end of this step):
# mount | grep vxfs | grep cluster
# fuser -cu /mount_point
Unmount the CFS file system:
# umount /mount_point
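If several CFS file systems are mounted, the check and unmount can be repeated for each mount point. An illustrative sketch that runs the fuser check against every cluster-mounted VxFS file system (the awk field assumes the default Linux mount output format):
# for mp in $(mount | grep vxfs | grep cluster | awk '{print $3}'); do fuser -cu $mp; done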
- Stop the parallel service groups and switch over failover service groups on each of the nodes in the first half of the cluster:
# hastop -local
- Unmount the VxFS file systems that are not managed by VCS.
Make sure that no processes are running that use the mounted shared file systems. To verify that no processes use the VxFS mount point:
# mount | grep vxfs
# fuser -cu /mount_point
Unmount the VxFS file system:
# umount /mount_point
- Verify that no VxVM volumes (other than VxVM boot volumes) remain open. Stop any open volumes that are not managed by VCS.
# vxvol -g diskgroup stopall
# vxprint -Aht -e v_open
- If a cache area is online, you must take the cache area offline before upgrading the VxVM RPM. On the nodes in the first subcluster, use the following command to take the cache area offline:
# sfcache offline cachename
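To identify whether any cache areas are online before you take them offline, sfcache provides a listing mode (cachename above is a placeholder for the actual cache area name):
# sfcache list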
- Stop all the ports as follows:
For 6.0 and later versions:
For RHEL 7, SLES 12, and supported RHEL distributions:
# systemctl stop vxfen
# systemctl stop gab
# systemctl stop llt
For earlier versions of RHEL, SLES, and supported RHEL distributions:
# /opt/VRTSvcs/vxfen/bin/vxfen stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop
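After stopping the ports, you can confirm that GAB and LLT are no longer running on the nodes in the first subcluster; gabconfig and lltconfig are the standard status queries (the exact output wording varies by release):
# gabconfig -a
# lltconfig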