Storage Foundation for Sybase ASE CE 7.4.1 Configuration and Upgrade Guide - Linux
Response file variables to configure disk-based I/O fencing
Table: Response file variables specific to configuring disk-based I/O fencing lists the response file variables that specify the required information to configure disk-based I/O fencing for SF Sybase CE.
Table: Response file variables specific to configuring disk-based I/O fencing
Variable | List or Scalar | Description |
---|---|---|
CFG{opt}{fencing} | Scalar | Performs the I/O fencing configuration. (Required) |
CFG{fencing_option} | Scalar | Specifies the I/O fencing configuration mode. (Required) |
CFG{fencing_dgname} | Scalar | Specifies the disk group for I/O fencing. (Optional) Note: You must define the fencing_dgname variable to use an existing disk group. If you want to create a new disk group, you must use both the fencing_dgname variable and the fencing_newdg_disks variable. |
CFG{fencing_newdg_disks} | List | Specifies the disks to use to create a new disk group for I/O fencing. (Optional) Note: You must define the fencing_dgname variable to use an existing disk group. If you want to create a new disk group, you must use both the fencing_dgname variable and the fencing_newdg_disks variable. |
CFG{fencing_cpagent_monitor_freq} | Scalar | Specifies the frequency at which the Coordination Point agent monitors for changes to the Coordinator Disk Group constitution. Note: The Coordination Point agent can also monitor changes to the Coordinator Disk Group constitution, such as a disk being accidentally deleted from the Coordinator Disk Group. The frequency of this detailed monitoring is tuned with the LevelTwoMonitorFreq attribute. For example, if you set this attribute to 5, the agent monitors the Coordinator Disk Group constitution every five monitor cycles. If the LevelTwoMonitorFreq attribute is not set, the agent does not monitor changes to the Coordinator Disk Group. A value of 0 means the Coordinator Disk Group constitution is not monitored. |
CFG{fencing_config_cpagent} | Scalar | Enter '1' or '0' to specify whether you want to configure the Coordination Point agent using the installer. Enter '0' if you do not want to configure the Coordination Point agent using the installer. Enter '1' if you want to use the installer to configure the Coordination Point agent. |
CFG{fencing_cpagentgrp} | Scalar | Name of the service group that contains the Coordination Point agent resource. Note: This field is obsolete if the fencing_config_cpagent field is given a value of '0'. |
CFG{fencing_auto_refresh_reg} | Scalar | Enables automatic refresh of the coordination points if registration keys are missing on any of the CP servers. |
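Response files are Perl files that assign values to the %CFG hash, so the variables in the table translate directly into %CFG assignments. The following is a minimal sketch of the disk-based fencing portion of such a file; the disk group name, the disk names, the service group name, and the fencing_option value of 2 are illustrative assumptions, so substitute the values that apply to your environment and release.

```perl
# Minimal sketch of the disk-based I/O fencing portion of a response file.
# The disk group name, disk names, service group name, and mode value are
# hypothetical placeholders; replace them with values for your environment.
our %CFG;

$CFG{opt}{fencing}=1;                  # perform the I/O fencing configuration

$CFG{fencing_option}=2;                # fencing configuration mode; 2 is assumed
                                       # to select disk-based fencing here --
                                       # confirm the value for your release

$CFG{fencing_dgname}="vxfencoorddg";   # disk group for I/O fencing

# Disks for a new coordinator disk group; omit this variable if you are
# reusing an existing disk group.
$CFG{fencing_newdg_disks}=[ qw(disk_155 disk_156 disk_157) ];

$CFG{fencing_config_cpagent}=1;        # 1: have the installer configure the
                                       #    Coordination Point agent
$CFG{fencing_cpagentgrp}="vxfen";      # service group for the agent resource
$CFG{fencing_cpagent_monitor_freq}=5;  # check the Coordinator Disk Group
                                       # constitution every 5 monitor cycles
```

These assignments are typically appended to the full SF Sybase CE configuration response file, which you then pass to the installer with its -responsefile option (see Appendix A, Installation scripts).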