Storage Foundation for Sybase ASE CE 7.4 Administrator's Guide - Linux

Product(s): InfoScale & Storage Foundation (7.4)
Platform: Linux

Replacing I/O fencing coordinator disks when the cluster is online

Review the procedures to add, remove, or replace one or more coordinator disks in a cluster that is operational.

Warning:

The cluster might panic if any node leaves the cluster membership before the vxfenswap script replaces the set of coordinator disks.

To replace a disk in a coordinator disk group when the cluster is online

  1. Make sure system-to-system communication is functioning properly.
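    For example, you can check LLT node status and GAB port membership with the following commands (a sketch, assuming the standard VCS communication stack used by SF Sybase CE):
    # lltstat -n
    # gabconfig -a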
  2. Determine the value of the FaultTolerance attribute.

    # hares -display coordpoint -attribute FaultTolerance -localclus

  3. Estimate the number of coordination points you plan to use as part of the fencing configuration.
  4. Set the value of the FaultTolerance attribute to 0.

    Note:

    It is necessary to set the value to 0 because, later in the procedure, you reset this attribute to a value that is lower than the number of coordination points. This ensures that the CoordPoint agent does not fault.
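
    For example, a minimal sketch, assuming the CoordPoint resource is named coordpoint (as in the other steps in this procedure) and that the VCS configuration is writable:

    # hares -modify coordpoint FaultTolerance 0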

  5. Check the existing value of the LevelTwoMonitorFreq attribute.
    # hares -display coordpoint -attribute LevelTwoMonitorFreq -localclus

    Note:

    Make a note of the attribute value before you proceed to the next step. After the migration, when you re-enable the attribute, set it back to this value.

    You can also run the hares -display coordpoint command to find out whether the LevelTwoMonitorFreq value is set.

  6. Disable level two monitoring of the CoordPoint agent.
    # hares -modify coordpoint LevelTwoMonitorFreq 0
  7. Make sure that the cluster is online.
    # vxfenadm -d
    I/O Fencing Cluster Information:
    ================================
    Fencing Protocol Version: 201
    Fencing Mode: vxfen_mode
    Fencing SCSI3 Disk Policy: dmp
    Cluster Members:
      * 0 (system1)
        1 (system2)
    RFSM State Information:
        node 0 in state 8 (running)
        node 1 in state 8 (running)
  8. Import the coordinator disk group.

    The file /etc/vxfendg includes the name of the disk group (typically, vxfencoorddg) that contains the coordinator disks, so use the command:

    # vxdg -tfC import `cat /etc/vxfendg`

    where:

    -t specifies that the disk group is imported only until the node restarts.

    -f specifies that the import is to be done forcibly, which is necessary if one or more disks are not accessible.

    -C specifies that any import locks are removed.

  9. If your setup uses VRTSvxvm release version or later, skip to step 10; you need not set coordinator=off to add or remove disks. For other VxVM versions, perform this step:

    Here, version is the specific release version.

    Turn off the coordinator attribute value for the coordinator disk group:

    # vxdg -g vxfencoorddg set coordinator=off
  10. To remove disks from the coordinator disk group, use the VxVM disk administrator utility vxdiskadm.
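    Alternatively, if you prefer the command line to the interactive vxdiskadm menus, you can remove a disk from the imported coordinator disk group with vxdg; this is a sketch in which disk01 is a hypothetical disk media name:
    # vxdg -g vxfencoorddg rmdisk disk01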
  11. Perform the following steps to add new disks to the coordinator disk group:

    • Add new disks to the node.

    • Initialize the new disks as VxVM disks.

    • Check the disks for I/O fencing compliance.

    • Add the new disks to the coordinator disk group and set the coordinator attribute value as "on" for the coordinator disk group.

    See the Storage Foundation for Sybase ASE CE Configuration Guide for detailed instructions.

    Note that even though the disk group contents change, I/O fencing remains in the same state.
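
    The following is a minimal command-line sketch of these sub-steps; the device name sdz and the disk media name disk04 are hypothetical, and vxfentsthdw -m prompts you interactively for the node names and the disk to test:

    # vxdisksetup -i sdz
    # vxfentsthdw -m
    # vxdg -g vxfencoorddg adddisk disk04=sdz
    # vxdg -g vxfencoorddg set coordinator=on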

  12. From one node, start the vxfenswap utility. You must specify the disk group to the utility.
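
    For example, the following is a sketch of the command, assuming the coordinator disk group is named vxfencoorddg as elsewhere in this procedure:

    # vxfenswap -g vxfencoorddg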

    The utility performs the following tasks:

    • Backs up the existing /etc/vxfentab file.

    • Creates a test file /etc/vxfentab.test for the disk group that is modified on each node.

    • Reads the disk group you specified in the vxfenswap command and adds the disk group to the /etc/vxfentab.test file on each node.

    • Verifies that the serial numbers of the new disks are identical on all the nodes. The script terminates if the check fails.

    • Verifies that the new disks can support I/O fencing on each node.

  13. If the disk verification passes, the utility reports success and asks if you want to commit the new set of coordinator disks.
  14. Confirm whether you want to clear the keys on the coordination points and proceed with the vxfenswap operation.

    Do you want to clear the keys on the coordination points 
    and proceed with the vxfenswap operation? [y/n] (default: n) y
  15. Review the message that the utility displays and confirm that you want to commit the new set of coordinator disks. Otherwise, skip to step 16.
    Do you wish to commit this change? [y/n] (default: n) y

    If the commit succeeds, the utility moves the /etc/vxfentab.test file to the /etc/vxfentab file.

  16. If you do not want to commit the new set of coordinator disks, answer n.

    The vxfenswap utility rolls back the disk replacement operation.

  17. If you turned the coordinator attribute off in step 9, turn it back on.
    # vxdg -g vxfencoorddg set coordinator=on
  18. Deport the coordinator disk group.
    # vxdg deport vxfencoorddg 
  19. Re-enable the LevelTwoMonitorFreq attribute of the CoordPoint agent. You may want to use the value that you noted before disabling the attribute.
    # hares -modify coordpoint LevelTwoMonitorFreq Frequencyvalue

    where Frequencyvalue is the value of the attribute.

  20. Set the FaultTolerance attribute to a value that is lower than 50% of the total number of coordination points.

    For example, if there are four (4) coordination points in your configuration, then the attribute value must be lower than two (2). If you set it to two (2) or higher, the CoordPoint agent faults.
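
    For example, with four coordination points you might set the attribute to 1. The following is a sketch, assuming the CoordPoint resource is named coordpoint as in the earlier steps:

    # hares -modify coordpoint FaultTolerance 1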