Veritas InfoScale™ 7.3.1 Troubleshooting Guide - Solaris

Last Published:
Product(s): InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
  1. Introduction
    1. About troubleshooting Veritas InfoScale Storage Foundation and High Availability Solutions products
    2. About Veritas Services and Operations Readiness Tools (SORT)
    3. About unique message identifiers
      1. Using Veritas Operations Readiness Tools to find a Unique Message Identifier description and solution
    4. About collecting application and daemon core data for debugging
      1. Letting vxgetcore find debugging data automatically (the easiest method)
      2. Running vxgetcore when you know the location of the core file
      3. Letting vxgetcore prompt you for information
  2. Section I. Troubleshooting Veritas File System
    1. Diagnostic messages
      1. File system response to problems
        1. Recovering a disabled file system
      2. About kernel messages
  3. Section II. Troubleshooting Veritas Volume Manager
    1. Recovering from hardware failure
      1. About recovery from hardware failure
      2. Listing unstartable volumes
      3. Displaying volume and plex states
      4. The plex state cycle
      5. Recovering an unstartable mirrored volume
      6. Recovering an unstartable volume with a disabled plex in the RECOVER state
      7. Forcibly restarting a disabled volume
      8. Clearing the failing flag on a disk
      9. Reattaching failed disks
      10. Recovering from a failed plex attach or synchronization operation
      11. Failures on RAID-5 volumes
        1. System failures
        2. Disk failures
        3. Default startup recovery process for RAID-5
        4. Recovery of RAID-5 volumes
          1. Resynchronizing parity on a RAID-5 volume
          2. Reattaching a failed RAID-5 log plex
          3. Recovering a stale subdisk in a RAID-5 volume
        5. Recovery after moving RAID-5 subdisks
        6. Unstartable RAID-5 volumes
          1. Forcibly starting a RAID-5 volume with stale subdisks
      12. Recovering from an incomplete disk group move
      13. Restarting volumes after recovery when some nodes in the cluster become unavailable
      14. Recovery from failure of a DCO volume
        1. Recovering a version 0 DCO volume
        2. Recovering an instant snap DCO volume (version 20 or later)
    2. Recovering from instant snapshot failure
      1. Recovering from the failure of vxsnap prepare
      2. Recovering from the failure of vxsnap make for full-sized instant snapshots
      3. Recovering from the failure of vxsnap make for break-off instant snapshots
      4. Recovering from the failure of vxsnap make for space-optimized instant snapshots
      5. Recovering from the failure of vxsnap restore
      6. Recovering from the failure of vxsnap refresh
      7. Recovering from copy-on-write failure
      8. Recovering from I/O errors during resynchronization
      9. Recovering from I/O failure on a DCO volume
      10. Recovering from failure of vxsnap upgrade of instant snap data change objects (DCOs)
    3. Recovering from failed vxresize operation
      1. Recovering from a failed vxresize shrink operation
    4. Recovering from boot disk failure
      1. VxVM and boot disk failure
      2. Possible root, swap, and usr configurations
      3. Booting from an alternate boot disk on Solaris SPARC systems
      4. The boot process on Solaris SPARC systems
      5. Hot-relocation and boot disk failure
        1. Unrelocation of subdisks to a replacement boot disk
      6. Recovery from boot failure
        1. Boot device cannot be opened
        2. Cannot boot from unusable or stale plexes
        3. Invalid UNIX partition
        4. Incorrect entries in /etc/vfstab
          1. Damaged root (/) entry in /etc/vfstab
          2. Damaged /usr entry in /etc/vfstab
        5. Missing or damaged configuration files
          1. Restoring a copy of the system configuration file
          2. Restoring /etc/system if a copy is not available on the root disk
      7. Repair of root or /usr file systems on mirrored volumes
        1. Recovering a root disk and root mirror from a backup
      8. Replacement of boot disks
        1. Re-adding a failed boot disk
        2. Replacing a failed boot disk
      9. Recovery by reinstallation
        1. General reinstallation information
        2. Reinstalling the system and recovering VxVM
          1. Prepare the system for reinstallation
          2. Reinstall the operating system
          3. Reinstalling Veritas Volume Manager
          4. Recovering the Veritas Volume Manager configuration
          5. Cleaning up the system configuration
    5. Managing commands, tasks, and transactions
      1. Command logs
      2. Task logs
      3. Transaction logs
      4. Association of command, task, and transaction logs
      5. Associating CVM commands issued from slave to master node
      6. Command completion is not enabled
    6. Backing up and restoring disk group configurations
      1. About disk group configuration backup
      2. Backing up a disk group configuration
      3. Restoring a disk group configuration
        1. Resolving conflicting backups for a disk group
      4. Backing up and restoring Flexible Storage Sharing disk group configuration data
    7. Troubleshooting issues with importing disk groups
      1. Clearing the udid_mismatch flag for non-clone disks
    8. Recovering from CDS errors
      1. CDS error codes and recovery actions
    9. Logging and error messages
      1. About error messages
      2. How error messages are logged
        1. Configuring logging in the startup script
      3. Types of messages
        1. Messages
      4. Using VxLogger for kernel-level logging
        1. Configuring tunable settings for kernel-level logging
      5. Collecting log information for troubleshooting
    10. Troubleshooting Veritas Volume Replicator
      1. Recovery from RLINK connect problems
      2. Recovery from configuration errors
        1. Errors during an RLINK attach
          1. Data volume errors during an RLINK attach
          2. Volume set errors during an RLINK attach
        2. Errors during modification of an RVG
          1. Missing data volume error during modification of an RVG
          2. Data volume mismatch error during modification of an RVG
          3. Data volume name mismatch error during modification of an RVG
          4. Volume set configuration errors during modification of an RVG
            1. Volume set name mismatch error
            2. Volume index mismatch error
            3. Component volume mismatch error
      3. Recovery on the Primary or Secondary
        1. About recovery from a Primary-host crash
        2. Recovering from Primary data volume error
          1. Example - Recovery with detached RLINKs
          2. Example - Recovery with minimal repair
          3. Example - Recovery by migrating the primary
          4. Example - Recovery from temporary I/O error
        3. Primary SRL volume error cleanup and restart
          1. About RVG PASSTHRU mode
        4. Primary SRL volume error at reboot
        5. Primary SRL volume overflow recovery
        6. Primary SRL header error cleanup and recovery
          1. Recovering from SRL header error
        7. Secondary data volume error cleanup and recovery
          1. Recovery using a Secondary Storage Checkpoint
          2. Cleanup using a Primary Storage Checkpoint
        8. Secondary SRL volume error cleanup and recovery
        9. Secondary SRL header error cleanup and recovery
        10. Secondary SRL header error at reboot
    11. Troubleshooting issues in cloud deployments
      1. In an Azure environment, exporting a disk for Flexible Storage Sharing (FSS) may fail with "Disk not supported for FSS operation" error
  4. Section III. Troubleshooting Dynamic Multi-Pathing
    1. Dynamic Multi-Pathing troubleshooting
      1. Displaying extended attributes after upgrading to DMP
      2. Recovering from errors when you exclude or include paths to DMP
      3. Downgrading the array support
      4. System un-bootable after turning on dmp_native_support tunable
  5. Section IV. Troubleshooting Storage Foundation Cluster File System High Availability
    1. Troubleshooting Storage Foundation Cluster File System High Availability
      1. About troubleshooting Storage Foundation Cluster File System High Availability
      2. Troubleshooting CFS
        1. Incorrect order in root user's <library> path
        2. CFS commands might hang when run by a non-root user
      3. Troubleshooting fenced configurations
        1. Example of a preexisting network partition (split-brain)
        2. Recovering from a preexisting network partition (split-brain)
          1. Example Scenario I
          2. Example Scenario II
          3. Example Scenario III
      4. Troubleshooting Cluster Volume Manager in Veritas InfoScale products clusters
        1. CVM group is not online after adding a node to the Veritas InfoScale products cluster
        2. Shared disk group cannot be imported in Veritas InfoScale products cluster
        3. Unable to start CVM in Veritas InfoScale products cluster
        4. Removing preexisting keys
        5. CVMVolDg not online even though CVMCluster is online in Veritas InfoScale products cluster
        6. Shared disks not visible in Veritas InfoScale products cluster
  6. Section V. Troubleshooting Cluster Server
    1. Troubleshooting and recovery for VCS
      1. VCS message logging
        1. Log unification of VCS agent's entry points
        2. Enhancing First Failure Data Capture (FFDC) to troubleshoot VCS resource's unexpected behavior
        3. GAB message logging
        4. Enabling debug logs for agents
        5. Enabling debug logs for IMF
        6. Enabling debug logs for the VCS engine
        7. About debug log tags usage
        8. Gathering VCS information for support analysis
          1. Verifying the metered or forecasted values for CPU, Mem, and Swap
        9. Gathering LLT and GAB information for support analysis
        10. Gathering IMF information for support analysis
        11. Message catalogs
      2. Troubleshooting the VCS engine
        1. HAD diagnostics
        2. HAD is not running
        3. HAD restarts continuously
        4. DNS configuration issues cause GAB to kill HAD
        5. Seeding and I/O fencing
        6. Preonline IP check
      3. Troubleshooting Low Latency Transport (LLT)
        1. LLT startup script displays errors
        2. LLT detects cross links usage
        3. LLT link status messages
        4. Unexpected db_type warning while stopping LLT that is configured over UDP
      4. Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
        1. Delay in port reopen
        2. Node panics due to client process failure
      5. Troubleshooting VCS startup
        1. "VCS: 10622 local configuration missing" and "VCS: 10623 local configuration invalid"
        2. "VCS:11032 registration failed. Exiting"
        3. "Waiting for cluster membership."
      6. Troubleshooting Intelligent Monitoring Framework (IMF)
      7. Troubleshooting service groups
        1. VCS does not automatically start service group
        2. System is not in RUNNING state
        3. Service group not configured to run on the system
        4. Service group not configured to autostart
        5. Service group is frozen
        6. Failover service group is online on another system
        7. A critical resource faulted
        8. Service group autodisabled
        9. Service group is waiting for the resource to be brought online/taken offline
        10. Service group is waiting for a dependency to be met
        11. Service group not fully probed
        12. Service group does not fail over to the forecasted system
        13. Service group does not fail over to the BiggestAvailable system even if FailOverPolicy is set to BiggestAvailable
        14. Restoring metering database from backup taken by VCS
        15. Initialization of metering database fails
      8. Troubleshooting resources
        1. Service group brought online due to failover
        2. Waiting for service group states
        3. Waiting for child resources
        4. Waiting for parent resources
        5. Waiting for resource to respond
        6. Agent not running
          1. Invalid agent argument list
        7. The Monitor entry point of the disk group agent returns ONLINE even if the disk group is disabled
      9. Troubleshooting I/O fencing
        1. Node is unable to join cluster while another node is being ejected
        2. The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
        3. Manually removing existing keys from SCSI-3 disks
        4. System panics to prevent potential data corruption
          1. How I/O fencing works in different event scenarios
        5. Cluster ID on the I/O fencing key of coordinator disk does not match the local cluster's ID
        6. Fencing startup reports preexisting split-brain
          1. Clearing preexisting split-brain condition
        7. Registered keys are lost on the coordinator disks
        8. Replacing defective disks when the cluster is offline
        9. The vxfenswap utility exits if rcp or scp commands are not functional
        10. Troubleshooting CP server
          1. Troubleshooting issues related to the CP server service group
          2. Checking the connectivity of CP server
        11. Troubleshooting server-based fencing on the Veritas InfoScale products cluster nodes
          1. Issues during fencing startup on VCS nodes set up for server-based fencing
        12. Issues during online migration of coordination points
          1. Vxfen service group activity after issuing the vxfenswap command
      10. Troubleshooting notification
        1. Notifier is configured but traps are not seen on SNMP console
      11. Troubleshooting and recovery for global clusters
        1. Disaster declaration
        2. Lost heartbeats and the inquiry mechanism
        3. VCS alerts
          1. Types of alerts
          2. Managing alerts
          3. Actions associated with alerts
          4. Negating events
          5. Concurrency violation at startup
      12. Troubleshooting the steward process
      13. Troubleshooting licensing
        1. Validating license keys
        2. Licensing error messages
          1. [Licensing] Insufficient memory to perform operation
          2. [Licensing] No valid VCS license keys were found
          3. [Licensing] Unable to find a valid base VCS license key
          4. [Licensing] License key cannot be used on this OS platform
          5. [Licensing] VCS evaluation period has expired
          6. [Licensing] License key can not be used on this system
          7. [Licensing] Unable to initialize the licensing framework
          8. [Licensing] QuickStart is not supported in this release
          9. [Licensing] Your evaluation period for the feature has expired. This feature will not be enabled the next time VCS starts
      14. Verifying the metered or forecasted values for CPU, Mem, and Swap
  7. Section VI. Troubleshooting SFDB
    1. Troubleshooting SFDB
      1. About troubleshooting Storage Foundation for Databases (SFDB) tools

Recovery from RLINK connect problems

This section describes the errors that may be encountered when connecting RLINKs. To troubleshoot RLINK connect problems, it is important to understand the RLINK connection process.

Connecting the Primary and Secondary RLINKs is a two-step operation. The first step, which attaches the RLINK, is performed by issuing the vradmin startrep command. The second step, which connects the RLINKs, is performed by the kernels on the Primary and Secondary hosts.

When the vradmin startrep command is issued, VVR performs a number of checks to ensure that the operation is likely to succeed. If the checks pass, the command changes the state of the RLINKs from DETACHED/STALE to ENABLED/ACTIVE and then returns success.

If the command is successful, the kernel on the Primary is notified that the RLINK is enabled and it begins to send messages to the Secondary requesting it to connect. Under normal circumstances, the Secondary receives this message and connects. The state of the RLINKs then changes from ENABLED/ACTIVE to CONNECT/ACTIVE.

If the RLINK does not change to the CONNECT/ACTIVE state within a short time, there is a problem preventing the connection. This section describes a number of possible causes. An error message indicating the problem may be displayed on the console.

  • If the following error displays on the console:

    VxVM VVR vxrlink INFO V-5-1-5298 Unable to establish connection
     with remote host <remote_host>, retrying

    Make sure that the vradmind daemon is running on both the Primary and the Secondary hosts; if it is not running, start it by issuing the following command:

    # /usr/sbin/vxstart_vvr

    For an RLINK in a shared disk group, make sure that the virtual IP address of the RLINK is enabled on the logowner.

  • If there is no self-explanatory error message, issue the following command on both the Primary and Secondary hosts:

    # vxprint -g diskgroup -l rlink_name

    In the output, check the following:

    The remote_host of each host is the same as local_host of the other host.

    The remote_dg of each host is the same as the disk group of the RVG on the other host.

    The remote_dg_dgid of each host is the same as the dgid (disk group ID) of the RVG on the other host as displayed in the output of the vxprint -l diskgroup command.

    The remote_rlink of each host is the same as the name of the corresponding RLINK on the other host.

    The remote_rlink_rid of each host is the same as the rid of the corresponding RLINK on the other host.

    Also make sure that the network is working as expected. Network problems can prevent RLINKs from connecting or degrade VVR performance; possible causes include high latency, low bandwidth, high collision counts, and excessive dropped packets.
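The field-by-field comparison above is mechanical, so it can be scripted. The following is a minimal sketch, assuming the `vxprint -g diskgroup -l rlink_name` output from each host has been saved to a file and that attributes appear as `name=value` tokens (the file names and token format are assumptions; adjust the parsing to match your actual vxprint output).

```shell
#!/bin/sh
# Hypothetical helper: cross-check RLINK attributes captured from two hosts.
# Each file holds "vxprint -g diskgroup -l rlink_name" output from one host.

get_attr() {    # get_attr <file> <attribute-name>
    # Split the output into one token per line, then pick the value of
    # the first name=value token whose name matches (an assumed format).
    tr ' \t' '\n\n' < "$1" | awk -F= -v k="$2" '$1 == k { print $2; exit }'
}

check_rlink_pair() {    # check_rlink_pair <primary-file> <secondary-file>
    p_remote=$(get_attr "$1" remote_host)
    s_local=$(get_attr "$2" local_host)
    if [ "$p_remote" = "$s_local" ]; then
        echo "OK: remote_host/local_host agree ($p_remote)"
    else
        echo "MISMATCH: Primary remote_host=$p_remote, Secondary local_host=$s_local"
    fi
}
```

The same pattern extends to remote_dg, remote_dg_dgid, remote_rlink, and remote_rlink_rid.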

  • For an RLINK in a private disk group, issue the following command on each host.

    For an RLINK in a shared disk group, use vxprint -Vl | grep logowner to find the logowner node, then issue the following command on the logowner on the Primary and Secondary.

    # ping -s remote_host

    Note:

    This command is only valid when ICMP ping is allowed between the VVR Primary and the VVR Secondary.

    After 10 iterations, type Ctrl-C. There should be little or no packet loss. To ensure that the network can transmit large packets, issue the following command on each host for an RLINK in a private disk group.

    For an RLINK in a shared disk group, issue the following command on the logowner on the Primary and Secondary:

    # ping -I 2 remote_host 8192

    The packet loss should be about the same as for the earlier ping command.
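To script the ping checks above, the loss percentage can be pulled out of ping's statistics line. This sketch assumes the summary uses the common "N% packet loss" wording; adjust the pattern if your ping prints it differently.

```shell
#!/bin/sh
# Hypothetical helper: read ping output on stdin and print the packet-loss
# percentage from the statistics line (assumes the "N% packet loss" form).

loss_pct() {
    sed -n 's/.*[^0-9]\([0-9][0-9]*\)% packet loss.*/\1/p' | tail -1
}
```

For example, `ping -s remote_host 8192 10 | loss_pct` (using the Solaris operand order of packet size then count) prints a single number that can be compared against a threshold.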

  • Issue the vxiod command on each host to ensure that there are active I/O daemons. If the output is 0 volume I/O daemons running, activate I/O daemons by issuing the following command:

    # vxiod set 10
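The I/O daemon check can also be automated by parsing the vxiod output. This sketch assumes the wording quoted above ("N volume I/O daemons running"); it only reports what to do rather than running vxiod set itself.

```shell
#!/bin/sh
# Hypothetical helper: read vxiod output on stdin and report whether
# I/O daemons need to be started (assumes the "N volume I/O daemons
# running" wording; adjust if your release prints differently).

iod_needed() {
    awk '{ n = $1 }
         END { if (n + 0 == 0) print "start daemons: vxiod set 10";
               else            print n " daemons running" }'
}
```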
  • VVR uses well-known ports to establish communications with other hosts.

    Issue the following command to display the port number:

    # vxprint -g diskgroup -l rlink_name

    Issue the following command to ensure that the heartbeat port number in the output matches the port displayed by the vxprint command:

    # vrport

    Confirm that the state of the heartbeat port is Idle by issuing the following command:

    # netstat -an -P udp

    The output looks similar to this:

    UDP: IPv4
        Local Address         Remote Address        State
        --------------------  --------------------  -------
        *.port-number                                  Idle
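The Idle check can be scripted by feeding the netstat output through a small filter. This sketch takes the heartbeat port number (as reported by vrport) as an argument and assumes the `*.port-number ... Idle` line format shown above.

```shell
#!/bin/sh
# Hypothetical helper: given a UDP port number and "netstat -an -P udp"
# output on stdin, report whether the port is in the Idle state
# (assumes the "*.port  Idle" line format shown in the text).

port_idle() {    # port_idle <port>
    awk -v p="$1" '$1 == "*." p && $NF == "Idle" { found = 1 }
                   END { print (found ? "Idle" : "NOT idle or not found") }'
}
```

For example: `netstat -an -P udp | port_idle 4145`.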
  • Check for VVR ports on the Primary and Secondary sites.

    Run the vrport utility and verify that the ports are the same at both ends.

    Check whether the required VVR ports are open. Check for UDP 4145, TCP 4145, TCP 8199, and the anonymous port. Enter the following commands:

    # netstat -an -P udp | grep 4145
    *.4145                               Idle
    # netstat -an -P tcp | grep 4145
    *.4145               *.*                0      0 49152      0 LISTEN
    # netstat -an -P tcp | grep 8199
    *.8199               *.*                0      0 49152      0 LISTEN
    10.180.162.41.32990  10.180.162.42.8199   49640      0 49640      0 ESTABLISHED

    Perform a telnet test to check for open ports. For example, to determine if port 4145 is open, enter the following:

    # telnet <remote> 4145
  • Use the netstat command to check if vradmind daemons can connect between the Primary site and the Secondary site.

    # netstat -an -P tcp | grep 8199 | grep ESTABLISHED
     10.180.162.41.32990  10.180.162.42.8199   49640      0 49640      0 ESTABLISHED

    If there is no established connection, check if the /etc/hosts file has entries for the Primary and Secondary sites. Add all participating system names and IP addresses to the /etc/hosts files on each system or add the information to the name server database of your name service.

  • On Solaris 11, you must manually edit the /etc/hosts file to remove the hostname from the lines for loopback addresses.

    For example:

    ::1 seattle localhost
    127.0.0.1 seattle loghost localhost

    needs to be changed to:

    ::1 localhost
    127.0.0.1 loghost localhost
    129.148.174.232 seattle
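The loopback edit above is easy to get wrong by hand, so it can be sketched as a filter that removes the hostname from loopback lines only. Run it against a copy of /etc/hosts and review the output before installing it; the real-address line (such as the 129.148.174.232 entry above) still has to be present or added separately.

```shell
#!/bin/sh
# Hypothetical sketch: remove a hostname from the loopback entries of a
# hosts file read on stdin, printing the corrected file on stdout.
# Non-loopback lines pass through unchanged.

strip_loopback_host() {    # strip_loopback_host <hostname>
    awk -v h="$1" '
        $1 == "::1" || $1 == "127.0.0.1" {
            out = $1
            for (i = 2; i <= NF; i++) if ($i != h) out = out " " $i
            print out
            next
        }
        { print }'
}
```

For example: `strip_loopback_host seattle < /etc/hosts > /tmp/hosts.new`.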