Veritas InfoScale™ 7.3.1 Troubleshooting Guide - Solaris
- Introduction
- Section I. Troubleshooting Veritas File System
- Section II. Troubleshooting Veritas Volume Manager
- Recovering from hardware failure
- Failures on RAID-5 volumes
- Recovery from failure of a DCO volume
- Recovering from instant snapshot failure
- Recovering from failed vxresize operation
- Recovering from boot disk failure
- Hot-relocation and boot disk failure
- Recovery from boot failure
- Repair of root or /usr file systems on mirrored volumes
- Replacement of boot disks
- Recovery by reinstallation
- Managing commands, tasks, and transactions
- Backing up and restoring disk group configurations
- Troubleshooting issues with importing disk groups
- Recovering from CDS errors
- Logging and error messages
- Troubleshooting Veritas Volume Replicator
- Recovery from configuration errors
- Errors during an RLINK attach
- Errors during modification of an RVG
- Recovery on the Primary or Secondary
- Recovering from Primary data volume error
- Primary SRL volume error cleanup and restart
- Primary SRL header error cleanup and recovery
- Secondary data volume error cleanup and recovery
- Troubleshooting issues in cloud deployments
- Recovering from hardware failure
- Section III. Troubleshooting Dynamic Multi-Pathing
- Section IV. Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting CFS
- Troubleshooting fenced configurations
- Troubleshooting Cluster Volume Manager in Veritas InfoScale products clusters
- Section V. Troubleshooting Cluster Server
- Troubleshooting and recovery for VCS
- VCS message logging
- Gathering VCS information for support analysis
- Troubleshooting the VCS engine
- Troubleshooting Low Latency Transport (LLT)
- Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
- Troubleshooting VCS startup
- Troubleshooting service groups
- Troubleshooting resources
- Troubleshooting I/O fencing
- System panics to prevent potential data corruption
- Fencing startup reports preexisting split-brain
- Troubleshooting CP server
- Troubleshooting server-based fencing on the Veritas InfoScale products cluster nodes
- Issues during online migration of coordination points
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting licensing
- Licensing error messages
- Section VI. Troubleshooting SFDB
Troubleshooting Intelligent Monitoring Framework (IMF)
Review the following logs to isolate and troubleshoot Intelligent Monitoring Framework (IMF)-related issues:
- System console log for the operating system
- VCS engine log: /var/VRTSvcs/log/engine_A.log
- Agent-specific log: /var/VRTSvcs/log/<agentname>_A.log
- AMF in-memory trace buffer: view the contents using the amfconfig -p dbglog command
See Enabling debug logs for IMF.
See Gathering IMF information for support analysis.
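As a first step, the log locations listed above can be checked with a short script. This is only a sketch: the paths are the standard VCS locations named in this topic, and the agent name (Mount) is a placeholder example, not something this guide prescribes.

```shell
#!/bin/sh
# Sketch: check for the IMF-related logs on this node.
# AGENT is an example placeholder; substitute your own agent name.
AGENT="${1:-Mount}"

for f in /var/VRTSvcs/log/engine_A.log \
         "/var/VRTSvcs/log/${AGENT}_A.log"; do
    if [ -r "$f" ]; then
        echo "found: $f"
    else
        echo "missing or not readable on this host: $f"
    fi
done

# AMF in-memory trace buffer (requires the AMF driver to be installed):
if command -v amfconfig >/dev/null 2>&1; then
    amfconfig -p dbglog
else
    echo "amfconfig not in PATH; is the AMF package installed?"
fi
```

On a node without VCS installed, the script simply reports which pieces are missing, which is itself a useful first data point for support.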
Table: IMF-related issues and recommended actions lists the most common issues seen with intelligent resource monitoring and the recommended actions to troubleshoot and fix them.
Table: IMF-related issues and recommended actions
| Issue | Description and recommended action |
|---|---|
| Intelligent resource monitoring has not reduced system utilization | If the system is busy even after intelligent resource monitoring is enabled, troubleshoot as follows: |
| Enabling the agent's intelligent monitoring does not provide immediate performance results | Actual intelligent monitoring for a resource starts only after a steady state is achieved, so it takes some time before you see a positive performance effect after you enable IMF. This behavior is expected. For more information on when a steady state is reached, see the following topic: |
| Agent does not perform intelligent monitoring despite setting the IMF mode to 3 | For agents that use the AMF driver for IMF notification, if intelligent resource monitoring has not taken effect, do the following: |
| AMF module fails to unload despite changing the IMF mode to 0 | Even after you change the value of the Mode key to zero, the agent continues to hold the AMF driver until you kill the agent. To unload the AMF module, all holds on it must be released. If the AMF module fails to unload after you change the IMF mode value to zero, do the following: |
| When you try to enable IMF for an agent, the haimfconfig -enable -agent <agent_name> command reports that IMF is enabled for the agent. However, when VCS and the respective agent are running, the haimfconfig -display command shows the status for agent_name as DISABLED. | A few possible reasons for this behavior are as follows: |
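For the last issue in the table, one way to narrow down the discrepancy is to compare what haimfconfig -display reports with the IMF attribute on the agent's type definition. The following is a minimal sketch under the assumption that you run it on a VCS node; Mount is only a placeholder agent name.

```shell
#!/bin/sh
# Sketch: cross-check haimfconfig status against the type-level IMF attribute.
# AGENT is an example placeholder; substitute your own agent name.
AGENT="${1:-Mount}"

if command -v haimfconfig >/dev/null 2>&1; then
    haimfconfig -display          # per-agent ENABLED/DISABLED status
else
    echo "haimfconfig not in PATH; run this on a VCS node"
fi

if command -v hatype >/dev/null 2>&1; then
    # The Mode key of the IMF attribute: 0 = intelligent monitoring disabled,
    # 3 = intelligent monitoring of both online and offline resources.
    hatype -display "$AGENT" -attribute IMF
else
    echo "hatype not in PATH; skipping IMF attribute check for $AGENT"
fi
```

If the type-level Mode key is already 3 but haimfconfig still shows DISABLED, the causes listed in the table row above (for example, VCS or the agent not yet restarted after enabling) are the usual suspects.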