InfoScale™ 9.0 Troubleshooting Guide - Linux
- Introduction
- Section I. Troubleshooting Veritas File System
- Section II. Troubleshooting Veritas Volume Manager
- Recovering from hardware failure
- Failures on RAID-5 volumes
- Recovery from failure of a DCO volume
- Recovering from instant snapshot failure
- Recovering from failed vxresize operation
- Recovering from boot disk failure
- VxVM boot disk recovery
- Recovery by reinstallation
- Managing commands, tasks, and transactions
- Backing up and restoring disk group configurations
- Troubleshooting issues with importing disk groups
- Recovering from CDS errors
- Logging and error messages
- Troubleshooting Veritas Volume Replicator
- Recovery from configuration errors
- Errors during an RLINK attach
- Errors during modification of an RVG
- Recovery on the Primary or Secondary
- Recovering from Primary data volume error
- Primary SRL volume error cleanup and restart
- Primary SRL header error cleanup and recovery
- Secondary data volume error cleanup and recovery
- Recovering from hardware failure
- Section III. Troubleshooting Dynamic Multi-Pathing
- Section IV. Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting CFS
- Troubleshooting fenced configurations
- Troubleshooting Cluster Volume Manager in Arctera InfoScale products clusters
- Troubleshooting interconnects
- Section V. Troubleshooting Cluster Server
- Troubleshooting and recovery for VCS
- VCS message logging
- Gathering VCS information for support analysis
- Troubleshooting the VCS engine
- Troubleshooting Low Latency Transport (LLT)
- Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
- Troubleshooting VCS startup
- Troubleshooting issues with systemd unit service files
- Troubleshooting service groups
- Troubleshooting resources
- Troubleshooting I/O fencing
- System panics to prevent potential data corruption
- Fencing startup reports preexisting split-brain
- Troubleshooting CP server
- Troubleshooting server-based fencing on the Arctera InfoScale products cluster nodes
- Issues during online migration of coordination points
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting licensing
- Licensing error messages
- Section VI. Troubleshooting SFDB
VxVM volumes listed in /etc/fstab may not get mounted automatically at boot time
VxVM volumes mentioned in /etc/fstab may not get mounted at boot time. On systemd-enabled Linux platforms, some block devices may be discovered late during the boot process. If an OS user tries to mount block devices that are not yet available, the system may hang during boot, or the mount for such devices may be skipped.
Workaround: To avoid such failures in systemd environments, if you add any VxVM volumes to /etc/fstab, also specify the _netdev mount option and reload systemd before the next reboot. Doing so ensures that the proper shutdown and boot sequences are followed for those VxVM volumes.
To add VxVM volumes to /etc/fstab so that they are automatically mounted on reboot
- Add the _netdev option to /etc/fstab along with the mount path. For example:
# /dev/vx/dsk/testdg/testvol /testvol vxfs _netdev 0 0
- Reload the systemd daemon.
# systemctl daemon-reload
Note: Perform these steps immediately after you edit the mount path in /etc/fstab and before any subsequent reboot occurs.
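As a quick sanity check before rebooting, you can confirm that the options field of the new /etc/fstab entry actually carries _netdev. The snippet below is a minimal sketch that parses a sample entry (the example line from the step above); on a live system you would read the real /etc/fstab instead of a hard-coded line.

```shell
# Minimal sketch: verify that an fstab-style entry carries the _netdev option.
# The sample line mirrors the example above; on a real system, read /etc/fstab.
line='/dev/vx/dsk/testdg/testvol /testvol vxfs _netdev 0 0'

# Field 4 of an fstab entry is the comma-separated mount options list.
opts=$(echo "$line" | awk '{print $4}')

# Wrap the options list in commas so a plain substring match cannot
# confuse _netdev with a longer option name.
case ",$opts," in
  *,_netdev,*) echo "OK: _netdev present; systemd will order this mount after the network" ;;
  *)           echo "WARNING: _netdev missing; the mount may be attempted too early at boot" ;;
esac
```

After fixing any entry the check flags, remember to run `systemctl daemon-reload` so systemd regenerates its mount units from the updated /etc/fstab.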