Veritas InfoScale™ 7.3.1 Troubleshooting Guide - Solaris
- Introduction
- Section I. Troubleshooting Veritas File System
- Section II. Troubleshooting Veritas Volume Manager
- Recovering from hardware failure
- Failures on RAID-5 volumes
- Recovery from failure of a DCO volume
- Recovering from instant snapshot failure
- Recovering from failed vxresize operation
- Recovering from boot disk failure
- Hot-relocation and boot disk failure
- Recovery from boot failure
- Repair of root or /usr file systems on mirrored volumes
- Replacement of boot disks
- Recovery by reinstallation
- Managing commands, tasks, and transactions
- Backing up and restoring disk group configurations
- Troubleshooting issues with importing disk groups
- Recovering from CDS errors
- Logging and error messages
- Troubleshooting Veritas Volume Replicator
- Recovery from configuration errors
- Errors during an RLINK attach
- Errors during modification of an RVG
- Recovery on the Primary or Secondary
- Recovering from Primary data volume error
- Primary SRL volume error cleanup and restart
- Primary SRL header error cleanup and recovery
- Secondary data volume error cleanup and recovery
- Troubleshooting issues in cloud deployments
- Recovering from hardware failure
- Section III. Troubleshooting Dynamic Multi-Pathing
- Section IV. Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting CFS
- Troubleshooting fenced configurations
- Troubleshooting Cluster Volume Manager in Veritas InfoScale products clusters
- Section V. Troubleshooting Cluster Server
- Troubleshooting and recovery for VCS
- VCS message logging
- Gathering VCS information for support analysis
- Troubleshooting the VCS engine
- Troubleshooting Low Latency Transport (LLT)
- Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
- Troubleshooting VCS startup
- Troubleshooting service groups
- Troubleshooting resources
- Troubleshooting I/O fencing
- System panics to prevent potential data corruption
- Fencing startup reports preexisting split-brain
- Troubleshooting CP server
- Troubleshooting server-based fencing on the Veritas InfoScale products cluster nodes
- Issues during online migration of coordination points
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting licensing
- Licensing error messages
- Section VI. Troubleshooting SFDB
Restoring /etc/system if a copy is not available on the root disk
If /etc/system is damaged or missing, and a saved copy of this file is not available on the root disk, the system cannot be booted with the Veritas Volume Manager rootability feature turned on.
The following procedure assumes the device name of the root disk to be c0t0d0s2, and that the root (/) file system is on partition s0.
To boot the system without Veritas Volume Manager rootability and restore the configuration files
- Boot the operating system into single-user mode from its installation CD-ROM using the following command at the boot prompt:
ok boot cdrom -s
- Mount /dev/dsk/c0t0d0s0 on a suitable mount point such as /a or /mnt:
# mount /dev/dsk/c0t0d0s0 /a
- If a backup copy of /etc/system is available, restore it as the file /a/etc/system. If a backup copy is not available, create a new /a/etc/system file. Ensure that /a/etc/system contains the following entries that are required by VxVM:
set vxio:vol_rootdev_is_volume=1
forceload: drv/driver
...
forceload: drv/vxio
forceload: drv/vxspec
forceload: drv/vxdmp
rootdev:/pseudo/vxio@0:0
Lines of the form forceload: drv/driver forcibly load the drivers that are required for the root mirror disks. Example driver names are pci, sd, ssd, dad, and ide. To find the driver names, use the ls command to obtain a long listing of the special files that correspond to the devices used for the root disk, for example:
# ls -al /dev/dsk/c0t0d0s2
This produces output similar to the following (with irrelevant detail removed):
lrwxrwxrwx ... /dev/dsk/c0t0d0s2 -> ../../devices/pci@1f,0/pci@1/pci@1/SUNW,isptwo@4/sd@0,0:c
This example would require lines to force load both the pci and the sd drivers:
forceload: drv/pci
forceload: drv/sd
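The candidate driver names can also be extracted from the device path mechanically. The following is a sketch, not part of the official procedure: it splits the path from the ls listing into its components and prints a forceload line for each node name appearing before an @ address. The path shown is the example from the listing above; substitute the one from your own root disk, and review the output by hand before adding lines to /a/etc/system.

```shell
# Sketch only: derive candidate forceload lines from a device path.
# Replace devpath with the target of `ls -al /dev/dsk/<root-disk-slice>`.
devpath='../../devices/pci@1f,0/pci@1/pci@1/SUNW,isptwo@4/sd@0,0:c'

printf '%s\n' "$devpath" |
  tr '/' '\n' |           # one path component per line
  sed -n 's/@.*//p' |     # keep only components that carry an @ address;
                          # the text before '@' is the device node name
  sort -u |
  while read name; do
    echo "forceload: drv/$name"
  done
# Review the output by hand: a node name such as SUNW,isptwo may not map
# directly to a loadable driver name, whereas pci and sd do.
```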
- Shut down and reboot the system from the same root partition on which the configuration files were restored.