Veritas InfoScale™ 7.3.1 Troubleshooting Guide - Solaris
- Introduction
- Section I. Troubleshooting Veritas File System
- Section II. Troubleshooting Veritas Volume Manager
- Recovering from hardware failure
- Failures on RAID-5 volumes
- Recovery from failure of a DCO volume
- Recovering from instant snapshot failure
- Recovering from failed vxresize operation
- Recovering from boot disk failure
- Hot-relocation and boot disk failure
- Recovery from boot failure
- Repair of root or /usr file systems on mirrored volumes
- Replacement of boot disks
- Recovery by reinstallation
- Managing commands, tasks, and transactions
- Backing up and restoring disk group configurations
- Troubleshooting issues with importing disk groups
- Recovering from CDS errors
- Logging and error messages
- Troubleshooting Veritas Volume Replicator
- Recovery from configuration errors
- Errors during an RLINK attach
- Errors during modification of an RVG
- Recovery on the Primary or Secondary
- Recovering from Primary data volume error
- Primary SRL volume error cleanup and restart
- Primary SRL header error cleanup and recovery
- Secondary data volume error cleanup and recovery
- Troubleshooting issues in cloud deployments
- Section III. Troubleshooting Dynamic Multi-Pathing
- Section IV. Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting Storage Foundation Cluster File System High Availability
- Troubleshooting CFS
- Troubleshooting fenced configurations
- Troubleshooting Cluster Volume Manager in Veritas InfoScale products clusters
- Section V. Troubleshooting Cluster Server
- Troubleshooting and recovery for VCS
- VCS message logging
- Gathering VCS information for support analysis
- Troubleshooting the VCS engine
- Troubleshooting Low Latency Transport (LLT)
- Troubleshooting Group Membership Services/Atomic Broadcast (GAB)
- Troubleshooting VCS startup
- Troubleshooting service groups
- Troubleshooting resources
- Troubleshooting I/O fencing
- System panics to prevent potential data corruption
- Fencing startup reports preexisting split-brain
- Troubleshooting CP server
- Troubleshooting server-based fencing on the Veritas InfoScale products cluster nodes
- Issues during online migration of coordination points
- Troubleshooting notification
- Troubleshooting and recovery for global clusters
- Troubleshooting licensing
- Licensing error messages
- Section VI. Troubleshooting SFDB
Cleaning up the system configuration
After reinstalling VxVM, you must clean up the system configuration.
To clean up the system configuration
- Remove any volumes associated with rootability. This must be done if the root disk (and any other disk involved in the system boot process) was under Veritas Volume Manager control.
The following volumes must be removed:
- rootvol: contains the root file system.
- swapvol: contains the swap area.
- standvol (if present): contains the stand file system.
- usr (if present): contains the usr file system.
To remove the root volume, use the vxedit command:
# vxedit -fr rm rootvol
Repeat this command, using swapvol, standvol, and usr in place of rootvol, to remove the swap, stand, and usr volumes.
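For example, on a system where all four volumes are present, the complete rootability cleanup consists of the following commands:
# vxedit -fr rm rootvol
# vxedit -fr rm swapvol
# vxedit -fr rm standvol
# vxedit -fr rm usr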
- After completing the rootability cleanup, you must determine which volumes need to be restored from backup. The volumes to be restored include those with all mirrors (all copies of the volume) residing on disks that have been reinstalled or removed. These volumes are invalid and must be removed, recreated, and restored from backup. If only some mirrors of a volume exist on reinstalled or removed disks, these mirrors must be removed. The mirrors can be re-added later.
Establish which VM disks have been removed or reinstalled using the following command:
# vxdisk list
This displays a list of system disk devices and the status of these devices. For example, for a reinstalled system with three disks and a reinstalled root disk, the output of the vxdisk list command is similar to this:
DEVICE       TYPE      DISK      GROUP     STATUS
c0t0d0s2     sliced    -         -         error
c0t1d0s2     sliced    disk02    mydg      online
c0t2d0s2     sliced    disk03    mydg      online
-            -         disk01    mydg      failed was:c0t0d0s2
The display shows that the reinstalled root device, c0t0d0s2, is not associated with a VM disk and is marked with a status of error. The disks disk02 and disk03 were not involved in the reinstallation and are recognized by VxVM and associated with their devices (c0t1d0s2 and c0t2d0s2). The former disk01, which was the VM disk associated with the replaced disk device, is no longer associated with the device (c0t0d0s2).
If other disks (with volumes or mirrors on them) had been removed or replaced during reinstallation, those disks would also have a disk device listed in error state and a VM disk listed as not associated with a device.
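On systems with many disks, it can help to filter the listing down to the problem entries. A minimal sketch, assuming the STATUS values shown in the example above:
# vxdisk list | egrep 'error|failed'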
- After you know which disks have been removed or replaced, locate all the mirrors on failed disks using the following command:
# vxprint [-g diskgroup] -sF "%vname" -e 'sd_disk = "disk"'
where disk is the disk media name of a disk with a failed status (disk01 in the example above). Be sure to enclose the disk name in quotes in the command; otherwise, the command returns an error message. The vxprint command returns a list of volumes that have mirrors on the failed disk. Repeat this command for every disk with a failed status.
The following is sample output from running this command:
# vxprint -g mydg -sF "%vname" -e 'sd_disk = "disk01"'
v01
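Rather than running the command once per failed disk by hand, you can drive it from the vxdisk list output. This is only a sketch: it assumes the five-column listing shown earlier (disk media name in the third field, status in the fifth) and the mydg disk group:
# vxdisk list | awk '$5 == "failed" {print $3}' | while read d; do vxprint -g mydg -sF "%vname" -e 'sd_disk = "'$d'"'; done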
- Check the status of each volume and print volume information using the following command:
# vxprint -th volume
where volume is the name of the volume to be examined. The vxprint command displays the status of the volume, its plexes, and the portions of disks that make up those plexes. For example, a volume named v01 with only one plex resides on the reinstalled disk named disk01. The vxprint -th v01 command produces the following output:
V  NAME      RVG/VSET/CO  KSTATE    STATE     LENGTH  READPOL   PREFPLEX  UTYPE
PL NAME      VOLUME       KSTATE    STATE     LENGTH  LAYOUT    NCOL/WID  MODE
SD NAME      PLEX         DISK      DISKOFFS  LENGTH  [COL/]OFF DEVICE    MODE

v  v01       -            DISABLED  ACTIVE    24000   SELECT    -         fsgen
pl v01-01    v01          DISABLED  NODEVICE  24000   CONCAT    -         RW
sd disk01-06 v01-01       disk01    245759    24000   0         c1t5d1    ENA
The only plex of the volume is shown in the line beginning with pl. The STATE field for the plex named v01-01 is NODEVICE. The plex has space on a disk that has been replaced, removed, or reinstalled. The plex is no longer valid and must be removed.
- Because v01-01 was the only plex of the volume, the volume contents are irrecoverable except by restoring the volume from a backup. The volume must also be removed. If a backup copy of the volume exists, you can restore the volume later. Keep a record of the volume name and its length, as you will need it for the backup procedure.
Remove irrecoverable volumes (such as v01) using the following command:
# vxedit -r rm v01
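One way to keep the record mentioned above is to save each volume's name and length to a scratch file before removing it. This is only a sketch: the file name /var/tmp/volumes-to-restore is an arbitrary choice, and it assumes the vxprint %len format field reports the volume length:
# vxprint -g mydg -vF "%name %len" v01 >> /var/tmp/volumes-to-restore
# vxedit -r rm v01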
- It is possible that only part of a plex is located on the failed disk. If the volume has a striped plex, its data is spread across several disks. For example, the volume named v02 has a single plex that is striped across three disks, one of which is the reinstalled disk disk01. The vxprint -th v02 command produces the following output:
V  NAME      RVG/VSET/CO  KSTATE    STATE     LENGTH  READPOL   PREFPLEX  UTYPE
PL NAME      VOLUME       KSTATE    STATE     LENGTH  LAYOUT    NCOL/WID  MODE
SD NAME      PLEX         DISK      DISKOFFS  LENGTH  [COL/]OFF DEVICE    MODE

v  v02       -            DISABLED  ACTIVE    30720   SELECT    v02-01    fsgen
pl v02-01    v02          DISABLED  NODEVICE  30720   STRIPE    3/128     RW
sd disk02-02 v02-01       disk02    424144    10240   0/0       c1t5d2    ENA
sd disk01-05 v02-01       disk01    620544    10240   1/0       c1t5d3    DIS
sd disk03-01 v02-01       disk03    620544    10240   2/0       c1t5d4    ENA
The display shows three disks, across which the plex v02-01 is striped (the lines starting with sd represent the stripes). One of the stripe areas is located on a failed disk. This disk is no longer valid, so the plex named v02-01 has a state of NODEVICE. Since this is the only plex of the volume, the volume is invalid and must be removed. If a copy of v02 exists on the backup media, it can be restored later. Keep a record of the volume name and length of any volume you intend to restore from backup.
Remove invalid volumes (such as v02) using the following command:
# vxedit -r rm v02
- A volume that has one mirror on a failed disk can also have other mirrors on disks that are still valid. In this case, the volume does not need to be restored from backup, since the data is still valid on the valid disks.
The output of the vxprint -th command for a volume with one plex on a failed disk (disk01) and another plex on a valid disk (disk02) is similar to the following:
V  NAME      RVG/VSET/CO  KSTATE    STATE     LENGTH  READPOL   PREFPLEX  UTYPE
PL NAME      VOLUME       KSTATE    STATE     LENGTH  LAYOUT    NCOL/WID  MODE
SD NAME      PLEX         DISK      DISKOFFS  LENGTH  [COL/]OFF DEVICE    MODE

v  v03       -            DISABLED  ACTIVE    30720   SELECT    -         fsgen
pl v03-01    v03          DISABLED  ACTIVE    30720   CONCAT    -         RW
sd disk02-01 v03-01       disk02    620544    30720   0         c1t5d5    ENA
pl v03-02    v03          DISABLED  NODEVICE  30720   CONCAT    -         RW
sd disk01-04 v03-02       disk01    262144    30720   0         c1t5d6    DIS
This volume has two plexes, v03-01 and v03-02. The first plex (v03-01) does not use any space on the invalid disk, so it can still be used. The second plex (v03-02) uses space on the invalid disk disk01 and has a state of NODEVICE, so it must be removed. However, the volume still has one valid plex containing valid data. If the volume needs to be mirrored, another plex can be added later. Note the volume name if you intend to create another plex later.
To remove an invalid plex, use the vxplex command to dissociate and then remove the plex from the volume. For example, to dissociate and remove the plex v03-02, use the following command:
# vxplex -o rm dis v03-02
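After dissociating and removing the plex, you can verify that no stale plexes remain. A minimal check, assuming that NODEVICE appears in the state column of the vxprint plex listing:
# vxprint -g mydg -p | grep NODEVICE
If this command prints nothing, all invalid plexes in the disk group have been removed.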
- After you remove all invalid volumes and plexes, you can clean up the disk configuration. Each disk that was removed, reinstalled, or replaced (as determined from the output of the vxdisk list command) must be removed from the configuration.
To remove a disk, use the vxdg command. For example, to remove the failed disk disk01, use the following command:
# vxdg rmdisk disk01
If the vxdg command returns an error message, invalid mirrors exist.
Repeat step 2 through step 7 until all invalid volumes and mirrors are removed.
- After you remove all the invalid disks, you can add the replacement or reinstalled disks to Veritas Volume Manager control. If the root disk was originally under Veritas Volume Manager control or you now wish to put the root disk under Veritas Volume Manager control, add this disk first.
To add the root disk to Veritas Volume Manager control, use the vxdiskadm command:
# vxdiskadm
From the vxdiskadm main menu, select menu item 2 (Encapsulate a disk). Follow the instructions and encapsulate the root disk for the system.
- When the encapsulation is complete, reboot the system to multi-user mode.
- After the root disk is encapsulated, any other disks that were replaced should be added using the vxdiskadm command. If the disks were reinstalled during the operating system reinstallation, they should be encapsulated; otherwise, they can be added.
- After all the disks have been added to the system, any volumes that were completely removed as part of the configuration cleanup can be recreated and their contents restored from backup. The volume recreation can be done by using the vxassist command or the graphical user interface.
For example, to recreate the volumes v01 and v02, use the following command:
# vxassist -g dg01 make v01 24000
# vxassist -g dg02 make v02 30720 layout=stripe nstripe=3
After the volumes are created, they can be restored from backup using normal backup/restore procedures.
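If you saved the volume names and lengths to a scratch file as sketched in the earlier step, the simple (concatenated) volumes can be recreated from it. This sketch assumes one name and length per line and that every volume belongs to mydg; volumes that need a particular layout, such as the striped v02 above, still require the appropriate layout attributes:
# while read name len; do vxassist -g mydg make $name $len; done < /var/tmp/volumes-to-restore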
- Recreate any plexes for volumes that had plexes removed as part of the volume cleanup. To replace the plex removed from volume v03, use the following command:
# vxassist -g dg03 mirror v03
After you have restored the volumes and plexes lost during reinstallation, recovery is complete and your system is configured as it was prior to the failure.
- Start hot-relocation, if required, either by rebooting the system or by manually starting the relocation watch daemon, vxrelocd (starting vxrelocd also starts the vxnotify process).
Warning:
Hot-relocation should only be started when you are sure that it will not interfere with other reconfiguration procedures.
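To start the daemon manually, the Storage Foundation Administrator's Guide describes an invocation of the following form, where root is the user that receives relocation notification mail (substitute another user name to change the recipient):
# nohup vxrelocd root &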
To determine if hot-relocation has been started, use the following command to search for its entry in the process table:
# ps -ef | grep vxrelocd
See the Storage Foundation Administrator's Guide.
See the vxrelocd(1M) manual page.