Storage Foundation for Oracle® RAC 7.4.1 Administrator's Guide - Linux
Displaying exported disks and network shared disk groups
The vxdisk list and vxprint commands list network shared disks and identify the disks that are remote to the host on which the command is run. The vxdisk list command also provides an option to filter out all remote disks from the listing.
To display exported disks, use the vxdisk list command:
# vxdisk list
DEVICE            TYPE          DISK  GROUP  STATUS
disk_01           auto:cdsdisk  -     -      online exported
disk_02           auto:cdsdisk  -     -      online exported
vm240v6_disk_01   auto:cdsdisk  -     -      online remote
vm240v6_disk_02   auto:cdsdisk  -     -      online remote
The disk name includes a prefix that indicates the host to which the disk is attached. For example, for the disk vm240v6_disk_01, vm240v6 is the host prefix. The exported status flag denotes disks that have been exported for Flexible Storage Sharing (FSS). The remote flag denotes disks that are not local to the host on which the command is run.
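If you need only the names of the remote disks, a minimal filter (assuming the five-column output format shown above, where the remote flag is the last field of the status) is:

# vxdisk list | awk '$NF == "remote" {print $1}'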
If the accessname argument is specified, disk connectivity information is displayed in the long listing output. This information is available only if the node on which the command is run is part of a CVM cluster.
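For example, to display the long listing for the disk disk_01, including its connectivity information when the command is run from a node in a CVM cluster:

# vxdisk list disk_01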
The -o local option of the vxdisk list command filters out all remote disks.
For example:
# vxdisk -o local list
DEVICE    TYPE          DISK  GROUP  STATUS
disk_01   auto:cdsdisk  -     -      online exported
disk_02   auto:cdsdisk  -     -      online exported
The -o fullshared option displays all disks that are shared across all active nodes.
The -o partialshared option displays all disks that are partially shared. Partially shared disks are connected to more than one node but not all the active nodes in the cluster.
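For example, following the same syntax as the -o local example above (the output format matches the earlier listings):

# vxdisk -o fullshared list
# vxdisk -o partialshared list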
Alternatively, you can use the vxprint command to display remote disks in a disk group:
# vxprint
Disk group: sdg

TY  NAME    ASSOC           KSTATE  LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg  sdg     sdg             -       -        -       -       -       -
dm  disk_1  vm240v6_disk_1  -       2027264  -       REMOTE  -       -
dm  disk_4  vm240v6_disk_4  -       2027264  -       REMOTE  -       -
dm  disk_5  disk5           -       2027264  -       -       -       -
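To extract only the remote disk media records from this output, a minimal filter (assuming the column layout shown above, where the TY field is first and STATE is the seventh field of a dm row) is:

# vxprint -g sdg | awk '$1 == "dm" && $7 == "REMOTE" {print $2, $3}'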
The vxdg list command displays the hosts in the cluster that contribute their local disks to the disk group, as well as the storage enclosures from which disks have been added to the disk group. Both are listed under the storage-sources field.
Example output from this command is as follows:
Group:            mydg
dgid:             1343697721.24.vm240v5
import-id:        33792.24
flags:            shared cds
version:          190
alignment:        8192 (bytes)
detach-policy:    local
ioship:           on
fss:              on
local-activation: shared-write
storage-sources:  vm240v5 vm240v6 emc0
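To review the storage sources of every imported shared disk group at once, a minimal shell sketch (assuming that vxdg -s list prints a header line followed by one disk group per line) is:

# Print the storage-sources field of each imported shared disk group.
for dg in $(vxdg -s list | awk 'NR > 1 {print $1}'); do
    echo "== $dg =="
    vxdg list "$dg" | grep 'storage-sources'
done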