Storage Foundation for Oracle® RAC 7.4.1 Administrator's Guide - Linux
Setting the host prefix
The host prefix provides a way to intuitively identify the origin of a disk, for example, the host to which the disk is physically connected. The vxdctl command sets the host prefix. By setting a host prefix, you can also specify an alternate host identifier that satisfies the disk access name length constraint.
Note:
Disks from an array enclosure are not displayed with a host prefix.
In the case of direct attached storage (DAS) disks, a host prefix is added to the disk at the time of disk discovery to avoid disk naming conflicts.
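For example, on a node with the hypothetical host name sys1, locally discovered DAS disks might appear with the host prefix embedded in the disk access name. The following output is an illustrative sketch only; the device names shown are hypothetical and the exact names and columns depend on your hardware and release:
# vxdisk list
DEVICE          TYPE            DISK         GROUP        STATUS
sys1_disk_1     auto:cdsdisk    -            -            online
sys1_disk_2     auto:cdsdisk    -            -            online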
In the case of array-based enclosures connected through the SAN or Fibre Channel (FC), it is possible to connect different enclosures of the same type (same vendor, same model) to different nodes exclusively and thus create a DAS-type topology. In such cases, the same VxVM device name may be assigned to different disks on different nodes. If such disks are exported for use in a Flexible Storage Sharing (FSS) environment, the same naming confusion can arise. In these cases, however, the host prefix is not attached to the exported disks. Instead, the naming conflicts are resolved by adding a serial number to the VxVM device name for disks with the same name. However, it is recommended that you rename the enclosures for easier identification and readability.
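For example, you can rename an enclosure with the vxdmpadm setattr command. The enclosure names below (emc_clariion0 and site1_enc0) are hypothetical; substitute the enclosure name reported by vxdmpadm listenclosure on your node:
# vxdmpadm listenclosure all
# vxdmpadm setattr enclosure emc_clariion0 name=site1_enc0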
The difference in naming behavior between DAS disks and enclosure-based disks exists for the following reason. An enclosure may be connected to only one node at a given point in the life of a CVM cluster; after a node join, two or more nodes may be connected to the same enclosure. Because nodes dynamically join and leave the cluster, connectivity to array enclosures can also change dynamically, so connectivity information is not a reliable way to determine whether the topology is SAN or DAS, and therefore whether a host prefix should be added. As a result, CVM does not add a host prefix to VxVM devices based on enclosure connectivity; instead, when a naming conflict occurs, a serial number is added to the VxVM device name. DAS disks, on the other hand, can be attached to only one node at a time, so it is safe to add a host prefix by default, without waiting for a naming conflict to occur.
By default, Cluster Volume Manager (CVM) uses the host name from the Cluster Server (VCS) configuration file as the host prefix. If the hostid in the /etc/vx/volboot file is longer than 15 characters, and a shorter host prefix has not been set using vxdctl, Cluster Manager node IDs (CMIDs) are used as prefixes.
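As an illustration, you can check the hostid recorded in the volboot file and, if it is longer than 15 characters, set a shorter prefix. The host name and prefix shown below (verylongclusternode01.example.com and sys1) are hypothetical:
# vxdctl list | grep hostid
hostid: verylongclusternode01.example.com
# vxdctl set hostprefix=sys1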
For more information, see the vxdctl(1M) manual page.
The following command sets or modifies the logical name for the host as the failure domain:
# vxdctl set hostprefix=logicalname
To unset the logical name for the host as the failure domain, use the following command:
# vxdctl unset hostprefix
The vxdctl list command displays the logical name set as the host prefix.
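For example, after setting a prefix, vxdctl list reports it along with the other volboot settings. The output below is an illustrative sketch; the fields vary by release, and the hostid and hostprefix values shown are hypothetical:
# vxdctl list
Volboot file
...
hostid: verylongclusternode01.example.com
hostprefix: sys1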