Storage Foundation for Oracle® RAC 7.3.1 Administrator's Guide - Linux
About the volume layout for Flexible Storage Sharing disk groups
By default, a volume in disk groups with the FSS attribute set is mirrored across hosts. This default layout ensures that data is available if any one host becomes unavailable. Associated instant data change object (DCO) log volumes are also created by default.
The following volume attributes are assumed by default:
mirror=host
nmirror=2
logtype=dco
ndcomirror=2
dcoversion=30
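For example, the following command creates a volume that picks up these defaults, so the resulting volume is mirrored across two hosts with mirrored DCO log volumes. The disk group name fssdg and the volume name datavol are placeholders; substitute the names of your own FSS disk group and volume:
# vxassist -g fssdg make datavol 20g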
You can specify the hosts on which to allocate the volume by using the host disk class.
See Using the host disk class and allocating storage.
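For example, the following command uses the host disk class to allocate the mirrors of a volume on storage connected to the hosts hostA and hostB. The disk group, volume, and host names are placeholders:
# vxassist -g fssdg make vol2 10g layout=mirror host:hostA host:hostB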
For disk groups without the FSS attribute set, the default volume layout is concatenated. You can still create concatenated volumes in disk groups with the FSS attribute set by explicitly specifying the layout=concat option of the vxassist command.
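For example, the following command creates a concatenated volume in an FSS disk group. The disk group name fssdg and the volume name concatvol are placeholders:
# vxassist -g fssdg make concatvol 5g layout=concat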
By default, the mirrored volume is allocated across hosts. If host-specific storage is not available to meet this criterion, the volume is allocated on external storage, and the default layout is concatenated, as with traditional disk groups.
Existing disk classes, such as dm, can be used with FSS. The host prefix in a disk access name indicates the host to which the disk is connected.
For example, you can create a volume with one plex on a local disk (disk1) and another plex on a remote disk (hostA_disk2), where the host prefix (hostA) for the remote disk identifies another host in the cluster:
# vxassist -g mydg make vol1 10g layout=mirror dm:disk1 dm:hostA_disk2
See Administering mirrored volumes using vxassist.
You can also use the vxdisk list accessname command to display connectivity information, such as which hosts are connected to the disk.
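For example, to display the connectivity information for the remote disk used in the previous example:
# vxdisk list hostA_disk2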