Storage Foundation Cluster File System High Availability 7.2 Administrator's Guide - Solaris
Examples of use and require constraints
The following examples show use and require constraints for storage allocation.
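Before the examples, a quick note on how the clauses combine (a general sketch; diskgroup, volume, size, attribute, and value are placeholders, not literal syntax): all of the attribute specifications in a require clause must be satisfied for a disk to be selected, while a use clause is satisfied when at least one of its specifications matches. The datause and logrequire variants seen later in this section apply the same logic to the data and log allocations separately.

# vxassist -g diskgroup make volume size \
    use=attribute:value[,attribute:value...] \
    require=attribute:value[,attribute:value...]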
Example 1 - require constraint
This example shows the require constraint in a disk group that has disks from two arrays: emc_clariion0 and ams_wms0. Both arrays are connected through the same HBA (hostportid 06-08-02), but the arrays have different array types (arraytype A/A and A/A-A, respectively).
The following output shows the disk group information:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm emc_clariion0_1     emc_clariion0_1  -        4120320  -        -        -       -
dm emc_clariion0_2     emc_clariion0_2  -        4120320  -        -        -       -
dm emc_clariion0_3     emc_clariion0_3  -        4120320  -        -        -       -
To allocate both the data and the log on disks that are attached to that HBA and that have the array type A/A:
# vxassist -g testdg make v1 1G logtype=dco dcoversion=20 \
    require=hostportid:06-08-02,arraytype:A/A
The following output shows the result of the command. The command allocated space for both the data and the log on disks from the emc_clariion0 array, which satisfy all of the storage specifications in the require constraint. (Lengths in vxprint output are in 512-byte sectors, so the volume length of 2097152 sectors corresponds to the requested 1 GB.)
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm emc_clariion0_1     emc_clariion0_1  -        4120320  -        -        -       -
dm emc_clariion0_2     emc_clariion0_2  -        4120320  -        -        -       -
dm emc_clariion0_3     emc_clariion0_3  -        4120320  -        -        -       -
v  v1                  fsgen            ENABLED  2097152  -        ACTIVE   -       -
pl v1-01               v1               ENABLED  2097152  -        ACTIVE   -       -
sd emc_clariion0_0-01  v1-01            ENABLED  2097152  0        -        -       -
dc v1_dco              v1               -        -        -        -        -       -
v  v1_dcl              gen              ENABLED  67840    -        ACTIVE   -       -
pl v1_dcl-01           v1_dcl           ENABLED  67840    -        ACTIVE   -       -
sd emc_clariion0_0-02  v1_dcl-01        ENABLED  67840    0        -        -       -
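Each example in this section creates the volume v1 from scratch in the same disk group. If you are reproducing the examples on a test system, remove the volume before running the next example (a sketch, assuming v1 holds nothing you need to keep):

# vxassist -g testdg remove volume v1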
Example 2 - use constraint
This example shows the use constraint in a disk group that has disks from three arrays: ams_wms0, emc_clariion0, and hitachi_vsp0.
The following output shows the disk group information:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm hitachi_vsp0_3      hitachi_vsp0_3   -        4120320  -        -        -       -
To allocate both the data and the log on disks that belong to either the ams_wms0 array or the emc_clariion0 array:
# vxassist -g testdg make v1 3G logtype=dco dcoversion=20 \
    use=array:ams_wms0,array:emc_clariion0
The following output shows the result of the command. The command allocated disk space for the data and the log on disks from the arrays specified in the use constraint:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm hitachi_vsp0_3      hitachi_vsp0_3   -        4120320  -        -        -       -
v  v1                  fsgen            ENABLED  6291456  -        ACTIVE   -       -
pl v1-01               v1               ENABLED  6291456  -        ACTIVE   -       -
sd ams_wms0_359-01     v1-01            ENABLED  2027264  0        -        -       -
sd ams_wms0_360-01     v1-01            ENABLED  143872   2027264  -        -       -
sd emc_clariion0_0-01  v1-01            ENABLED  4120320  2171136  -        -       -
dc v1_dco              v1               -        -        -        -        -       -
v  v1_dcl              gen              ENABLED  67840    -        ACTIVE   -       -
pl v1_dcl-01           v1_dcl           ENABLED  67840    -        ACTIVE   -       -
sd ams_wms0_360-02     v1_dcl-01        ENABLED  67840    0        -        -       -
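The constraints that you specify on the vxassist command line can also be recorded with the volume as persistent attributes, so that later operations such as growing the volume honor them (see "Understanding persistent attributes"). Assuming the constraints were saved as volume tags, one way to inspect them is:

# vxassist -g testdg listtag v1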
Example 3 - datause and logrequire combination
This example shows the combination of a datause constraint and a logrequire constraint. The disk group has disks from three arrays: ams_wms0, emc_clariion0, and hitachi_vsp0, which have different array types.
The following output shows the disk group information:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm emc_clariion0_1     emc_clariion0_1  -        4120320  -        -        -       -
dm emc_clariion0_2     emc_clariion0_2  -        4120320  -        -        -       -
dm emc_clariion0_3     emc_clariion0_3  -        4120320  -        -        -       -
dm hitachi_vsp0_3      hitachi_vsp0_3   -        4120320  -        -        -       -
To allocate the data on disks from the ams_wms0 or emc_clariion0 arrays, and the log on disks with the array type A/A-A:
# vxassist -g testdg make v1 1G logtype=dco dcoversion=20 \
    datause=array:ams_wms0,array:emc_clariion0 logrequire=arraytype:A/A-A
The following output shows the result of the command. The command allocated disk space for the data and the log independently: the data space on emc_clariion0 disks, which satisfy the datause constraint, and the log space on ams_wms0 disks, which have the A/A-A array type and so satisfy the logrequire constraint:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm emc_clariion0_1     emc_clariion0_1  -        4120320  -        -        -       -
dm emc_clariion0_2     emc_clariion0_2  -        4120320  -        -        -       -
dm emc_clariion0_3     emc_clariion0_3  -        4120320  -        -        -       -
dm hitachi_vsp0_3      hitachi_vsp0_3   -        4120320  -        -        -       -
v  v1                  fsgen            ENABLED  2097152  -        ACTIVE   -       -
pl v1-01               v1               ENABLED  2097152  -        ACTIVE   -       -
sd emc_clariion0_0-01  v1-01            ENABLED  2097152  0        -        -       -
dc v1_dco              v1               -        -        -        -        -       -
v  v1_dcl              gen              ENABLED  67840    -        ACTIVE   -       -
pl v1_dcl-01           v1_dcl           ENABLED  67840    -        ACTIVE   -       -
sd ams_wms0_359-01     v1_dcl-01        ENABLED  67840    0        -        -       -
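To check which array type each enclosure reports before writing an arraytype constraint, you can list the enclosures known to DMP (a sketch; the exact output columns can vary by release):

# vxdmpadm listenclosure all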
Example 4 - use and require combination
This example shows the combination of a use constraint and a require constraint. The disk group has disks from three arrays: ams_wms0, emc_clariion0, and hitachi_vsp0. Only the disks from the ams_wms0 array are multi-pathed.
The following output shows the disk group information:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm emc_clariion0_1     emc_clariion0_1  -        4120320  -        -        -       -
dm emc_clariion0_2     emc_clariion0_2  -        4120320  -        -        -       -
dm emc_clariion0_3     emc_clariion0_3  -        4120320  -        -        -       -
dm hitachi_vsp0_3      hitachi_vsp0_3   -        4120320  -        -        -       -
To allocate the data and the log on disks from the emc_clariion0 or ams_wms0 arrays that are also multi-pathed:
# vxassist -g testdg make v1 1G logtype=dco dcoversion=20 \
    use=array:emc_clariion0,array:ams_wms0 require=multipathed:yes
The following output shows the result of the allocation. Both the data and the log are on ams_wms0 disks, which satisfy the use constraint as well as the require constraint:
# vxprint -g testdg
TY NAME                ASSOC            KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg testdg              testdg           -        -        -        -        -       -
dm ams_wms0_359        ams_wms0_359     -        2027264  -        -        -       -
dm ams_wms0_360        ams_wms0_360     -        2027264  -        -        -       -
dm ams_wms0_361        ams_wms0_361     -        2027264  -        -        -       -
dm ams_wms0_362        ams_wms0_362     -        2027264  -        -        -       -
dm emc_clariion0_0     emc_clariion0_0  -        4120320  -        -        -       -
dm emc_clariion0_1     emc_clariion0_1  -        4120320  -        -        -       -
dm emc_clariion0_2     emc_clariion0_2  -        4120320  -        -        -       -
dm emc_clariion0_3     emc_clariion0_3  -        4120320  -        -        -       -
dm hitachi_vsp0_3      hitachi_vsp0_3   -        4120320  -        -        -       -
v  v1                  fsgen            ENABLED  2097152  -        ACTIVE   -       -
pl v1-01               v1               ENABLED  2097152  -        ACTIVE   -       -
sd ams_wms0_359-01     v1-01            ENABLED  2027264  0        -        -       -
sd ams_wms0_360-01     v1-01            ENABLED  69888    2027264  -        -       -
dc v1_dco              v1               -        -        -        -        -       -
v  v1_dcl              gen              ENABLED  67840    -        ACTIVE   -       -
pl v1_dcl-01           v1_dcl           ENABLED  67840    -        ACTIVE   -       -
sd ams_wms0_360-02     v1_dcl-01        ENABLED  67840    0        -        -       -
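To confirm which disks are multi-pathed before relying on a multipathed:yes constraint, you can list the paths behind a DMP node (a sketch using one of the example disks; run it for each disk of interest):

# vxdmpadm getsubpaths dmpnodename=ams_wms0_359

A disk that lists more than one path is multi-pathed; a single-path disk does not satisfy multipathed:yes.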