Storage Foundation Cluster File System High Availability 7.2 Administrator's Guide - Solaris
Migrating to thin provisioning
The SmartMove™ feature enables migration from traditional LUNs to thinly provisioned LUNs, removing unused space in the process.
To migrate to thin provisioning
- Check if the SmartMove feature is enabled.
# vxdefault list
KEYWORD                        CURRENT-VALUE   DEFAULT-VALUE
usefssmartmove                 all             all
...
If the output shows that the current value is none, configure SmartMove for all disks or thin disks.
See Configuring SmartMove.
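If SmartMove is disabled, the following sketch shows how it can be enabled with the vxdefault command; the supported values are none, thinonly (thin disks only), and all (all disks). Confirm the exact procedure for your release in the Configuring SmartMove section.
# vxdefault set usefssmartmove all
To restrict SmartMove to thin LUNs only, use thinonly instead of all.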
- Add the new, thin LUNs to the existing disk group. Enter the following commands:
# vxdisksetup -i da_name
# vxdg -g datadg adddisk da_name
where da_name is the disk access name in VxVM.
- To identify LUNs with the thinonly or thinrclm attributes, enter:
# vxdisk -o thin list
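The output resembles the following; the physical allocation value is illustrative and the exact columns can vary by release. The TYPE column identifies each LUN as thinonly or thinrclm.
DEVICE             SIZE(MB)   PHYS_ALLOC(MB)   GROUP     TYPE
THINARRAY0_02      40960      2048             datadg    thinrclm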
- Add the new, thin LUNs as a new plex to the volume. When you create a mirrored volume on a thin LUN, or add a mirror on a thin LUN to an existing volume, VxVM creates a Data Change Object (DCO) by default. The DCO helps prevent the thin LUN from becoming thick by eliminating the need for a full resynchronization of the mirror.
Note: The VxFS file system must be mounted to get the benefits of the SmartMove feature.
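As a quick check, you can list the mounted VxFS file systems; if the volume's mount point does not appear in the output, mount the file system before adding the mirror:
# mount -v | grep vxfs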
The following methods are available to add the LUNs:
Use the default settings for the vxassist command:
# vxassist -g datadg mirror datavol da_name
Specify vxassist command options for faster completion. The -b option copies the blocks in the background, and the -oiosize=1m option uses a larger I/O size to improve throughput:
# vxassist -b -oiosize=1m -t thinmig -g datadg mirror \
datavol da_name
To view the status of the command, use the vxtask command:
# vxtask list
TASKID PTID TYPE/STATE   PCT      PROGRESS
   211      ATCOPY/R     10.64%   0/20971520/2232320  PLXATT vol1 vol1-02 xivdg smartmove
   212      ATCOPY/R     09.88%   0/20971520/2072576  PLXATT vol1 vol1-03 xivdg smartmove
   219      ATCOPY/R     00.27%   0/20971520/57344    PLXATT vol1 vol1-04 xivdg smartmove
# vxtask monitor 211
TASKID PTID TYPE/STATE   PCT      PROGRESS
   211      ATCOPY/R     50.00%   0/20971520/10485760 PLXATT vol1 vol1-02 xivdg smartmove
   211      ATCOPY/R     50.02%   0/20971520/10489856 PLXATT vol1 vol1-02 xivdg smartmove
   211      ATCOPY/R     50.04%   0/20971520/10493952 PLXATT vol1 vol1-02 xivdg smartmove
   211      ATCOPY/R     50.06%   0/20971520/10498048 PLXATT vol1 vol1-02 xivdg smartmove
   211      ATCOPY/R     50.08%   0/20971520/10502144 PLXATT vol1 vol1-02 xivdg smartmove
   211      ATCOPY/R     50.10%   0/20971520/10506240 PLXATT vol1 vol1-02 xivdg smartmove
Specify the vxassist command options to reduce the effect on system performance. The following command takes longer to complete:
# vxassist -oslow -g datadg mirror datavol da_name
- Optionally, test the performance of the new LUNs before removing the old LUNs.
To test the performance, use the following steps:
Determine which plex corresponds to the thin LUNs:
# vxprint -g datadg
TY NAME               ASSOC          KSTATE   LENGTH    PLOFFS  STATE     TUTIL0  PUTIL0
dg datadg             datadg         -        -         -       -         -       -
dm THINARRAY0_02      THINARRAY0_02  -        83886080  -       -         -       -
dm STDARRAY1_01       STDARRAY1_01   -        41943040  -       -OHOTUSE  -       -
v  datavol            fsgen          ENABLED  41943040  -       ACTIVE    -       -
pl datavol-01         datavol        ENABLED  41943040  -       ACTIVE    -       -
sd STDARRAY1_01-01    datavol-01     ENABLED  41943040  0       -         -       -
pl datavol-02         datavol        ENABLED  41943040  -       ACTIVE    -       -
sd THINARRAY0_02-01   datavol-02     ENABLED  41943040  0       -         -       -
The example output indicates that the thin LUN corresponds to plex datavol-02.
Direct all reads to come from those LUNs:
# vxvol -g datadg rdpol prefer datavol datavol-02
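To confirm that the new read policy is in effect, you can display the volume's long listing; the read policy field should now show datavol-02 as the preferred plex (the exact field labels vary by release):
# vxprint -g datadg -l datavol
You can then run your usual I/O workload against the volume and compare the results with the performance seen on the original LUNs.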
- Remove the original non-thin LUNs.
Note:
The ! character is a special character in some shells. This example shows how to escape it in a bash shell.
# vxassist -g datadg remove mirror datavol \!STDARRAY1_01
# vxdg -g datadg rmdisk STDARRAY1_01
# vxdisk rm STDARRAY1_01
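Before growing the volume, you can optionally confirm that only the plex on the thin LUN remains associated with the volume:
# vxprint -g datadg -ht datavol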
- Grow the file system and volume to use all of the larger thin LUN:
# vxresize -g datadg -x datavol 40g da_name
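To verify the result, you can check the new volume length and the file system size; the mount point /data used here is hypothetical:
# vxprint -g datadg -v datavol
# df -k /data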