Storage Foundation Cluster File System High Availability 8.0 Administrator's Guide - Linux
- Section I. Introducing Storage Foundation Cluster File System High Availability
- Overview of Storage Foundation Cluster File System High Availability
- About Veritas File System
- About Veritas Replicator
- How Dynamic Multi-Pathing works
- How Veritas Volume Manager works
- How Veritas Volume Manager works with the operating system
- How Veritas Volume Manager handles storage management
- Volume layouts in Veritas Volume Manager
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- How VxVM handles hardware clones or snapshots
- Volume encryption
- How Veritas File System works
- How Storage Foundation Cluster File System High Availability works
- About Storage Foundation Cluster File System High Availability architecture
- About Veritas File System features supported in cluster file systems
- About single network link and reliability
- About I/O fencing
- About preventing data corruption with I/O fencing
- About I/O fencing components
- About server-based I/O fencing
- About secure communication between the SFCFSHA cluster and CP server
- How Cluster Volume Manager works
- Overview of clustering
- Cluster Volume Manager (CVM) tolerance to storage connectivity failures
- Storage disconnectivity and CVM disk detach policies
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Multiple host failover configurations
- About Flexible Storage Sharing
- Application isolation in CVM environments with disk group sub-clustering
- Section II. Provisioning storage
- Provisioning new storage
- Advanced allocation methods for configuring storage
- Customizing allocation behavior
- Using rules to make volume allocation more efficient
- Understanding persistent attributes
- Customizing disk classes for allocation
- Specifying allocation constraints for vxassist operations with the use clause and the require clause
- Creating volumes of a specific layout
- Creating and mounting VxFS file systems
- Creating a VxFS file system
- Mounting a VxFS file system
- tmplog mount option
- ioerror mount option
- largefiles and nolargefiles mount options
- Resizing a file system
- Monitoring free space
- Extent attributes
- Section III. Administering multi-pathing with DMP
- Administering Dynamic Multi-Pathing
- Discovering and configuring newly added disk devices
- About discovering disks and dynamically adding disk arrays
- How to administer the Device Discovery Layer
- Administering DMP using the vxdmpadm utility
- Gathering and displaying I/O statistics
- Specifying the I/O policy
- Dynamic Reconfiguration of devices
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Managing devices
- Displaying disk information
- Changing the disk device naming scheme
- Adding and removing disks
- Event monitoring
- Section IV. Administering Storage Foundation Cluster File System High Availability
- Administering Storage Foundation Cluster File System High Availability and its components
- Administering CFS
- About the mount, fsclustadm, and fsadm commands
- When the CFS primary node fails
- About Snapshots on SFCFSHA
- Administering VCS
- Administering CVM
- About setting cluster node preferences for master failover
- About changing the CVM master manually
- Importing disk groups as shared
- Administering Flexible Storage Sharing
- Administering ODM
- About administering I/O fencing
- About the vxfentsthdw utility
- Testing the coordinator disk group using the -c option of vxfentsthdw
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- About administering the coordination point server
- About migrating between disk-based and server-based fencing configurations
- Migrating between fencing configurations using response files
- Administering SFCFSHA global clusters
- Using Clustered NFS
- Understanding how Clustered NFS works
- Configure and unconfigure Clustered NFS
- Administering Clustered NFS
- Samples for configuring a Clustered NFS
- Using Common Internet File System
- Deploying Oracle with Clustered NFS
- Administering sites and remote mirrors
- About sites and remote mirrors
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Failure and recovery scenarios
- Administering iSCSI with SFCFSHA
- Administering datastores with SFCFSHA
- Section V. Optimizing I/O performance
- Veritas File System I/O
- Veritas Volume Manager I/O
- Managing application I/O workloads using maximum IOPS settings
- Section VI. Veritas Extension for Oracle Disk Manager
- Using Veritas Extension for Oracle Disk Manager
- About Oracle Disk Manager
- About Oracle Disk Manager and Oracle Managed Files
- Using Cached ODM
- Section VII. Using Point-in-time copies
- Understanding point-in-time copy methods
- When to use point-in-time copies
- About Storage Foundation point-in-time copy technologies
- Volume-level snapshots
- Storage Checkpoints
- About FileSnaps
- About snapshot file systems
- Administering volume snapshots
- Traditional third-mirror break-off snapshots
- Full-sized instant snapshots
- Creating instant snapshots
- Adding an instant snap DCO and DCO volume
- Controlling instant snapshot synchronization
- Cascaded snapshots
- Adding a version 0 DCO and DCO volume
- Administering Storage Checkpoints
- Storage Checkpoint administration
- Administering FileSnaps
- Administering snapshot file systems
- Section VIII. Optimizing storage with Storage Foundation Cluster File System High Availability
- Understanding storage optimization solutions in Storage Foundation Cluster File System High Availability
- About SmartMove
- Migrating data from thick storage to thin storage
- Maintaining Thin Storage with Thin Reclamation
- Reclamation of storage on thin reclamation arrays
- Identifying thin and thin reclamation LUNs
- Veritas InfoScale 4k sector device support solution
- Section IX. Maximizing storage utilization
- Understanding storage tiering with SmartTier
- Creating and administering volume sets
- Multi-volume file systems
- Features implemented using multi-volume file system (MVFS) support
- Adding a volume to and removing a volume from a multi-volume file system
- Volume encapsulation
- Load balancing
- Administering SmartTier
- About SmartTier
- Placement classes
- Administering placement policies
- File placement policy rules
- Multiple criteria in file placement policy rule statements
- Using SmartTier with solid state disks
- Sub-file relocation
- Administering hot-relocation
- How hot-relocation works
- Moving relocated subdisks
- Deduplicating data
- Compressing files
- About compressing files
- Use cases for compressing files
- Section X. Administering and protecting storage
- Managing volumes and disk groups
- Rules for determining the default disk group
- Moving volumes or disks
- Monitoring and controlling tasks
- Performing online relayout
- Adding a mirror to a volume
- Managing disk groups
- Disk group versions
- Displaying disk group information
- Importing a disk group
- Moving disk groups between systems
- Importing a disk group containing hardware cloned disks
- Handling conflicting configuration copies
- Destroying a disk group
- Backing up and restoring disk group configuration data
- Managing plexes and subdisks
- Erasure coding in Veritas InfoScale storage environments
- Erasure coding deployment scenarios
- Customized failure domain
- Decommissioning storage
- Rootability
- Encapsulating a disk
- Sample supported root disk layouts for encapsulation
- Encapsulating and mirroring the root disk
- Administering an encapsulated boot disk
- Quotas
- Using Veritas File System quotas
- File Change Log
- Support for protection against ransomware
- Non-modifiable storage checkpoints
- Soft WORM storage
- Section XI. Reference
- Appendix A. Reverse path name lookup
- Appendix B. Tunable parameters
- Tuning the VxFS file system
- Methods to change Dynamic Multi-Pathing tunable parameters
- Tunable parameters for VxVM
- Methods to change Veritas Volume Manager tunable parameters
- About LLT tunable parameters
- About GAB tunable parameters
- About VXFEN tunable parameters
- Appendix C. Command reference
- Appendix D. Creating a starter database
Creating WORM-enabled entities
The following procedure uses examples to show how to create soft WORM-enabled file systems and files. Storage checkpoints can be created in a similar way.
To create a soft WORM-enabled file
- Soft WORM-enable an existing file system using the following command:
# /opt/VRTS/bin/fsadm -o softworm <path_of_mountpoint>
Alternatively, create a new soft WORM-enabled file system.
Sample command and output:
# mkfs -t vxfs -o softworm /dev/vx/rdsk/testdg/vol1
version 17 layout
125829120 sectors, 62914560 blocks of size 1024, log size 65536 blocks
rcq size 4096 blocks
largefiles supported
maxlink supported
SOFTWORM supported
maxts supported
- Mount the file system.
Sample command:
# mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1
- Verify that the file system is soft WORM-enabled.
# /opt/VRTS/bin/fsadm /mnt1
largefiles,maxlink,softworm,maxts
The presence of the softworm string in the output indicates that the file system is soft WORM-enabled.
- Create a file.
Sample command and output:
# dd if=/dev/urandom of=/mnt1/a1 bs=1024 count=10
10+0 records in
10+0 records out
10240 bytes (10 kB, 10 KiB) copied, 0.000653719 s, 15.7 MB/s
- Set a retention period on the file by performing the following tasks sequentially (the sketch after this procedure shows the commands end to end):
Using the touch command, change the access time of the file so that it matches the retention period.
# touch -at YYYYMMDDhhmm.ss <file_name>
For example, if a file named a1 must be retained until 27 June 2036, 09:49:50, run:
# touch -at 203606270949.50 /mnt1/a1
Mark the file as read-only by changing its permissions. For example, to make the a1 file read-only, run:
# chmod -w /mnt1/a1
- Reduce the retention period of the file, if required.
For example, if the a1 file must now be retained only until 27 June 2033, 09:49:50, run:
# touch -at 203306270949.50 /mnt1/a1
- Optionally, upgrade the file system from soft WORM-enabled to WORM-enabled.
Sample command:
# /opt/VRTS/bin/fsadm -o worm <path_of_mountpoint>
Note:
You cannot change a WORM-enabled file system back to a soft WORM-enabled file system.
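The following consolidated sequence shows the workflow end to end, together with a quick check that the retained file rejects further changes. It is an illustrative sketch that reuses the testdg disk group, vol1 volume, /mnt1 mount point, and a1 file from the examples above; the exact error text produced by the blocked operations may vary by distribution.
# mkfs -t vxfs -o softworm /dev/vx/rdsk/testdg/vol1
# mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1
# dd if=/dev/urandom of=/mnt1/a1 bs=1024 count=10
# touch -at 203606270949.50 /mnt1/a1
# chmod -w /mnt1/a1
Because the retention time is stored as the file's access time, it can be inspected with standard tools:
# ls -lu /mnt1/a1
Until the retention period expires, attempts to modify or delete the file are expected to fail:
# echo test >> /mnt1/a1
# rm -f /mnt1/a1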