InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Administrator's Guide - Linux
Write Once, Read Many (WORM) storage
InfoScale lets you create a WORM storage system that retains critical data for a defined period and ensures that the data cannot be modified. After a file is committed to WORM storage, its data can be read but cannot be overwritten or erased within the specified retention period, which protects the data against both accidental and intentional erasure. The retention period of a WORM-enabled file is the duration for which the file cannot be deleted after it is committed to WORM storage; the storage system allows the file to be deleted only after the retention period has expired.
You can enable WORM on individual files and set a different retention period for each file. However, per-file WORM enablement is supported only on WORM-enabled file systems, so first ensure that the file system on the server is WORM-enabled. You can identify whether a file system is WORM-enabled by using one of the following commands:
- To confirm WORM-enablement of a file system using its device path:
# mkfs -t vxfs -m absolutePathOfVolume
- To confirm WORM-enablement of a file system using its mount point:
# /opt/VRTS/bin/fsadm nameOfMountPoint
The output of these commands includes the worm keyword when the file system is WORM-enabled.
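For example, assuming a hypothetical VxVM volume mydg/vol1 that is mounted at /mnt1 (both names are illustrative, not part of the product):
# mkfs -t vxfs -m /dev/vx/dsk/mydg/vol1
# /opt/VRTS/bin/fsadm /mnt1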
When you WORM-enable a file, you specify its retention period; internally, the access time (atime) of the file is set to the future date at which the retention period ends. The latest retention date that you can set is 2038-01-19.
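The 2038-01-19 limit matches the largest date representable as a signed 32-bit Unix timestamp, which presumably is why it bounds the retention date. You can confirm the date itself with GNU date (an illustration only, not an InfoScale command):
# date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038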
Once enabled, WORM cannot be disabled for a file system; the file system remains WORM-enabled for the rest of its lifetime.
You can WORM-enable a file system in one of the following ways (see the examples after this list):
- Use the -o worm option while creating the file system.
- Use the fsadm command to enable WORM on an existing file system that is mounted.
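The following sketch illustrates both methods. The volume path /dev/vx/dsk/mydg/vol1 and the mount point /mnt1 are hypothetical, and passing -o worm to fsadm is an assumption modeled on the mkfs option; verify the exact option against the fsadm_vxfs(1M) manual page for your release.
To create a new WORM-enabled file system:
# mkfs -t vxfs -o worm /dev/vx/dsk/mydg/vol1
To WORM-enable an existing, mounted file system:
# /opt/VRTS/bin/fsadm -o worm /mnt1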
You can WORM-enable a file in one of the following ways:
- Write a utility that WORM-enables a file by using the VxFS APIs.
- Set the access time (atime) of the file to the required retention date and then set the read-only attribute of the file, as described in the following procedure.
To WORM-enable a file by manually setting its access time
- Change the access time of the file to the date and time at which its retention period should end, by running the following command:
# touch -at YYYYMMDDhhmm.ss nameOfFile
For example, if a file named foo must be retained until 10:37:42 on July 14, 2035, run:
# touch -at 203507141037.42 foo
- Mark the file as read-only by changing its permissions.
For example, to make the foo file read-only, run:
# chmod -w foo
When these steps complete successfully, foo becomes WORM-enabled, and its retention period ends at 10:37:42 on July 14, 2035.
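The two steps can be wrapped in a small script. The following is a minimal sketch of such a utility (the script name and its arguments are hypothetical; it uses the access-time method shown above, not the VxFS APIs):

#!/bin/sh
# worm_commit.sh -- commit a file to WORM storage on a WORM-enabled VxFS.
# Usage: worm_commit.sh nameOfFile YYYYMMDDhhmm.ss
file=$1
retain_until=$2
# Set the access time to the end of the desired retention period.
touch -at "$retain_until" "$file" || exit 1
# Mark the file read-only to commit it to WORM storage.
chmod a-w "$file"

After a file is committed, you can view the end of its retention period by displaying the file's access time, for example with GNU stat:
# stat -c %x foo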