InfoScale™ 9.0 Storage Foundation Cluster File System High Availability Administrator's Guide - Linux
- Section I. Introducing Storage Foundation Cluster File System High Availability
- Overview of Storage Foundation Cluster File System High Availability
- About Veritas File System
- About Veritas Replicator
- How Dynamic Multi-Pathing works
- How Volume Manager works
- How Volume Manager works with the operating system
- How Volume Manager handles storage management
- Volume layouts in Veritas Volume Manager
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- How VxVM handles hardware clones or snapshots
- Volume encryption
- How Veritas File System works
- How Storage Foundation Cluster File System High Availability works
- About Storage Foundation Cluster File System High Availability architecture
- About Veritas File System features supported in cluster file systems
- About single network link and reliability
- About I/O fencing
- About preventing data corruption with I/O fencing
- About I/O fencing components
- About server-based I/O fencing
- About secure communication between the SFCFSHA cluster and CP server
- How Cluster Volume Manager works
- Overview of clustering
- Cluster Volume Manager (CVM) tolerance to storage connectivity failures
- Storage disconnectivity and CVM disk detach policies
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Multiple host failover configurations
- About Flexible Storage Sharing
- Application isolation in CVM environments with disk group sub-clustering
- Section II. Provisioning storage
- Provisioning new storage
- Advanced allocation methods for configuring storage
- Customizing allocation behavior
- Using rules to make volume allocation more efficient
- Understanding persistent attributes
- Customizing disk classes for allocation
- Specifying allocation constraints for vxassist operations with the use clause and the require clause
- Creating volumes of a specific layout
- Creating and mounting VxFS file systems
- Creating a VxFS file system
- Mounting a VxFS file system
- tmplog mount option
- ioerror mount option
- largefiles and nolargefiles mount options
- Resizing a file system
- Monitoring free space
- Extent attributes
- Section III. Administering multi-pathing with DMP
- Administering Dynamic Multi-Pathing
- Discovering and configuring newly added disk devices
- About discovering disks and dynamically adding disk arrays
- How to administer the Device Discovery Layer
- Administering DMP using the vxdmpadm utility
- Gathering and displaying I/O statistics
- Specifying the I/O policy
- Dynamic Reconfiguration of devices
- Reconfiguring a LUN online that is under DMP control using the Dynamic Reconfiguration tool
- Manually reconfiguring a LUN online that is under DMP control
- Managing devices
- Displaying disk information
- Changing the disk device naming scheme
- Adding and removing disks
- Event monitoring
- Section IV. Administering Storage Foundation Cluster File System High Availability
- Administering Storage Foundation Cluster File System High Availability and its components
- Administering CFS
- About the mount, fsclustadm, and fsadm commands
- When the CFS primary node fails
- About Snapshots on SFCFSHA
- Administering VCS
- Administering CVM
- About setting cluster node preferences for master failover
- About changing the CVM master manually
- Importing disk groups as shared
- Administering Flexible Storage Sharing
- Administering ODM
- About administering I/O fencing
- About the vxfentsthdw utility
- Testing the coordinator disk group using the -c option of vxfentsthdw
- About the vxfenadm utility
- About the vxfenclearpre utility
- About the vxfenswap utility
- About administering the coordination point server
- About migrating between disk-based and server-based fencing configurations
- Migrating between fencing configurations using response files
- Administering SFCFSHA global clusters
- Enabling S3 server
- Using Clustered NFS
- Understanding how Clustered NFS works
- Configure and unconfigure Clustered NFS
- Administering Clustered NFS
- Samples for configuring a Clustered NFS
- Using Common Internet File System
- Deploying Oracle with Clustered NFS
- Administering sites and remote mirrors
- About sites and remote mirrors
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Failure and recovery scenarios
- Administering iSCSI with SFCFSHA
- Administering datastores with SFCFSHA
- Section V. Optimizing I/O performance
- Veritas File System I/O
- Veritas Volume Manager I/O
- Managing application I/O workloads using maximum IOPS settings
- Section VI. Veritas Extension for Oracle Disk Manager
- Using Veritas Extension for Oracle Disk Manager
- About Oracle Disk Manager
- About Oracle Disk Manager and Oracle Managed Files
- Using Cached ODM
- Section VII. Using Point-in-time copies
- Understanding point-in-time copy methods
- When to use point-in-time copies
- About Storage Foundation point-in-time copy technologies
- Volume-level snapshots
- Storage Checkpoints
- About FileSnaps
- About snapshot file systems
- Administering volume snapshots
- Traditional third-mirror break-off snapshots
- Full-sized instant snapshots
- Creating instant snapshots
- Adding an instant snap DCO and DCO volume
- Controlling instant snapshot synchronization
- Cascaded snapshots
- Adding a version 0 DCO and DCO volume
- Administering Storage Checkpoints
- Storage Checkpoint administration
- Administering FileSnaps
- Administering snapshot file systems
- Section VIII. Optimizing storage with Storage Foundation Cluster File System High Availability
- Understanding storage optimization solutions in Storage Foundation Cluster File System High Availability
- About SmartMove
- Migrating data from thick storage to thin storage
- Maintaining Thin Storage with Thin Reclamation
- Reclamation of storage on thin reclamation arrays
- Identifying thin and thin reclamation LUNs
- InfoScale 4K sector device support solution
- Section IX. Maximizing storage utilization
- Understanding storage tiering with SmartTier
- Creating and administering volume sets
- Multi-volume file systems
- Features implemented using multi-volume file system (MVFS) support
- Adding a volume to and removing a volume from a multi-volume file system
- Volume encapsulation
- Load balancing
- Administering SmartTier
- About SmartTier
- Placement classes
- Administering placement policies
- File placement policy rules
- Multiple criteria in file placement policy rule statements
- Using SmartTier with solid state disks
- Sub-file relocation
- Administering hot-relocation
- How hot-relocation works
- Moving relocated subdisks
- Compressing files
- About compressing files
- Use cases for compressing files
- Section X. Administering and protecting storage
- Managing volumes and disk groups
- Rules for determining the default disk group
- Moving volumes or disks
- Monitoring and controlling tasks
- Performing online relayout
- Adding a mirror to a volume
- Encrypting existing volumes
- Managing disk groups
- Disk group versions
- Displaying disk group information
- Creating a disk group
- Importing a disk group
- Moving disk groups between systems
- Importing a disk group containing hardware cloned disks
- Handling conflicting configuration copies
- Destroying a disk group
- Backing up and restoring disk group configuration data
- Managing plexes and subdisks
- Erasure coding in Veritas InfoScale storage environments
- Erasure coding deployment scenarios
- Customized failure domain
- Decommissioning storage
- Rootability
- Encapsulating a disk
- Sample supported root disk layouts for encapsulation
- Encapsulating and mirroring the root disk
- Administering an encapsulated boot disk
- Quotas
- Using Veritas File System quotas
- File Change Log
- Support for protection against ransomware
- Non-modifiable storage checkpoints
- Soft WORM storage
- Secure file system
- Secure file system for Oracle Single Instance
- Secure file system for PostgreSQL database
- Section XI. Reference
- Appendix A. Reverse path name lookup
- Appendix B. Tunable parameters
- Tuning the VxFS file system
- Methods to change Dynamic Multi-Pathing tunable parameters
- Tunable parameters for VxVM
- Methods to change Veritas Volume Manager tunable parameters
- About LLT tunable parameters
- About GAB tunable parameters
- About VXFEN tunable parameters
- Appendix C. Command reference
- Appendix D. Creating a starter database
- Appendix E. Executive Order logging
Creating erasure coded volumes in FSS environments
The procedure assumes that there are n nodes (hosts), namely N1, N2, ..., Nn, contributing storage in the cluster, with each node contributing disks d1, d2, ..., dn respectively.
To create erasure coded volumes in FSS environments
- Initialize the disks on each node that contributes storage for the erasure coded (EC) volume, if not already initialized, and export the disks to make them available cluster-wide.
Note:
You do not need to export the disks if you are using the Storage Access Layer (SAL) capabilities to auto-export the disks.
# vxdisk export <disk_name>
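For example, a minimal sketch for one disk on one node, using a hypothetical disk access name disk0_1 (run the equivalent commands on every node that contributes storage; your disk access names will differ):
# vxdisksetup -i disk0_1
# vxdisk export disk0_1
# vxdisk list disk0_1
Here vxdisksetup -i initializes the disk for VxVM use, and vxdisk list lets you confirm the state of the disk after the export.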
- Create an FSS disk group, namely dg1, using the required set of disks from all the cluster nodes.
# vxdg -s -o fss init dg1 da1 da2 ... dan
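For example, in a four-node cluster where each node has exported one disk (the disk access names below are hypothetical), the disk group can be created and then inspected as a quick check:
# vxdg -s -o fss init dg1 n1_disk0 n2_disk0 n3_disk0 n4_disk0
# vxdg list dg1
The vxdg list output displays the disk group details; the flags should confirm that the disk group is shared.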
- Create an erasure coded volume, namely vol1, striped across storage from k nodes with a fault-tolerance of m in the FSS disk group.
# vxassist -g dg1 make vol1 <vol_size> layout=ecoded ncol=<k> nparity=<m>
If you want to specify the hosts that should contribute storage for the volume, you can specify them as follows:
# vxassist -g dg1 make vol1 <vol_size> layout=ecoded ncol=k nparity=m host:N1 host:N2 ....host:Nn
You can specify the Stripe Group and Stripe Confined Group while creating an erasure coded volume. See Using Stripe Group and Stripe Confined Group while creating erasure coded volume.
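As a rough sizing guide (general erasure coding arithmetic, not an InfoScale-specific formula): a volume created with ncol=k and nparity=m tolerates up to m simultaneous node or disk failures, and each of the k+m contributing nodes holds a column of approximately vol_size/k. For the 2 GB, ncol=3, nparity=1 examples that follow, each column is therefore about 2 GB / 3, roughly 683 MB, which matches the 1398144-sector data subdisks in the sample vxprint output (1398144 x 512 bytes is approximately 683 MB).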
The following sample vxprint output shows the FSS disk group and the disks contributed by the four cluster nodes:
# vxprint
Disk group: dg1

TY NAME                     ASSOC                  KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg dg1                      dg1                    -        -        -       -       -       -
dm vmr720-18vm3_vmdk0_0     vmr720-18vm3_vmdk0_0   -        4128464  -       -       -       -
dm vmr720-18vm4_vmdk0_0     vmr720-18vm4_vmdk0_0   -        4128464  -       REMOTE  -       -
dm vmr720-18vm5_vmdk0_0     vmr720-18vm5_vmdk0_0   -        4128464  -       REMOTE  -       -
dm vmr720-18vm6_vmdk0_0     vmr720-18vm6_vmdk0_0   -        4128464  -       REMOTE  -       -
To create a 2 GB erasure coded volume, say vol1, for a general-purpose use case (such as a transactional workload), which is tolerant to a single node or disk failure and has data striped across 3 nodes/disks, run the following command:
# vxassist -g dg1 make vol1 2g layout=ecoded nparity=1 ncols=3
The following is sample output after creating the erasure coded volume in a 4-node FSS cluster:
# vxprint
Disk group: dg1

TY NAME                     ASSOC                  KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg dg1                      dg1                    -        -        -       -       -       -
dm vmr720-18vm3_vmdk0_0     vmr720-18vm3_vmdk0_0   -        4128464  -       -       -       -
dm vmr720-18vm4_vmdk0_0     vmr720-18vm4_vmdk0_0   -        4128464  -       REMOTE  -       -
dm vmr720-18vm5_vmdk0_0     vmr720-18vm5_vmdk0_0   -        4128464  -       REMOTE  -       -
dm vmr720-18vm6_vmdk0_0     vmr720-18vm6_vmdk0_0   -        4128464  -       REMOTE  -       -
v  vol1                     fsgen                  ENABLED  4194432  -       SYNC    -       -
pl vol1-01                  vol1                   ENABLED  4194432  -       ACTIVE  -       -
sd vmr720-18vm3_vmdk0_0-02  vol1-01                ENABLED  2097152  0       ECLOG   -       -
sd vmr720-18vm3_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
sd vmr720-18vm4_vmdk0_0-02  vol1-01                ENABLED  2097152  0       ECLOG   -       -
sd vmr720-18vm4_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
sd vmr720-18vm5_vmdk0_0-02  vol1-01                ENABLED  2097152  0       ECLOG   -       -
sd vmr720-18vm5_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
sd vmr720-18vm6_vmdk0_0-02  vol1-01                ENABLED  2097152  0       ECLOG   -       -
sd vmr720-18vm6_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
dc vol1_dco                 vol1                   -        -        -       -       -       -
v  vol1_dcl                 gen                    ENABLED  67840    -       ACTIVE  -       -
pl vol1_dcl-01              vol1_dcl               ENABLED  67840    -       ACTIVE  -       -
sd vmr720-18vm3_vmdk0_0-03  vol1_dcl-01            ENABLED  67840    0       -       -       -
pl vol1_dcl-02              vol1_dcl               ENABLED  67840    -       ACTIVE  -       -
sd vmr720-18vm4_vmdk0_0-03  vol1_dcl-02            ENABLED  67840    0       -       -       -

# vxprint -g dg1 -F%stripe_aligned vol1
off
To create a 2 GB erasure coded volume vol1 for an object-store use case, which is tolerant to a single node or disk failure and has data striped across 3 nodes/disks, run the following command:
# vxassist -g dg1 make vol1 2g layout=ecoded nparity=1 ncols=3 stripe_aligned=yes
The following is sample output after creating the erasure coded volume in a 4-node FSS cluster:
# vxprint
Disk group: dg1

TY NAME                     ASSOC                  KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
dg dg1                      dg1                    -        -        -       -       -       -
dm vmr720-18vm3_vmdk0_0     vmr720-18vm3_vmdk0_0   -        4128464  -       -       -       -
dm vmr720-18vm4_vmdk0_0     vmr720-18vm4_vmdk0_0   -        4128464  -       REMOTE  -       -
dm vmr720-18vm5_vmdk0_0     vmr720-18vm5_vmdk0_0   -        4128464  -       REMOTE  -       -
dm vmr720-18vm6_vmdk0_0     vmr720-18vm6_vmdk0_0   -        4128464  -       REMOTE  -       -
v  vol1                     fsgen                  ENABLED  4194432  -       ACTIVE  -       -
pl vol1-01                  vol1                   ENABLED  4194432  -       ACTIVE  -       -
sd vmr720-18vm3_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
sd vmr720-18vm4_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
sd vmr720-18vm5_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
sd vmr720-18vm6_vmdk0_0-01  vol1-01                ENABLED  1398144  0       -       -       -
dc vol1_dco                 vol1                   -        -        -       -       -       -
v  vol1_dcl                 gen                    ENABLED  67840    -       ACTIVE  -       -
pl vol1_dcl-01              vol1_dcl               ENABLED  67840    -       ACTIVE  -       -
sd vmr720-18vm3_vmdk0_0-02  vol1_dcl-01            ENABLED  67840    0       -       -       -
pl vol1_dcl-02              vol1_dcl               ENABLED  67840    -       ACTIVE  -       -
sd vmr720-18vm4_vmdk0_0-02  vol1_dcl-02            ENABLED  67840    0       -       -       -

# vxprint -g dg1 -F%stripe_aligned vol1
on
You can verify the layout of the volume by running the following command on the plex of the volume:
# vxprint -g dg1 -F%layout vol1-01
ECODED
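After the volume is created and verified, a typical next step in an SFCFSHA cluster is to create a VxFS file system on the volume and mount it cluster-wide, as described in the Creating and mounting VxFS file systems and Administering CFS chapters. A minimal sketch, assuming a hypothetical mount point /mnt1:
# mkfs -t vxfs /dev/vx/rdsk/dg1/vol1
# mount -t vxfs -o cluster /dev/vx/dsk/dg1/vol1 /mnt1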