InfoScale™ 9.0 Storage Foundation Administrator's Guide - Linux
Recovering PostgreSQL data
If the entire data file mount point is compromised and the application is down, a full recovery is needed. The following steps describe how to perform a full recovery using the command line. You can also perform the recovery using the Veritas InfoScale Operations Manager (VIOM) GUI. Refer to the VIOM documentation for more details.
To recover application-aware data using the CLI
- Stop the PostgreSQL application.
- List the available checkpoints and select the last valid checkpoint.
Run the following command:
# vxschadm show checkpoint <mountpoint>
Sample command:
# vxschadm show checkpoint /dbmnt
Application type: postgresql
+-------------------+-----------------+-----------------+----------------------+
| CHECKPOINT_NAME   | CREATE_TIME     | RETENTION       | FLAGS                |
+-------------------+-----------------+-----------------+----------------------+
| secfs_0503252118  | 05-Mar-25,21:18 | -               | removable,softworm   |
+-------------------+-----------------+-----------------+----------------------+
| secfs_0503252116  | 05-Mar-25,21:16 | -               | removable,softworm   |
+-------------------+-----------------+-----------------+----------------------+
| secfs_0503252114  | 05-Mar-25,21:14 | -               | removable,softworm   |
+-------------------+-----------------+-----------------+----------------------+
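If you script the checkpoint selection, you can extract the newest checkpoint name from this listing. The following is a minimal bash sketch; it assumes the tabular output format shown above and that the newest checkpoint is listed first, so verify both against your own output before you rely on it.
MOUNTPOINT=/dbmnt
# Keep only the checkpoint names (second '|'-delimited field) and stop at the
# first data row, which this sketch assumes is the newest checkpoint.
LATEST_CKPT=$(vxschadm show checkpoint "$MOUNTPOINT" |
    awk -F'|' '/secfs_/ { gsub(/ /, "", $2); print $2; exit }')
echo "Most recent checkpoint: $LATEST_CKPT"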
- Validate the SecureFS checkpoint that you want to use for the recovery.
Run the following command:
# vxschadm validate checkpoint <checkpointname> <mountpoint>
Sample command:
# vxschadm validate checkpoint secfs_0503252118 /db
The command displays an output similar to the following:
UX:vxfs vxschadm: INFO: V-3-20000: Checkpoint (secfs_0503252118) validation for automatic recovery is passed for SecureFS application (postgresql) deployed on mount point (/db).
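If you automate the validation, you can gate the rest of the procedure on its result. The following is a minimal bash sketch; it assumes that vxschadm returns a non-zero exit status when validation fails, which you should confirm in your environment before depending on it.
CKPT=secfs_0503252118
MNT=/db
# Validate the checkpoint before recovery; abort if validation fails.
if vxschadm validate checkpoint "$CKPT" "$MNT"; then
    echo "Checkpoint $CKPT is valid for recovery on $MNT"
else
    echo "Validation of $CKPT failed; select an older checkpoint" >&2
    exit 1
fi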
- Perform a PostgreSQL database recovery from a valid recovery target.
Run the following command:
# vxschadm recover start checkpoint <checkpointname> <mountpoint>
Sample command:
# vxschadm recover start checkpoint secfs_0503252118 /db
The command displays an output similar to the following:
UX:vxfs vxschadm: INFO: V-3-20000: Checkpoint (secfs_0503252118) automatic recovery is started for SecureFS protected application (postgresql).
- Verify the recovery status.
Run the following command:
# vxschadm recover status checkpoint <checkpointname> <mountpoint>
Sample command:
# vxschadm recover status checkpoint secfs_0503252118 /db
The command displays an output similar to the following:
------------------------------------------------------------
version: v1
type: checkpoint
stage: started
recovery point: secfs_0503252118
hostname: xxxxxxxxxxxxxxxxxxxxxxxx
stime: Wed Mar 5 21:24:30 2025
mtime: Wed Mar 5 21:24:35 2025
etime: --
app: postgresql
error (0):
mount points: 5
------------------------------------------------------------
/db (199:27000)
status: TP: 1%, ETA: 0:00:00
error:
/tblspc1 (199:27002)
status: TP: 10%, ETA: 0:00:16
error:
/tblspc2 (199:27003)
status: TP: 0%, ETA: 0:00:00
error:
/tblspc5 (199:27004)
status: Recovery completed.
error:
/arcmnt (199:27001)
status: TP: 1%, ETA: 0:00:02
error:
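Recovery of a large data set can take time, so you may prefer to poll the status until it finishes. The following is a minimal bash sketch; it assumes the status output format shown above, in particular that in-progress mount points report a "TP:" percentage while finished mount points report "Recovery completed." Adjust the matching if your output differs.
CKPT=secfs_0503252118
MNT=/db
# Poll the recovery status every 30 seconds while any mount point still
# reports an in-progress percentage (TP: n%).
while vxschadm recover status checkpoint "$CKPT" "$MNT" | grep -q 'TP:'; do
    echo "Recovery still in progress for $MNT ..."
    sleep 30
done
echo "No mount point reports in-progress status; confirm with the status command."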
- After the recovery is completed, run any additional steps that are required to start the application.
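The required steps depend on how PostgreSQL is deployed. As an illustration only, on a host where PostgreSQL runs as a systemd service, restarting the database and running a basic connectivity check might look like the following; substitute your own service name and database.
# Start the PostgreSQL service (the service name varies by distribution and deployment).
systemctl start postgresql
# Confirm that the server accepts connections against the recovered data.
sudo -u postgres psql -c 'SELECT count(*) FROM pg_stat_activity;'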