InfoScale™ Operations Manager 9.0 User's Guide
- Section I. Getting started
- Introducing Arctera InfoScale Operations Manager
- Using the Management Server console
- About selecting the objects
- About searching for objects
- Examples for using Arctera InfoScale Operations Manager
- Example: Cluster Server troubleshooting using Arctera InfoScale Operations Manager
- Example: Ensuring the correct level of protection for volumes controlled by Storage Foundation
- Example: Improving the availability and the disaster recovery readiness of a service group through fire drills
- Examples: Identifying and reducing storage waste using Arctera InfoScale Operations Manager
- Section II. Managing Arctera InfoScale Operations Manager
- Managing user access
- Creating an Organization
- Modifying the name of an Organization
- Setting up fault monitoring
- Creating rules in a perspective
- Editing rules in a perspective
- Deleting rules in a perspective
- Enabling rules in a perspective
- Disabling rules in a perspective
- Suppressing faults in a perspective
- Using reports
- Running a report
- Subscribing for a report
- Sending a report through email
- Managing user access
- Section III. Managing hosts
- Overview
- Working with the uncategorized hosts
- Managing File Replicator (VFR) operations
- Managing disk groups and disks
- Creating disk groups
- Importing disk groups
- Adding disks to disk groups
- Resizing disks in disk groups
- Renaming disks in disk groups
- Splitting disk groups
- Moving disk groups
- Joining disk groups
- Initializing disks
- Replacing disks
- Recovering disks
- Bringing disks online
- Setting disk usage
- Evacuating disks
- Running or scheduling Trim
- Managing volumes
- Creating Storage Foundation volumes
- Encrypting existing volumes
- Deleting volumes
- Moving volumes
- Renaming volumes
- Adding mirrors to volumes
- Removing the mirrors of volumes
- Creating instant volume snapshots
- Creating space optimized snapshots for volumes
- Creating mirror break-off snapshots for volumes
- Dissociating snapshots
- Reattaching snapshots
- Resizing volumes
- Restoring data from the snapshots of volumes
- Refreshing the snapshot of volumes
- Configuring a schedule for volume snapshot refresh
- Adding snapshot volumes to a refresh schedule
- Removing the schedule for volume snapshot refresh
- Setting volume usage
- Enabling FastResync on volumes
- Managing file systems
- Creating file systems
- Defragmenting file systems
- Unmounting non clustered file systems from hosts
- Mounting non clustered file systems on hosts
- Unmounting clustered file systems
- Mounting clustered file systems on hosts
- Remounting file systems
- Checking file systems
- Creating file system snapshots
- Remounting file system snapshot
- Mounting file system snapshot
- Unmounting file system snapshot
- Removing file system snapshot
- Monitoring capacity of file systems
- Managing SmartIO
- About managing SmartIO
- Creating a cache
- Modifying a cache
- Creating an I/O trace log
- Analyzing an I/O trace log
- Managing application IO thresholds
- Managing replications
- Configuring Storage Foundation replications
- Pausing the replication to a Secondary
- Resuming the replication of a Secondary
- Starting replication to a Secondary
- Stopping the replication to a Secondary
- Switching a Primary
- Taking over from an original Primary
- Associating a volume
- Removing a Secondary
- Monitoring replications
- Optimizing storage utilization
- Section IV. Managing high availability and disaster recovery configurations
- Overview
- Managing clusters
- Managing service groups
- Creating service groups
- Linking service groups in a cluster
- Bringing service groups online
- Taking service groups offline
- Switching service groups
- Managing systems
- Managing resources
- Invoking a resource action
- Managing global cluster configurations
- Running fire drills
- Running the disaster recovery fire drill
- Editing a fire drill schedule
- Using recovery plans
- Managing application configuration
- Multi Site Management
- Appendix A. List of high availability operations
- Section V. Monitoring Storage Foundation HA licenses in the data center
- Managing licenses
- About Arctera licensing and pricing
- Assigning a price tier to a host manually
- Creating a license deployment policy
- Modifying a license deployment policy
- Viewing deployment information
- Managing licenses
- Monitoring performance
- About Arctera InfoScale Operations Manager performance graphs
- Managing Business Applications
- About the makeBE script
- Managing extended attributes
- Managing policy checks
- About using custom signatures for policy checks
- Managing Dynamic Multipathing paths
- Disabling the DMP paths on the initiators of a host
- Re-enabling the DMP paths
- Managing CVM clusters
- Managing Flexible Storage Sharing
- Monitoring the virtualization environment
- About discovering the VMware Infrastructure using Arctera InfoScale Operations Manager
- About the multi-pathing discovery in the VMware environment
- About discovering Solaris zones
- About discovering logical domains in Arctera InfoScale Operations Manager
- About discovering LPARs and VIOs in Arctera InfoScale Operations Manager
- About Microsoft Hyper-V virtualization discovery
- Using Web services API
- Arctera InfoScale Operations Manager command line interface
- Appendix B. Command file reference
- Appendix C. Application setup requirements
- Application setup requirements for Oracle database discovery
- Application setup requirements for Oracle Automatic Storage Management (ASM) discovery
- Application setup requirements for IBM DB2 discovery
- Application setup requirements for Sybase Adaptive Server Enterprise (ASE) discovery
- Application setup requirements for Microsoft SQL Server discovery
About file compression in Arctera InfoScale Operations Manager
The compression feature in Storage Foundation enables customers to use host-based compression to optimize existing primary storage. Enabling compression at the file system layer results in storage savings and avoids complex and expensive appliances typically associated with primary compression. Use cases for compression include database archive logs and unstructured data.
Compression is performed without needing any application changes and with minimal overhead. Compression does not modify the file metadata, nor are inode numbers or file extensions changed. Compression is executed out-of-band, after the write.
In Arctera InfoScale Operations Manager, you set up file compression on a host at the file system (mount point) level by selecting directories for compression. You can compress directories on demand, and you can set up a schedule for running the compression process on the host. You can view a report of space saved by file compression. Once compression is enabled, directories and files will begin to have a mix of compressed and uncompressed data blocks. This is managed automatically by the file system, and uncompressed data is compressed during the next scheduled sweep.
Following are more details on how file compression works:
- Only user data is compressible, not VxFS metadata.
- Compression is a property of a file, not of a directory. If you compress all files in a directory, for example, any files that you later copy into that directory are not automatically compressed as a result of being copied there.
- A compressed file is a file with compressed extents. Writes to a compressed file cause the affected extents to be uncompressed; the result can be a file with both compressed and uncompressed extents.
- After a file is compressed, its inode number does not change, and file descriptors opened before the compression remain valid afterward.
File compression can have the following interactions with applications:
In general, applications notice no difference between compressed and uncompressed files, although reads and writes to compressed extents are slower than reads and writes to uncompressed extents. When an application reads a compressed file, the file system does not perform its usual read-ahead, to avoid the extra CPU load of decompressing data that may never be requested. However, when reading from the primary fileset, the file system uncompresses an entire compression block (1 MB by default) and leaves those pages in the page cache, so sequential reads of the file usually incur an extra cost only when crossing a compression block boundary. The situation is different when reading from a file in a Storage Checkpoint: in that case, nothing beyond the data actually requested goes into the page cache. For optimal read performance of a compressed file accessed through a Storage Checkpoint, the application should use a read size that matches the compression block size.
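The read-size guidance above can be sketched in Python. This is a minimal illustration, not part of any Arctera tool: `read_sequential` is a hypothetical helper, and the 1 MB constant is the default compression block size mentioned above.

```python
# Default VxFS compression block size mentioned above (1 MB).
COMPRESSION_BLOCK = 1024 * 1024

def read_sequential(path, chunk_size=COMPRESSION_BLOCK):
    """Read a file sequentially in chunks that match the compression
    block size, so a read through a Storage Checkpoint decompresses at
    most one compression block per call."""
    data = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            data.append(chunk)
    return b"".join(data)
```

On an ordinary file system the chunk size makes no functional difference; the point is that on VxFS a misaligned read size through a Storage Checkpoint can force the same compression block to be decompressed more than once.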
When writing to compressed extents, ensure that you have sufficient disk space and disk quota for the new uncompressed extents, since the write uncompresses the extents. If you do not have sufficient disk space or quota, the write can fail with an ENOSPC or EDQUOT error.
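Applications can handle that failure mode defensively. The following is a minimal sketch under the assumption of a POSIX-style error path; `safe_overwrite` is a hypothetical helper, not part of any Arctera product.

```python
import errno

def safe_overwrite(path, offset, data):
    """Write into an existing file and surface the out-of-space errors
    that overwriting compressed extents can raise, since such a write
    allocates new uncompressed extents."""
    try:
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(data)
    except OSError as e:
        if e.errno in (errno.ENOSPC, errno.EDQUOT):
            raise RuntimeError(
                "not enough disk space or quota to uncompress extents"
            ) from e
        raise
```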
Applications that read data from a compressed file and then copy the file elsewhere, such as tar, cpio, cp, and vi, do not preserve compression in the new data. The same is true of some backup programs.
Backup programs that read file data through the name space do not notice that the file is compressed. The backup program receives uncompressed data, and the compression is lost.
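The behavior described above can be mimicked with a short sketch: a copy tool reads the logical file data through the namespace, the file system hands it back uncompressed, and the destination therefore starts with no compressed extents until the next compression run. The helper name below is hypothetical.

```python
import shutil

def copy_through_namespace(src, dst):
    """Copy a file the way cp or tar does: read the logical file data
    (which the file system returns uncompressed) and write it out, so
    any extent compression on src is not carried over to dst."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```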