InfoScale™ Operations Manager 9.0 User's Guide
- Section I. Getting started
- Introducing Arctera InfoScale Operations Manager
- Using the Management Server console
- About selecting the objects
- About searching for objects
- Examples for using Arctera InfoScale Operations Manager
- Example: Cluster Server troubleshooting using Arctera InfoScale Operations Manager
- Example: Ensuring the correct level of protection for volumes controlled by Storage Foundation
- Example: Improving the availability and the disaster recovery readiness of a service group through fire drills
- Examples: Identifying and reducing storage waste using Arctera InfoScale Operations Manager
- Section II. Managing Arctera InfoScale Operations Manager
- Managing user access
- Creating an Organization
- Modifying the name of an Organization
- Setting up fault monitoring
- Creating rules in a perspective
- Editing rules in a perspective
- Deleting rules in a perspective
- Enabling rules in a perspective
- Disabling rules in a perspective
- Suppressing faults in a perspective
- Using reports
- Running a report
- Subscribing for a report
- Sending a report through email
- Section III. Managing hosts
- Overview
- Working with the uncategorized hosts
- Managing File Replicator (VFR) operations
- Managing disk groups and disks
- Creating disk groups
- Importing disk groups
- Adding disks to disk groups
- Resizing disks in disk groups
- Renaming disks in disk groups
- Splitting disk groups
- Moving disk groups
- Joining disk groups
- Initializing disks
- Replacing disks
- Recovering disks
- Bringing disks online
- Setting disk usage
- Evacuating disks
- Running or scheduling Trim
- Managing volumes
- Creating Storage Foundation volumes
- Encrypting existing volumes
- Deleting volumes
- Moving volumes
- Renaming volumes
- Adding mirrors to volumes
- Removing the mirrors of volumes
- Creating instant volume snapshots
- Creating space optimized snapshots for volumes
- Creating mirror break-off snapshots for volumes
- Dissociating snapshots
- Reattaching snapshots
- Resizing volumes
- Restoring data from the snapshots of volumes
- Refreshing the snapshot of volumes
- Configuring a schedule for volume snapshot refresh
- Adding snapshot volumes to a refresh schedule
- Removing the schedule for volume snapshot refresh
- Setting volume usage
- Enabling FastResync on volumes
- Managing file systems
- Creating file systems
- Defragmenting file systems
- Unmounting non clustered file systems from hosts
- Mounting non clustered file systems on hosts
- Unmounting clustered file systems
- Mounting clustered file systems on hosts
- Remounting file systems
- Checking file systems
- Creating file system snapshots
- Remounting file system snapshot
- Mounting file system snapshot
- Unmounting file system snapshot
- Removing file system snapshot
- Monitoring capacity of file systems
- Managing SmartIO
- About managing SmartIO
- Creating a cache
- Modifying a cache
- Creating an I/O trace log
- Analyzing an I/O trace log
- Managing application IO thresholds
- Managing replications
- Configuring Storage Foundation replications
- Pausing the replication to a Secondary
- Resuming the replication of a Secondary
- Starting replication to a Secondary
- Stopping the replication to a Secondary
- Switching a Primary
- Taking over from an original Primary
- Associating a volume
- Removing a Secondary
- Monitoring replications
- Optimizing storage utilization
- Section IV. Managing high availability and disaster recovery configurations
- Overview
- Managing clusters
- Managing service groups
- Creating service groups
- Linking service groups in a cluster
- Bringing service groups online
- Taking service groups offline
- Switching service groups
- Managing systems
- Managing resources
- Invoking a resource action
- Managing global cluster configurations
- Running fire drills
- Running the disaster recovery fire drill
- Editing a fire drill schedule
- Using recovery plans
- Managing application configuration
- Multi Site Management
- Appendix A. List of high availability operations
- Section V. Monitoring Storage Foundation HA licenses in the data center
- Managing licenses
- About Arctera licensing and pricing
- Assigning a price tier to a host manually
- Creating a license deployment policy
- Modifying a license deployment policy
- Viewing deployment information
- Managing licenses
- Monitoring performance
- About Arctera InfoScale Operations Manager performance graphs
- Managing Business Applications
- About the makeBE script
- Managing extended attributes
- Managing policy checks
- About using custom signatures for policy checks
- Managing Dynamic Multipathing paths
- Disabling the DMP paths on the initiators of a host
- Re-enabling the DMP paths
- Managing CVM clusters
- Managing Flexible Storage Sharing
- Monitoring the virtualization environment
- About discovering the VMware Infrastructure using Arctera InfoScale Operations Manager
- About the multi-pathing discovery in the VMware environment
- About discovering Solaris zones
- About discovering logical domains in Arctera InfoScale Operations Manager
- About discovering LPARs and VIOs in Arctera InfoScale Operations Manager
- About Microsoft Hyper-V virtualization discovery
- Using Web services API
- Arctera InfoScale Operations Manager command line interface
- Appendix B. Command file reference
- Appendix C. Application setup requirements
- Application setup requirements for Oracle database discovery
- Application setup requirements for Oracle Automatic Storage Management (ASM) discovery
- Application setup requirements for IBM DB2 discovery
- Application setup requirements for Sybase Adaptive Server Enterprise (ASE) discovery
- Application setup requirements for Microsoft SQL Server discovery
Reclaiming thin storage - example
With Arctera InfoScale Operations Manager, you can use the Management Server console to gain visibility into and manage thin-reclamation capable devices.
Thin provisioning is a storage array feature that optimizes storage use by allocating and reclaiming storage on demand. With thin provisioning, the array allocates storage to applications only when the storage is needed, from a pool of free storage. Thin provisioning solves the problem of under-utilization of available array capacity. Administrators do not have to estimate how much storage an application requires. Instead, thin provisioning lets administrators provision large thin or thin-reclaim capable LUNs to a host. When the application writes data, the physical storage is allocated from the free pool on the array to the thin-provisioned LUNs.
The two types of thin-provisioned LUNs are thin-capable and thin-reclaim capable. Both types of LUNs provide the capability to allocate storage as needed from the free pool. For example, storage is allocated when a file system creates or changes a file. However, this storage is not released to the free pool when files get deleted. Therefore, thin-provisioned LUNs can become 'thick' over time, as the file system starts to include unused free space where the data was deleted. Thin-reclaim capable LUNs address this problem with the ability to release the once-used storage to the pool of free storage. This operation is called thin storage reclamation.
Thin-reclaim capable LUNs do not perform reclamation automatically. An administrator can initiate reclamation manually or schedule a reclamation operation.
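Outside the Management Server console, the equivalent manual reclamation can be run from a managed host's command line with the Storage Foundation `fsadm` and `vxdisk` utilities. The sketch below is illustrative only: the mount point and disk group names are placeholders, and it prints each command instead of executing it, so it is safe to inspect on hosts without Storage Foundation installed.

```shell
#!/bin/sh
# Illustrative sketch: manual thin reclamation on a Storage Foundation host.
# MOUNT_POINT and DISK_GROUP are placeholder names, not from this guide.
MOUNT_POINT=/mnt/app_data
DISK_GROUP=appdg

run() {
    # Dry-run wrapper: print each command instead of executing it, so the
    # sketch can be reviewed on hosts without Storage Foundation installed.
    echo "WOULD RUN: $*"
}

# Reclaim free space in a mounted VxFS file system.
run fsadm -R "$MOUNT_POINT"

# Reclaim unused space across all thin-reclaim capable disks in a disk group.
run vxdisk reclaim "$DISK_GROUP"
```

To perform the reclamation for real, remove the `run` wrapper (or make it execute `"$@"`) and substitute your own mount point and disk group.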
The Management Server console provides visibility and management for thin reclamation in the context of the following objects that are discovered by Arctera InfoScale Operations Manager:
Server perspective: Lets you select file systems or disks for a host and perform thin reclamation. The space no longer used is reclaimed from the associated LUNs and made available.
Reclaiming thin storage requires managed hosts running a supported Storage Foundation version. Thin reclamation is supported on the following Storage Foundation versions:
UNIX/Linux: 5.0 MP3, or later
Windows: 5.1 SP1, or later
In addition, the storage must meet the following requirements:
The storage must be visible to Storage Foundation as thin reclaimable.
For UNIX: The LUNs must be part of an Arctera Volume Manager volume that has a mounted VxFS file system.
For Windows: The LUNs must be part of a Storage Foundation for Windows dynamic volume that has a mounted NTFS file system.
In addition, reclaiming thin pools requires that the array support thin-reclamation functionality and that the Storage Insight Add-on be configured to discover the enclosure.
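The first storage requirement, that the LUNs be visible to Storage Foundation as thin-reclaimable, can be verified on a managed host with `vxdisk -o thin list`, which reports a TYPE of `thinrclm` for thin-reclaim capable devices. The sketch below filters such devices; so that it runs anywhere, it works on a made-up sample of that output (the device and disk group names are placeholders) rather than invoking `vxdisk` itself.

```shell
#!/bin/sh
# Illustrative sketch: find LUNs that Storage Foundation sees as
# thin-reclaimable. On a real host you would capture the output of
# "vxdisk -o thin list"; SAMPLE below is made-up example output so the
# sketch can run on any machine (names are placeholders).
SAMPLE='DEVICE       SIZE(MB)  PHYS_ALLOC(MB)  GROUP   TYPE
emc0_0a1     10240     2048            appdg   thinrclm
emc0_0a2     10240     10240           appdg   thin'

# Keep only devices whose TYPE column is thinrclm; only these can release
# once-used storage back to the array's free pool.
echo "$SAMPLE" | awk '$5 == "thinrclm" { print $1 }'
```

On a live host, replace the sample with `vxdisk -o thin list` output; devices reported as plain `thin` allocate on demand but cannot be reclaimed.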
In the following example, a storage administrator reclaims storage from thin pools.
The administrator performs the following procedures to reclaim thin storage.
Run thin reclamation on thin pools
Note:
Although not used in this example, you can also select file systems or disks for a selected host and perform thin reclamation.
You can run the Top Thin Pools for Reclamation report to show the 10 thin pools with the most reclaimable space.
This report has the following requirements:
InfoScale 8.0 or later
Veritas File System (VxFS) disk layout version 9 or later
Arctera InfoScale Operations Manager managed host (VRTSsfmh) version 7.4.2 or later
You can view this information only for enclosures for which your user group has at least the Guest role, either explicitly assigned or inherited from a parent Organization.
You can select one or more thin pools from a selected storage array and either schedule thin reclamation or perform it manually. Thin pools are available if the array supports them and if the Storage Insight Add-on is configured for the selected enclosure. Make sure that LUNs from these thin pools are consumed by hosts running Storage Foundation.
To perform this task, your user group must be assigned the Admin role on the enclosure. The permission on the enclosure may be explicitly assigned or inherited from a parent Organization.