InfoScale™ Operations Manager 9.0 User's Guide
About file system deduplication
The deduplication feature in Storage Foundation enables customers to use file system deduplication to optimize existing primary storage. Enabling deduplication at the file system layer results in storage savings and avoids complex and expensive appliances typically associated with file deduplication.
Deduplication requires no application changes and runs with minimal overhead. Because deduplication does not change file extensions, users and applications continue to use the files normally, without a noticeable performance impact.
Before setting up deduplication for a file system, evaluate whether the nature of the data makes it a good candidate for deduplication.
The following are good candidates for deduplication:
- Virtual machine boot image files (vmdk files)
- User home directories
- File systems with multiple copies of files
The following might not be the best candidates for deduplication, as they have little or no duplicate data:
- Databases
- Media files, such as JPEG, MP3, and MOV files
The VxFS deduplication feature works as follows: it eliminates duplicate blocks in your data by comparing blocks across the file system. When the deduplication feature finds a duplicate block, it reclaims the space that the block used and creates a pointer to the shared block in its place. If a file is later changed so that it no longer shares a block, the changed block is written to disk in place of the pointer.
The deduplication process performs the following tasks; a hedged command-line sketch follows the list:
- Scans the file system for changes
- Fingerprints the data
- Identifies duplicates
- Eliminates duplicates after verifying them
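The following is a minimal sketch of assessing and running a deduplication pass on a host where deduplication is already enabled on the file system. It assumes the host-side VxFS fsdedupadm utility and uses /mnt1 as a placeholder mount point; the exact subcommands and options should be verified against the fsdedupadm(1M) manual page for your release. In Arctera InfoScale Operations Manager, these steps are typically driven from the console rather than the host shell.

  # fsdedupadm dryrun /mnt1      (estimate the potential space savings without removing any data)
  # fsdedupadm start /mnt1       (scan, fingerprint, and eliminate duplicate blocks)
  # fsdedupadm status /mnt1      (check the progress and result of the deduplication run)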
The space consumed by the deduplication database is a function of the amount of data in the file system and the deduplication chunk size. On Linux or Solaris, Arctera recommends a chunk size of 4 KB for Storage Foundation Cluster File System High Availability (SFCFSHA) environments where multiple copies of virtual machine images are accessed over NFS. For all other datasets, Arctera recommends a chunk size of 16 KB or higher. See the Storage Foundation documentation for more information about choosing a deduplication chunk size.
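As an illustration of the chunk size recommendation above, the following hedged sketch enables deduplication on a mounted file system with an explicit chunk size before the first run, again assuming the host-side fsdedupadm utility and the placeholder mount point /mnt1; the chunk size is given here in bytes, and the accepted argument format should be confirmed in the fsdedupadm(1M) manual page.

  # fsdedupadm enable -c 4096 /mnt1    (enable deduplication with a 4 KB chunk size)
  # fsdedupadm list /mnt1              (confirm the deduplication configuration for the file system)

A larger chunk size, such as 16 KB (16384 bytes), keeps the deduplication database smaller for general-purpose datasets.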
The deduplication feature has the following limitations:
- A full backup of a deduplicated Veritas File System (VxFS) file system can require as much space on the backup target as a file system that has not been deduplicated. For example, if 2 TB of data occupies 1 TB of disk space in the file system after deduplication, backing up the file system still requires 2 TB of space on the target, assuming that the backup target does not perform any deduplication. Similarly, restoring such a file system requires 2 TB of free space on the file system to restore the complete data. However, the freshly restored file system can be deduplicated again to regain the space savings. After a full file system restore, Arctera recommends that you remove any existing deduplication configuration and reconfigure deduplication; see the sketch after this list.
- Deduplication is limited to a volume's primary fileset.
- Deduplication does not support mounted clones or snapshot-mounted file systems.
- After you restore data from a backup, you must deduplicate the restored data to regain any space savings provided by deduplication.
- If you use the cross-platform data sharing feature to convert data from one platform to another, you must remove the deduplication configuration file and database, and re-enable deduplication after the conversion.
- You cannot use the FlashBackup feature of NetBackup in conjunction with the data deduplication feature, because FlashBackup does not support disk layout Versions 8 and 9.
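For the full-restore case in the first limitation above, the following hedged sketch removes the existing deduplication configuration and database and then reconfigures deduplication, assuming the same host-side fsdedupadm utility and placeholder mount point /mnt1; subcommand names should be verified against the fsdedupadm(1M) manual page.

  # fsdedupadm remove /mnt1            (remove the existing deduplication configuration and database)
  # fsdedupadm enable -c 16384 /mnt1   (re-enable deduplication with the desired chunk size, here 16 KB)
  # fsdedupadm start /mnt1             (deduplicate the restored data to regain the space savings)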