InfoScale™ Operations Manager 9.0 User's Guide
- Section I. Getting started
- Introducing Arctera InfoScale Operations Manager
- Using the Management Server console
- About selecting the objects
- About searching for objects
- Examples for using Arctera InfoScale Operations Manager
- Example: Cluster Server troubleshooting using Arctera InfoScale Operations Manager
- Example: Ensuring the correct level of protection for volumes controlled by Storage Foundation
- Example: Improving the availability and the disaster recovery readiness of a service group through fire drills
- Examples: Identifying and reducing storage waste using Arctera InfoScale Operations Manager
- Section II. Managing Arctera InfoScale Operations Manager
- Managing user access
- Creating an Organization
- Modifying the name of an Organization
- Setting up fault monitoring
- Creating rules in a perspective
- Editing rules in a perspective
- Deleting rules in a perspective
- Enabling rules in a perspective
- Disabling rules in a perspective
- Suppressing faults in a perspective
- Using reports
- Running a report
- Subscribing for a report
- Sending a report through email
- Managing user access
- Section III. Managing hosts
- Overview
- Working with the uncategorized hosts
- Managing File Replicator (VFR) operations
- Managing disk groups and disks
- Creating disk groups
- Importing disk groups
- Adding disks to disk groups
- Resizing disks in disk groups
- Renaming disks in disk groups
- Splitting disk groups
- Moving disk groups
- Joining disk groups
- Initializing disks
- Replacing disks
- Recovering disks
- Bringing disks online
- Setting disk usage
- Evacuating disks
- Running or scheduling Trim
- Managing volumes
- Creating Storage Foundation volumes
- Encrypting existing volumes
- Deleting volumes
- Moving volumes
- Renaming volumes
- Adding mirrors to volumes
- Removing the mirrors of volumes
- Creating instant volume snapshots
- Creating space optimized snapshots for volumes
- Creating mirror break-off snapshots for volumes
- Dissociating snapshots
- Reattaching snapshots
- Resizing volumes
- Restoring data from the snapshots of volumes
- Refreshing the snapshot of volumes
- Configuring a schedule for volume snapshot refresh
- Adding snapshot volumes to a refresh schedule
- Removing the schedule for volume snapshot refresh
- Setting volume usage
- Enabling FastResync on volumes
- Managing file systems
- Creating file systems
- Defragmenting file systems
- Unmounting non clustered file systems from hosts
- Mounting non clustered file systems on hosts
- Unmounting clustered file systems
- Mounting clustered file systems on hosts
- Remounting file systems
- Checking file systems
- Creating file system snapshots
- Remounting file system snapshot
- Mounting file system snapshot
- Unmounting file system snapshot
- Removing file system snapshot
- Monitoring capacity of file systems
- Managing SmartIO
- About managing SmartIO
- Creating a cache
- Modifying a cache
- Creating an I/O trace log
- Analyzing an I/O trace log
- Managing application IO thresholds
- Managing replications
- Configuring Storage Foundation replications
- Pausing the replication to a Secondary
- Resuming the replication of a Secondary
- Starting replication to a Secondary
- Stopping the replication to a Secondary
- Switching a Primary
- Taking over from an original Primary
- Associating a volume
- Removing a Secondary
- Monitoring replications
- Optimizing storage utilization
- Section IV. Managing high availability and disaster recovery configurations
- Overview
- Managing clusters
- Managing service groups
- Creating service groups
- Linking service groups in a cluster
- Bringing service groups online
- Taking service groups offline
- Switching service groups
- Managing systems
- Managing resources
- Invoking a resource action
- Managing global cluster configurations
- Running fire drills
- Running the disaster recovery fire drill
- Editing a fire drill schedule
- Using recovery plans
- Managing application configuration
- Multi Site Management
- Appendix A. List of high availability operations
- Section V. Monitoring Storage Foundation HA licenses in the data center
- Managing licenses
- About Arctera licensing and pricing
- Assigning a price tier to a host manually
- Creating a license deployment policy
- Modifying a license deployment policy
- Viewing deployment information
- Managing licenses
- Monitoring performance
- About Arctera InfoScale Operations Manager performance graphs
- Managing Business Applications
- About the makeBE script
- Managing extended attributes
- Managing policy checks
- About using custom signatures for policy checks
- Managing Dynamic Multipathing paths
- Disabling the DMP paths on the initiators of a host
- Re-enabling the DMP paths
- Managing CVM clusters
- Managing Flexible Storage Sharing
- Monitoring the virtualization environment
- About discovering the VMware Infrastructure using Arctera InfoScale Operations Manager
- About the multi-pathing discovery in the VMware environment
- About discovering Solaris zones
- About discovering logical domains in Arctera InfoScale Operations Manager
- About discovering LPARs and VIOs in Arctera InfoScale Operations Manager
- About Microsoft Hyper-V virtualization discovery
- Using Web services API
- Arctera InfoScale Operations Manager command line interface
- Appendix B. Command file reference
- Appendix C. Application setup requirements
- Application setup requirements for Oracle database discovery
- Application setup requirements for Oracle Automatic Storage Management (ASM) discovery
- Application setup requirements for IBM DB2 discovery
- Application setup requirements for Sybase Adaptive Server Enterprise (ASE) discovery
- Application setup requirements for Microsoft SQL Server discovery
Glossary
- Active/active configuration
A failover configuration in which each system runs a service group. If either system fails, the other one takes over and runs both service groups. Also known as a symmetric configuration.
- Active/passive configuration
A failover configuration consisting of one service group on a primary system and one dedicated backup system. Also known as an asymmetric configuration.
- addressable unit
Any storage resource in the network that is ready to be allocated for use by hosts and applications. Also known as an AddrUnit or AU.
See also LUN
- allocated storage
The total amount of addressable storage in LUNs that is designated for use by specific hosts. A LUN is considered allocated when a host operating system has written a device handle for the LUN (in other words, claimed the LUN) or when the array has masked the LUN to a specific target.
Contrast with unallocated storage
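As a rough illustration of this rule, allocation reduces to an either-or check. The sketch below is illustrative only; the LUN type and its field names are invented for the example and are not part of any Arctera API.

```python
from dataclasses import dataclass

@dataclass
class LUN:
    has_device_handle: bool   # a host operating system has claimed the LUN
    masked_to_target: bool    # the array has masked the LUN to a specific target

def is_allocated(lun: LUN) -> bool:
    """A LUN counts as allocated storage if either condition holds."""
    return lun.has_device_handle or lun.masked_to_target
```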
- application
A program or group of programs designed to perform a specific task. Oracle Database and Arctera NetBackup are examples of applications.
- Authentication Service
See Symantec Product Authentication Service.
- bridge
A device that connects and passes packets between two segments of a storage network that use the same communications protocol.
See also router
- capacity
The amount of storage an object can allocate or use.
- claimed storage
Storage for which at least one host's operating system has created a device handle.
Contrast with unclaimed storage
- cluster
A set of hosts (each termed a node) that share a set of disks and are connected by a set of redundant heartbeat networks.
- cluster communication
Communication between the systems in a cluster using the two core communication protocols defined by Veritas Cluster Server: GAB and LLT. The communication takes place by means of heartbeat signals sent between systems or fast kernel-to-kernel broadcasts.
- configured storage
Physical storage that has been formatted and is ready to be apportioned into RAID groups.
Contrast with unconfigured storage
- device handle
The name the operating system uses to identify a storage resource (known as an addressable unit or LUN), and the correct means (driver, system call) to access it. Also known as an OS handle.
- disk group
A collection of disks that share a common configuration. A disk group configuration is a set of records containing detailed information on existing Arctera Volume Manager objects (such as disk and volume attributes) and their relationships. Each disk group has an administrator-assigned name and an internally defined unique ID. The root disk group (rootdg) is a special private disk group that always exists.
- DMP (Dynamic Multipathing)
A feature of Veritas Volume Manager that provides greater reliability and better performance by using path failover and load balancing for multiported disk arrays connected to host systems through multiple paths. DMP detects the various paths to a disk using a mechanism that is specific to each supported array type. DMP can also differentiate between different enclosures of a supported array type that are connected to the same host system.
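The failover and load-balancing idea can be expressed as a small round-robin selector over the healthy paths. The sketch below is a conceptual model only; the class, method, and path names are invented for the example and do not reflect how DMP itself is implemented.

```python
from itertools import cycle

class MultipathDevice:
    """Toy model of a multiported disk reached over several paths."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.enabled = set(paths)            # paths currently usable
        self._rotation = cycle(self.paths)   # round-robin over all known paths

    def fail_path(self, path):
        """Path failover: stop sending I/O down a failed path."""
        self.enabled.discard(path)

    def next_path(self):
        """Load balancing: return the next enabled path in cyclic order."""
        if not self.enabled:
            raise IOError("all paths to the device have failed")
        for _ in range(len(self.paths)):
            candidate = next(self._rotation)
            if candidate in self.enabled:
                return candidate
        raise IOError("all paths to the device have failed")

dev = MultipathDevice(["path_a", "path_b"])
dev.fail_path("path_a")            # I/O continues on the remaining path
assert dev.next_path() == "path_b"
```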
- event
A notification that indicates when an action, such as an alert or a change in state, has occurred for one or more objects on the storage network.
- failover
A backup operation that automatically switches to a standby database, server, or network if the primary system fails or is temporarily shut down for servicing.
- file system
A means of organizing the addressable (LUN) storage of one or more physical or virtual disks to give users and applications a convenient way of organizing files. File systems appear to users and applications as directories arranged in a hierarchy.
- firmware
A set of software instructions set permanently in a device's memory.
- GBIC
Gigabit interface converter. A widely used transceiver module for Fibre Channel. A GBIC is modular and hot-swappable and can be either copper or optical.
- Global Service Group
A VCS service group that spans two or more clusters. The ClusterList attribute for this group contains the list of clusters that the group spans.
- Group Atomic Broadcast (GAB)
A communication mechanism of the VCS engine that manages cluster membership, monitors heartbeat communication, and distributes information throughout the cluster.
- hub
A common connection point for devices in the storage network. The hub may be unmanaged, IP-managed, or FC-managed. An unmanaged hub is passive in the sense that it serves simply as a conduit for data, moving the data from one storage resource to another. IP-managed and FC-managed hubs are intelligent, containing features an administrator can use to monitor the traffic passing through the hub and configure each port in the hub.
- logical unit number
See LUN.
- LUN (logical unit number)
A unique and discrete addressable unit or logical volume that may reside inside one or more simple or array storage devices. LUNs are exposed to the outside world through an addressing scheme presented to the host as SCSI LUN numbers. Each LUN has a unique device handle and represents a logical volume.
- node
An object in a network. In Veritas Cluster Server, node refers specifically to one of any number of hosts in a cluster. See also object.
- object
A single, unique addressable entity on a storage network. It is possible for objects to be present within objects. For example, while a tape array is an object, each individual tape drive within the array is also an object. A host is an object, and the HBA inside the host is also an object. Each object has one or more attributes and can be a member of one or more zones.
- Object Reference or OID (Object ID)
A key that uniquely identifies an object in the discovery data store. OIDs are represented in XML files as hexadecimal strings with a maximum length of 128 characters.
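For illustration, the stated format constraints translate into a simple check. The helper below is hypothetical and not an Operations Manager function; it only encodes the two facts given here (hexadecimal characters, at most 128 of them).

```python
import re

# Matches a string of 1 to 128 hexadecimal characters.
OID_PATTERN = re.compile(r"^[0-9A-Fa-f]{1,128}$")

def looks_like_oid(candidate: str) -> bool:
    return bool(OID_PATTERN.match(candidate))

assert looks_like_oid("1a2b3c4d5e6f")
assert not looks_like_oid("not-hex!")
assert not looks_like_oid("f" * 129)   # exceeds the 128-character maximum
```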
- physical fabric
The physical components of a fabric, including all switches and all other SAN objects. You can configure one or more virtual fabrics, each isolated from the others, based on the hardware components in the physical fabric.
- policy
A set of rules, or configuration settings, that are applied across a number of objects in the storage network. You establish policies to help you monitor and manage the network. Each policy associates certain sets of conditions with storage resources and defines actions to be taken when these conditions are detected.
- RAID
Redundant Array of Independent Disks. A set of techniques for managing multiple disks for cost, data availability, and performance.
See also mirroring, striping
- resource
Any of the individual components that work together to provide services on a network. A resource may be a physical component such as a storage array or a switch, a software component such as Oracle8i or a Web server, or a configuration component such as an IP address or mounted file system.
- SAN
Acronym for "storage area network." A network linking servers or workstations to devices, typically over Fibre Channel, a versatile, high-speed transport. The storage area network (SAN) model places storage on its own dedicated network, removing data storage from both the server-to-disk SCSI bus and the main user network. The SAN includes one or more hosts that provide a point of interface with LAN users, as well as (in the case of large SANs) one or more fabric switches and SAN hubs to accommodate a large number of storage devices.
- SCSI
Small Computer Systems Interface. A hardware interface that allows for the connection of multiple peripheral devices to a single expansion board that plugs into the computer. The interface is widely used to connect personal computers to peripheral devices such as disk and media drives.
- seeding
A technique used to protect a cluster from a preexisting network partition. By default, when a system comes up, it is not seeded. Systems can be seeded automatically or manually. Only systems that have been seeded can run VCS. Systems are seeded automatically only when an unseeded system communicates with a seeded system or when all systems in the cluster are unseeded and able to communicate with each other.
See network partition
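The two automatic seeding conditions can be modeled as a short decision rule. The sketch below uses invented class and function names and is not VCS code; it only restates the conditions given above.

```python
class ClusterSystem:
    def __init__(self, name):
        self.name = name
        self.seeded = False       # by default, a system comes up unseeded
        self.reachable = set()    # systems this one can communicate with

    def seed_manually(self):
        self.seeded = True

def try_autoseed(system, cluster):
    """Apply the two automatic seeding conditions from the definition."""
    # Condition 1: an unseeded system communicates with a seeded system.
    if any(peer.seeded for peer in system.reachable):
        system.seeded = True
    # Condition 2: all systems in the cluster are unseeded and every system
    # can communicate with every other system.
    elif all(not s.seeded for s in cluster) and all(
        set(cluster) - {s} <= s.reachable for s in cluster
    ):
        system.seeded = True
    return system.seeded
```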
- service group
A collection of resources working together to provide application services to clients. It typically includes multiple resources, hardware- and software-based, working together to provide a single service.
- SMTP
Simple Mail Transfer Protocol, a commonly used protocol for sending email messages between servers.
- SnapMirror
A method of mirroring volumes and qtrees on NetApp unified storage devices. With SnapMirror, a user can schedule or initiate data transfers, request information about transfers, update a mirror, and manage mirrors.
See mirroring
- snapshot
A point-in-time image of a volume or file system that can be used as a backup.
- SNMP
Simple Network Management Protocol. A protocol for Internet network management and communications, used to promote interoperability. SNMP depends on cooperating systems that must adhere to a common framework and a common language or protocol.
- striping
A layout technique that spreads data across several physical disks by mapping it in equal-sized chunks, known as stripe units, to successive disks in a cyclic pattern. Also known as RAID level 0.
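The cyclic mapping can be made concrete with a short sketch. The stripe-unit size and the helper below are invented for the example and are not tied to any particular volume manager.

```python
def stripe_location(block_index: int, num_disks: int, stripe_unit_blocks: int):
    """Map a logical block to (disk, offset) under simple RAID-0 striping."""
    stripe_unit = block_index // stripe_unit_blocks   # which stripe unit the block falls in
    disk = stripe_unit % num_disks                    # cyclic choice of disk
    row = stripe_unit // num_disks                    # stripe row on that disk
    offset = row * stripe_unit_blocks + block_index % stripe_unit_blocks
    return disk, offset

# With 3 disks and a 2-block stripe unit, consecutive blocks rotate over disks:
assert [stripe_location(b, 3, 2)[0] for b in range(6)] == [0, 0, 1, 1, 2, 2]
```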
- switch
A network device to which nodes attach and which provides high-speed switching of node connections via link-level addressing.
- system
The physical hardware on which data and applications reside, and the connections between them.
- topology
The physical or logical arrangement of resources on the storage network and the connections between them.
- unused storage
Storage to which data has not been written.
Contrast with used storage
- virtual IP address
A unique IP address associated with a VCS cluster. This address can be used on any system in the cluster, along with other resources in the VCS cluster service group. A virtual IP address is different from a system's base IP address, which corresponds to the system's host name.
See also IP address
- virtualization
Representing one or more objects, services, or functions as a single abstract entity so that they can be managed or acted on collectively. An example of virtualization is the creation of a virtual fabric from a switch and associated storage resources as a means of controlling access and increasing scalability in the storage network.