Veritas InfoScale™ Operations Manager 8.0.2 Installation and Configuration Guide
- Section I. Installing and configuring Veritas InfoScale Operations Manager
- Planning your Veritas InfoScale Operations Manager installation
- Downloading Veritas InfoScale Operations Manager 8.0.2
- Typical Veritas InfoScale Operations Manager deployment configuration
- System requirements
- Installing, upgrading, and uninstalling Veritas InfoScale Operations Manager
- About installing Management Server
- About installing managed host
- About upgrading Management Server
- About backing up and restoring Veritas InfoScale Operations Manager data
- About upgrading managed hosts to Veritas InfoScale Operations Manager 8.0.2
- Configuring Veritas InfoScale Operations Manager in a high availability and disaster recovery environment
- Configuring the high availability feature in Veritas InfoScale Operations Manager
- Configuring a new Veritas InfoScale Operations Manager installation in high availability environment
- Configuring an existing Veritas InfoScale Operations Manager installation in high availability environment
- Configuring Management Server in one-to-one DR environment
- Configuring Veritas InfoScale Operations Manager in high availability and disaster recovery environment
- About upgrading the high availability configurations
- About upgrading the high availability and disaster recovery configurations
- Installing and uninstalling Veritas InfoScale Operations Manager add-ons
- Uploading a Veritas InfoScale Operations Manager add-on to the repository
- Installing a Veritas InfoScale Operations Manager add-on
- Uninstalling a Veritas InfoScale Operations Manager add-on
- Removing a Veritas InfoScale Operations Manager add-on from the repository
- Canceling deployment request for a Veritas InfoScale Operations Manager add-on
- Installing a Veritas InfoScale Operations Manager add-on on a specific managed host
- Uninstalling a Veritas InfoScale Operations Manager add-on from a specific managed host
- Section II. Setting up the Management Server environment
- Basic Veritas InfoScale Operations Manager tasks
- Adding and managing hosts
- Overview of host discovery
- Overview of agentless discovery
- About installing OpenSSH on a UNIX host
- Adding the managed hosts to Management Server using an agent configuration
- Adding the managed hosts to Management Server using an agentless configuration
- Adding Agentless hosts to the Management Server using Profile
- Editing the agentless host configuration
- Setting up user access
- Adding Lightweight Directory Access Protocol or Active Directory-based authentication on Management Server
- Configuring LDAP using CLI
- Setting up fault monitoring
- Creating rules in the Management Server perspective
- Editing rules in the Management Server perspective
- Deleting rules in the Management Server perspective
- Enabling rules in the Management Server perspective
- Disabling rules in the Management Server perspective
- Suppressing faults in the Management Server perspective
- Suppressing a fault definition in the Management Server perspective
- Setting up virtualization environment discovery
- Setting up near real-time discovery of VMware events
- Requirements for discovering the Solaris zones
- Adding a virtualization server
- Editing a virtualization discovery configuration
- Refreshing a virtualization discovery configuration
- Deploying hot fixes, packages, and patches
- Installing a Veritas InfoScale Operations Manager hot fix, package, or patch
- Configuring Management Server settings
- Configuring SNMP trap settings for alert notifications
- Setting up extended attributes
- Viewing information on the Management Server environment
- Appendix A. Troubleshooting
- Management Server (MS)
- Managed host (MH)
Configuring Service groups using CLI script
Veritas InfoScale Operations Manager (VIOM) provides a CLI script that creates the SFM_SStore and SFM_Services service groups before you configure CMS HA on the Linux platform.
Follow this procedure to configure the service groups (SFM_SStore and SFM_Services) using the CLI script for CMS HA.
To create SFM_SStore and SFM_Services
- Install InfoScale 8.0.2 on the cluster nodes (primary and secondary).
- Plumb the VIP on the primary node so that it is available for configuration.
For example: # ifconfig ens192:1 xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx up
- Configure the VIOM CMS on the primary node using the virtual hostname (VHN) and the VIP.
Note:
Install the VIOM binary on the secondary node, but do not configure the VIOM CMS on it.
- Add the secondary node (as an agent) to the configured CMS.
- Create a disk group using shared LUNs on the primary node.
For example: # vxdg init testdg disk_name
- Create a volume in the disk group.
For example: # vxassist -g testdg make testvol 10G
- Create a VxFS file system on the volume.
For example: # mkfs -t vxfs /dev/vx/rdsk/testdg/testvol
- Create an empty directory on which to mount the VxFS file system.
For example: # mkdir /testvol
- Mount the VxFS file system.
For example: # mount -t vxfs /dev/vx/dsk/testdg/testvol /testvol
After the file system is mounted, note down the device path; it is required as an input while creating the service groups. For example: # df -h /dev/vx/dsk/testdg/testvol
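The device path noted above can also be captured programmatically. The following is a minimal sketch; the extract_blockdev helper is hypothetical and not part of VIOM, and it simply parses the portable df -P output:

```shell
# Hypothetical helper: print the device backing a mount point,
# parsed from `df -P` output (line 2 is the filesystem entry; field 1 is the device).
extract_blockdev() {
  awk 'NR==2 {print $1}'
}

# Demonstration with sample df -P output resembling the example above:
sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/vx/dsk/testdg/testvol 10485760 20 10485740 1% /testvol'
printf '%s\n' "$sample" | extract_blockdev
# prints /dev/vx/dsk/testdg/testvol
```

On a live system you would pipe `df -P /testvol` into the helper instead of the canned sample.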
- Unplumb the VIP on the primary node.
For example: # ifconfig ens192:1 down
- Deport the disk group.
For example: # vxdg deport testdg
- Run the script.
For example: [root@example1 /]# /opt/VRTSsfmcs/config/vcs/createha.pl
The script displays the following output and prompts for inputs:

The following script will create the prerequisite VCS resources to configure VIOM in a High Availability configuration.
Please ensure that you have addressed the following items prior to running the script:
1. The virtual IP to be chosen must be within the same IP subnet, and should not be plumbed. The VIP will be brought online using the SFM_Services_IP resource.
2. A VxVM disk group and volume for the VIOM data must have already been created, with a VxFS filesystem written to that volume.
3. The filesystem should be unmounted, and the disk group should be deported prior to running the script as VCS will import it with cluster reservations.
4. The VIOM virtual IP and virtual hostname should be added as a new entry to /etc/hosts on each of the cluster nodes.

VIP         : xxx.xxx.xxx.xxx
NetMask     : xxx.xxx.xxx.xxx
Nic         : ens192
DiskGroup   : testdg
MountPoint  : /testvol
BlockDevice : /dev/vx/dsk/testdg/testvol

VCS WARNING V-16-1-10364 Cluster already writable.
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
/opt/VRTSvcs/bin/hagrp -modify SFM_SStore SystemList example1 0 example2 1
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@example1 /]#
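Prerequisite 4 above refers to the VIP-to-virtual-hostname mapping. A hypothetical /etc/hosts entry would look like the following; the hostname viom-vhn is a placeholder chosen for illustration, not a name VIOM mandates, and the IP is your actual VIP:

```
# Added on each cluster node: map the VIOM VIP to the virtual hostname
xxx.xxx.xxx.xxx   viom-vhn
```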
- Verify the created SFM_SStore and SFM_Services service groups.
For example:
[root@example1 /]# hagrp -state
#Group        Attribute  System    Value
SFM_SStore    State      example1  |ONLINE|
SFM_SStore    State      example2  |OFFLINE|
SFM_Services  State      example1  |ONLINE|
SFM_Services  State      example2  |OFFLINE|
cvm           State      example1  |ONLINE|
cvm           State      example2  |ONLINE|
[root@example1 /]#
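The same verification can be scripted. This is a sketch only; the check_online helper is hypothetical and simply scans hagrp -state output (supplied on stdin) for the given group being ONLINE on the given system:

```shell
# Hypothetical helper: exit 0 only if the named group is ONLINE on the named system,
# judging from `hagrp -state` output read from stdin.
# Columns in that output: Group, Attribute, System, Value.
check_online() {
  group=$1; system=$2
  awk -v g="$group" -v s="$system" \
      '$1==g && $3==s && $4=="|ONLINE|" {found=1} END {exit !found}'
}

# Demonstration with sample output like the example above:
state='#Group Attribute System Value
SFM_SStore State example1 |ONLINE|
SFM_SStore State example2 |OFFLINE|
SFM_Services State example1 |ONLINE|
SFM_Services State example2 |OFFLINE|'
printf '%s\n' "$state" | check_online SFM_SStore example1 && echo "SFM_SStore is online on example1"
```

On a live cluster you would pipe `hagrp -state` directly into the helper, once per group, instead of the canned sample.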