NetBackup IT Analytics Data Collector Notes and Troubleshooting
- Data Collector Troubleshooting
- Verify the Data Collector configuration
- Verify Connectivity
- Configuring web proxy updates
- Collecting missed events for Veritas Backup Exec
- Substituting ODBC for JDBC to connect to SQL server for Veritas Backup Exec
- Useful Data Collection scripts for capacity
- Host resources troubleshooting
- Host resources: Check the status of the WMI proxy server
- Host resources: Post-Installation verification
- Host resources: Check host connectivity using standard SSH
- Host resources: Check host connectivity
- Host resources: Check host connectivity using Host Resource Configuration file
- Host resources: Generating host resource configuration files
- Host resources: Check the execution of a command on a remote server
- Host resources Data Collection
- Host resources: Collection in stand-alone mode
- Configuring parameters for SSH
- Identifying Windows file system access errors (File Analytics)
- Collect from remote shares (File Analytics)
- Adding a certificate to the Java keystore
- Firewall Configuration: Default Ports
- CRON Expressions and Probe Schedules
- Clustering Data Collectors with VCS and Veritas NetBackup (RHEL 7)
- Clustering Data Collectors with VCS and Veritas NetBackup (Windows)
- Maintenance Scenarios for Message Relay Server Certificate Generation
Getting started with Data Collector clustering
Install the Veritas NetBackup Data Collector on a shared volume attached to the active node.
On the same node, locate the startup scripts by running the command:
# find /etc -name "*aptare*"
Manually delete the following files:
/etc/rc.d/rc0.d/K30aptare_agent
/etc/rc.d/rc3.d/S80aptare_agent
/etc/rc.d/rc3.d/K40aptare_agent
/etc/rc.d/rc5.d/S80aptare_agent
/etc/rc.d/rc5.d/K40aptare_agent
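As a minimal sketch, assuming the scripts reside at the paths listed above, they can be removed in one command. Verify the output of the find command first, since runlevel script names can vary between releases:

# rm -f /etc/rc.d/rc0.d/K30aptare_agent \
        /etc/rc.d/rc3.d/S80aptare_agent \
        /etc/rc.d/rc3.d/K40aptare_agent \
        /etc/rc.d/rc5.d/S80aptare_agent \
        /etc/rc.d/rc5.d/K40aptare_agent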
Update the /etc/hosts file with the IP address and hostname mapping of the aptareagent portal on all the cluster nodes.
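For illustration, the entry might look like the following; the IP address and hostname shown are placeholders for your environment:

# Portal virtual IP and hostname (placeholder values)
192.0.2.25    aptareagent.example.com    aptareagent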
Using Veritas InfoScale Availability (VCS), create a separate service group for the agent. The screenshots that follow show the visual representation in VCS. There are multiple methods to set up the configuration; these instructions use main.cf for the setup.
See Main.cf.
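As a rough sketch only (the complete file is shown in the referenced Main.cf section), the service group typically ties the shared volume mount to an Application resource that starts, stops, and monitors the Data Collector. The group, system, disk group, and volume names below are placeholders:

group aptare_dc_sg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1 }
    )

    Mount aptare_mnt (
        MountPoint = "/aptare_vol"
        BlockDevice = "/dev/vx/dsk/aptare_dg/aptare_vol"
        FSType = vxfs
        FsckOpt = "-y"
        )

    Application aptare_agent (
        StartProgram = "/aptare_vol/mbs/bin/aptare_agent start"
        StopProgram = "/aptare_vol/mbs/bin/aptare_agent stop"
        MonitorProgram = "/opt/aptare_scripts/aptare_dc_monitor.sh"
        )

    aptare_agent requires aptare_mnt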
Create a file /opt/aptare_scripts/aptare_dc_monitor.sh on all the cluster nodes with the following contents. Ensure the root user, as owner, has execute permission on the file.

#!/bin/sh

# APTARE_HOME should be set to the base path where the aptare
# data-collector is installed
APTARE_HOME="/aptare_vol"

# Exit codes that VCS understands
E_APTARE_IS_ONLINE=110
E_APTARE_IS_OFFLINE=100
E_APTARE_IS_UNKNOWN=99

SCRIPT="${APTARE_HOME}/mbs/bin/aptare_agent"

# If the agent binary is not present on this node, report OFFLINE.
if [ ! -f ${SCRIPT} ]; then
    exit $E_APTARE_IS_OFFLINE
fi

# The agent is considered online when its WatchDog process is running.
${SCRIPT} status | grep 'WatchDog is running' >/dev/null 2>&1
ret=$?
if [ "$ret" -eq "0" ]; then
    exit $E_APTARE_IS_ONLINE
else
    # If an agent upgrade is in progress, report UNKNOWN instead of OFFLINE.
    /usr/bin/ps -ef | grep -v grep | grep -i aptare | grep UpgradeManager > /dev/null 2>&1
    upgrademanager_running=$?
    if [ "${upgrademanager_running}" -eq "0" ]; then
        exit $E_APTARE_IS_UNKNOWN
    fi
    exit $E_APTARE_IS_OFFLINE
fi
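To verify the script before VCS uses it as a monitor program, set the permissions and run it manually on the active node; the exit code should match the values defined in the script (110 = online, 100 = offline, 99 = unknown, for example while an agent upgrade is in progress):

# chmod 700 /opt/aptare_scripts/aptare_dc_monitor.sh
# /opt/aptare_scripts/aptare_dc_monitor.sh
# echo $?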