NetBackup IT Analytics System Administrator Guide
- Introduction
- Preparing for updates
- Backing up and restoring data
- Monitoring NetBackup IT Analytics
- Accessing NetBackup IT Analytics reports with the REST API
- Defining NetBackup estimated tape capacity
- Automating host group management
- Categorize host operating systems by platform and version
- Bulk load utilities
- Automate NetBackup utilities
- Scheduling utilities to run automatically
- Attribute management
- Importing generic backup data
- Backup job overrides
- Managing host data collection
- System configuration in the Portal
- Custom parameters
- Performance profile schedule customization
- LDAP and SSO authentication for Portal access
- Change Oracle database user passwords
- Integrate with CyberArk
- Tuning NetBackup IT Analytics
- Working with log files
- Portal and data collector log files - reduce logging
- Data collector log file naming conventions
- Portal log files
- Defining report metrics
- SNMP trap alerting
- SSL certificate configuration
- Configure virtual hosts for portal and / or data collection SSL
- Keystore on the portal server
- Portal properties: Format and portal customizations
- Data retention periods for SDK database objects
- Data aggregation
- Troubleshooting
- Appendix A. Kerberos based proxy user's authentication in Oracle
- Appendix B. Configure TLS-enabled Oracle database on NetBackup IT Analytics Portal and data receiver
- Appendix C. NetBackup IT Analytics for NetBackup on Kubernetes and appliances
Prerequisites
To enable data aggregation in NetBackup IT Analytics, complete the following prerequisite steps:
1. Navigate to Administration >> System Configuration. The System Configuration dialog box is displayed.
2. Click the Database Administration tab.
3. Select the Enable data aggregation check box.
The database tables must be partitioned (composite Interval-List partitioning) before millions of records can be aggregated. Existing data in the specified non-partitioned tables is transferred to the partitioned schema.
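As an illustration only, composite Interval-List partitioning in Oracle takes the general shape below. The actual partitioning scripts are supplied by Veritas after an environment assessment; the table, column, and partition names here are hypothetical.

```sql
-- Hypothetical sketch of Oracle composite Interval-List partitioning.
-- Real table, column, and partition names come from the Veritas-supplied scripts.
CREATE TABLE example_job_history (
  event_date   DATE    NOT NULL,
  domain_id    NUMBER  NOT NULL,
  bytes_moved  NUMBER
)
PARTITION BY RANGE (event_date) INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
SUBPARTITION BY LIST (domain_id)
SUBPARTITION TEMPLATE (
  SUBPARTITION sp_default VALUES (DEFAULT)
)
(
  -- One initial range partition; Oracle creates monthly partitions
  -- automatically as rows with later event_date values arrive.
  PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01')
);
```

With this layout, Oracle creates a new monthly partition automatically as data arrives, and each partition is subdivided by the list key, which allows aggregation and purging to operate on narrow slices of the table rather than on the full data set.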
A one-time script execution is required to load the metadata for the chosen tables, so that data aggregation knows which columns require which types and levels of aggregation.
Data aggregation begins with the nightly purge cycle after all of the above prerequisite steps are complete. If a table has been selected for aggregation through partitioning and metadata entry, its data is first aggregated and then purged, based on the level-wise retention parameters.
Note:
Veritas advises performing an environment assessment before enabling data aggregation. After the assessment, Veritas provides separate scripts to partition any existing tables that will benefit from data aggregation. Contact Veritas Support.