NetBackup IT Analytics Certified Configuration Guide

Last Published:
Product(s): NetBackup IT Analytics (11.3)
  1. Introduction
    1. NetBackup IT Analytics Overview
    2. Purpose of this document
    3. Software and hardware disclaimer
  2. Portal and database servers
    1. Portal supported operating systems
    2. Recommended portal configurations
    3. Oracle Database and Memory Requirements
    4. Supported browsers and display resolution
      1. Linux portal server: Exported and emailed reports
    5. Supported third-party and open source products
  3. Data Collector server configurations
    1. Data Collector supported operating systems
    2. Data Collector server memory and CPU guidelines
      1. Customize the Linux file handle setting for large collections
      2. Factors impacting Data Collector performance and memory requirements
    3. Data Collector prerequisites
    4. Firewall configuration: Default ports
  4. Capacity Manager configurations
    1. Supported systems and access requirements
    2. IBM Arrays: Modify profile
    3. Creating a NetApp user with API privileges
    4. Creating a NetApp cluster-mode user with API privileges
    5. Array/LUN performance Data Collection
      1. Port performance metrics
    6. EMC Isilon array performance metrics
      1. EMC Isilon Array Performance
      2. EMC Isilon Disk Performance
      3. EMC Isilon Node Performance
      4. EMC Isilon OneFS Performance
      5. EMC Isilon Protocol Performance
    7. NetApp Cluster-Mode performance metrics
      1. NetApp Cluster-Mode Aggregate Performance
      2. NetApp Cluster-Mode CIFS Performance
      3. NetApp Cluster-Mode Disk Performance
      4. NetApp Cluster-Mode Fiber Channel Protocol Logical Interface Performance
      5. NetApp Cluster-Mode LUN Performance
      6. NetApp Cluster-Mode NFS Performance
      7. NetApp Cluster-Mode Processor Node Performance
      8. NetApp Cluster-Mode Processor Performance
      9. NetApp Cluster-Mode RAID Performance
      10. NetApp Cluster-Mode SMB (Server Message Block) Performance
      11. NetApp Cluster-Mode System Performance
      12. NetApp Cluster-Mode Target Port Performance
      13. NetApp Cluster-Mode Volume Performance
    8. EMC Symmetrix enhanced performance metrics
      1. Create enhanced EMC Symmetrix Performance report templates
      2. EMC Symmetrix Array Performance
      3. EMC Symmetrix Backend Director Performance
      4. EMC Symmetrix Frontend Director Performance
      5. EMC Symmetrix Front-end Port Performance
      6. EMC Symmetrix Storage Group Performance
      7. EMC Symmetrix Database Performance
      8. EMC Symmetrix Disk Group Performance
      9. EMC Symmetrix Disk Performance
      10. EMC Symmetrix Device Groups Performance
      11. EMC Symmetrix Disk by Technology Performance
      12. EMC Symmetrix Storage Tier Performance
      13. EMC Symmetrix Thin Tier Performance
      14. EMC Symmetrix Thin Pool Performance
      15. EMC Symmetrix Enhanced Performance metrics
    9. Hitachi Vantara array performance metrics
    10. Host resources prerequisites and configurations
    11. Host access privileges, sudo commands, ports, and WMI proxy requirements
      1. Access requirements by OS
    12. WMI proxy requirements for Windows host Data Collection
    13. Host resources supported configurations
    14. Pure Storage Flash Array performance metrics
    15. Supported host bus adapters (HBAs)
    16. Compute Resources supported configurations
  5. Cloud configurations
    1. Supported systems and access requirements
  6. Virtualization Manager configurations
    1. Supported versions
    2. Virtualization Manager Data Collector requirements for VMware
      1. Creating a VMware Read-Only user
    3. Virtualization Manager Data Collector requirements for Microsoft Hyper-V
  7. File Analytics configurations
    1. Data Collector probes by storage type
      1. CIFS shares
      2. Host inventory probe
      3. File Analytics probe
  8. Fabric Manager configurations
    1. Switch vendors
      1. Download Cisco Data Center Network Manager
  9. Backup Manager configurations
    1. Backup solutions and versions
    2. Centralized NetBackup Data Collection requirements
    3. Veritas NetBackup 8.1 (and later) requirements for centralized collection
      1. Required Software
  10. ServiceNow configurations
    1. ServiceNow configurations
  11. Internal TCP port requirements
    1. Internal TCP port requirements
    2. Internal portal server ports
    3. Internal data collector ports

Customize the Linux file handle setting for large collections

In Linux, a portion of memory is designated for file handles, the mechanism that determines how many files can be open at one time. The default limit is 1024. For large data collection policy environments, this number may need to be increased to 8192. A large environment is one in which a single Data Collector collects from 20 or more subsystems, such as 20+ TSM instances or 20+ unique arrays.
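
Before raising the limit, you can gauge how close a running process is to the default of 1024 by counting its open file descriptors. The commands below are a minimal sketch; the pattern passed to pgrep is illustrative only and should be replaced with one that matches the Data Collector process on your system.

    # Sketch: count open file descriptors for a running process.
    # 'datacollector' is an illustrative pattern, not the actual process name.
    pid=$(pgrep -f 'datacollector' | head -n 1)
    echo "Open file descriptors for PID $pid: $(ls /proc/$pid/fd | wc -l)"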

To change the number of file handles, take the following steps.

  1. On the Linux Data Collector server, edit:

    /etc/security/limits.conf

    At the end of the file, add the following lines:

    root soft nofile 8192
    root hard nofile 8192
    
  2. Log out and log back in as root, then run the following commands to confirm that all values are now set to 8192.

    ulimit -n     # current limit on open files
    ulimit -Hn    # hard limit
    ulimit -Sn    # soft limit
    
  3. Restart the Data Collector.
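
The ulimit commands in step 2 report the limits of your login shell. As an optional additional check after the restart, you can confirm the limit that actually applies to the running Data Collector process by reading its /proc limits entry. This is a sketch only; as above, the pgrep pattern is illustrative and should be adjusted to match your collector process.

    # Sketch: confirm the file handle limit applied to the running collector.
    # 'datacollector' is an illustrative pattern, not the actual process name.
    pid=$(pgrep -f 'datacollector' | head -n 1)
    grep 'Max open files' /proc/$pid/limits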