Veritas Information Map Installation and Administration Guide

Product(s): Information Map (1.0)
  1. Introduction to Veritas Information Map
    1. Overview of Information Map
    2. Deployment workflow
    3. Information Map architecture
    4. Creating new user accounts
  2. Planning installation of the Information Map Agent
    1. System requirements
    2. Supported browsers
    3. Connectivity requirements
    4. Security requirements
    5. Generating a KeyStore file
    6. Configuring access to a NetBackup master server
  3. Installing and configuring Information Map
    1. Logging in to Veritas Information Map
    2. Downloading the Agent installer
    3. Configuring locations in Information Map
    4. Installing the Information Map Agent
      1. Configuring proxy settings
    5. Registering the Information Map Agent with Information Fabric
    6. Configuring credentials for share discovery and native scanning
      1. Credentials required to configure share discovery
      2. Configuring a non-administrator domain user on NetApp 7-mode filer
      3. Configuring a non-administrator account on an EMC Isilon file server
    7. Updating the Information Map Agent
  4. Cloud Agent configuration
    1. About configuring the Amazon S3 Agent
    2. Configuring metadata collection in Amazon Web Services (AWS)
    3. Configuring Information Map to access Amazon S3 account
  5. Global settings
    1. About configuring global settings
    2. Configuring stale data definition
    3. Configuring non-business data definition
    4. Configuring storage tiers
    5. Assigning storage tiers to storage
    6. Customizing item types
  6. Managing Information Map settings
    1. Configuring Information Map users
    2. Managing Agents
    3. Managing tasks
    4. Managing content sources
  7. Troubleshooting
    1. Veritas Information Map logging
    2. Information Map Agent jobs
    3. Information Map Agent issues
    4. Information Map and data accuracy
    5. Known limitations of Information Map Agent

Information Map and data accuracy

This section describes specific scenarios in which the data shown in Information Map may differ from the actual file system or from the backups taken by NetBackup.

Differences between physical and logical size

The data represented in Information Map is always based on logical sizes, that is, sizes before NetBackup compression, deduplication, or the actual block size on storage is taken into account. As a result, the reported size differs from the size on disk of the data.

NetBackup stores an item's logical size rather than its physical (on-disk) size on the actual storage device. The physical size is normally larger than the logical size. However, the logical size can exceed the physical size if the data on disk is compressed.
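
The difference is easy to observe on a content source itself. The following is a minimal Python sketch, assuming a Linux file server and an illustrative file path, that compares a file's logical size (what NetBackup records) with the space actually allocated on disk:

  import os

  def logical_and_physical_size(path):
      # Logical size is the byte length that NetBackup records (st_size).
      # Physical size is the space allocated on disk; POSIX reports
      # st_blocks in 512-byte units, regardless of the file system block size.
      st = os.stat(path)
      return st.st_size, st.st_blocks * 512

  # Illustrative path only; substitute a file on the content source.
  logical, physical = logical_and_physical_size("/var/log/syslog")
  print("logical:", logical, "bytes  physical:", physical, "bytes")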

To validate the data size, follow the instructions at:

https://www.veritas.com/support/en_US/article.000115757

Processing of deleted items

Because NetBackup does not track deleted items between incremental backups, the Information Map Agent can discover and send deleted file information only by comparing two full backups. Until the next full backup completes, Information Map cannot account for items that were deleted after the last full backup.

For example, suppose the first full backup has a total size of 200 GB and is followed by three incremental backups of 50 GB of new files each. In this case, the total size displayed in the Information Map application is 350 GB. This size may differ from what is actually stored on the file system, because information about deleted files is not captured by the incremental backups and those files are still present on the Map.
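
The arithmetic can be restated as a short Python sketch; the 30 GB of deletions below is an assumed figure, used only to show why the two totals drift apart until the next full backup:

  # Sizes in GB; the deletion figure is an assumption for illustration.
  full_backup = 200
  incrementals = [50, 50, 50]      # new files only; deletions are not tracked
  deleted_since_full = 30          # not visible to the Map until the next full backup

  reported_on_map = full_backup + sum(incrementals)             # 350 GB
  actual_on_file_system = reported_on_map - deleted_since_full  # 320 GB

  print(reported_on_map, actual_on_file_system)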

Difference between the current status on the file system and the time of processing the backup

Information Map displays the time when the last full backup was completed and processed by Information Map. The time is not the current time on the file system.

Incremental backups are processed as they become available. Thus, Information Map reflects the updated and new items (other than the deleted items) as each incremental backup is processed by the Information Fabric technology platform.

No data is displayed if snapshot-based backups are configured

If the NetBackup clients are configured for snapshot-based backups, the Information Map does not show any metadata for the files or folders.

In the case of snapshot-based backups, the Agent returns only a single directory (typically a volume name), and the size for such entities is always zero. However, if the snapshot-based backups are further configured to be replicated to another system, then Information Map can publish the metadata for files and folders.

No data is displayed if granular recovery is not enabled for VMware policies

When a virtual machine is backed up with the granular recovery option disabled, the Agent cannot get any metadata for its files and folders. In such cases, you see a No entity found error for the affected virtual image, and no data is reported on the Map.

To work around this issue, you must enable granular recovery for VMware policies.

Certain files ignored by NetBackup during backup

NetBackup does not back up system files, such as pagefile.sys, or files that are open or locked.

For Linux clients that are part of a Standard NetBackup policy, the sys and proc containers are not displayed on the Workspace of the Information Map application. The Agent creates containers for all folders under / (root). These containers, including those for sys and proc, are displayed on the Content Containers page of the Administration portal; however, they are empty containers. This leads to a mismatch between the number of containers displayed on the Administration portal and on the Workspace tab.

Due to an inherent limitation of the Linux operating system, the data under the sys and proc containers is not provided to the Information Map Agent, and hence is not displayed on the Workspace tab.

There may be a discrepancy between the data size reported on Information Map and that on the actual file server. The discrepancy is most evident for system drives, but can also occur with collaborative shares.
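
When comparing the two figures, a rough check such as the following Python sketch can help. It totals logical file sizes on a Linux content source while skipping /proc and /sys, which mirrors what the backup-based metadata can contain; the root path is only an example:

  import os

  SKIP = ("/proc", "/sys")  # pseudo file systems that never reach the Map

  def logical_size_bytes(root="/"):
      total = 0
      for dirpath, dirnames, filenames in os.walk(root):
          # Prune the pseudo file systems before descending into them.
          dirnames[:] = [d for d in dirnames
                         if not os.path.join(dirpath, d).startswith(SKIP)]
          for name in filenames:
              try:
                  total += os.lstat(os.path.join(dirpath, name)).st_size
              except OSError:
                  pass  # open, locked, or vanished files are skipped
      return total

  print(round(logical_size_bytes("/") / 1024 ** 3, 2), "GiB of logical data")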

Unsupported operating system platforms

NetBackup versions before 7.7 do not support granular restores for virtual Red Hat Enterprise Linux 7 or SUSE Linux Enterprise Server 11 clients. Due to this restriction, Information Map cannot show any data for these operating system platforms.

You must upgrade to NetBackup 7.7 and enable granular recovery for VMware policies.

Information Map Agent ignores the DFSR data backed up by NetBackup

The Agent does not scan Microsoft Distributed File System Replication (DFSR) data that is backed up by NetBackup through a Windows policy. By design, the top-level DFSR shared folders are part of the Shadow Copy Components or Drive ID directive.

The Agent discovers the DFSR data from the NetBackup backup selection, but because the path for the data carries the Shadow Copy Components directive, the Agent does not treat the data as real data and ignores it.

For information about the DFSR and NetBackup integration, see:

https://www.veritas.com/support/en_US/article.HOWTO65638

Data missing for content sources with multistreamed backup configuration

When multistreamed backup is configured and nested folders are mounted on different drives or partitions in a NetBackup policy configuration, every stream creates a backup image for each folder in the backup selection. The Agent processes these backup images sequentially. After the first backup image of a folder is processed, the Agent continues with the images of the subsequent folders. While processing the second image in the sequence, the Agent incorrectly assumes that the other folders on the mounted drive are deleted, which leads to data inaccuracy issues on the Information Map.

You can also confirm the data inaccuracy by reviewing the values in the path_counts.<version number>.db file:

  1. Go to the \data\nbu\scanner\path_counts.<version number>.db file in the <install>/bin folder.
  2. Select the scan_proc_counts table. The values in the size and size_deleted columns are the same; ideally, they should be different. (A query sketch for this check follows the list.)
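
The check in step 2 can also be scripted. The following Python sketch assumes that path_counts.<version number>.db is a standard SQLite database and that the scan_proc_counts table exposes the size and size_deleted columns described above; the database path is a placeholder and must be adjusted to your installation and version number:

  import sqlite3

  # Placeholder location; substitute the actual <install> path and version number.
  db_path = r"<install>\bin\data\nbu\scanner\path_counts.<version number>.db"

  with sqlite3.connect(db_path) as conn:
      suspect_rows = conn.execute(
          "SELECT * FROM scan_proc_counts WHERE size = size_deleted"
      ).fetchall()

  # Any rows returned here indicate folders that the Agent wrongly treated
  # as deleted while processing the multistreamed backup images.
  for row in suspect_rows:
      print(row)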

Workaround: To overcome this issue, configure the Agent to consolidate the backup images from multistreamed policies into a single batch when processing them.

Note:

It is recommended to complete the in-progress scan jobs or mark them as obsolete before you begin the consolidation procedure.

To consolidate the backup images, perform the following steps:

  1. Log on to the Agent server with Administrator credentials.
  2. Go to \<InstallDir>\Veritas\InformationMapAgent\data\conf.
  3. Stop the Information Map services.
  4. Open the data/conf/config.db file from the <install>/bin folder.
  5. Add the following property and value in the attributes table (see the sketch after this procedure):
    • name = node.nbu.collector.multiStream.NbuPem

    • value = 1

  6. Save the config.db file with the recent changes.
  7. Start the Information Map services.
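
For reference, steps 4 through 6 amount to inserting one row into the attributes table. The following Python sketch assumes that config.db is a standard SQLite database whose attributes table has name and value columns; substitute the actual <InstallDir> from step 2, and keep the Information Map services stopped while you run it:

  import sqlite3

  # Placeholder path; substitute the actual <InstallDir> from step 2.
  config_db = r"<InstallDir>\Veritas\InformationMapAgent\data\conf\config.db"

  with sqlite3.connect(config_db) as conn:
      conn.execute(
          "INSERT INTO attributes (name, value) VALUES (?, ?)",
          ("node.nbu.collector.multiStream.NbuPem", "1"),
      )
      conn.commit()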

Limitations:

The following limitations are applicable when consolidating the backup images:

  • If the NetBackup multistreamed policy or the storage environment is reconfigured, then the data stream count may get altered. Thus, the newly consolidated batch file may not include the backup images that match the new count, which leads to data inaccuracy issues.

  • When NetBackup purges its internal data, it may not give the Agent the data related to the maximum number of streams that are generated. This results in incorrect consolidation of data, which leads to data inaccuracy.

  • On upgrading the Agent, the scan_jobs table may contain old stream values that are queued for processing. When the backup images are consolidated, these streams are marked as pending, and the Agent may skip processing them. It is recommended to delete the table after you upgrade the Agent (see the sketch after this list).
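
If you follow the recommendation in the last item, the cleanup could look like the following Python sketch. It assumes that the scan_jobs table resides in an Agent SQLite database (the path below is a placeholder; verify where the table actually lives in your deployment) and interprets the recommendation as clearing the table's rows rather than dropping it. Back up the database and stop the Information Map services before running it:

  import sqlite3

  # Placeholder path; confirm which database actually holds scan_jobs.
  agent_db = r"<InstallDir>\Veritas\InformationMapAgent\data\conf\config.db"

  with sqlite3.connect(agent_db) as conn:
      # Remove the old stream entries left over from before the upgrade.
      conn.execute("DELETE FROM scan_jobs")
      conn.commit()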