Veritas NetBackup™ Flex Scale Release Notes

Last Published:
Product(s): Appliances (3.2)
Platform: NetBackup Flex Scale OS
  1. Getting help
    1. About this document
    2. NetBackup Flex Scale resources
  2. Features, enhancements, and changes
    1. What's new in this release
      1. Simplifying cluster deployment with an integrated precheck
      2. Configuring multifactor authentication
      3. Configuring single sign-on (SSO)
      4. Bonding management interfaces during initial configuration
      5. Bonding operations on management network
      6. Connecting the eth7 network interface is optional during initial configuration
      7. Support for VMware and Tape out backups over Fibre Channel
      8. Support for Data Domain storage
      9. Collecting time-based logs
      10. Forwarding logs to an external server
      11. Changes to the licensing model
      12. Support for parallel installation of EEBs
      13. Support for only parallel upgrades
      14. Including vendor packages on appliance nodes
      15. Configuring the console FQDN
      16. Configuring MTU on public interfaces
    2. Support for NetBackup Client
  3. Limitations
    1. Software limitations
    2. Unsupported features of NetBackup in NetBackup Flex Scale
  4. Known issues
    1. Cluster configuration issues
      1. Cluster configuration fails if there is a conflict between the cluster private network and any other network
      2. Cluster configuration process may hang due to an SSH connection failure
      3. Node discovery fails during initial configuration if the default password is changed
      4. When NetBackup Flex Scale is configured, the size of NetBackup logs might exceed the /log partition size
      5. Error message is not displayed when an NTP server is added as an FQDN during initial configuration in a non-DNS environment
      6. During cluster configuration, the sysadmin user is not detected on some of the nodes
    2. Disaster recovery issues
      1. Backup data present on the primary site before the Storage Lifecycle Policy (SLP) was applied is not replicated to the secondary site
      2. If the replication link is down on a node, the replication IP does not fail over to another node
      3. Disaster recovery configuration may take around 2.5 hours to complete when the data-collect task runs in the backend
      4. After a disaster recovery takeover operation, the old recovery points or checkpoints for the primary server catalog file system are not visible in the GUI on the new primary site
      5. Disaster recovery configuration hangs when the eth5/bond0 interface is down on the node where the management console and CVM services are online on one or both sites
    3. Miscellaneous issues
      1. Red Hat Virtualization (RHV) VM discovery and backup and restore jobs fail if the Media server node that is selected as the discovery host, backup host, or recovery host is replaced
      2. The file systems offline operation gets stuck for more than 2 hours after a reboot all operation
      3. cvmvoldg agent causes resource faults because the database is not updated
      4. SQLite, MySQL, MariaDB, and PostgreSQL database backups fail in a pure IPv6 network configuration
      5. Exchange GRT browse of Exchange-aware VMware policy backups may fail with a database error
      6. Call Home test fails if a proxy server is configured without specifying a user
      7. In a non-DNS NetBackup Flex Scale setup, performing a backup from a snapshot operation fails for a NAS-Data-Protection policy
      8. In a non-DNS environment, the CRL check does not work if the CDP URL is not accessible
      9. Unable to add multiple host entries against the same IP address and vice versa in a non-DNS IPv4 environment
      10. Incorrect information is displayed for the support health check command in an IPv6 environment
      11. Change in host time zone is not reflected within containers
      12. Failed to sync certificate from NetBackup primary server
      13. Password change for the sysadmin user is partially successful
      14. Frequent alerts related to sysadmin users are generated and resolved automatically
      15. Universal Share backup does not happen when NetBackup Flex Scale is configured with IPv6
      16. Unable to generate kernel crash
    4. NetBackup issues
      1. The NetBackup web GUI does not list media or storage hosts on the Security > Host mappings page
      2. Media hosts do not appear in the search icon for Recovery host/target host during Nutanix AHV agentless files and folders restore
      3. On the NetBackup media server, the ECA health check shows the warning, 'hostname missing'
      4. If NetBackup Flex Scale is configured, the storage paths are not displayed under MSDP storage
      5. Failure may be observed on the STU if the Only use the following media servers option is selected for Media server under Storage > Storage unit
      6. NetBackup primary server services fail if an NFS share is mounted at the /mnt mount path inside the primary server container
      7. NetBackup primary container goes into an unhealthy state
      8. User login fails from the NetBackup GUI with an authentication failed error
      9. MSDP engine and media server fail to come up
      10. Oracle Snapshot backup for the Oracle workload fails with an error
      11. MSDP storage server may go down in a multi-domain scenario if the same user is used by two or more NetBackup domains
      12. Backup for Nutanix fails with status code 6
      13. Operation to start NetBackup services fails on a NetBackup Flex Scale cluster with a media-only deployment
      14. Sometimes the NetBackup media services in a container cannot start
      15. Not all of the added media servers are reflected in the Data Domain OST STU
    5. Networking issues
      1. Cluster configuration workflow may get stuck
      2. Node panics when eth4 and eth6 network interfaces are disconnected
      3. Add node fails during precheck when a secondary data network is configured over the management interface and the Automatic tab is used to provide input IPs for the new node for the secondary data network over the management interface
      4. Static route does not get added if any node of the cluster is powered off or not up
      5. Add secondary data network operation fails on the management interface of the secondary site of a cluster when the management network on the secondary site is not the same as the management network on the primary site and disaster recovery is configured using a single virtual IP
      6. Data network details are not visible on the NetBackup Flex Scale UI after a console IP change
      7. Adding a secondary data network at the secondary site using automatic mode fails
      8. Adding a secondary data network to the secondary site fails if ECA is configured on the cluster
      9. If an IP address is assigned to a VLAN over a bond on a node using the set network bond command, SSH using the admin user does not work
      10. Create/remove bond operations are possible on both data and management networks from the GUI when a secondary data/management network is present
      11. Add secondary data network operation does not fail on the secondary site if the same NetBackup primary IP/FQDN is used to add the secondary data network on both the primary and secondary sites
    6. Node and disk management issues
      1. Storage-related logs are not written to the designated log files
      2. Arrival or recovery of the volume does not bring the file system back into the online state, making the file system unusable
      3. Unable to replace a stopped node
      4. Disk replacement might fail in certain situations
      5. Replacing an NVMe disk fails with a data movement from source disk to destination disk error
      6. Unable to detect a faulted disk that is brought online after some time
      7. Nodes may go into an irrecoverable state if shutdown and reboot operations are performed using IPMI-based commands
      8. Replace node may fail if the new node is not reachable
      9. Unable to collect logs from the node if the node where the management console is running is stopped
      10. Log rotation does not work for files and directories in /log/VRTSnas
      11. Unable to start or stop a cluster node
      12. Backup jobs of a workload that uses an SSL certificate fail during or after an add node operation
      13. During an add node operation, the error shown on the Infrastructure page is not identical to the error seen when you view the task details
      14. After a replace node operation is performed on a deployment in which ECA is enabled, the universal share is not mounted on the new node's engine
      15. Health of the node does not change to unhealthy when disks are physically replaced
      16. Incorrect error message shown when a node to be added restarts or panics
      17. Add node operation shows NetBackup configuration failure when a newly added node restarts during rebalancing of data
      18. Unhealthy disks are seen on the Infrastructure page after you delete a node from the cluster
      19. Proxy and etcd services do not come online when node shutdown fails
      20. After the management console node reboots, rollback of any running operations does not happen automatically
      21. No value is displayed in the Used for column for faulted or excluded data disks
      22. ECA deployment fails for a newly added node on a cluster on which both primary and media servers are configured
      23. Sysadmin user password is not synced to a newly added node
      24. Add node operation fails while rebalancing data
      25. After an upgrade from a prior release to 3.2, add node fails if MSDP has at least one Cloud LSU configured
      26. Replace node fails with the error "Failed to check connectivity with the gateway"
    7. Security and authentication issues
      1. The NetBackup certificates tab and the External certificates tab on the Certificate management page of the NetBackup UI show different host lists
      2. Replicated images do not have a retention lock after the lockdown mode is changed from normal to any other mode
      3. User account gets locked on a management or non-management console node
      4. The changed password is not synchronized across the cluster
      5. Certificate renewal alert is not generated automatically during deployment
      6. During an IPMI restriction enable/disable operation, some of the node operations may fail
      7. After switching to FIPS security mode, all the users are deleted, including the sysadmin user, and the Administrator password is reset to the default pull tag password
    8. Upgrade issues
      1. EEB installation may fail if some of the NetBackup services are busy
      2. During an upgrade, the NetBackup Flex Scale UI shows incorrect status for some of the components
      3. Unable to upgrade from version 3.0.0.1 to 3.2 when an AD/LDAP server contains a maintenance user account and you attempt to change the maintenance user password
      4. Server busy error is displayed during an upgrade rollback
      5. After an upgrade, MSDP cloud operations fail on an IPv6 setup if the cluster has MSDP cloud configured with a data network in a non-DNS environment
      6. Pre-upgrade check fails intermittently on the primary cluster
      7. Incorrect status is shown for the appliance firmware components after a firmware upgrade
      8. Failed to replace a node after upgrading from 3.0 to 3.2
      9. During and after an upgrade to 3.2 on an ECA-configured cluster, you may see a '502 Bad Gateway' error while accessing the NetBackup Flex Scale UI using the management gateway FQDN or IP
      10. Upgrade progress remains stuck at 51% after all the nodes are rebooted
    9. UI issues
      1. During the replace node operation, the UI wrongly shows that the replace operation failed because the data rebuild operation failed
      2. Changes in the local user operations are not reflected correctly in the NetBackup GUI when the failover of the management console and the NetBackup primary occurs at the same time
      3. Mozilla Firefox browser may display a security issue while accessing the infrastructure UI
      4. Recent operations that were completed successfully are not reflected in the UI if the NetBackup Flex Scale management console fails over to another cluster node
      5. Previously generated log packages are not displayed if the infrastructure management console fails over to another node
      6. Smart card authentication fails for a cluster that includes both primary and media servers with IPv6 configuration
      7. Multiple tasks appear to be running in parallel during an add node operation
      8. Incorrect search results are displayed when you search for EEBs on the Software management > Add-ons tab
      9. Upgrade progress is not updated on the View details page of the GUI
      10. Only three IPMI IP addresses are shown in the GUI after configuration for a four-node iLO-FIPS enabled cluster
  5. Fixed issues
    1. Fixed issues in version 3.2

If the replication link is down on a node, the replication IP does not fail over to another node

When you perform disaster recovery, if the replication link is down on the node where the replication IP resides, the replication IP should fail over to another node because it is part of a failover group. However, this failover does not occur; replication is paused and goes into an error state. (IA-37024)
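
To confirm the symptom from the node-level CLI, you can check the replication status with the standard Veritas Volume Replicator (VVR) administration commands, if they are available on your cluster. This is an illustrative sketch only; the disk group and RVG names (datadg, datarvg) are placeholders for the names in your configuration.

  # List the replicated volume groups (RVGs) to find the disk group and RVG names
  # (datadg and datarvg below are placeholder names)
  vradmin printrvg

  # Show the replication status; a down link is reported as paused or in error
  vradmin -g datadg repstatus datarvg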

Workaround:

In the GUI, go to Settings > Services Management and select Run auto fix. The replication IP becomes available.

Or

Run the shutdown -r command from the node-level CLI on the CVM master node. Restarting the node allows the CVM master role and the replication group, GRP_VVR_REP_VIP, to fail over to another node.
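
For example, a minimal command sequence for this workaround, assuming sysadmin access to the node-level CLI and that the standard Veritas InfoScale utilities (vxdctl, hagrp) are available on the node. Only shutdown -r comes from the workaround itself; the other commands are shown for illustration.

  # Confirm that this node is the CVM master (the output reports MASTER or SLAVE)
  vxdctl -c mode

  # Restart the node; the CVM master role and the replication group fail over
  shutdown -r

  # From a surviving node, verify the state of the replication group
  hagrp -state GRP_VVR_REP_VIP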