NetBackup™ Web UI Apache Cassandra Administrator's Guide

Last Published:
Product(s): NetBackup & Alta Data Protection (10.2)

Add DSS Clusters

During a backup or restore, Cassandra keyspaces are streamed in parallel between the Cassandra cluster and the DSS (data staging server) cluster. Follow this procedure to add a DSS cluster.

  1. On the left pane, click Apache Cassandra.
  2. Select the DSS Cluster tab.
  3. Click Add to add a DSS cluster.

    Note:

    A prerequisites window appears with the requirements for adding a cluster and a downloadable template.

  4. Click Start.
  5. On the Basic Properties tab, enter the following:
    • DSS Cluster name

      The DSS cluster name must not exceed 256 characters.

    • CBR node IP address

      The IP address must be in IPv4 format only.

    • CBR node key

    Note:

    To obtain this node key, run the following command on the CBR node: cat /etc/ssh/ssh_host_rsa_key.pub | awk '{print $2}' | base64 -d | sha256sum | awk '{print $1}'. The node key must contain 64 characters.
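    The pipeline from the note above can be sketched end to end. SAMPLE_PUB is a hypothetical stand-in for the second field of /etc/ssh/ssh_host_rsa_key.pub; on a real CBR node, read the key material from that file instead.

    ```shell
    # Hypothetical stand-in for the base64 key field of /etc/ssh/ssh_host_rsa_key.pub
    SAMPLE_PUB="AAAAB3NzaC1yc2EAAAADAQABAAABAQ=="

    # Decode the key material and hash it; the first sha256sum field is the node key.
    NODE_KEY=$(printf '%s' "$SAMPLE_PUB" | base64 -d | sha256sum | awk '{print $1}')

    # A valid node key is exactly 64 hexadecimal characters.
    echo "${#NODE_KEY}"
    ```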

  6. Click Next.
  7. On the Cluster nodes tab, do one of the following:

    File Upload:

    • To upload the file, select File Upload.

      Note:

      A window appears with a downloadable template. You can fill in the node details in the downloaded template. The supported extensions are .csv, .xls, and .xlsx.

    • Click Browse.

    • Select the file.

      Locate the file that has all the required node details.

    • Click Upload.

      All the nodes listed in the template are now added.

    Add manually:

    • To add manually, select Add manually.

    • Enter the IP address.

      The IP address must be in IPv4 format only.

    • Click Add.

    • To add more IP addresses, click Add again.
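    Whichever option you use, every node entry must be an IPv4 address. A minimal pre-upload check along these lines can catch malformed entries before the upload; the file name and the one-address-per-line layout are assumptions, and the downloaded template remains authoritative.

    ```shell
    # Hypothetical filled-in node template, saved as CSV (one address per line).
    cat > nodes.csv <<'EOF'
    10.80.40.11
    10.80.40.12
    10.80.40.13
    EOF

    # Flag any entry that is not shaped like an IPv4 address.
    IPV4='^([0-9]{1,3}\.){3}[0-9]{1,3}$'
    while IFS= read -r ip; do
        if printf '%s\n' "$ip" | grep -Eq "$IPV4"; then
            echo "ok: $ip"
        else
            echo "not IPv4: $ip" >&2
        fi
    done < nodes.csv
    ```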

  8. Click Next.
  9. On the Credentials tab, do one of the following:

    Select existing credentials:

    • Search for the desired credentials and select them from the list.

    Add new credentials:

    • Select Add new credential and enter the following details.

      • Credential name

      • Tag

      • Description

      • Host username

      • Host password

      • Database username

      • Database password

      Note:

      The credential name must not exceed 256 characters. Tag and Description are optional.

    • Click Next.

    • Click Add.

    • On the Credential Permission tab, select a role to provide permissions for the credential.

    • Select the permissions from the following options.

      The available permissions vary based on the selected role.

      • View

      • Create

      • Update

      • Delete

      • Manage Access

      • Assign Credentials

    • Click Save.

  10. Click Next.
  11. On the Backup hosts tab, under Primary backup host, search for and select the host.

    Note:

    Any RHEL media server or RHEL client can be used as the backup host.

  12. To add additional backup hosts, click Add, and select one or more hosts.

    Note:

    You can also use NetBackup client as a backup host.

  13. Click Next.
  14. On the Setting tab, specify the following:
    • DSS distribution

      The thin-client distribution directory on the data staging servers. The path must be in UNIX format.

    • Script home

      The path used for CBR package installation on the Apache Cassandra nodes.

    • Working directory

      The folder where the thin client stages and processes the data.

    Note:

    Ensure that the credentials specified for the DSS cluster and the Cassandra cluster have read and write access to all the configured paths.
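    A quick way to confirm this, run as the user named in the cluster credentials, is to test each path for existence, read, and write access. This sketch is demonstrated against a temporary directory; substitute the real DSS distribution, script home, and working directory paths.

    ```shell
    # Report whether a directory exists and is readable and writable
    # by the current user.
    check_path() {
        if [ -d "$1" ] && [ -r "$1" ] && [ -w "$1" ]; then
            echo "ok: $1"
        else
            echo "no access: $1" >&2
            return 1
        fi
    }

    # Demonstration against a temporary directory; replace with the
    # configured DSS paths on a real node.
    WORKDIR=$(mktemp -d)
    check_path "$WORKDIR"
    ```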

  15. On the Advanced setting page, review the following and change them as necessary:
    • Job cleanup time out

      Set the time-out to the typical time it takes to back up the cluster.

    • DSS minimum RAM

      The minimum RAM required for data optimization on the data staging server.

    • DSS minimum storage per backup node

      The minimum storage required per backup node for data optimization on the data staging server.

    • Concurrent compaction

      The maximum number of compactions that can run concurrently.

    • Loader memory size

      The heap memory size for the Cassandra table loader.

    • Concurrent transfer

      The number of parallel data transfers from the production cluster to the data staging server. The default value is 8.

  16. Click Next.
  17. Review the data and click Add.