NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster

Last Published:
Product(s): NetBackup (10.1)
  1. Introduction to NetBackup on EKS
    1. About NetBackup deployment on Amazon Elastic Kubernetes (EKS) cluster
    2. Required terminology
    3. User roles and permissions
    4. About MSDP Scaleout
    5. About MSDP Scaleout components
    6. Limitations in MSDP Scaleout
  2. Deployment with environment operators
    1. About deployment with the environment operator
      1. Prerequisites
      2. Contents of the TAR file
      3. Known limitations
    2. Deploying the operators manually
    3. Deploying NetBackup and MSDP Scaleout manually
    4. Configuring the environment.yaml file
    5. Uninstalling NetBackup environment and the operators
    6. Applying security patches
  3. Assessing cluster configuration before deployment
    1. How does the webhook validation work
    2. Webhooks validation execution details
    3. How does the Config-Checker utility work
    4. Config-Checker execution and status details
  4. Deploying NetBackup
    1. Preparing the environment for NetBackup installation on EKS
    2. Recommendations of NetBackup deployment on EKS
    3. Limitations of NetBackup deployment on EKS
    4. About primary server CR and media server CR
      1. After installing primary server CR
      2. After installing the media server CR
    5. Monitoring the status of the CRs
    6. Updating the CRs
    7. Deleting the CRs
    8. Configuring NetBackup IT Analytics for NetBackup deployment
    9. Managing NetBackup deployment using VxUpdate
    10. Migrating the node group for primary or media servers
  5. Upgrading NetBackup
    1. Preparing for NetBackup upgrade
    2. Upgrading NetBackup operator
    3. Upgrading NetBackup application
    4. Upgrade NetBackup during data migration
    5. Procedure to roll back when upgrade fails
  6. Deploying MSDP Scaleout
    1. Deploying MSDP Scaleout
    2. Prerequisites
    3. Installing the docker images and binaries
    4. Initializing the MSDP operator
    5. Configuring MSDP Scaleout
    6. Using MSDP Scaleout as a single storage pool in NetBackup
    7. Configuring the MSDP cloud in MSDP Scaleout
  7. Upgrading MSDP Scaleout
    1. Upgrading MSDP Scaleout
  8. Monitoring NetBackup
    1. Monitoring the application health
    2. Telemetry reporting
    3. About NetBackup operator logs
    4. Expanding storage volumes
    5. Allocating static PV for Primary and Media pods
  9. Monitoring MSDP Scaleout
    1. About MSDP Scaleout status and events
    2. Monitoring with Amazon CloudWatch
    3. The Kubernetes resources for MSDP Scaleout and MSDP operator
  10. Managing the Load Balancer service
    1. About the Load Balancer service
    2. Notes for Load Balancer service
    3. Opening the ports from the Load Balancer service
  11. Performing catalog backup and recovery
    1. Backing up a catalog
    2. Restoring a catalog
  12. Managing MSDP Scaleout
    1. Adding MSDP engines
    2. Adding data volumes
    3. Expanding existing data or catalog volumes
      1. Manual storage expansion
    4. MSDP Scaleout scaling recommendations
    5. MSDP Cloud backup and disaster recovery
      1. About the reserved storage space
      2. Cloud LSU disaster recovery
    6. MSDP multi-domain support
    7. Configuring Auto Image Replication
    8. About MSDP Scaleout logging and troubleshooting
      1. Collecting the logs and the inspection information
  13. About MSDP Scaleout maintenance
    1. Pausing the MSDP Scaleout operator for maintenance
    2. Logging in to the pods
    3. Reinstalling MSDP Scaleout operator
    4. Migrating the MSDP Scaleout to another node group
  14. Uninstalling MSDP Scaleout from EKS
    1. Cleaning up MSDP Scaleout
    2. Cleaning up the MSDP Scaleout operator
  15. Troubleshooting
    1. View the list of operator resources
    2. View the list of product resources
    3. View operator logs
    4. View primary logs
    5. Pod restart failure due to liveness probe time-out
    6. Socket connection failure
    7. Resolving an invalid license key issue
    8. Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
    9. Resolving the issue where the NetBackup server pod is not scheduled for a long time
    10. Resolving an issue where the Storage class does not exist
    11. Resolving an issue where the primary server or media server deployment does not proceed
    12. Resolving an issue of failed probes
    13. Resolving token issues
    14. Resolving an issue related to insufficient storage
    15. Resolving an issue related to invalid nodepool
    16. Resolving a token expiry issue
    17. Resolve an issue related to KMS database
    18. Resolve an issue related to pulling an image from the container registry
    19. Resolving an issue related to recovery of data
    20. Check primary server status
    21. Pod status field shows as pending
    22. Ensure that the container is running the patched image
    23. Getting EEB information from an image, a running container, or persistent data
    24. Resolving the certificate error issue in NetBackup operator pod logs
    25. Resolving the primary server connection issue
    26. Primary pod is in pending state for a long duration
    27. Host mapping conflict in NetBackup
    28. NetBackup messaging queue broker takes more time to start
    29. Local connection is getting treated as insecure connection
    30. Issue with capacity licensing reporting which takes longer time
    31. Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
  16. Appendix A. CR template
    1. Secret
    2. MSDP Scaleout CR

Config-Checker execution and status details

Note the following points.

  • Config-Checker runs as a separate job in the Kubernetes cluster for the primary server CR and the media server CR. Each job creates a pod in the cluster; the Config-Checker pod is created in the operator namespace.

    Note:

    The Config-Checker pod is deleted after 4 hours.

  • The execution summary of the Config-Checker can be retrieved from the Config-Checker pod logs using the kubectl logs <configchecker-pod-name> -n <operator-namespace> command.

    This summary can also be retrieved from the operator pod logs using the kubectl logs <operator-pod-name> -n <operator-namespace> command.
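
    For example, the following is a minimal sketch for locating the Config-Checker pod and retrieving the summary. The namespace placeholder and the "configchecker" name pattern are assumptions; adjust them to match your deployment.

      # Substitute your operator namespace.
      NS=<operator-namespace>

      # List the jobs and pods that the Config-Checker created.
      kubectl get jobs,pods -n "$NS"

      # Pick the Config-Checker pod by name (the "configchecker" pattern is an
      # assumption; adjust it to match the pod name shown above).
      POD=$(kubectl get pods -n "$NS" --no-headers | awk '/configchecker/ {print $1; exit}')

      # Retrieve the execution summary from the pod logs.
      kubectl logs "$POD" -n "$NS"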

  • Following are the Config-Checker modes that can be specified in the primary server and media server CRs (see the example after this list):

    • Default: This mode executes the Config-Checker. If the execution is successful, deployment of the primary server and media server CRs starts.

    • Dryrun: This mode only executes the Config-Checker to verify the configuration requirements but does not start the CR deployment.

    • Skip: This mode skips the Config-Checker execution and directly starts the deployment of the respective CR.
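
    As an illustrative sketch, you can check which mode a deployed CR is using. The .spec.configCheckMode field path and the mediaservers resource name used below are assumptions; confirm them against the CR template for your release or the output of kubectl get <resource> <CR name> -o yaml.

      # The field path below is an assumption; verify it before relying on it.
      kubectl get primaryservers <CR name> -n <namespace> -o jsonpath='{.spec.configCheckMode}'
      kubectl get mediaservers <CR name> -n <namespace> -o jsonpath='{.spec.configCheckMode}'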

  • Status of the Config-Checker can be retrieved from the primary server and media server CRs by using the kubectl describe <PrimaryServer/MediaServer> <CR name> -n <namespace> command.

    For example, kubectl describe primaryservers environment-sample -n test

  • Following are the Config-Checker statuses:

    • Success: Indicates that all the mandatory config checks have successfully passed.

    • Failed: Indicates that some of the config checks have failed.

    • Running: Indicates that the Config-Checker execution is in progress.

    • Skip: Indicates that the Config-Checker was not executed because the configcheckmode specified in the CR is set to skip.
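
    For example, to see which of these statuses the CR currently reports without reading the entire describe output, you can filter for the Config-Checker related lines (the exact label text may vary by release):

      # Show only the Config-Checker related lines from the CR description.
      kubectl describe primaryservers environment-sample -n test | grep -i "config"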

  • If the Config-Checker execution status is Failed, you can check the Config-Checker job logs using kubectl logs <configchecker-pod-name> -n <operator-namespace>. Review the error codes and error messages pertaining to the failure and update the respective CR with the correct configuration details to resolve the errors.

    For more information about the error codes, refer to NetBackup™ Status Codes Reference Guide.
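
    As a quick first pass before consulting the reference guide, you can filter the job logs for error lines, for example:

      # A simple heuristic: show only the lines that mention errors or failures.
      kubectl logs <configchecker-pod-name> -n <operator-namespace> | grep -iE "error|fail"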

  • If the Config-Checker ran in dryrun mode and you want to run it again with the same values in the primary server or media server YAML as provided earlier, you must delete the respective primary server or media server CR and then apply it again.

    • If it is the primary server CR, delete the primary server CR using the kubectl delete -f <environment.yaml> command.

      Or

      If it is the media server CR, edit the Environment CR by removing the mediaServer section from the environment.yaml file. Before removing the mediaServer section, save its content and note its location. After removing the section, apply the Environment CR using the kubectl apply -f <environment.yaml> command.

    • Apply the CR again: add back the data that was removed earlier at the correct location, save the file, and apply the YAML using the kubectl apply -f <environment.yaml> command.
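
    The following sketch summarizes both paths described above. The backup file name is only an illustration, and the edits to the environment.yaml file are manual steps.

      # Primary server CR: delete the environment and apply it again.
      kubectl delete -f environment.yaml
      kubectl apply -f environment.yaml

      # Media server CR: keep a copy, remove the mediaServer section, apply the
      # file, then restore the section at its original location and apply again.
      cp environment.yaml environment.yaml.bak    # illustrative backup copy
      # ... edit environment.yaml and remove the mediaServer section ...
      kubectl apply -f environment.yaml
      # ... restore the mediaServer section from environment.yaml.bak ...
      kubectl apply -f environment.yaml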