NetBackup™ Deployment Guide for Kubernetes Clusters

Product(s): NetBackup & Alta Data Protection (10.5)

Upgrading Cloud Scale deployment for Postgres using Helm charts

Before upgrading the Cloud Scale deployment for Postgres using Helm charts, ensure that:

  • Helm charts for operators and Postgres are available from a public or private registry.

  • Images for operators and Cloud Scale services are available from a public or private registry.

Note:

During the upgrade process, ensure that the cluster nodes are not scaled down to 0 or restarted.

To upgrade the Cloud Scale deployment

  1. Upgrade the add-ons as follows:
    • Run the following commands to deploy cert-manager:

      helm repo add jetstack https://charts.jetstack.io
      helm repo update jetstack
      helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager \
        --version 1.13.3 \
        --set webhook.timeoutSeconds=30 \
        --set installCRDs=true \
        --wait --create-namespace

    • Run the following commands to deploy trust-manager:

      helm repo add jetstack https://charts.jetstack.io --force-update
      kubectl create namespace trust-manager
      helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager \
        --set app.trust.namespace=netbackup --version v0.7.0 --wait

  2. Upload the new images to your private registry.

    Note:

    Skip this step when using the Veritas registry.
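
    The exact commands depend on your registry and tooling. As a rough sketch for an ECR-style registry (the registry URL below is the example value used elsewhere in this guide, and the image name and tag are illustrative):

      # Authenticate Docker to the target registry (ECR shown; use "az acr login" for ACR).
      aws ecr get-login-password --region us-east-1 | \
        docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

      # Re-tag a locally loaded NetBackup image and push it to the private registry.
      docker tag netbackup/operator:10.5-0036 \
        123456789012.dkr.ecr.us-east-1.amazonaws.com/netbackup/operator:10.5-0036
      docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/netbackup/operator:10.5-0036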

  3. Use the following command to suspend the backup job processing:

    nbpemreq -suspend_scheduling
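
    In a Cloud Scale deployment this command runs inside the primary server pod. A minimal sketch, assuming the primary pod is named nb-primary-0 and nbpemreq is at its usual admincmd location:

      # Suspend job scheduling from inside the primary server pod (pod name is an assumption).
      kubectl exec -it nb-primary-0 -n netbackup -- \
        /usr/openv/netbackup/bin/admincmd/nbpemreq -suspend_scheduling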

  4. Perform the following steps to upgrade the operators:
    • Use the following command to save the operators chart values to a file:

      # helm show values operators-<version>.tgz > operators-values.yaml

    • Use the following command to edit the chart values to match your deployment scenario:

      # vi operators-values.yaml

    • Execute the following command to upgrade the operators:

      helm upgrade --install operators operators-<version>.tgz -f operators-values.yaml -n netbackup-operator-system

      Or

      If using the OCI container registry, use the following command:

      helm upgrade --install operators oci://abcd.veritas.com:5000/helm-charts/operators --version <version> -f operators-values.yaml -n netbackup-operator-system

    Following is an example of the operators-values.yaml file:

    # Default values for operators.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
     
    global:
      # Toggle for platform-specific features & settings
      # Microsoft AKS: "aks"
      # Amazon EKS: "eks"
      platform: "eks"
      # This specifies a container registry that the cluster has access to.
      # NetBackup images should be pushed to this registry prior to applying this
      # Environment resource.
      # Example Azure Container Registry name:
      # example.azurecr.io
      # Example AWS Elastic Container Registry name:
      # 123456789012.dkr.ecr.us-east-1.amazonaws.com
      containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com/engdev"
      operatorNamespace: "netbackup-operator-system"
      # By default pods will get spun up in timezone of node, timezone of node is UTC in AKS/EKS
      # through this field one can specify the different timezone
      # example : /usr/share/zoneinfo/Asia/Kolkata
      timezone: null
     
      storage:
        eks:
          fileSystemId: fs-0411809d90c60aed6
        aks:
          # storageAccountName and storageAccountRG are required if the user wants to use an existing storage account
          storageAccountName: null
          storageAccountRG: null
     
    msdp-operator:
      image:
        name: msdp-operator
        # Provide tag value in quotes eg: '17.0'
        tag: "20.5-0027"
        pullPolicy: Always
     
      namespace:
        labels:
          control-plane: controller-manager
     
      # This determines the path used for storing core files in the case of a crash.
      corePattern: "/core/core.%e.%p.%t"
     
      # This specifies the number of replicas of the msdp-operator controllers
      # to create. Minimum number of supported replicas is 1.
      replicas: 2
     
      # Optional: provide label selectors to dictate pod scheduling on nodes.
      # By default, when given an empty {} all nodes will be equally eligible.
      # Labels should be given as key-value pairs, ex:
      #   agentpool: mypoolname
      nodeSelector:
        agentpool: nbupool
     
      # Storage specification to be used by underlying persistent volumes.
      # References entries in global.storage by default, but can be replaced
      storageClass:
        name: nb-disk-premium
        size: 5Gi
     
      # Specify how much of each resource a container needs.
      resources:
        # Requests are used to decide which node(s) should be scheduled for pods.
        # Pods may use more resources than specified with requests.
        requests:
          cpu: 150m
          memory: 150Mi
        # Optional: Limits can be implemented to control the maximum utilization by pods.
        # The runtime prevents the container from using more than the configured resource limits.
        limits: {}
     
      logging:
        # Enable verbose logging
        debug: false
        # Maximum age (in days) to retain log files, 1 <= N <= 365
        age: 28
        # Maximum number of log files to retain, 1 <= N <= 20
        num: 20
     
    nb-operator:
      image:
        name: "netbackup/operator"
        tag: "10.5-0036"
        pullPolicy: Always
     
      # nb-operator needs to know the version of msdp and flexsnap operators for webhook
      # to do version checking
      msdp-operator:
        image:
          tag: "20.5-0027"
     
      flexsnap-operator:
        image:
          tag: "10.5.0.0-1022"
     
      namespace:
        labels:
          nb-control-plane: nb-controller-manager
     
      nodeSelector:
        node_selector_key: agentpool
        node_selector_value: nbupool
     
      #loglevel:
      #  "-1" - Debug (not recommended for production)
      #  "0"  - Info
      #  "1"  - Warn
      #  "2"  - Error
     
      loglevel:
        value: "0"
     
    flexsnap-operator:
      replicas: 1
     
      namespace:
        labels: {}
     
      image:
        name: "veritas/flexsnap-deploy"
        tag: "10.5.0.0-1022"
        pullPolicy: Always
     
      nodeSelector:
        node_selector_key: agentpool
        node_selector_value: nbupool
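
    After the upgrade completes, a quick sanity check (not part of the documented procedure) is to confirm that the operator pods are running and using the new image tags:

      # List the operator pods in the operator namespace.
      kubectl get pods -n netbackup-operator-system

      # Show the image each operator pod is running.
      kubectl get pods -n netbackup-operator-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
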
  5. Perform the following steps to install/upgrade fluentbit:

    Note:

    It is recommended to compare your existing fluentbit-values.yaml file against the default values from the new chart before upgrading (see the comparison sketch after the example values file below).

    • Use the following command to save the fluentbit chart values to a file:

      helm show values fluentbit-<version>.tgz > fluentbit-values.yaml

    • Use the following command to edit the chart values:

      vi fluentbit-values.yaml

    • Execute the following command to upgrade the fluentbit deployment:

      helm upgrade --install fluentbit fluentbit-<version>.tgz -f fluentbit-values.yaml -n netbackup

      If using the OCI container registry, use the following command:

      helm upgrade --install fluentbit oci://abcd.veritas.com:5000/helm-charts/fluentbit --version <version> -f fluentbit-values.yaml -n netbackup

    Following is an example of the fluentbit-values.yaml file:

    global:
      environmentNamespace: "netbackup"
      containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com"
      timezone: null
     
    fluentbit:
      image:
        name: "netbackup/fluentbit"
        tag: 10.5-0036
        pullPolicy: IfNotPresent
     
      volume:
        pvcStorage: "100Gi"
        storageClassName: nb-disk-premium
     
      metricsPort: 2020
     
      cleanup:
        image:
          name: "netbackup/fluentbit-log-cleanup"
          tag: 10.5-0036
     
        retentionDays: 7
        retentionCleanupTime: '04:00'
     
        # Frequency in minutes
        utilizationCleanupFrequency: 60
     
        # Storage % filled
        highWatermark: 90
        lowWatermark: 60
     
    # Collector node selector value
    collectorNodeSelector:
        node_selector_key: agentpool
        node_selector_value: nbupool
     
    # Tolerations values (key=value:NoSchedule)
    tolerations:
      - key: agentpool
        value: nbupool
      - key: agentpool
        value: mediapool
      - key: agentpool
        value: primarypool
      - key: storage-pool
        value: storagepool
      - key: data-plane-pool
        value: dataplanepool
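
    One way to perform the comparison recommended in the note above, assuming you still have the fluentbit-values.yaml file from the previous release:

      # Extract the defaults shipped with the new chart and compare them with your edited values.
      helm show values fluentbit-<version>.tgz > fluentbit-values-default.yaml
      diff -u fluentbit-values-default.yaml fluentbit-values.yaml
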
  6. (Applicable only when upgrading DBaaS from 10.4 to 10.5) Upgrade the PostgreSQL DBaaS version from 14 to 16:

    Note:

    This step is not applicable when using containerized Postgres.

    • For Azure: Use kubectl exec to open a shell in the 10.4 primary pod and create the /tmp/grant_admin_option_to_roles.sql file with the GRANT statements listed below (see the sketch after this step).

      Execute the following command to run the grant_admin_option_to_roles.sql file:

      /usr/openv/db/bin/psql "host=$(< /tmp/.nb-pgdb/dbserver) port=$(< /tmp/.nb-pgdb/dbport) dbname=NBDB user=$(< /tmp/.nb-pgdb/dbadminlogin) password=$(< /tmp/.nb-pgdb/dbadminpassword) sslmode=verify-full sslrootcert='/tmp/.db-cert/dbcertpem'" -f /tmp/grant_admin_option_to_roles.sql

      /*
      Azure PostgreSQL upgrade from 14 to 16 does not grant the NetBackup database administrator role the ADMIN OPTION for NetBackup roles.
      This script will grant the NetBackup database administrator role the ADMIN OPTION so that it can manage NetBackup roles.
      */
        
      GRANT ADTR_MAIN TO current_user WITH ADMIN OPTION;
      GRANT AUTH_MAIN TO current_user WITH ADMIN OPTION;
      GRANT DARS_MAIN TO current_user WITH ADMIN OPTION;
      GRANT DBM_MAIN TO current_user WITH ADMIN OPTION;
      GRANT EMM_MAIN TO current_user WITH ADMIN OPTION;
      GRANT JOBD_MAIN TO current_user WITH ADMIN OPTION;
      GRANT PEM_MAIN TO current_user WITH ADMIN OPTION;
      GRANT RB_MAIN TO current_user WITH ADMIN OPTION;
      GRANT SLP_MAIN TO current_user WITH ADMIN OPTION;
      GRANT NBPGBOUNCER TO current_user WITH ADMIN OPTION;
      GRANT NBWEBSVC TO current_user WITH ADMIN OPTION;
      GRANT AZ_DBA TO current_user WITH ADMIN OPTION;

      Exit the 10.4 primary pod. The environment is now ready to upgrade from 10.4 with PostgreSQL 14 to 10.5 with PostgreSQL 16.

      Upgrade the Azure PostgreSQL version from 14 to 16 using the Azure portal.

    • For AWS: Upgrade the Amazon RDS for PostgreSQL version from 14 to 16 using the AWS Management Console. Navigate to the RDS page, select the database instance, and click Modify to change the engine version.

      For more information, see Upgrading the PostgreSQL DB engine for Amazon RDS.
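
    A minimal sketch of the Azure flow referenced in the Azure bullet above; the primary pod name is an assumption, and the SQL file contents are the GRANT statements listed earlier:

      # Open a shell in the 10.4 primary server pod (pod name is an assumption).
      kubectl exec -it nb-primary-0 -n netbackup -- bash

      # Inside the pod: create the SQL file and paste in the GRANT statements shown above.
      vi /tmp/grant_admin_option_to_roles.sql

      # Run the psql command shown above against the file, then exit the pod.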

  7. Perform the following steps when installing/upgrading the PostgreSQL database.

    Note:

    This step is not applicable when using DBaaS.

    It is recommended to compare your existing postgres-values.yaml file against the default values from the new chart before upgrading.

    • Use the following command to save the PostgreSQL chart values to a file:

      helm show values postgresql-<version>.tgz > postgres-values.yaml

    • Use the following command to edit the chart values:

      vi postgres-values.yaml

      For example, set logDestination: stderr if you want the fluentbit daemonset to collect the PostgreSQL logs (the default is file).

    • Execute the following command to upgrade the PostgreSQL database:

      helm upgrade --install postgresql postgresql-<version>.tgz -f postgres-values.yaml -n netbackup

      Or

      If using the OCI container registry, use the following command:

      helm upgrade --install postgresql oci://abcd.veritas.com:5000/helm-charts/netbackup-postgresql --version <version> -f postgres-values.yaml -n netbackup

    Following is an example of the postgres-values.yaml file:

    # Default values for postgresql.
    global:
      environmentNamespace: "netbackup"
      containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com"
      timezone: null
     
    postgresql:
      replicas: 1
      # The values in the image (name, tag) are placeholders. These will be set
      # when the deploy_nb_cloudscale.sh runs.
      image:
        name: "netbackup/postgresql"
        tag: "16.3-0036"
        pullPolicy: Always
      service:
        serviceName: nb-postgresql
      volume:
        volumeClaimName: nb-psql-pvc
        volumeDefaultMode: 0640
        pvcStorage: 30Gi
        # configMapName: nbpsqlconf
        storageClassName: nb-disk-premium
        mountPathData: /netbackup/postgresqldb
        secretMountPath: /netbackup/postgresql/keys/server
        # mountConf: /netbackup
      securityContext:
        runAsUser: 0
      createCerts: true
      # pgbouncerIniPath: /netbackup/pgbouncer.ini
      nodeSelector:
        key: agentpool
        value: nbupool
     
      # Resource requests (minima) and limits (maxima). Requests are used to fit
      # the database pod onto a node that has sufficient room. Limits are used to
      # throttle (for CPU) or terminate (for memory) containers that exceed the
      # limit. For details, refer to Kubernetes documentation:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes
      # Other types of resources are documented, but only `memory` and `cpu` are
      # recognized by NetBackup.
      #
      # resources:
      #   requests:
      #     memory: 2Gi
      #     cpu: 500m
      #   limits:
      #     memory: 3Gi
      #     cpu: 3
     
      # Example tolerations. Check taints on the desired nodes and update keys and
      # values.
      #
      tolerations:
      - key: agentpool
        value: nbupool
      - key: agentpool
        value: mediapool
      - key: agentpool
        value: primarypool
      - key: storage-pool
        value: storagepool
      - key: data-plane-pool
        value: dataplanepool
      serverSecretName: postgresql-server-crt
      clientSecretName: postgresql-client-crt
      dbSecretName: dbsecret
      dbPort: 13785
      pgbouncerPort: 13787
      dbAdminName: postgres
      initialDbAdminPassword: postgres
      dataDir: /netbackup/postgresqldb
      # postgresqlConfFilePath: /netbackup/postgresql.conf
      # pgHbaConfFilePath: /netbackup/pg_hba.conf
      defaultPostgresqlHostName: nb-postgresql
     
      # file   => log postgresdb in file the default
      # stderr => log postgresdb in stderr so that fluentbit daemonset collect the logs.
      logDestination: file
     
    postgresqlUpgrade:
      replicas: 1
      image:
        name: "netbackup/postgresql-upgrade"
        tag: "16.3-0036"
        pullPolicy: Always
      volume:
        volumeClaimName: nb-psql-pvc
        mountPathData: /netbackup/postgresqldb
        timezone: null
      securityContext:
        runAsUser: 0
      env:
        dataDir: /netbackup/postgresqldb
    To save cost, you can set storageClassName to nb-disk-standardssd for non-production environments.

    If the primary node pool has taints applied and they are not added to the postgres-values.yaml file above, manually add tolerations to the PostgreSQL StatefulSet as follows:

    • To verify that node pools use taints, run the following command:

      kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect

      NodeName                         TaintKey   TaintValue   TaintEffect
      ip-10-248-231-149.ec2.internal   <none>     <none>       <none>
      ip-10-248-231-245.ec2.internal   <none>     <none>       <none>
      ip-10-248-91-105.ec2.internal    nbupool    agentpool    NoSchedule
    • To view StatefulSets, run the following command:

      kubectl get statefulsets -n netbackup

      NAME            READY   AGE
      nb-postgresql   1/1     76m
      nb-primary      0/1     51m
    • Edit the PostgreSQL StatefulSets and add tolerations as follows:

      kubectl edit statefulset nb-postgresql -n netbackup

    Following is an example of the modified PostgreSQL StatefulSets:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      annotations:
        meta.helm.sh/release-name: postgresql
        meta.helm.sh/release-namespace: netbackup
      creationTimestamp: "2024-03-25T15:11:59Z"
      generation: 1
      labels:
        app: nb-postgresql
        app.kubernetes.io/managed-by: Helm
      name: nb-postgresql
    ...
    spec:
      template:
        spec:
          containers:
          ...
     
     
          nodeSelector:
            nbupool: agentpool
          tolerations:
          - effect: NoSchedule
            key: nbupool
            operator: Equal
            value: agentpool
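
    After the Helm upgrade and any manual toleration edits, you can confirm that the database pod rolls out cleanly (a convenience check, not part of the documented procedure):

      # Wait for the PostgreSQL StatefulSet to finish rolling out the upgraded pod.
      kubectl rollout status statefulset/nb-postgresql -n netbackup
      kubectl get statefulset nb-postgresql -n netbackup
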
  8. (For DBaaS only) Perform the following to create a Secret containing the DBaaS CA certificates:

    Note:

    This step is not applicable when using containerized Postgres.

    • For AWS:

      TLS_FILE_NAME='/tmp/tls.crt'
      PROXY_FILE_NAME='/tmp/proxy.pem'
       
      rm -f ${TLS_FILE_NAME} ${PROXY_FILE_NAME}
       
      DB_CERT_URL="https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem"
      DB_PROXY_CERT_URL="https://www.amazontrust.com/repository/AmazonRootCA1.pem"
       
      curl ${DB_CERT_URL} --output ${TLS_FILE_NAME}
      curl ${DB_PROXY_CERT_URL} --output ${PROXY_FILE_NAME}
       
      cat ${PROXY_FILE_NAME} >> ${TLS_FILE_NAME}
       
      kubectl -n netbackup create secret generic postgresql-netbackup-ca --from-file ${TLS_FILE_NAME}
    • For Azure:

      DIGICERT_ROOT_CA='/tmp/root_ca.pem'
      DIGICERT_ROOT_G2='/tmp/root_g2.pem'
      MS_ROOT_CRT='/tmp/ms_root.crt'
      COMBINED_CRT_PEM='/tmp/tls.crt'
        
      DIGICERT_ROOT_CA_URL="https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem";
      DIGICERT_ROOT_G2_URL="https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem";
      MS_ROOT_CRT_URL="http://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt";
        
      curl ${DIGICERT_ROOT_CA_URL} --output ${DIGICERT_ROOT_CA}
      curl ${DIGICERT_ROOT_G2_URL} --output ${DIGICERT_ROOT_G2}
      curl ${MS_ROOT_CRT_URL} --output ${MS_ROOT_CRT}
        
      openssl x509 -inform DER -in ${MS_ROOT_CRT} -out ${COMBINED_CRT_PEM} -outform PEM
      cat ${DIGICERT_ROOT_CA} ${DIGICERT_ROOT_G2} >> ${COMBINED_CRT_PEM}
       
      kubectl -n netbackup create secret generic postgresql-netbackup-ca --from-file ${COMBINED_CRT_PEM}
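
    To confirm that the secret was created with the expected key (the bundle in the next step reads the key tls.crt from this secret):

      # The Data section should list tls.crt with a non-zero size.
      kubectl -n netbackup describe secret postgresql-netbackup-ca
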
  9. Create the db-cert bundle if it does not exist, as follows:
    cat <<EOF | kubectl apply -f -
    apiVersion: trust.cert-manager.io/v1alpha1
    kind: Bundle
    metadata:
      name: db-cert
    spec:
      sources:
      - secret:
          name: "postgresql-netbackup-ca"
          key: "tls.crt"
      target:
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: "$ENVIRONMENT_NAMESPACE"
        configMap:
          key: "dbcertpem"
    EOF

    After installing the db-cert bundle, ensure that the db-cert configMap is present in the netbackup namespace with data size 1, as follows:

    bash-5.1$ kubectl get configmap db-cert -n $ENVIRONMENT_NAMESPACE
    NAME      DATA   AGE
    db-cert   1      11h

    Note:

    If the configMap shows a size of 0, delete it and ensure that trust-manager recreates it before proceeding further (see the sketch below).
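
    A sketch of that remediation (trust-manager repopulates the configMap from the db-cert Bundle):

      # Delete the empty configMap; trust-manager recreates it from the Bundle.
      kubectl delete configmap db-cert -n $ENVIRONMENT_NAMESPACE

      # Watch until the configMap reappears with DATA showing 1.
      kubectl get configmap db-cert -n $ENVIRONMENT_NAMESPACE -w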

  10. Perform the following steps to upgrade the Cloud Scale deployment:
    • Use the following command to obtain the environment name:

      $ kubectl get environments -n netbackup

    • Navigate to the directory containing the patch file and upgrade the Cloud Scale deployment as follows:

      $ cd scripts/

      $ kubectl patch environment <env-name> --type json -n netbackup --patch-file cloudscale_patch.json

    Modify the patch file if your current environment CR specifies spec.primary.tag or spec.mediaServers[0].tag. The patch file listed below assumes the default deployment scenario, where only spec.tag and spec.msdpScaleouts[0].tag are listed.
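
    To check which tags your current environment CR specifies before choosing a patch file (a convenience check, not part of the documented procedure):

      # Empty output for the primary/media paths means only the global spec.tag is set,
      # so the default patch file applies.
      kubectl get environment <env-name> -n netbackup -o jsonpath='{.spec.tag}{"\n"}'
      kubectl get environment <env-name> -n netbackup -o jsonpath='{.spec.primary.tag}{"\n"}'
      kubectl get environment <env-name> -n netbackup -o jsonpath='{.spec.mediaServers[0].tag}{"\n"}'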

    Note the following:

    • When upgrading from embedded Postgres to containerized Postgres, add dbSecretName to the patch file.

    • If the images for the new release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.

    • In case of a Cloud Scale upgrade, if the capacity of the primary server log volume is greater than the default value, modify the primary server log volume capacity (spec.primary.storage.log.capacity) to the default value, that is, 30Gi. After upgrading to version 10.5, the log volumes of the decoupled services use the default size, while the primary pod log volume continues to use the previous size.

    Examples of .json files:

    • For containerized_cloudscale_patch.json upgrade from 10.4:

      [
        {
          "op" : "replace" ,
          "path" : "/spec/tag" ,
          "value" : "10.5-0036"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/msdpScaleouts/0/tag" ,
          "value" : "20.5-0027"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/cpServer/0/tag" ,
          "value" : "10.5.0.0-1022"
        }
      ]
    • For containerized_cloudscale_patch.json with primary and media server tags instead of the global tag:

      [
        {
          "op": "replace",
          "path": "/spec/dbSecretName",
          "value": "dbsecret"
        }, 
        {
          "op" : "replace" ,
          "path" : "/spec/primary/tag" ,
          "value" : "10.5"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/mediaServers/0/tag" ,
          "value" : "10.5"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/msdpScaleouts/0/tag" ,
          "value" : "20.4"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/cpServer/0/tag" ,
          "value" : "10.5.x.xxxxx"
        }
      ]
    • For DBAAS_cloudscale_patch.json:

      Note:

      This patch file is to be used only during DBaaS to DBaaS migration.

      [
        {
          "op" : "replace" ,
          "path" : "/spec/dbSecretProviderClass" ,
          "value" : "dbsecret-spc"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/tag" ,
          "value" : "10.5"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/msdpScaleouts/0/tag" ,
          "value" : "20.4"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/cpServer/0/tag" ,
          "value" : "10.5.x.xxxxx"
        }
      ]
    • For containerized_cloudscale_patch.json with a new container registry:

      Note:

      If the images for the latest release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.

      [
        {
          "op" : "replace" ,
          "path" : "/spec/dbSecretName" ,
          "value" : "dbsecret"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/tag" ,
          "value" : "10.5"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/msdpScaleouts/0/tag" ,
          "value" : "20.4"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/cpServer/0/tag" ,
          "value" : "10.5.x.xxxxx"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/containerRegistry" ,
          "value" : "newacr.azurecr.io"
        },
        {
          "op" : "replace" ,
          "path" : "/spec/cpServer/0/containerRegistry" ,
          "value" : "newacr.azurecr.io"
        } 
      ]
  11. Wait until the Environment CR displays the status as Ready. During this time, pods are expected to restart and any new services are expected to start.
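
    One way to watch for this, assuming the Environment CRD reports its status in the printed columns:

      # Watch the environment resource until its status reports ready.
      kubectl get environment <env-name> -n netbackup -w
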
  12. Resume the backup job processing by using the following command:

    # nbpemreq -resume_scheduling