NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- Section IV. Maintenance
- MSDP Scaleout Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Upgrading Cloud Scale deployment for Postgres using Helm charts
Before upgrading Cloud Scale deployment for Postgres using Helm charts, ensure that:
Helm charts for operators and Postgres are available from a public or private registry.
Images for operators and Cloud Scale services are available from a public or private registry.
Note:
During the upgrade process, ensure that the cluster nodes are not scaled down to 0 or restarted.
To upgrade Cloud Scale deployment
- Upgrade the add-ons as follows:
Run the following commands to deploy cert-manager:
helm repo add jetstack https://charts.jetstack.io
helm repo update jetstack
helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager \
  --version 1.13.3 \
  --set webhook.timeoutSeconds=30 \
  --set installCRDs=true \
  --wait --create-namespace
Run the following commands to deploy the trust manager:
helm repo add jetstack https://charts.jetstack.io --force-update
kubectl create namespace trust-manager
helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager \
  --set app.trust.namespace=netbackup \
  --version v0.7.0 --wait
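After upgrading both add-ons, you can optionally confirm that the rollouts completed before moving on. The following is a minimal sketch, not part of the documented procedure; it assumes the release names and namespaces used in the commands above (cert-manager in cert-manager, trust-manager in trust-manager):

```shell
# Wait for an add-on deployment to finish rolling out.
# Namespace and deployment names assume the helm commands above.
wait_rollout() {
    ns=$1
    dep=$2
    kubectl rollout status deployment "$dep" -n "$ns" --timeout=120s
}

# Usage:
# wait_rollout cert-manager cert-manager
# wait_rollout trust-manager trust-manager
```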
- Upload the new images to your private registry.
Note:
Skip this step when using Veritas registry.
- Use the following command to suspend the backup job processing:
nbpemreq -suspend_scheduling
- Perform the following steps to upgrade the operators:
Use the following command to save the operators chart values to a file:
# helm show values operators-<version>.tgz > operators-values.yaml
Use the following command to edit the chart values to match your deployment scenario:
# vi operators-values.yaml
Execute the following command to upgrade the operators:
helm upgrade --install operators operators-<version>.tgz -f operators-values.yaml -n netbackup-operator-system
Or
If using the OCI container registry, use the following command:
helm upgrade --install operators oci://abcd.veritas.com:5000/helm-charts/operators --version <version> -f operators-values.yaml -n netbackup-operator-system
Following is an example of the operators-values.yaml file:
# Default values for operators.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

global:
  # Toggle for platform-specific features & settings
  # Microsoft AKS: "aks"
  # Amazon EKS: "eks"
  platform: "eks"
  # This specifies a container registry that the cluster has access to.
  # NetBackup images should be pushed to this registry prior to applying this
  # Environment resource.
  # Example Azure Container Registry name:
  #   example.azurecr.io
  # Example AWS Elastic Container Registry name:
  #   123456789012.dkr.ecr.us-east-1.amazonaws.com
  containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com/engdev"
  operatorNamespace: "netbackup-operator-system"
  # By default pods get spun up in the timezone of the node, which is UTC in AKS/EKS.
  # Through this field one can specify a different timezone,
  # example: /usr/share/zoneinfo/Asia/Kolkata
  timezone: null
  storage:
    eks:
      fileSystemId: fs-0411809d90c60aed6
    aks:
      # storageAccountName and storageAccountRG are required if the user wants
      # to use an existing storage account
      storageAccountName: null
      storageAccountRG: null

msdp-operator:
  image:
    name: msdp-operator
    # Provide tag value in quotes eg: '17.0'
    tag: "20.5-0027"
    pullPolicy: Always
  namespace:
    labels:
      control-plane: controller-manager
  # This determines the path used for storing core files in the case of a crash.
  corePattern: "/core/core.%e.%p.%t"
  # This specifies the number of replicas of the msdp-operator controllers
  # to create. Minimum number of supported replicas is 1.
  replicas: 2
  # Optional: provide label selectors to dictate pod scheduling on nodes.
  # By default, when given an empty {} all nodes will be equally eligible.
  # Labels should be given as key-value pairs, ex:
  #   agentpool: mypoolname
  nodeSelector:
    agentpool: nbupool
  # Storage specification to be used by underlying persistent volumes.
  # References entries in global.storage by default, but can be replaced
  storageClass:
    name: nb-disk-premium
    size: 5Gi
  # Specify how much of each resource a container needs.
  resources:
    # Requests are used to decide which node(s) should be scheduled for pods.
    # Pods may use more resources than specified with requests.
    requests:
      cpu: 150m
      memory: 150Mi
    # Optional: Limits can be implemented to control the maximum utilization by pods.
    # The runtime prevents the container from using more than the configured resource limits.
    limits: {}
  logging:
    # Enable verbose logging
    debug: false
    # Maximum age (in days) to retain log files, 1 <= N <= 365
    age: 28
    # Maximum number of log files to retain, 1 <= N <= 20
    num: 20

nb-operator:
  image:
    name: "netbackup/operator"
    tag: "10.5-0036"
    pullPolicy: Always
  # nb-operator needs to know the version of msdp and flexsnap operators for webhook
  # to do version checking
  msdp-operator:
    image:
      tag: "20.5-0027"
  flexsnap-operator:
    image:
      tag: "10.5.0.0-1022"
  namespace:
    labels:
      nb-control-plane: nb-controller-manager
  nodeSelector:
    node_selector_key: agentpool
    node_selector_value: nbupool
  # loglevel:
  #   "-1" - Debug (not recommended for production)
  #   "0"  - Info
  #   "1"  - Warn
  #   "2"  - Error
  loglevel:
    value: "0"

flexsnap-operator:
  replicas: 1
  namespace:
    labels: {}
  image:
    name: "veritas/flexsnap-deploy"
    tag: "10.5.0.0-1022"
    pullPolicy: Always
  nodeSelector:
    node_selector_key: agentpool
    node_selector_value: nbupool
- Perform the following steps to install/upgrade fluentbit:
Note:
It is recommended to copy and check the differences between the sample and the default fluentbit-values.yaml file.
Use the following command to save the fluentbit chart values to a file:
helm show values fluentbit-<version>.tgz > fluentbit-values.yaml
Use the following command to edit the chart values:
vi fluentbit-values.yaml
Execute the following command to upgrade the fluentbit deployment:
helm upgrade --install fluentbit fluentbit-<version>.tgz -f fluentbit-values.yaml -n netbackup
Or
If using the OCI container registry, use the following command:
helm upgrade --install fluentbit oci://abcd.veritas.com:5000/helm-charts/fluentbit --version <version> -f fluentbit-values.yaml -n netbackup
Following is an example of the fluentbit-values.yaml file:
global:
  environmentNamespace: "netbackup"
  containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com"
  timezone: null

fluentbit:
  image:
    name: "netbackup/fluentbit"
    tag: 10.5-0036
    pullPolicy: IfNotPresent
  volume:
    pvcStorage: "100Gi"
    storageClassName: nb-disk-premium
  metricsPort: 2020
  cleanup:
    image:
      name: "netbackup/fluentbit-log-cleanup"
      tag: 10.5-0036
    retentionDays: 7
    retentionCleanupTime: '04:00'
    # Frequency in minutes
    utilizationCleanupFrequency: 60
    # Storage % filled
    highWatermark: 90
    lowWatermark: 60
  # Collector node selector value
  collectorNodeSelector:
    node_selector_key: agentpool
    node_selector_value: nbupool
  # Tolerations values (key=value:NoSchedule)
  tolerations:
    - key: agentpool
      value: nbupool
    - key: agentpool
      value: mediapool
    - key: agentpool
      value: primarypool
    - key: storage-pool
      value: storagepool
    - key: data-plane-pool
      value: dataplanepool
- (Applicable only for upgrade of DBaaS 10.4 to 10.5) Upgrade PostgreSQL DBaaS version from 14 to 16:
Note:
This step is not applicable when using containerized Postgres.
For Azure: Use kubectl exec to open a shell in the 10.4 primary pod and create the /tmp/grant_admin_option_to_roles.sql file.
Execute the following command to run the grant_admin_option_to_roles.sql file:
/usr/openv/db/bin/psql "host=$(< /tmp/.nb-pgdb/dbserver) port=$(< /tmp/.nb-pgdb/dbport) dbname=NBDB user=$(< /tmp/.nb-pgdb/dbadminlogin) password=$(< /tmp/.nb-pgdb/dbadminpassword) sslmode=verify-full sslrootcert='/tmp/.db-cert/dbcertpem'" -f /tmp/grant_admin_option_to_roles.sql
/* Azure PostgreSQL upgrade from 14 to 16 does not grant the NetBackup
   database administrator role the ADMIN OPTION for NetBackup roles.
   This script grants the NetBackup database administrator role the
   ADMIN OPTION so that it can manage NetBackup roles. */
GRANT ADTR_MAIN TO current_user WITH ADMIN OPTION;
GRANT AUTH_MAIN TO current_user WITH ADMIN OPTION;
GRANT DARS_MAIN TO current_user WITH ADMIN OPTION;
GRANT DBM_MAIN TO current_user WITH ADMIN OPTION;
GRANT EMM_MAIN TO current_user WITH ADMIN OPTION;
GRANT JOBD_MAIN TO current_user WITH ADMIN OPTION;
GRANT PEM_MAIN TO current_user WITH ADMIN OPTION;
GRANT RB_MAIN TO current_user WITH ADMIN OPTION;
GRANT SLP_MAIN TO current_user WITH ADMIN OPTION;
GRANT NBPGBOUNCER TO current_user WITH ADMIN OPTION;
GRANT NBWEBSVC TO current_user WITH ADMIN OPTION;
GRANT AZ_DBA TO current_user WITH ADMIN OPTION;
Exit the 10.4 primary pod. The deployment is now ready for the upgrade from 10.4 with PostgreSQL 14 to 10.5 with PostgreSQL 16.
Upgrade Azure PostgreSQL version from 14 to 16 using Azure portal.
For AWS: Upgrade the AWS PostgreSQL RDS version from 14 to 16 using the AWS Management Console. Navigate to the RDS page, select the database instance, and click Modify to change the engine version.
For more information, see Upgrading the PostgreSQL DB engine for Amazon RDS.
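The same engine-version change can also be requested from the AWS CLI instead of the console. The following is a hedged sketch, not the documented procedure; the instance identifier and target minor version are placeholders you must replace with your own values:

```shell
# Placeholders: replace with your RDS instance identifier and the
# PostgreSQL 16 minor version you are upgrading to.
DB_INSTANCE_ID="nb-cloudscale-db"
TARGET_ENGINE_VERSION="16.3"

# Request the major-version upgrade from PostgreSQL 14 to 16.
upgrade_rds_engine() {
    aws rds modify-db-instance \
        --db-instance-identifier "$DB_INSTANCE_ID" \
        --engine-version "$TARGET_ENGINE_VERSION" \
        --allow-major-version-upgrade \
        --apply-immediately
}

# Usage:
# upgrade_rds_engine
```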
- Perform the following steps when installing/upgrading the PostgreSQL database.
Note:
This step is not applicable when using DBaaS.
It is recommended to copy and check the differences between the sample and the default postgres-values.yaml file.
Use the following command to save the PostgreSQL chart values to a file:
helm show values postgresql-<version>.tgz > postgres-values.yaml
Use the following command to edit the chart values (for example, set logDestination: stderr if you want the fluentbit daemonset to collect the database logs):
vi postgres-values.yaml
Execute the following command to upgrade the PostgreSQL database:
helm upgrade --install postgresql postgresql-<version>.tgz -f postgres-values.yaml -n netbackup
Or
If using the OCI container registry, use the following command:
helm upgrade --install postgresql oci://abcd.veritas.com:5000/helm-charts/netbackup-postgresql --version <version> -f postgres-values.yaml -n netbackup
Following is an example of the postgres-values.yaml file:
# Default values for postgresql.
global:
  environmentNamespace: "netbackup"
  containerRegistry: "364956537575.dkr.ecr.us-east-1.amazonaws.com"
  timezone: null

postgresql:
  replicas: 1
  # The values in the image (name, tag) are placeholders. These will be set
  # when the deploy_nb_cloudscale.sh runs.
  image:
    name: "netbackup/postgresql"
    tag: "16.3-0036"
    pullPolicy: Always
  service:
    serviceName: nb-postgresql
  volume:
    volumeClaimName: nb-psql-pvc
    volumeDefaultMode: 0640
    pvcStorage: 30Gi
    # configMapName: nbpsqlconf
    storageClassName: nb-disk-premium
    mountPathData: /netbackup/postgresqldb
  secretMountPath: /netbackup/postgresql/keys/server
  # mountConf: /netbackup
  securityContext:
    runAsUser: 0
  createCerts: true
  # pgbouncerIniPath: /netbackup/pgbouncer.ini
  nodeSelector:
    key: agentpool
    value: nbupool
  # Resource requests (minima) and limits (maxima). Requests are used to fit
  # the database pod onto a node that has sufficient room. Limits are used to
  # throttle (for CPU) or terminate (for memory) containers that exceed the
  # limit. For details, refer to Kubernetes documentation:
  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes
  # Other types of resources are documented, but only `memory` and `cpu` are
  # recognized by NetBackup.
  #
  # resources:
  #   requests:
  #     memory: 2Gi
  #     cpu: 500m
  #   limits:
  #     memory: 3Gi
  #     cpu: 3
  # Example tolerations. Check taints on the desired nodes and update keys and
  # values.
  # tolerations:
  #   - key: agentpool
  #     value: nbupool
  #   - key: agentpool
  #     value: mediapool
  #   - key: agentpool
  #     value: primarypool
  #   - key: storage-pool
  #     value: storagepool
  #   - key: data-plane-pool
  #     value: dataplanepool
  serverSecretName: postgresql-server-crt
  clientSecretName: postgresql-client-crt
  dbSecretName: dbsecret
  dbPort: 13785
  pgbouncerPort: 13787
  dbAdminName: postgres
  initialDbAdminPassword: postgres
  dataDir: /netbackup/postgresqldb
  # postgresqlConfFilePath: /netbackup/postgresql.conf
  # pgHbaConfFilePath: /netbackup/pg_hba.conf
  defaultPostgresqlHostName: nb-postgresql
  # file   => log postgresdb to a file (the default)
  # stderr => log postgresdb to stderr so that the fluentbit daemonset collects the logs
  logDestination: file

postgresqlUpgrade:
  replicas: 1
  image:
    name: "netbackup/postgresql-upgrade"
    tag: "16.3-0036"
    pullPolicy: Always
  volume:
    volumeClaimName: nb-psql-pvc
    mountPathData: /netbackup/postgresqldb
  timezone: null
  securityContext:
    runAsUser: 0
  env:
    dataDir: /netbackup/postgresqldb
To save cost, you can set storageClassName to nb-disk-standardssd for non-production environments.
If the primary node pool has taints applied and they are not added to the postgres-values.yaml file above, then manually add tolerations to the PostgreSQL StatefulSet as follows:
To verify that node pools use taints, run the following command:
kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect
NodeName                         TaintKey   TaintValue   TaintEffect
ip-10-248-231-149.ec2.internal   <none>     <none>       <none>
ip-10-248-231-245.ec2.internal   <none>     <none>       <none>
ip-10-248-91-105.ec2.internal    nbupool    agentpool    NoSchedule
To view StatefulSets, run the following command:
kubectl get statefulsets -n netbackup
NAME            READY   AGE
nb-postgresql   1/1     76m
nb-primary      0/1     51m
Edit the PostgreSQL StatefulSets and add tolerations as follows:
kubectl edit statefulset nb-postgresql -n netbackup
Following is an example of the modified PostgreSQL StatefulSets:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    meta.helm.sh/release-name: postgresql
    meta.helm.sh/release-namespace: netbackup
  creationTimestamp: "2024-03-25T15:11:59Z"
  generation: 1
  labels:
    app: nb-postgresql
    app.kubernetes.io/managed-by: Helm
  name: nb-postgresql
  ...
spec:
  template:
    spec:
      containers:
      ...
      nodeSelector:
        nbupool: agentpool
      tolerations:
      - effect: NoSchedule
        key: nbupool
        operator: Equal
        value: agentpool
- (For DBaaS only) Perform the following to create Secret containing DBaaS CA certificates:
Note:
This step is not applicable when using containerized Postgres.
For AWS:
TLS_FILE_NAME='/tmp/tls.crt'
PROXY_FILE_NAME='/tmp/proxy.pem'
rm -f ${TLS_FILE_NAME} ${PROXY_FILE_NAME}
DB_CERT_URL="https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem"
DB_PROXY_CERT_URL="https://www.amazontrust.com/repository/AmazonRootCA1.pem"
curl ${DB_CERT_URL} --output ${TLS_FILE_NAME}
curl ${DB_PROXY_CERT_URL} --output ${PROXY_FILE_NAME}
cat ${PROXY_FILE_NAME} >> ${TLS_FILE_NAME}
kubectl -n netbackup create secret generic postgresql-netbackup-ca --from-file ${TLS_FILE_NAME}
For Azure:
DIGICERT_ROOT_CA='/tmp/root_ca.pem'
DIGICERT_ROOT_G2='/tmp/root_g2.pem'
MS_ROOT_CRT='/tmp/ms_root.crt'
COMBINED_CRT_PEM='/tmp/tls.crt'
DIGICERT_ROOT_CA_URL="https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem"
DIGICERT_ROOT_G2_URL="https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem"
MS_ROOT_CRT_URL="http://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt"
curl ${DIGICERT_ROOT_CA_URL} --output ${DIGICERT_ROOT_CA}
curl ${DIGICERT_ROOT_G2_URL} --output ${DIGICERT_ROOT_G2}
curl ${MS_ROOT_CRT_URL} --output ${MS_ROOT_CRT}
openssl x509 -inform DER -in ${MS_ROOT_CRT} -out ${COMBINED_CRT_PEM} -outform PEM
cat ${DIGICERT_ROOT_CA} ${DIGICERT_ROOT_G2} >> ${COMBINED_CRT_PEM}
kubectl -n netbackup create secret generic postgresql-netbackup-ca --from-file ${COMBINED_CRT_PEM}
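Before creating the secret, a quick sanity check that the combined bundle actually contains certificates can save a debugging round trip later. The following is an optional sketch; the helper name is ours, not part of the product tooling:

```shell
# Count PEM certificate blocks in a bundle file.
count_certs() {
    grep -c 'BEGIN CERTIFICATE' "$1"
}

# Usage: the Azure bundle built above should contain 3 certificates;
# the AWS bundle contains at least 2 (the RDS bundle holds many regional CAs).
# count_certs /tmp/tls.crt
```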
- Create the db-cert bundle if it does not exist, as follows:
cat <<EOF | kubectl apply -f -
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: db-cert
spec:
  sources:
  - secret:
      name: "postgresql-netbackup-ca"
      key: "tls.crt"
  target:
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: "$ENVIRONMENT_NAMESPACE"
    configMap:
      key: "dbcertpem"
EOF
After installing the db-cert bundle, ensure that the db-cert configMap is present in the netbackup namespace with size 1, as follows:
bash-5.1$ kubectl get configmap db-cert -n $ENVIRONMENT_NAMESPACE
NAME      DATA   AGE
db-cert   1      11h
Note:
If the configMap is showing the size as 0, then delete it and ensure that the trust-manager recreates it before proceeding further.
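The check-and-delete described in the note above can be scripted. The following is a sketch under two assumptions: $ENVIRONMENT_NAMESPACE is set as in the earlier steps, and an empty configMap means the dbcertpem key has no data:

```shell
# If trust-manager left the db-cert configMap empty, delete it so that
# trust-manager recreates it; otherwise report that it is populated.
recreate_db_cert_if_empty() {
    data=$(kubectl get configmap db-cert -n "$ENVIRONMENT_NAMESPACE" \
        -o jsonpath='{.data.dbcertpem}')
    if [ -z "$data" ]; then
        kubectl delete configmap db-cert -n "$ENVIRONMENT_NAMESPACE"
    else
        echo "db-cert configMap already populated"
    fi
}
```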
- Perform the following steps to upgrade the Cloud Scale deployment:
Use the following command to obtain the environment name:
$ kubectl get environments -n netbackup
Navigate to the directory containing the patch file and upgrade the Cloud Scale deployment as follows:
$ cd scripts/
$ kubectl patch environment <env-name> --type json -n netbackup --patch-file cloudscale_patch.json
Modify the patch file if your current environment CR specifies spec.primary.tag or spec.media.tag. The patch file listed below assumes the default deployment scenario where only spec.tag and spec.msdpScaleouts.tag are listed.
Note the following:
When upgrading from embedded Postgres to containerized Postgres, add dbSecretName to the patch file.
If the images for the new release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.
During a Cloud Scale upgrade, if the capacity of the primary server log volume is greater than the default value, modify the primary server log volume capacity (spec.primary.storage.log.capacity) to the default value, that is, 30Gi. After upgrading to version 10.5, the decoupled services' log volume uses the default log volume size, while the primary pod's log volume continues to use the previous log size.
Examples of .json files:
For containerized_cloudscale_patch.json upgrade from 10.4:
[
  { "op": "replace", "path": "/spec/tag", "value": "10.5-0036" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.5-0027" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.5.0.0-1022" }
]
For containerized_cloudscale_patch.json with primary, media tags and global tag:
[
  { "op": "replace", "path": "/spec/dbSecretName", "value": "dbsecret" },
  { "op": "replace", "path": "/spec/primary/tag", "value": "10.5" },
  { "op": "replace", "path": "/spec/mediaServers/0/tag", "value": "10.5" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.4" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.5.x.xxxxx" }
]
For DBAAS_cloudscale_patch.json:
Note:
This patch file is to be used only during DBaaS to DBaaS migration.
[
  { "op": "replace", "path": "/spec/dbSecretProviderClass", "value": "dbsecret-spc" },
  { "op": "replace", "path": "/spec/tag", "value": "10.5" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.4" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.5.x.xxxxx" }
]
For Containerized_cloudscale_patch.json with new container registry:
Note:
If the images for the latest release that you are upgrading to are in a different container registry, modify the patch file to change the container registry.
[
  { "op": "replace", "path": "/spec/dbSecretName", "value": "dbsecret" },
  { "op": "replace", "path": "/spec/tag", "value": "10.5" },
  { "op": "replace", "path": "/spec/msdpScaleouts/0/tag", "value": "20.4" },
  { "op": "replace", "path": "/spec/cpServer/0/tag", "value": "10.5.x.xxxxx" },
  { "op": "replace", "path": "/spec/containerRegistry", "value": "newacr.azurecr.io" },
  { "op": "replace", "path": "/spec/cpServer/0/containerRegistry", "value": "newacr.azurecr.io" }
]
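Before applying any patch file, it is worth validating that the JSON is well formed, since a missing comma between operations is an easy mistake to make when editing by hand. The following is a minimal sketch using python3, which is an assumption on our part and not part of the product tooling:

```shell
# Validate a JSON patch file before handing it to kubectl patch.
validate_patch() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo "valid JSON: $1"
    else
        echo "invalid JSON: $1"
        return 1
    fi
}

# Usage:
# validate_patch cloudscale_patch.json && \
#   kubectl patch environment <env-name> --type json -n netbackup --patch-file cloudscale_patch.json
```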
- Wait until the Environment CR displays the status as ready. During this time, pods are expected to restart and any new services to start.
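The wait can be scripted rather than checked by hand. The following sketch polls the Environment CR; the assumption that the ready state appears as "Ready" in the second column of the kubectl get output should be verified against your deployment before relying on it:

```shell
# Poll until the Environment CR reports Ready (column position assumed).
wait_environment_ready() {
    env_name=$1
    while true; do
        status=$(kubectl get environment "$env_name" -n netbackup \
            --no-headers 2>/dev/null | awk '{print $2}')
        if [ "$status" = "Ready" ]; then
            echo "environment $env_name is ready"
            return 0
        fi
        sleep 30
    done
}

# Usage:
# wait_environment_ready <env-name>
```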
- Resume the backup job processing by using the following command:
# nbpemreq -resume_scheduling