NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring fluentbit
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Managing fluentbit
- Performing catalog backup and recovery
- Section IV. Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for primary, media servers, fluentbit pods, and postgres pods
- Upgrading
- Cloud Scale Disaster Recovery
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
- Appendix B. MSDP Scaleout
- MSDP Scaleout configuration
- Managing MSDP Scaleout
- MSDP Scaleout maintenance
Environment Disaster Recovery
Ensure that the Cloud Scale deployment has been cleaned up in the cluster.
Perform the following to verify the cleanup process:
Ensure that the namespaces associated with the Cloud Scale deployment are deleted by using the following command:
kubectl get ns
Confirm that the storage classes, PVs, cluster roles, cluster role bindings, and CRDs associated with the Cloud Scale deployment are deleted by using the following command:
kubectl get sc,pv,crd,clusterrolebindings,clusterroles
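For example, assuming the Cloud Scale resource names contain identifiers such as netbackup, msdp, or flexsnap (adjust the patterns to match your deployment), you can filter the output as follows; no resources should be returned once the cleanup is complete:
kubectl get sc,pv,crd,clusterrolebindings,clusterroles | grep -iE 'netbackup|msdp|flexsnap'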
(For EKS) If the deployment is in a different AZ, update the subnet name in the environment_backup.yaml file. For example, if the earlier subnet name was subnet-az1 and the new subnet is subnet-az2, then the environment_backup.yaml file would contain a loadBalancerAnnotations section as follows:
loadBalancerAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az1
Update it to the new subnet name as follows:
loadBalancerAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az2
Update all IPs used for the Primary, MSDP, Media, and Snapshot Manager servers in their respective sections.
Note:
Change of FQDN is not supported.
The following example shows how to change the IP for Primary server:
Old entry in the environment_backup.yaml file:
ipList:
  - ipAddr: 12.123.12.123
    fqdn: primary.netbackup.com
Update the above old entry as follows:
ipList:
  - ipAddr: 34.245.34.234
    fqdn: primary.netbackup.com
Similarly, perform the above procedure (shown for the Primary server) for the MSDP, Media, and Snapshot Manager servers.
Ensure that the IPs in the ipList of the Primary, Media, MSDP, and Snapshot Manager server sections of the environment_backup.yaml file that was saved during backup are free and resolvable. If the deployment is in a different AZ, the FQDN must remain the same but the IP can change; ensure that the same FQDNs can map to different IPs.
(For EKS) Update spec > primaryServer > storage > catalog > storageClassName with the new EFS ID that is created for the primary server.
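A minimal sketch of the field to update, assuming the primary server section layout shown in the sample CR later in this section (use the key name present in your environment_backup.yaml); the storage class name below is a hypothetical example that must reference the new EFS ID created for the primary server:
spec:
  primary:
    storage:
      catalog:
        capacity: 100Gi
        storageClassName: efs-sc-fs-0123456789abcdef0   # hypothetical storage class referencing the new EFS ID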
Search and delete the following sections from the backed up copy of the environment_backup.yaml file (compare the before and after samples below): the kubectl.kubernetes.io/last-applied-configuration annotation, creationTimestamp, generation, resourceVersion, uid
For example:
Sample environment_backup.yaml file before deleting the above sections:
apiVersion: netbackup.veritas.com/v2
kind: Environment
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"netbackup.veritas.com/v2","kind":"Environment","metadata":{"annotations":{},"name":"environment-sample","namespace":"nb-namespace"},"spec":{"configCheckMode":"skip","containerRegistry":"nbuk8sreg.azurecr.io","cpServer":[{"credential":{"secretName":"cp-creds"},"name":"cpserver-1","networkLoadBalancer":{"fqdn":"nbux-10-244-33-78.vxindia.veritas.com","ipAddr":"10.244.33.78"},"nodeSelector":{"controlPlane":{"labelKey":"agentpool","labelValue":"nbuxpool","nodepool":"nbuxpool"},"dataPlane":{"labelKey":"cp-data-pool","labelValue":"cpdata","nodepool":"cpdata"}},"storage":{"data":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"},"log":{"capacity":"5Gi","storageClassName":"azurefile-csi-retain"}},"tag":"10.3-0003"}],"drInfoSecretName":"dr-info-secret","loadBalancerAnnotations":{"service.beta.kubernetes.io/azure-load-balancer-internal-subnet":"LB-RESERVED"},"mediaServers":[{"minimumReplicas":1,"name":"media1","networkLoadBalancer":{"ipList":[{"fqdn":"nbux-10-244-33-75.vxindia.veritas.com","ipAddr":"10.244.33.75"}]},"nodeSelector":{"labelKey":"agentpool","labelValue":"nbuxpool"},"replicas":1,"storage":{"data":{"capacity":"50Gi","storageClassName":"managed-csi-hdd"},"log":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"}}}],"msdpScaleouts":[{"credential":{"secretName":"msdp-secret1"},"ipList":[{"fqdn":"nbux-10-244-33-76.vxindia.veritas.com","ipAddr":"10.244.33.76"}],"kms":{"keyGroup":"example-key-group","keySecret":"example-key-secret"},"loadBalancerAnnotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"name":"dedupe1","nodeSelector":{"labelKey":"agentpool","labelValue":"nbuxpool"},"replicas":1,"storage":{"dataVolumes":[{"capacity":"50Gi","storageClassName":"managed-csi-hdd"}],"log":{"capacity":"5Gi","storageClassName":"managed-csi-hdd"}},"tag":"19.0-0003"}],"primary":{"credSecretName":"primary-credential-secret","kmsDBSecret":"kms-secret","networkLoadBalancer":{"ipList":[{"fqdn":"nbux-10-244-33-74.vxindia.veritas.com","ipAddr":"10.244.33.74"}]},"nodeSelector":{"labelKey":"agentpool","labelValue":"nbuxpool"},"storage":{"catalog":{"autoVolumeExpansion":false,"capacity":"100Gi","storageClassName":"azurefile-csi-retain"},"data":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"},"log":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"}}},"tag":"10.3-0003"}}
  creationTimestamp: "2023-08-01T06:40:34Z"
  generation: 1
  name: environment-sample
  namespace: nb-namespace
  resourceVersion: "96785"
  uid: 7bf36bb2-2291-4a58-b72c-0bc85b60385b
spec:
  configCheckMode: skip
  containerRegistry: nbuk8sreg.azurecr.io
  corePattern: /core/core.%e.%p.%t
  ....
Sample environment_backup.yaml file after deleting the above sections:
apiVersion: netbackup.veritas.com/v2
kind: Environment
metadata:
  name: environment-sample
  namespace: nb-namespace
spec:
  configCheckMode: skip
  containerRegistry: nbuk8sreg.azurecr.io
  corePattern: /core/core.%e.%p.%t
  ....
Ensure that nodeSelector is present in the environment_backup.yaml file and that the operators that were noted down during backup are present in the cluster with the required configurations.
Perform the steps in the following section for deploying DBaaS:
Create the namespace that is present in the environment_backup.yaml file:
kubectl create ns <sample-namespace>
(For 10.5 and above) Deploy the operator, fluentbit, and postgres by performing the steps mentioned in the following sections:
See Deploying fluentbit for logging.
See Deploying Postgres.
Note:
If the fluentbit-values.yaml, operators-values.yaml, and postgres-values.yaml files were not saved, then use the saved data to populate these files when creating them as mentioned in the above sections.
Deploy the dbtrust.yaml file as follows:
Create the dbtrust.yaml file and add the following to it:
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: db-cert
  namespace: netbackup
spec:
  sources:
    - secret:
        name: "postgresql-netbackup-ca"
        key: "tls.crt"
  target:
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: netbackup
    configMap:
      key: "dbcertpem"
Run the following command:
kubectl apply -f dbtrust.yaml
Create the secrets using the secret_backup.yaml file that was backed up:
kubectl apply -f secret_backup.yaml
Verify all secrets are created using the following command:
kubectl get secrets -n <sample-namespace>
Note:
This step requires the data backed up in step 7 for secretName (MSDP credential) and drInfoSecretName.
Create the configmaps and internal configmaps as follows:
kubectl apply -f configmap_backup.yaml
kubectl apply -f internalconfigmap_backup.yaml
Verify that all configmaps are created by using the following command:
kubectl get configmaps -n <sample-namespace>
Note:
This step requires the data backed up in step 10 for emailServerConfigmap.
If your setup is upgraded from an earlier version to NetBackup version 10.5 and has not yet moved to no-LB mode, then create the cs-config configmap with the entry DR_MULTIPLE_MEDIA_LB_MODE = "1".
For example, cs-config configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "cs-config"
  namespace: nb-namespace
data:
  DR_MULTIPLE_MEDIA_LB_MODE: "1"
Note:
If the cs-config configmap was already backed up during backup, then add the DR_MULTIPLE_MEDIA_LB_MODE = "1" entry in the data section by using the following command:
kubectl edit configmap cs-config -n <sample-namespace>
(Required only for DBaaS deployment) Snapshot Manager restore steps:
For AKS
Navigate to the snapshot resource created during backup and create a disk from it under the recovered cluster infra resource group (for example, MC_<clusterRG>_<cluster name>_<cluster_region>).
Note down the resource ID of this disk. It can be obtained from the Azure portal or the az CLI.
Format of the resource ID:
/subscriptions/<subscription id>/resourceGroups/MC_<clusterRG>_<cluster name>_<cluster_region>/providers/Microsoft.Compute/disks/<disk name>
Create a static PV using the resource ID of the backed up disk. Copy the below yaml, update the PV name, size of the disk, namespace, and storage class name in the pgsql-pv.yaml file, and apply the yaml:
pgsql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name>
spec:
  capacity:
    storage: <size of the disk>
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <storage class name>
  claimRef:
    name: psql-pvc
    namespace: <environment namespace>
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: <Resource ID of the Disk>
Example of the pgsql-pv.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: psql-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-immediate
  claimRef:
    name: psql-pvc
    namespace: nbux
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/a332d749-22d8-48f6-9027-ff04b314e840/resourceGroups/MC_vibha-vasantraohadule-846288_auto_aks-vibha-vasantraohadule-846288_eastus2/providers/Microsoft.Compute/disks/psql-disk
Create psql-pv using the following command:
kubectl apply -f <path_to_psql_pv.yaml> -n <environment-namespace>
Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:
kubectl get pv | grep psql-pvc
>> psql-pv 30Gi RWO managed-premium-disk Available nbu/psql-pvc 50s
For EKS
Navigate to the snapshot (which was taken in backup step 2) in the AWS Console and create a volume from it (expand the Actions drop-down and select the option to create a volume) in the same availability zone (AZ) as the volume attached to psql-pvc (mentioned in step 1 of the backup steps).
Note down the volumeID of the new volume (for example, vol-0d86d2ca38f231ede).
If the deployment is in a different availability zone (AZ), you must change the AZ for the volume and update the volumeID accordingly.
Create a static PV using the backed up volumeID. Copy the below yaml, update the PV name, size of the disk, namespace, and storage class name in the pgsql-pv.yaml file, and apply the yaml:
pgsql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name>
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: <fs type>
    volumeID: <backed up volumeID>  # prefix with aws://<az-code>/, for example aws://us-east-2b/
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: psql-pvc
    namespace: <netbackup namespace>
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <storage class name>
  volumeMode: Filesystem
Sample yaml for the pgsql-pv.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: psql-pv
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-2b/vol-0d86d2ca38f231ede
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: psql-pvc
    namespace: nbu
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-immediate
  volumeMode: Filesystem
Create psql-pv using the following command:
kubectl apply -f <path_to_psql_pv.yaml> -n <netbackup-namespace>
Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:
kubectl get pv | grep psql-pvc
>>> psql-pv 30Gi RWO gp2-immediate Available nbu/psql-pvc 50s
Perform the following steps to recover the environment:
Make a copy of the environment CR yaml file (environment_backup.yaml) with the name environment_backup_copy.yaml and save it for later use.
Remove the cpServer section from the original environment_backup.yaml file.
Modify the environment_backup.yaml file to set the paused: true field in the MSDP and Media sections (see the sketch below) and save it:
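A minimal sketch of the fields to set, assuming the msdpScaleouts and mediaServers entries shown in the sample CR above (the names dedupe1 and media1 are illustrative; keep all other backed up fields unchanged):
spec:
  msdpScaleouts:
    - name: dedupe1
      paused: true
      # ...rest of the backed up MSDP section unchanged
  mediaServers:
    - name: media1
      paused: true
      # ...rest of the backed up media server section unchanged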
Only the primary server must get deployed in this case. Now apply this modified environment.yaml file using the following command:
kubectl apply -f <environment.yaml file name>
Once the primary server is up and running:
Perform the following steps for NBATD pod recovery:
Create the DRPackages directory at the persisted location /mnt/nblogs/ in the nbatd pod by executing the following commands:
kubectl exec -it -n <namespace> <nbatd-pod-name> -- /bin/bash
mkdir /mnt/nblogs/DRPackages
Copy the DR files that were saved when performing the DR backup to the nbatd pod at /mnt/nblogs/DRPackages using the following command:
kubectl cp <Path_of_DRPackages_on_host_machine> <nbatd-pod-namespace>/<nbatd-pod-name>:/mnt/nblogs/DRPackages
Execute the following steps in the nbatd pod:
Execute the kubectl exec -it -n <namespace> <nbatd-pod-name> -- /bin/bash command.
Deactivate nbatd health probes using the /opt/veritas/vxapp-manage/nbatd_health.sh disable command.
Stop the nbatd service using /opt/veritas/vxapp-manage/nbatd_stop.sh 0 command.
Execute the /opt/veritas/vxapp-manage/nbatd_identity_restore.sh -infile /mnt/nblogs/DRPackages/<DR package name> command.
Execute the kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash command to exec into the primary pod.
Increase the debug logs level on primary server.
Create a directory at the persisted location using the mkdir /mnt/nbdb/usr/openv/drpackage command and provide the required permissions on it.
Copy back the earlier copied DR files to the primary pod at /mnt/nbdb/usr/openv/drpackage using the following command:
kubectl cp <Path_of_DRPackages_on_host_machine> <primary-pod-namespace>/<primary-pod-name>:/mnt/nbdb/usr/openv/drpackage
Execute the following steps inside the primary server pod:
Change the ownership of the files in /mnt/nbdb/usr/openv/drpackage using the chown nbsvcusr:nbsvcusr <file-name> command.
Deactivate the NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health deactivate command.
Stop the NetBackup services using /usr/openv/netbackup/bin/bp.kill_all command.
Execute the /usr/openv/netbackup/bin/admincmd/nbhostidentity -import -infile /mnt/nbdb/usr/openv/drpackage/.drpkg command.
To clear the NetBackup host cache, run the bpclntcmd -clear_host_cache command.
Restart the pods as follows:
Navigate to the VRTSk8s-netbackup-<version>/scripts folder.
Run the cloudscale_restart.sh script with the restart option as follows:
./cloudscale_restart.sh <action> <namespace>
Provide the namespace and the required action:
stop: Stops all the services under the primary server (waits until all the services are stopped).
start: Starts all the services and waits until the services are up and running under the primary server.
restart: Stops the services and waits until all the services are down, then starts all the services and waits until the services are up and running.
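For example, assuming the restart action name described above, the following stops and then starts all the services under the primary server in the nb-namespace namespace (namespace name is illustrative):
./cloudscale_restart.sh restart nb-namespace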
Note:
Ignore if the policy job pod does not come up in the running state. The policy job pod will start once the primary services start.
Refresh the certificate revocation list using the /usr/openv/netbackup/bin/nbcertcmd -getcrl command.
Run the primary server reconciler.
This can be done by editing the environment (using the kubectl edit environment -n <namespace> command), changing the primary spec's paused field to true, and saving it.
Then, to enable the reconciler to run, edit the environment again and set the primary's paused field in spec back to false.
The SHA fingerprint will get updated in the primary CR's status.
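A minimal sketch of the field being toggled, following the primary section layout of the sample CR shown earlier in this section:
spec:
  primary:
    paused: true   # set back to false in the second edit so that the reconciler runs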
Allow auto reissue of certificates from the primary server for the MSDP, Media, and Snapshot Manager servers from the Web UI.
In the Web UI, navigate to Security > Host Mappings, and for the MSDP storage server, click the three dots on the right and check Allow Auto reissue Certificate. Repeat this for the media server and Snapshot Manager server entries as well.
Edit the environment using the kubectl edit environment -n <namespace> command and change the paused field to false for MSDP.
Redeploy MSDP Scaleout on a cluster by using the same CR parameters and NetBackup re-issue token.
If the LSU cloud alias does not exist, you can use the following command to add it.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in <instance-name> -sts <storage-server-name> -lsu_name <lsu-name>
When MSDP Scaleout is up and running, re-use the cloud LSU on NetBackup primary server.
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server <STORAGESERVERNAME> -stype PureDisk -configlist <configuration file>
Credentials, bucket name, and sub bucket name must be the same as the recovered Cloud LSU configuration in the previous MSDP Scaleout deployment.
Configuration file template:
V7.5 "operation" "reuse-lsu-cloud" string V7.5 "lsuName" "LSUNAME" string V7.5 "cmsCredName" "XXX" string V7.5 "lsuCloudAlias" "<STORAGESERVERNAME_LSUNAME>" string V7.5 "lsuCloudBucketName" "XXX" string V7.5 "lsuCloudBucketSubName" "XXX" string V7.5 "lsuKmsServerName" "XXX" string
Note:
For Veritas Alta Recovery Vault Azure storage, the cmsCredName is a credential name and cmsCredName can be any string. Add recovery vault credential in the CMS using the NetBackup Web UI and provide the credential name for cmsCredName. For more information, see About Veritas Alta Recovery Vault Azure topic in NetBackup Deduplication Guide.
On the first MSDP Engine of MSDP Scaleout, run the following command for each cloud LSU:
sudo -E -u msdpsvc /usr/openv/pdde/pdcr/bin/cacontrol --catalog clouddr <LSUNAME>
Restart the MSDP services in the MSDP Scaleout.
Option 1: Manually delete all the MSDP engine pods.
kubectl delete pod <sample-engine-pod> -n <sample-cr-namespace>
Option 2: Stop MSDP services in each MSDP engine pod. MSDP service starts automatically.
kubectl exec <sample-engine-pod> -n <sample-cr-namespace> -c uss-engine -- /usr/openv/pdde/pdconfigure/pdde stop
Edit the environment CR and change paused to false for the media server.
Perform full Catalog Recovery using either of the options listed below:
Trigger a Catalog Recovery from the Web UI.
Or
Exec into primary pod and run the bprecover -wizard command.
Once recovery is completed, restart the pods as follows:
Navigate to the VRTSk8s-netbackup-<version>/scripts folder.
Run the cloudscale_restart.sh script with the restart option as follows:
./cloudscale_restart.sh <action> <namespace>
Provide the namespace and the required action:
stop: Stops all the services under the primary server (waits until all the services are stopped).
start: Starts all the services and waits until the services are up and running under the primary server.
restart: Stops the services and waits until all the services are down, then starts all the services and waits until the services are up and running.
Activate NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health activate command.
Apply the backup_environment.yaml file and install the Snapshot Manager server. Wait for the Snapshot Manager pods to come up and be in the running state.
Note:
Ensure that the cpServer section in the CR is enabled.
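A minimal sketch of a cpServer entry, using the values from the sample CR shown earlier in this section (replace them with the values backed up from your deployment; nodeSelector and the remaining fields stay as in the backed up CR):
spec:
  cpServer:
    - name: cpserver-1
      credential:
        secretName: cp-creds
      networkLoadBalancer:
        fqdn: nbux-10-244-33-78.vxindia.veritas.com
        ipAddr: 10.244.33.78
      storage:
        data:
          capacity: 30Gi
          storageClassName: managed-csi-hdd
        log:
          capacity: 5Gi
          storageClassName: azurefile-csi-retain
      tag: 10.3-0003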
Post Disaster Recovery:
If the on-host agent fails, run the following respective commands:
For Windows: From the command prompt, navigate to the agent installation directory (C:\Program Files\Veritas\CloudPoint\) and run the following command:
flexsnap-agent.exe --renew --token <auth_token> renew
This command fails on the first attempt. Rerun the command for a successful attempt.
For Linux: Rerun the following command on Linux host:
sudo flexsnap-agent --renew --token <auth_token>
After Snapshot Manager recovery, if some SLP jobs are failing repeatedly due to operations that were pending before disaster recovery, then cancel the Storage Lifecycle Policy (SLP) jobs using the nbstlutil command.
For more information on the nbstlutil command, refer to the NetBackup™ Commands Reference Guide.