NetBackup™ for Kubernetes Administrator's Guide
- Overview of NetBackup for Kubernetes
- Deploying and configuring the NetBackup Kubernetes operator
- Customize Kubernetes workload
- Deploying certificates on NetBackup Kubernetes operator
- Managing Kubernetes assets
- Managing Kubernetes intelligent groups
- Managing Kubernetes policies
- Protecting Kubernetes assets
- Managing image groups
- Protecting Rancher managed clusters in NetBackup
- Recovering Kubernetes assets
- About incremental backup and restore
- Enabling accelerator based backup
- Enabling FIPS mode in Kubernetes
- About OpenShift Virtualization support
- Troubleshooting Kubernetes issues
Datamover pods exceed the Kubernetes resource limit
NetBackup controls the total number of in-progress backup jobs on the Kubernetes workload using two resource limit properties. In NetBackup version 10.0, datamover pods can exceed the backup and restore resource limits set per Kubernetes cluster.
Scenario no 1
Resource limit for Backup from Snapshot jobs per Kubernetes cluster is set to 1.
Job IDs 3020 and 3021 are the parent jobs for Backup from Snapshot. The creation of the data mover pod and its cleanup process are part of the backup job life cycle.
Job ID 3022 is the child job, where the data movement takes place from the cluster to the storage unit.
Based on the resource limit setting, while job ID 3022 is in the running state, job ID 3021 remains in the queued state. Once backup job ID 3022 is completed, the parent job ID 3021 starts.
Notice that job ID 3020 is still in progress, because the data mover pod cleanup is still in process and the life cycle of the parent job ID 3020 is not yet complete.
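To observe this life cycle on the cluster itself, the following is a minimal sketch that watches pod create and delete events in the NetBackup Kubernetes operator deployment namespace using the Python kubernetes client. The namespace name and the "datamover" substring used to filter pod names are assumptions; adjust both to match your deployment.

```python
# Sketch: watch data mover pod create/delete events in the operator namespace.
# Assumptions: the namespace name and the "datamover" substring in pod names
# are illustrative; adjust both to match your deployment.
from kubernetes import client, config, watch

NAMESPACE = "netbackup-operator-system"   # assumed operator namespace name

def watch_datamover_pods():
    config.load_kube_config()             # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream pod events; ADDED marks pod creation, DELETED marks cleanup.
    for event in w.stream(v1.list_namespaced_pod, namespace=NAMESPACE,
                          timeout_seconds=600):
        pod = event["object"]
        if "datamover" in pod.metadata.name:
            print(f"{event['type']:8} {pod.metadata.name} phase={pod.status.phase}")

if __name__ == "__main__":
    watch_datamover_pods()
```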
Scenario no 2
At this stage, two data mover pods may be running simultaneously in the NetBackup Kubernetes operator deployment namespace, because the data mover pod created as part of job ID 3020 is not yet cleaned up while the data mover pod for job ID 3021 has already been created.
In a busy environment where multiple Backup from Snapshot jobs are triggered, a low resource limit value may cause backup jobs to spend most of their time in the queued state.
With a higher resource limit setting, the number of data mover pods might exceed the count specified in the resource limit, which may lead to resource starvation in the Kubernetes cluster.
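One way to confirm this behavior is to count the data mover pods currently present in the operator namespace and compare that count against the configured resource limit. The sketch below uses the Python kubernetes client; the namespace name, the "datamover" name filter, and the limit value are assumptions, not values taken from the product.

```python
# Sketch: compare the number of data mover pods against the configured
# "Backup from Snapshot jobs per Kubernetes cluster" resource limit.
# The namespace, name filter, and limit value below are assumptions.
from kubernetes import client, config

NAMESPACE = "netbackup-operator-system"   # assumed operator namespace
CONFIGURED_LIMIT = 2                      # assumed resource limit value

def check_datamover_count():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(NAMESPACE).items
    datamovers = [p for p in pods if "datamover" in p.metadata.name]
    print(f"data mover pods present: {len(datamovers)} (limit: {CONFIGURED_LIMIT})")
    for p in datamovers:
        print(f"  {p.metadata.name}: {p.status.phase}")
    if len(datamovers) > CONFIGURED_LIMIT:
        print("WARNING: data mover pods exceed the configured resource limit")

if __name__ == "__main__":
    check_datamover_count()
```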
While data movement jobs like 3022 run in parallel, cleanup activities are handled sequentially. If the time taken to clean up the data mover resources is close to the time it takes to back up the persistent volume or namespace data, the backup jobs take noticeably longer to complete.
Recommended action: Review your system resources and performance, and set the resource limit value accordingly. This helps achieve the best performance for all backup jobs.
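As a starting point for that review, the following sketch lists each node's allocatable CPU and memory with the Python kubernetes client, which can help you judge how many concurrent data mover pods the cluster can comfortably host before choosing a resource limit value. This is an illustrative aid, not a NetBackup tool.

```python
# Sketch: list allocatable CPU and memory per node to help size the
# resource limit for concurrent Backup from Snapshot jobs.
from kubernetes import client, config

def show_node_capacity():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        alloc = node.status.allocatable
        print(f"{node.metadata.name}: cpu={alloc['cpu']} memory={alloc['memory']}")

if __name__ == "__main__":
    show_node_capacity()
```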