NetBackup™ Deployment Guide for Kubernetes Clusters
Last Published: 2023-04-24
Product(s): NetBackup (10.2)
- Introduction
- Section I. Deployment
- Prerequisites for Kubernetes cluster configuration
- Deployment with environment operators
- Deploying NetBackup
- Primary and media server CR
- Deploying NetBackup using Helm charts
- Deploying MSDP Scaleout
- Deploying Snapshot Manager
- Section II. Monitoring and Management
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager
- Managing the Load Balancer service
- Managing MSDP Scaleout
- Performing catalog backup and recovery
- Section III. Maintenance
- MSDP Scaleout Maintenance
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Resolving an issue of failed probes
If a pod is not in a ready state for a long time, the kubectl describe pod/<podname> -n <namespace> command displays the following errors:
Readiness probe failed: The readiness of the external dependencies is not set.
Server setup is still in progress.
Liveness probe failed: bpps command did not list nbwmc process. nbwmc is not alive.
The Primary server is unhealthy.
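As a quick first check, you can confirm which probe is failing from the pod status itself. These are generic kubectl commands, not specific to NetBackup; the READY and RESTARTS columns indicate readiness failures and liveness-triggered restarts respectively:
kubectl get pod <podname> -n <namespace>
kubectl get pod <podname> -n <namespace> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
The second command prints each pod condition, for example Ready=False while server setup is still in progress.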
To resolve an issue of failed probes
- If you are deploying NetBackup on a Kubernetes cluster for the first time, check the installation logs for detailed error information.
Use either of the following methods (see also the example after this procedure):
Execute the following command in the respective primary server or media server pod and check the logs in /mnt/nblogs/setup-server.logs:
kubectl exec -it -n <namespace> <pod-name> -- /bin/bash
Run the kubectl logs pod/<podname> -n <namespace> command.
- Check the pod events to obtain more details about the probe failure using the following command (see also the event-filtering example after this procedure):
kubectl describe pod/<podname> -n <namespace>
Kubernetes automatically tries to resolve the issue by restarting the pod after the liveness probe times out.
- Depending on the error in the pod logs, perform the required steps or contact technical support.
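For the first step above, if you prefer not to open an interactive shell, you can read the setup log directly from outside the pod. This is a minimal sketch that assumes standard shell utilities such as tail are available in the primary or media server container image:
kubectl exec -n <namespace> <pod-name> -- tail -n 100 /mnt/nblogs/setup-server.logs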
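For the second step, you can filter the events and restart information instead of reading the full describe output. These are generic kubectl commands, shown here only as examples:
kubectl get events -n <namespace> --field-selector involvedObject.name=<podname> --sort-by='.lastTimestamp'
kubectl get pod <podname> -n <namespace> -o jsonpath='{.status.containerStatuses[0].restartCount}'
The first command lists only the events for the affected pod, including probe failure messages; the second prints the container restart count, which increases each time the liveness probe fails and Kubernetes restarts the pod.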